
[Kaggle Gen AI] Day 1 Assignment Overview - LLM & Prompt Engineering 🚀

Day 1 starts from the basic concepts of LLMs, then moves on to the fundamentals of prompt engineering with hands-on practice.

Just follow the list below in order; each item comes with links to the related podcast episode and whitepaper, so you can jump straight in. Personally, I found that the podcast summarizes the whitepaper content very well and the explanations flow smoothly, so if you need to study in a hurry, I highly recommend the podcast!



๐ŸŽ’ Todayโ€™s Assignments

  1. Complete the Intro Unit – "Foundational Large Language Models & Text Generation":
    • Listen to the summary podcast episode for this unit.
    • To complement the podcast, read the "Foundational Large Language Models & Text Generation" whitepaper.
  2. Complete Unit 1 – "Prompt Engineering":
    • Listen to the summary podcast episode for this unit.
    • To complement the podcast, read the "Prompt Engineering" whitepaper.
    • Complete these codelabs on Kaggle.



๐Ÿ’ก What Youโ€™ll Learn

Today you'll explore the evolution of LLMs, from transformers to techniques like fine-tuning and inference acceleration. You'll also be trained in the art of prompt engineering for effective LLM interaction, and in evaluating LLMs. The first codelab walks you through getting started with the Gemini 2.0 API and covers several prompting techniques, including how different parameters impact the output. In the second codelab, you'll learn how to evaluate LLM responses using autoraters and structured output.

🧠 Day 1 surveys the evolution of LLMs: starting from the transformer architecture, it briefly traces how techniques such as fine-tuning and inference acceleration have developed.

๐Ÿ› ๏ธ ์ด์–ด์„œ ํ”„๋กฌํ”„ํŠธ ์—”์ง€๋‹ˆ์–ด๋ง(Prompt Engineering)์˜ ๊ธฐ๋ณธ ๊ฐœ๋…๋„ ๋ฐฐ์šฐ๊ณ , LLM๊ณผ ํšจ๊ณผ์ ์œผ๋กœ ์ƒํ˜ธ์ž‘์šฉํ•˜๊ธฐ ์œ„ํ•ด ์–ด๋–ค ๋ฐฉ์‹์œผ๋กœ ์งˆ๋ฌธ(ํ”„๋กฌํ”„ํŠธ)์„ ์ž‘์„ฑํ•ด์•ผ ํ•˜๋Š”์ง€, ๊ทธ๋ฆฌ๊ณ  ๊ทธ ๊ตฌ์กฐ์™€ ์Šคํƒ€์ผ์„ ๋‹ค๋ฃจ๋Š” ๋ฒ•์„ ์ตํžŒ๋‹ค.

🧪 The first codelab is hands-on practice with the Gemini 2.0 API.
You can experiment directly with how the output changes depending on which parameters you pass along with the prompt.
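A minimal sketch of that kind of parameter experiment, assuming the google-genai Python SDK (the helper function and prompt text here are my own illustration, not the codelab's code):

```python
import os

# Hypothetical helper: bundle the sampling parameters the codelab experiments
# with. Lower temperature -> more deterministic output; higher -> more varied.
def make_gen_config(temperature: float = 0.7, top_p: float = 0.95,
                    max_output_tokens: int = 256) -> dict:
    return {
        "temperature": temperature,
        "top_p": top_p,
        "max_output_tokens": max_output_tokens,
    }

config = make_gen_config(temperature=0.0)  # near-deterministic decoding

# The actual call needs the google-genai SDK and an API key, so it is guarded
# here; the exact client usage may differ slightly from the codelab notebook.
if os.environ.get("GOOGLE_API_KEY"):
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=os.environ["GOOGLE_API_KEY"])
    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents="Explain prompt engineering in one sentence.",
        config=types.GenerateContentConfig(**config),
    )
    print(response.text)
```

Rerunning the same prompt while varying only `temperature` or `top_p` makes the effect of each parameter easy to see.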

🧾 The second codelab covers evaluating LLM responses and producing structured output.
You'll learn auto-rating methods and practice getting model responses back in a more structured form.