DS 6051
Decoding Large Language Models
Course Description
Evolution of language models, from encoding words as simple vectors to training LLMs. Train and build an LLM; understand concepts such as self- and cross-attention in LLMs and their applications; review research on tokenizers, Retrieval-Augmented Generation (RAG), prompt engineering, fine-tuning LLMs with Low-Rank Adapters (LoRA), quantization in LLMs, QLoRA, in-context learning (ICL), and chain-of-thought (CoT) reasoning. Uses Python libraries.
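As a taste of the self-attention concept listed above, here is a minimal NumPy sketch of scaled dot-product self-attention. All names, weights, and dimensions are illustrative, not course material:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                  # project inputs to queries, keys, values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # pairwise similarities, scaled by sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys: each row sums to 1
    return weights @ V                                # each output is a weighted sum of values

# Toy example with random data (hypothetical sizes)
rng = np.random.default_rng(0)
d_model, d_k, seq_len = 8, 4, 5
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4): one d_k-dimensional output per input position
```

Cross-attention differs only in that the queries come from one sequence and the keys/values from another.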
Instructors
No instructors this semester