COURSE OBJECTIVE:
Working in an engaging, hands-on learning environment and guided by an expert instructor, students will learn the basics of Large Language Models (LLMs) and how to use them for inference to build AI-powered applications.
• Understand the basics of Natural Language Processing (NLP)
• Implement text preprocessing and tokenization techniques using NLTK (see the sketch after this list)
• Explain word embeddings and the evolution of language models
• Use RNNs and LSTMs for handling sequential data
• Describe what transformers are and use key models like BERT and GPT
• Understand the risks and limitations of LLMs
• Use pre-trained models from Hugging Face to implement NLP tasks
• Understand the basics of Retrieval-Augmented Generation (RAG) systems
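
For a flavour of the preprocessing objective above, here is a minimal sketch using NLTK's standard tokenizers and stop-word lists; the sample text is illustrative, and the course's own exercises may differ:

    # Minimal NLTK preprocessing sketch (illustrative sample text; the
    # course's own exercises may use different data and steps).
    import nltk
    from nltk.corpus import stopwords
    from nltk.tokenize import sent_tokenize, word_tokenize

    for resource in ("punkt", "punkt_tab", "stopwords"):
        nltk.download(resource, quiet=True)  # tokenizer and stop-word data

    text = "Large Language Models power modern NLP. They build on decades of research."
    print(sent_tokenize(text))            # sentence splitting
    tokens = word_tokenize(text.lower())  # word tokenization
    stop_words = set(stopwords.words("english"))
    print([t for t in tokens if t.isalpha() and t not in stop_words])  # stop-word removal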
TARGET AUDIENCE:
– AI/ML Enthusiasts interested in learning about NLP (Natural Language Processing) and Large Language Models (LLMs).
– Data Scientists/Engineers interested in using LLMs for inference and fine-tuning
– Software Developers seeking basic practical experience with NLP frameworks and LLMs
– Students and Professionals curious about the basics of transformers and how they power AI models
COURSE PREREQUISITES:
• Proficiency in Python programming
• Familiarity with data analysis using Pandas
COURSE CONTENT:
1) Introduction to NLP
• What is NLP?
• NLP Basics: Text Preprocessing and Tokenization
• NLP Basics: Word Embeddings
• Introducing Traditional NLP Libraries
• A Brief History of Language Modeling
• Introducing PyTorch and Hugging Face for Text Preprocessing
• Neural Networks and Text Data
• Building Language Models using RNNs and LSTMs (a minimal PyTorch sketch follows this list)
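
As a preview of the final topic in this module, here is a minimal, self-contained sketch of an LSTM language model in PyTorch; the vocabulary size and layer dimensions are placeholder values, not the course's actual configuration:

    # Skeleton of a word-level LSTM language model in PyTorch. The
    # vocabulary size and layer dimensions are placeholders.
    import torch
    import torch.nn as nn

    class LSTMLanguageModel(nn.Module):
        def __init__(self, vocab_size=10_000, embed_dim=128, hidden_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, vocab_size)  # next-token logits

        def forward(self, token_ids):
            x = self.embed(token_ids)  # (batch, seq_len, embed_dim)
            out, _ = self.lstm(x)      # (batch, seq_len, hidden_dim)
            return self.head(out)      # (batch, seq_len, vocab_size)

    model = LSTMLanguageModel()
    dummy_batch = torch.randint(0, 10_000, (2, 16))  # 2 sequences of 16 token ids
    print(model(dummy_batch).shape)                  # torch.Size([2, 16, 10000])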
2) Transformers and LLMs
• Introduction to Transformers
• Using Hugging Face's Transformers for inference (see the sketch after this list)
• LLMs and Generative AI
• Current LLM Options
• Fine-tuning GPT
• Aligning LLMs with Human Values
• Retrieval-Augmented Generation (RAG) Systems
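
To illustrate the inference topic above, here is a minimal sketch using Hugging Face's pipeline API; the checkpoints named (distilbert-base-uncased-finetuned-sst-2-english, gpt2) are common public models chosen for illustration, not necessarily those used in class:

    # Minimal inference sketch with Hugging Face's pipeline API.
    from transformers import pipeline

    # Sentiment analysis with a small pre-trained classifier.
    classifier = pipeline("sentiment-analysis",
                          model="distilbert-base-uncased-finetuned-sst-2-english")
    print(classifier("This course made transformers finally click for me."))

    # Text generation with GPT-2, a small open GPT-family model.
    generator = pipeline("text-generation", model="gpt2")
    print(generator("Retrieval-Augmented Generation combines",
                    max_new_tokens=30)[0]["generated_text"])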
FOLLOW-ON COURSES:
Not available. Please contact us.