Toptube Video Search Engine



Title: Fine-Tuning Llama 3 on a Custom Dataset: Training LLM for a RAG Q&A Use Case on a Single GPU
Duration: 33:24
Viewed: 2,809
Published: 01-07-2024
Source: YouTube

Are you happy with your Large Language Model's (LLM) performance on a specific task? If not, fine-tuning might be the answer. Even a smaller, simpler model can outperform a larger one if it is fine-tuned correctly for a specific task. In this video, you'll learn how to fine-tune Llama 3 on a custom dataset.

Model on HF: https://huggingface.co/curiousily/Llama-3-8B-Instruct-Finance-RAG
Philipp Schmid Post: https://www.philschmid.de/fine-tune-llms-in-2024-with-trl
Follow me on X: https://twitter.com/venelin_valkov
AI Bootcamp: https://www.mlexpert.io/bootcamp
Discord: https://discord.gg/UaNPxVD6tv
Subscribe: http://bit.ly/venelin-subscribe
GitHub repository: https://github.com/curiousily/AI-Bootcamp

👍 Don't forget to like, comment, and subscribe for more tutorials!

00:00 - Why fine-tuning?
00:25 - Text tutorial on MLExpert.io
00:53 - Fine-tuning process overview
02:19 - Dataset
02:56 - Llama 3 8B Instruct
03:53 - Google Colab setup
05:30 - Loading model and tokenizer
08:18 - Create custom dataset
14:30 - Establish baseline
17:37 - Training on completions
19:04 - LoRA setup
22:25 - Training
26:42 - Load model and push to HuggingFace hub
28:43 - Evaluation (comparing vs the base model)
32:50 - Conclusion

Join this channel to get access to the perks and support my work: https://www.youtube.com/channel/UCoW_WzQNJVAjxo4osNAxd_g/join

#llama3 #llm #rag #finetuning #promptengineering #chatgpt #chatbot #langchain #gpt4




