This repository contains instructions, examples, and tutorials for getting started with deep learning using PyTorch and Hugging Face libraries such as transformers and datasets:
- Fine-tune FLAN-T5 XL/XXL using DeepSpeed & Hugging Face Transformers
- Fine-tune FLAN-T5 for chat & dialogue summarization
- Fine-tune Falcon 180B with DeepSpeed ZeRO, LoRA & Flash Attention
- Getting started with Transformers and TPU using PyTorch
- Extended Guide: Instruction-tune Llama 2
- Quantize open LLMs using optimum and GPTQ
- Fine-tune Embedding models for RAG
- Fine-tune LLMs in 2024 with TRL
- Fine-tune LLMs in 2025
- Fine-tune Multimodal LLMs with TRL
- RLHF in 2024 with DPO & Hugging Face
- Fine-tune Gemma with ChatML
- Efficiently scale distributed training with FSDP & Q-LoRA
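Several of the guides above (LoRA, Q-LoRA) build on the same low-rank adaptation idea: freeze the pretrained weights and learn a small low-rank update alongside them. As a rough illustration of the concept, here is a minimal sketch of a LoRA-style linear layer in plain PyTorch; the tutorials themselves rely on the Hugging Face peft library rather than hand-rolled code, and the class and parameter names below are illustrative only.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: y = W x + (alpha / r) * B A x.

    The base linear layer W is frozen; only the low-rank factors
    A (r x in_features) and B (out_features x r) are trained.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        # Freeze the pretrained weights.
        for p in self.base.parameters():
            p.requires_grad = False
        # A starts small and random, B starts at zero, so the
        # wrapped layer initially behaves exactly like the base layer.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling
```

Because B is initialized to zero, wrapping a layer does not change the model's outputs until training begins, and only the small A and B matrices receive gradients.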