Fine-tuned LLaMA 3.2 model for customer support, built with Unsloth optimization and trained on Bitext's 27K-example customer service dataset. Optimized to run on M1 Macs.
- 2x faster training with Unsloth optimization
- LoRA fine-tuning
- Memory-efficient for M1 Macs
- Automated customer support responses
- Order cancellation and status handling
LLaMA-Customer-Support-Assistant/
├── fine_tuning.ipynb # Training notebook
├── test_model.ipynb # Testing notebook
├── customer_support_model/ # Model outputs
├── dataset/ # CSV dataset
└── README.md
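The dataset/ folder holds the Bitext CSV. Below is a minimal sketch of how it can be loaded and formatted into training text; the file name customer_support.csv is a placeholder, and the instruction/response column names are assumed from the public Bitext dataset, so check them against the actual CSV (fine_tuning.ipynb may instead apply the Llama 3.2 chat template via the tokenizer).

```python
import pandas as pd

# Placeholder file name; adjust to the actual CSV in dataset/.
# "instruction" and "response" columns are assumed from the public Bitext dataset.
df = pd.read_csv("dataset/customer_support.csv")

def to_chat_text(row):
    # Simple instruction/response template; illustrative only.
    return (
        "### Instruction:\n" + row["instruction"].strip() + "\n\n"
        "### Response:\n" + row["response"].strip()
    )

df["text"] = df.apply(to_chat_text, axis=1)
print(df["text"].iloc[0][:300])
```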
# Create conda environment
conda create -n llm_env python=3.10
conda activate llm_env
# Install PyTorch for M1
conda install pytorch torchvision torchaudio -c pytorch
# Install dependencies
pip install transformers datasets accelerate peft
pip install unsloth
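After installing, it is worth confirming that PyTorch can see the Apple Silicon GPU through the MPS backend:

```python
import torch

# Checks that the Metal (MPS) backend is built into this PyTorch install
# and available on this Mac; falls back to CPU otherwise.
print("MPS built:", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print("Using device:", device)
```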
- Start Jupyter:
jupyter notebook
- Execute notebooks:
  - Run fine_tuning.ipynb for training
  - Run test_model.ipynb for testing
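For a quick sanity check outside the notebooks, the fine-tuned model can also be loaded in a plain script. This is a minimal sketch that assumes fine_tuning.ipynb saved a LoRA (PEFT) adapter and tokenizer to customer_support_model/; the prompt, generation settings, and save location are assumptions, so adjust them to match the notebook.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Assumes customer_support_model/ contains the saved LoRA adapter and tokenizer.
# If the tokenizer was not saved there, load it from the base model instead.
model_dir = "customer_support_model"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoPeftModelForCausalLM.from_pretrained(model_dir, torch_dtype=torch.float16)

device = "mps" if torch.backends.mps.is_available() else "cpu"
model.to(device)
model.eval()

# Example customer-support query; purely illustrative.
messages = [{"role": "user", "content": "I want to cancel my order, what do I need to do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=128)

# Print only the newly generated tokens.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```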
- Base Model: unsloth/Llama-3.2-3B-Instruct
- Optimization: Unsloth + MPS backend
- Fine-tuning: LoRA
- Training Parameters (see the sketch after this list):
- Batch size: 1
- Learning rate: 1e-4
- Epochs: 1
- Max sequence length: 256
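The parameters above map onto an Unsloth + TRL setup roughly as follows. This is a minimal sketch, not the exact contents of fine_tuning.ipynb: the LoRA rank, alpha, and target modules are illustrative, the CSV path reuses the placeholder from the dataset sketch, and newer TRL releases pass dataset_text_field / max_seq_length through SFTConfig instead of the trainer constructor.

```python
import pandas as pd
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model through Unsloth with the max sequence length listed above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    max_seq_length=256,
)

# Attach LoRA adapters; rank, alpha, and target modules here are illustrative.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Build the training set from the CSV (placeholder file/column names as above).
df = pd.read_csv("dataset/customer_support.csv")
df["text"] = "### Instruction:\n" + df["instruction"] + "\n\n### Response:\n" + df["response"]
train_dataset = Dataset.from_pandas(df[["text"]])

# Train with the parameters listed above (older TRL constructor API shown).
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=256,
    args=TrainingArguments(
        per_device_train_batch_size=1,
        learning_rate=1e-4,
        num_train_epochs=1,
        output_dir="customer_support_model",
    ),
)
trainer.train()
```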
- M1/M2 Mac
- 8GB+ RAM
- Python 3.10+
- PyTorch with MPS support
MIT License
@misc{bitext2023customer,
title={Customer Support LLM Training Dataset},
author={Bitext},
year={2023},
publisher={GitHub},
url={https://github.com/bitext/customer-support-llm-chatbot-training-dataset}
}