A simple custom QLoRA implementation for fine-tuning a language model (LLM) with basic tools such as PyTorch and Bitsandbytes, completely decoupled from Hugging Face.
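The "Q" in QLoRA refers to storing the frozen base weights in 4-bit precision. A minimal sketch of the idea in plain PyTorch is blockwise absmax quantization: split the weight into blocks, keep one float scale per block, and round each value to a 4-bit grid. This is a simplified stand-in for what bitsandbytes does (its NF4 format uses a non-uniform code rather than the linear grid below); the function names and block size here are illustrative, not part of any library API.

```python
import torch

def quantize_4bit(w: torch.Tensor, block_size: int = 64):
    """Blockwise absmax 4-bit quantization (simplified; real NF4 in
    bitsandbytes uses a non-uniform codebook, not this linear grid)."""
    flat = w.reshape(-1, block_size)
    # one scale per block; clamp avoids division by zero for all-zero blocks
    absmax = flat.abs().amax(dim=1, keepdim=True).clamp(min=1e-8)
    # map each value onto 15 signed integer levels in [-7, 7]
    q = torch.clamp(torch.round(flat / absmax * 7), -7, 7).to(torch.int8)
    return q, absmax

def dequantize_4bit(q: torch.Tensor, absmax: torch.Tensor, shape):
    # invert the mapping: integer level -> block-scaled float
    return (q.float() / 7 * absmax).reshape(shape)

w = torch.randn(128, 64)
q, scale = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale, w.shape)
```

The per-block scale is why 4-bit storage works at all: the worst-case rounding error within a block is bounded by that block's absmax, so outliers in one block do not destroy precision elsewhere.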
Updated Jan 29, 2024 - Python
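The other half of the QLoRA recipe described above is the LoRA adapter: the (quantized) base weights stay frozen while two small low-rank matrices receive all gradients. A hedged pure-PyTorch sketch, using a plain frozen `nn.Linear` in place of a 4-bit bitsandbytes layer for clarity (the class name, `r`, and `alpha` defaults below are illustrative assumptions, not a library API):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank (LoRA) update.

    In a full QLoRA setup the base weight would be a 4-bit quantized
    bitsandbytes layer; a plain frozen nn.Linear stands in for it here.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        # A is small random, B is zero, so training starts at the base model
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # y = W x + (alpha / r) * B A x  -- only A and B receive gradients
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

base = nn.Linear(64, 64)
layer = LoRALinear(base, r=8)
x = torch.randn(2, 64)
out = layer(x)
```

Initializing `lora_B` to zero is the standard LoRA choice: the adapted layer reproduces the base layer exactly at step zero, so fine-tuning starts from the pre-trained behavior rather than a perturbed one.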
🎋🌿🌟 Instruction fine-tuning of Meta Llama 3.2-3B Instruct on Kannada conversations 🌟🌿🎋 Tailoring the model to follow instructions in Kannada, enhancing its ability to generate relevant, context-aware responses from conversational inputs. 🚀✨ Fine-tuned on the Kannada Instruct dataset. Happy fine-tuning! 🎇🎉
Text generation with the pre-trained NousResearch Llama-2-7b-chat-hf model, using the guanaco-llama2-1k dataset.