Omid-sar committed
Commit f12b69d · 1 Parent(s): 95d7115
Files changed (1): README.md +26 −1

README.md CHANGED
@@ -18,5 +18,30 @@ The following `bitsandbytes` quantization config was used during training:
  - bnb_4bit_compute_dtype: float16
  ### Framework versions

- - PEFT 0.6.0.dev0
+ Fine-tuning Llama-2-7b using QLoRA in French on Google Colab
+
+ ## Goal
+
+ The goal of this project is to adapt the Llama-2-7b model, which initially may have little proficiency in French, so that it understands and responds accurately to queries in French. The model is fine-tuned on a dataset of French novels, allowing it to pick up the nuances, syntax, and semantics of the language. By combining the PEFT library from the Hugging Face ecosystem with QLoRA for memory-efficient fine-tuning on the single T4 GPU provided by Google Colab, the project aims to produce a chatbot that can effectively answer questions posed in French.
+
+ ## Overview
+
+ The project proceeds in several steps: setting up the environment, loading the dataset and model, configuring QLoRA and the training parameters, training the model, and finally testing the fine-tuned model and pushing it to Hugging Face.
+
+ ## Features
+
+ - **Dataset Loading**: Load and process a dataset of French novels with the Hugging Face `datasets` library.
+ - **Model Quantization**: Quantize the base Llama-2-7b model to 4-bit with `bitsandbytes`.
+ - **QLoRA Configuration**: Apply a QLoRA configuration for memory-efficient fine-tuning via the PEFT library.
+ - **Training**: Use the `SFTTrainer` from the TRL library for instruction-based fine-tuning.
+ - **Testing and Pushing to Hugging Face**: Test the fine-tuned model and push it to the Hugging Face Hub.
+
+ ## Prerequisites
+
+ - Google Colab with a T4 GPU
+ - Python libraries: trl, transformers, accelerate, peft, datasets, bitsandbytes, einops
+
+ -
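The libraries listed under Prerequisites would typically be installed in a Colab cell along these lines (a sketch with unpinned versions, not the commit's exact install cell):

```shell
pip install -q trl transformers accelerate peft datasets bitsandbytes einops
```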