vishanoberoi committed
Commit bce7ec0 (verified)
Parent: a3c09ea

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -10,6 +10,9 @@ tags:
 
 This model is a fine-tuned version of Llama-2-Chat-7b on company-specific question-answers data. It is designed for efficient performance while maintaining high-quality output, suitable for conversational AI applications.
 
+## Full Tutorial on Cheap Finetuning
+https://github.com/VishanOberoi/FineTuningForTheGPUPoor?tab=readme-ov-file
+
 ## Model Details
 It was finetuned using QLORA and PEFT. After fine-tuning, the adapters were merged with the base model and then quantized to GGUF.
 - **Developed by:** Vishan Oberoi and Dev Chandan.
@@ -30,8 +33,7 @@ It was finetuned using QLORA and PEFT. After fine-tuning, the adapters were merg
 
 
 This model is optimized for direct use in conversational AI, particularly for generating responses based on company-specific data. It can be utilized effectively in customer service bots, FAQ bots, and other applications where accurate and contextually relevant answers are required.
-## Full Tutorial on Cheap Finetuning
-https://github.com/VishanOberoi/FineTuningForTheGPUPoor?tab=readme-ov-file
+
 
 #### Example with `ctransformers`:
 
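Note: the second hunk ends at the `#### Example with ctransformers:` heading, so the example body itself is not part of this diff. As a rough sketch of what loading a GGUF checkpoint like this one with `ctransformers` typically looks like (the repo id, file name, prompt, and generation settings below are placeholders, not values taken from the commit):

```python
from ctransformers import AutoModelForCausalLM

# Placeholders: take the real repo id and GGUF file name from the model card;
# they are not shown in this commit.
REPO_OR_PATH = "path-or-repo-id-of-this-model"
GGUF_FILE = "model.Q4_K_M.gguf"

llm = AutoModelForCausalLM.from_pretrained(
    REPO_OR_PATH,
    model_file=GGUF_FILE,
    model_type="llama",   # base architecture is Llama-2
    gpu_layers=0,         # set > 0 to offload that many layers to a GPU build
)

# Llama-2-Chat style prompt; adjust to whatever template the model card specifies.
prompt = "[INST] What services does the company offer? [/INST]"
print(llm(prompt, max_new_tokens=128, temperature=0.7))
```

With `gpu_layers=0` inference runs entirely on CPU; raising it offloads layers to the GPU when a CUDA/Metal build of `ctransformers` is installed.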
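Similarly, the Model Details line about merging the QLoRA/PEFT adapters into the base model before GGUF quantization is only described in prose in this hunk. A minimal sketch of that merge step with `peft` and `transformers`, assuming a Llama-2-7b-chat base checkpoint and a local adapter directory (both hypothetical, not taken from the commit), might look like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "meta-llama/Llama-2-7b-chat-hf"   # assumed base checkpoint
ADAPTER_DIR = "./qlora-adapters"               # hypothetical path to the trained LoRA adapters

# Load the base model, attach the LoRA adapters, and fold them into the base weights.
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype="auto")
model = PeftModel.from_pretrained(base, ADAPTER_DIR)
merged = model.merge_and_unload()              # bakes the adapter weights into the base model

# Save the merged checkpoint; conversion and quantization to GGUF happen afterwards,
# typically with llama.cpp's tooling.
merged.save_pretrained("./merged-model")
AutoTokenizer.from_pretrained(BASE_MODEL).save_pretrained("./merged-model")
```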