Text Generation
PEFT
Safetensors
English
instruction-tuning
qlora
code-llama
conversational
mingyue0101 committed
Commit 06ecae2 · verified · 1 Parent(s): 04d3a3f

Update README.md

update the model name.

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -14,7 +14,7 @@ datasets:
 license: apache-2.0
 ---
 
-# Model Card for super-cool-instruct
+# Model Card for codellama-7b-matplotlib-assistant
 
 This model is a fine-tuned version of `codellama/CodeLlama-7b-Instruct-hf` designed to enhance instruction-following capabilities. It was developed as part of a Master's thesis project.
 
@@ -22,7 +22,7 @@ This model is a fine-tuned version of `codellama/CodeLlama-7b-Instruct-hf` desig
 
 ### Model Description
 
-The `super-cool-instruct` model is a large language model fine-tuned using the QLoRA (4-bit Quantization + LoRA) technique. The goal of this model was to adapt the base CodeLlama model to better follow user instructions while maintaining its coding and reasoning capabilities.
+The `codellama-7b-matplotlib-assistant` model is a large language model fine-tuned using the QLoRA (4-bit Quantization + LoRA) technique. The goal of this model was to adapt the base CodeLlama model to better follow user instructions while maintaining its coding and reasoning capabilities.
 
 - **Developed by:** mingyue0101
 - **Model type:** Causal Language Model (Fine-tuned with PEFT/LoRA)
@@ -32,7 +32,7 @@ The `codellama-7b-matplotlib-assistant` model is a large language model fine-tun
 
 ### Model Sources
 
-- **Repository:** https://huggingface.co/mingyue0101/super-cool-instruct
+- **Repository:** https://huggingface.co/mingyue0101/codellama-7b-matplotlib-assistant
 - **Dataset:** https://huggingface.co/datasets/mingyue0101/prompt_code_parquet
 
 ## Uses
@@ -63,7 +63,7 @@ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
 from peft import PeftModel
 
 model_id = "codellama/CodeLlama-7b-Instruct-hf"
-peft_model_id = "mingyue0101/super-cool-instruct"
+peft_model_id = "mingyue0101/codellama-7b-matplotlib-assistant"
 
 # Load 4-bit configuration
 bnb_config = BitsAndBytesConfig(
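The usage snippet visible in the diff is cut off by the diff context (it ends mid-call at `BitsAndBytesConfig(`). A minimal, self-contained sketch of what loading the adapter and formatting a prompt could look like is below. Note the `nf4`/`bfloat16` quantization settings and the Llama-2-style chat template are assumptions, not taken from this commit; verify them against the repository's full README and the tokenizer's chat template.

```python
def build_prompt(user_message, system=None):
    # CodeLlama-Instruct models follow the Llama-2 chat template:
    # [INST] ... [/INST], with an optional <<SYS>> block in the first turn.
    if system:
        return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user_message} [/INST]"
    return f"<s>[INST] {user_message} [/INST]"


def load_assistant(adapter_id="mingyue0101/codellama-7b-matplotlib-assistant"):
    # Imports are local so the file can be inspected without transformers/peft
    # installed; actually loading the 7B base model needs a CUDA GPU.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import PeftModel

    base_id = "codellama/CodeLlama-7b-Instruct-hf"
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",              # assumed quant type
        bnb_4bit_compute_dtype=torch.bfloat16,  # assumed compute dtype
    )
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(
        base_id, quantization_config=bnb_config, device_map="auto"
    )
    # Attach the LoRA adapter weights on top of the quantized base model.
    model = PeftModel.from_pretrained(model, adapter_id)
    return model, tokenizer


prompt = build_prompt("Plot y = x**2 with matplotlib.")
```

Because QLoRA stores only the small LoRA adapter, the full base model is downloaded separately and quantized at load time; the adapter repository itself stays a few hundred megabytes.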