AhmedSSoliman
committed on
Commit 4d606c7
Parent(s): cdf0b51
Update README.md
README.md CHANGED
@@ -17,24 +17,12 @@ widget:
 ---
 
 # LlaMa2-CodeGen
-This model is **LlaMa-2 7b** fine-tuned on the **CodeSearchNet dataset
+This model is **LlaMa-2 7b** ([Llama-2](https://huggingface.co/meta-llama/Llama-2-7b)) fine-tuned on the **CodeSearchNet dataset** using the **QLoRA** method with the [PEFT](https://github.com/huggingface/peft) library.
 
 # Model Trained on Google Colab Pro Using AutoTrain, PEFT and QLoRA
 
 
 
-
-
-## Llama-2 description
-
-[Llama-2](https://huggingface.co/meta-llama/Llama-2-7b)
-
-Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters.
-Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
-
-
-
-
 ### Example
 ```py
 
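The added line describes the setup this commit documents: QLoRA fine-tuning of Llama-2 7b with the PEFT library. The repository's own `### Example` code is truncated in this hunk, so as a rough sketch of what such a configuration typically looks like, the quantization settings and adapter hyperparameters below are illustrative assumptions, not values taken from this repository:

```python
# Hedged sketch of a typical QLoRA configuration with PEFT.
# All hyperparameter values here are assumptions for illustration,
# not the settings actually used to train LlaMa2-CodeGen.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization of the frozen base model: the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Low-rank adapters trained on top of the quantized base model.
lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # common LLaMA attention projections
    task_type="CAUSAL_LM",
)
```

Both configs would then be passed to `from_pretrained` and `get_peft_model` respectively; only the small adapter weights are updated during training, which is what makes fine-tuning a 7b model feasible on Colab-class hardware.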