hamishivi committed
Commit c0631a3
1 Parent(s): 44caa36

Update README.md

Files changed (1)
1. README.md +15 -6
README.md CHANGED
@@ -1,5 +1,4 @@
 ---
-license: cc-by-sa-3.0
 datasets:
 - databricks/databricks-dolly-15k
 language:
@@ -13,6 +12,8 @@ This model is a 7B LLaMa model finetuned on the Dolly dataset. *please note this
 This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](arxiv.org/abs/xxxx).
 The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
 
+This model is licensed under a modified LLaMa license; see License.txt for details.
+
 ## Usage
 
 We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
@@ -48,7 +49,7 @@ Here is the performance of this model across benchmarks explored in our paper [H
 | 0.380 | 0.358 | 0.050 | 0.070 | 0.272 | 0.244 | 43.569 | 8.718 | 0.111 | 0.221 | 12.67 | 20.7 |
 
 
-If you use this model, please cite our work and the original dataset:
+If you use this model, please cite our work, the LLaMa paper, and the original dataset:
 
 ```
 @article{camelevaluation,
@@ -58,6 +59,17 @@ If you use this model, please cite our work and the original dataset:
 }
 ```
 
+```
+@misc{touvron2023llama,
+  title={LLaMA: Open and Efficient Foundation Language Models},
+  author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
+  year={2023},
+  eprint={2302.13971},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL}
+}
+```
+
 ```
 @misc{dolly,
   author = {Databricks},
@@ -68,7 +80,4 @@ If you use this model, please cite our work and the original dataset:
   howpublished = {Blog post},
   url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
 }
-```
-
-
-
+```
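
For context on the Usage section touched by this diff: it assumes a LLaMa checkpoint already converted to Hugging Face format. A minimal loading sketch under that assumption follows; the local path and the `<|user|>`/`<|assistant|>` prompt markers are illustrative guesses, not details confirmed by this commit.

```python
# Minimal sketch, assuming the finetuned weights live at a hypothetical
# local path and the base LLaMa model was converted to HF format beforehand.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-7b-dolly"  # hypothetical; not given in this diff

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)

# The chat markers below are an assumed prompt format; check the
# open-instruct repo for the format actually used during finetuning.
prompt = "<|user|>\nWhat is instruction tuning?\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```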