Update README.md
README.md CHANGED
@@ -29,7 +29,7 @@ We use state-of-the-art [Language Model Evaluation Harness](https://github.com/E

 ### Helpful links

 * Model license: Llama 2 Community License Agreement
-* Basic usage: [notebook](assets/
+* Basic usage: [notebook](assets/basic_inference_llama_2_dolphin.ipynb)
 * Finetuning code: [script](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/llama-2/dolphin/sft-llama-2-70b-dolphin-peft.py)
 * Loss curves: [plot](https://huggingface.co/dfurman/llama-2-70b-dolphin-peft#finetuning-description)
 * Runtime stats: [table](https://huggingface.co/dfurman/llama-2-70b-dolphin-peft#runtime-tests)

@@ -152,7 +152,7 @@ While great efforts have been taken to clean the pretraining data, it is possibl

 ## How to use

-Basic usage: [notebook](assets/
+Basic usage: [notebook](assets/basic_inference_llama_2_dolphin.ipynb)

 Install and import the package dependencies:
