jpalmer18 committed · Commit 8a43ae1 · verified · 1 Parent(s): dd1494f

End of training

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 base_model: meta-llama/Llama-3.1-8B-Instruct
 library_name: transformers
-model_name: llama3.1-8B-instruct-ft-extract-2025-01-02-14
+model_name: llama3.1-8B-instruct-ft-extract-2025-01-03
 tags:
 - generated_from_trainer
 - trl
@@ -9,7 +9,7 @@ tags:
 licence: license
 ---
 
-# Model Card for llama3.1-8B-instruct-ft-extract-2025-01-02-14
+# Model Card for llama3.1-8B-instruct-ft-extract-2025-01-03
 
 This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
 It has been trained using [TRL](https://github.com/huggingface/trl).
@@ -20,7 +20,7 @@ It has been trained using [TRL](https://github.com/huggingface/trl).
 from transformers import pipeline
 
 question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
-generator = pipeline("text-generation", model="jpalmer18/llama3.1-8B-instruct-ft-extract-2025-01-02-14", device="cuda")
+generator = pipeline("text-generation", model="jpalmer18/llama3.1-8B-instruct-ft-extract-2025-01-03", device="cuda")
 output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
 print(output["generated_text"])
 ```