pantelnm committed on
Commit
c1c29aa
1 Parent(s): 0c64cdd

Update README.md

Files changed (1): README.md (+4 -8)
README.md CHANGED

@@ -1,6 +1,6 @@
 ---
 library_name: peft
-base_model: unsloth/llama-3-8b-bnb-4bit
+base_model: meta-llama/Meta-Llama-3-8B
 license: apache-2.0
 datasets:
 - yahma/alpaca-cleaned
@@ -27,8 +27,6 @@ Llama-3-8b-Alpaca-Finetuned is a state-of-the-art NLP model finetuned on the Lla
 
 
 - **Developed by:** Meta
-- **Funded by [optional]:** None
-- **Shared by [optional]:** None
 - **Model type:** Llama 3 8b
 - **Language(s) (NLP):** English
 - **License:** Apache License 2.0
@@ -38,9 +36,7 @@ Llama-3-8b-Alpaca-Finetuned is a state-of-the-art NLP model finetuned on the Lla
 
 <!-- Provide the basic links for the model. -->
 
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]
+- **Repository:** pantelnm/Llama-3-8b-Alpaca-Finetuned-GGUF
 
 ## Uses
 
@@ -95,8 +91,8 @@ Use the code below to get started with the model.
 ```py
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-tokenizer = AutoTokenizer.from_pretrained("openai/llama-3-8b-alpaca-finetuned")
-model = AutoModelForCausalLM.from_pretrained("openai/llama-3-8b-alpaca-finetuned")
+tokenizer = AutoTokenizer.from_pretrained("pantelnm/Llama-3-8b-Alpaca-Finetuned-GGUF")
+model = AutoModelForCausalLM.from_pretrained("pantelnm/Llama-3-8b-Alpaca-Finetuned-GGUF")
 
 input_text = "Provide a summary of the latest research in AI."
 inputs = tokenizer(input_text, return_tensors="pt")
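For reference, a minimal end-to-end sketch of the updated getting-started snippet, assuming the repository id introduced in this commit (`pantelnm/Llama-3-8b-Alpaca-Finetuned-GGUF`) exposes transformers-loadable weights; the card's front matter also declares a `peft` adapter on `meta-llama/Meta-Llama-3-8B`, so a GGUF-only repo would instead need llama.cpp-style tooling. The hunk above ends after tokenization, so the generation and decoding lines below are illustrative additions, not part of the commit.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from this commit's diff; assumes it hosts
# transformers-compatible (non-GGUF) weights.
repo_id = "pantelnm/Llama-3-8b-Alpaca-Finetuned-GGUF"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

input_text = "Provide a summary of the latest research in AI."
inputs = tokenizer(input_text, return_tensors="pt")

# Generate a completion and decode it back to text.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```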