Update README.md
README.md (changed):

````diff
--- a/README.md
+++ b/README.md
@@ -13,7 +13,7 @@ datasets:
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/
+<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
 <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
@@ -78,7 +78,7 @@ Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](http
 
 For further support, and discussions on these models and AI in general, join us at:
 
-[TheBloke AI's Discord server](https://discord.gg/
+[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
 
 ## Thanks, and how to contribute.
 
@@ -88,14 +88,14 @@ I've had a lot of people ask if they can contribute. I enjoy providing models an
 
 If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
 
-Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits
+Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
 
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Patreon special mentions**: Aemon Algiz
+**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
 
-Thank you to all my generous patrons and donaters
+Thank you to all my generous patrons and donaters!
 <!-- footer end -->
 
 # Original model card: Chaoyi Wi's PMC_LLAMA 7B
@@ -104,8 +104,8 @@ This repo contains PMC_LLaMA_7B, which is LLaMA-7b finetuned on the PMC papers i
 
 The model was trained with the following hyperparameters:
 
-* Epochs: 5
-* Batch size: 128
+* Epochs: 5
+* Batch size: 128
 * Cutoff length: 512
 * Learning rate: 2e-5
 
@@ -118,13 +118,13 @@ import transformers
 import torch
 tokenizer = transformers.LlamaTokenizer.from_pretrained('chaoyi-wu/PMC_LLAMA_7B')
 model = transformers.LlamaForCausalLM.from_pretrained('chaoyi-wu/PMC_LLAMA_7B')
-sentence = 'Hello, doctor'
+sentence = 'Hello, doctor'
 batch = tokenizer(
 sentence,
-return_tensors="pt",
+return_tensors="pt",
 add_special_tokens=False
 )
 with torch.no_grad():
 generated = model.generate(inputs = batch["input_ids"], max_length=200, do_sample=True, top_k=50)
 print('model predict: ',tokenizer.decode(generated[0]))
-```
+```
````
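The last two hunks only re-touch the original model card's text, but they contain the README's full inference example. For convenience, here is that snippet in directly runnable form: a minimal sketch assuming a `transformers` release with LLaMA support and enough memory for the full-precision 7B checkpoint; the model ID, prompt, and generation settings are exactly those shown in the diff.

```python
import torch
import transformers

# Load the PMC_LLaMA_7B tokenizer and model weights from the Hugging Face Hub.
tokenizer = transformers.LlamaTokenizer.from_pretrained('chaoyi-wu/PMC_LLAMA_7B')
model = transformers.LlamaForCausalLM.from_pretrained('chaoyi-wu/PMC_LLAMA_7B')

# Encode the prompt without adding special tokens, as the README does.
sentence = 'Hello, doctor'
batch = tokenizer(
    sentence,
    return_tensors="pt",
    add_special_tokens=False,
)

# Sample up to 200 tokens with top-k sampling; gradients are not needed.
with torch.no_grad():
    generated = model.generate(
        inputs=batch["input_ids"],
        max_length=200,
        do_sample=True,
        top_k=50,
    )
print('model predict: ', tokenizer.decode(generated[0]))
```

The hyperparameters listed in the adjacent hunk (5 epochs, batch size 128, cutoff length 512, learning rate 2e-5) describe how the checkpoint was fine-tuned; they are not needed at inference time.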