To use the model `bhavinjawade/SOLAR-10B-OrcaDPO-Jawade`, follow these steps:

1. **Load the Model and Tokenizer**
Load the model and tokenizer from the Hugging Face Hub.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("bhavinjawade/SOLAR-10B-OrcaDPO-Jawade")
tokenizer = AutoTokenizer.from_pretrained("bhavinjawade/SOLAR-10B-OrcaDPO-Jawade")
```
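The checkpoint has roughly 10.7B parameters, so loading it in full precision needs on the order of 43 GB of memory. As a minimal sketch, assuming a CUDA GPU and the `accelerate` package are available, you can load the weights in half precision instead (`torch_dtype` and `device_map` are standard `from_pretrained` arguments, not specific to this model):

```python
import torch
from transformers import AutoModelForCausalLM

# Assumes a CUDA GPU and the `accelerate` package: loads weights in
# float16 (~21 GB) and lets accelerate place them across available devices.
model = AutoModelForCausalLM.from_pretrained(
    "bhavinjawade/SOLAR-10B-OrcaDPO-Jawade",
    torch_dtype=torch.float16,
    device_map="auto",
)
```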
2. **Format the Prompt**
Format the chat input as a list of messages, each with a role (`'system'` or `'user'`) and its content, then render it with the tokenizer's chat template.

```python
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "Is the universe real? Or is it a simulation? What's your opinion?"}
]
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)
```
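Before generating, it can help to inspect the rendered prompt: with `tokenize=False`, `apply_chat_template` returns a plain string, so a simple `print` shows exactly what the model will see (the role markers it contains are defined by the model's chat template):

```python
# Inspect the rendered prompt string; the exact role markers
# are determined by the model's chat template.
print(prompt)
```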
3. **Create a Pipeline**
Set up a text-generation pipeline with the loaded model and tokenizer.

```python
import transformers

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer
)
```
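If the model was loaded with `device_map="auto"`, the pipeline reuses that placement automatically; otherwise you can pin it to a specific device. A minimal sketch, assuming a single CUDA GPU (`device` is a standard `pipeline` argument, not specific to this model):

```python
# Pin the pipeline to the first CUDA device; skip `device` if the
# model was already placed with device_map="auto".
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device=0,  # assumes a CUDA GPU; use -1 to stay on CPU
)
```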
4. **Generate Text**
Use the pipeline to generate a sequence of text from the prompt. You can adjust sampling parameters such as `temperature` and `top_p` to vary the style of the responses.

```python
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_length=200,
)
print(sequences[0]['generated_text'])
```
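By default, the text-generation pipeline echoes the prompt at the start of `generated_text`. If you only want the model's reply, `return_full_text=False` strips the prompt, and `max_new_tokens` bounds only the generated portion rather than the total sequence length (both are standard pipeline arguments, not specific to this model):

```python
# Return only the model's reply, bounding the number of newly
# generated tokens instead of the total sequence length.
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    max_new_tokens=200,
    return_full_text=False,
)
print(sequences[0]['generated_text'])
```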
With this setup, you can use the **bhavinjawade/SOLAR-10B-OrcaDPO-Jawade** model to generate responses to chat inputs.