### Overview

This model is an instruction-finetuned version of the `upstage/SOLAR-10.7B-Instruct-v1.0` model, trained on Intel's Orca DPO pairs dataset using LoRA.
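
The actual training script and hyperparameters are not published in this card. As a rough illustration of the recipe described above, here is a minimal, hypothetical sketch of a LoRA-based DPO fine-tune of the base model, assuming TRL's `DPOTrainer` (0.7-era API, where `beta` is a constructor argument) and the `Intel/orca_dpo_pairs` dataset; every hyperparameter shown is illustrative, not the values actually used.

```python
# Hypothetical training sketch -- this card does not publish the actual
# script, dataset identifier, or hyperparameters used for this model.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_model = "upstage/SOLAR-10.7B-Instruct-v1.0"
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Assumed preference dataset with question / chosen / rejected columns.
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.rename_column("question", "prompt")

# Illustrative LoRA settings; the actual rank and target modules are unknown.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # commonly targeted attention projections
    task_type="CAUSAL_LM",
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a PEFT config, the frozen base model serves as the reference
    args=TrainingArguments(
        output_dir="solar-10b-orca-dpo",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=5e-5,
        remove_unused_columns=False,  # DPOTrainer consumes the raw text columns
    ),
    beta=0.1,  # standard DPO temperature
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```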

## How to Use This Model

To use the model `bhavinjawade/SOLAR-10B-OrcaDPO-Jawade`, follow these steps:
1. **Import and Load the Model and Tokenizer**
   Begin by importing the model and tokenizer classes, then load both with the `from_pretrained` method.

   ```python
   from transformers import AutoModelForCausalLM, AutoTokenizer

   # Download the fine-tuned weights and the matching tokenizer from the Hub.
   model = AutoModelForCausalLM.from_pretrained("bhavinjawade/SOLAR-10B-OrcaDPO-Jawade")
   tokenizer = AutoTokenizer.from_pretrained("bhavinjawade/SOLAR-10B-OrcaDPO-Jawade")
   ```
2. **Format the Prompt**
   Format the chat input as a list of messages, each with a role (`system` or `user`) and content, then render it with the tokenizer's chat template.

   ```python
   messages = [
       {"role": "system", "content": "You are a helpful assistant chatbot."},
       {"role": "user", "content": "Is the universe real, or is it a simulation? What's your opinion?"}
   ]
   # Render the messages into a single prompt string, appending the
   # generation prompt so the model knows to respond as the assistant.
   prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
   ```
3. **Create a Pipeline**
   Set up a text-generation pipeline with the loaded model and tokenizer. Note that `transformers` itself must be imported to call the `pipeline` factory.

   ```python
   import transformers

   pipeline = transformers.pipeline(
       "text-generation",
       model=model,
       tokenizer=tokenizer
   )
   ```
4. **Generate Text**
   Use the pipeline to generate text from the prompt. You can adjust parameters such as `temperature` and `top_p` to trade coherence against diversity.

   ```python
   sequences = pipeline(
       prompt,
       do_sample=True,        # sample instead of greedy decoding
       temperature=0.7,       # lower values make output more deterministic
       top_p=0.9,             # nucleus sampling cutoff
       num_return_sequences=1,
       max_length=200,        # total length cap, including the prompt tokens
   )
   print(sequences[0]['generated_text'])
   ```
This setup lets you use the **bhavinjawade/SOLAR-10B-OrcaDPO-Jawade** model to generate responses to chat inputs. A consolidated version of these steps is sketched below.
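
For convenience, here is the same flow as a single script. The `torch.float16` dtype and `device_map="auto"` placement are assumptions made to fit the ~10.7B-parameter model on a single GPU (they require `torch` and the `accelerate` package), not requirements stated elsewhere in this card; adjust them for your hardware.

```python
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bhavinjawade/SOLAR-10B-OrcaDPO-Jawade"

# Load in float16 and let accelerate place the weights automatically;
# full float32 weights for a 10.7B model need roughly 43 GB of memory.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)

messages = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "Is the universe real, or is it a simulation? What's your opinion?"},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    max_length=200,
)
print(sequences[0]["generated_text"])
```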
### License

- **Type**: MIT License
- **Details**: This license permits reuse, modification, and distribution for both private and commercial purposes under the terms of the MIT License.