macadeliccc committed on
Commit
4751885
1 Parent(s): d0ecaa0

Added code examples that correspond to each prompt format


Output from single-turn demo:

GPT4 Correct User: Hello, how are you? GPT4 Correct Assistant: I'm doing great, thank you for asking! How can I assist you today?
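For reference, a minimal sketch of how this single-turn prompt is assembled and tokenized; the checkpoint name and `return_tensors="pt"` usage are taken from the README code added below, and generation itself uses the `generate_response` helper defined there:

```python
# Sketch: single-turn prompt construction for the demo above.
# Checkpoint name comes from the README code added in this commit.
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")

prompt = "Hello, how are you?"
single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
input_ids = tokenizer(single_turn_prompt, return_tensors="pt").input_ids  # fed to model.generate in the demo
```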

Output from multi-turn demo:

GPT4 Correct User: Hello GPT4 Correct Assistant: GPT4 Correct User: How are you today? GPT4 Correct Assistant: I'm doing great, thank you for asking! How about you?
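The back-to-back `GPT4 Correct Assistant: GPT4 Correct User:` tags in this transcript appear because the demo initializes the assistant's first reply to an empty string; the multi-turn prompt is simply the concatenation of the turns, as in this sketch of the demo code added below:

```python
# Sketch: multi-turn prompt construction for the demo above.
# `response` is left empty, which is why the transcript shows an empty
# first assistant turn before the follow-up question.
prompt = "Hello"
follow_up_question = "How are you today?"
response = ""
multi_turn_prompt = (
    f"GPT4 Correct User: {prompt}<|end_of_turn|>"
    f"GPT4 Correct Assistant: {response}<|end_of_turn|>"
    f"GPT4 Correct User: {follow_up_question}<|end_of_turn|>"
    f"GPT4 Correct Assistant:"
)
```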

Output from coding demo:

Coding conversation response: Code User: Implement quicksort using C++ Code Assistant: Here's an example of how you can implement quicksort in C++:

```cpp
#include <iostream>
using namespace std;

void quickSort(int arr[], int left, int right) {
    int i = left, j = right;
    int tmp;
    int pivot = arr[(left + right) / 2];

    /* partition */
    while (i <= j) {
        while (arr[i] < pivot)
            i++;
        while (arr[j] > pivot)
            j--;
        if (i <= j) {
            tmp = arr[i];
            arr[i] = arr[j];
            arr[j] = tmp;
            i++;
            j--;
        }
    }

    /* recursion */
    if (left < j)
        quickSort(arr, left, j);
    if (i < right)
        quickSort(arr, i, right);
}
```
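The coding demo switches to the `Code User:` / `Code Assistant:` prompt tags. The README already pins the expected token IDs for this exact prompt, so a quick sanity check can reuse that assertion (assuming the same tokenizer load as in the README code added below):

```python
# Sketch: coding prompt construction plus the token-ID check that the
# existing README section asserts for this exact prompt.
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")

prompt = "Implement quicksort using C++"
coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:"
tokens = tokenizer(coding_prompt).input_ids
assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
```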

Files changed (1)
  1. README.md +39 -0
README.md CHANGED
@@ -78,7 +78,46 @@ assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 42
  tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
  assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
  ```
+ ## Code Examples
 
+ ```python
+ import transformers
+
+ tokenizer = transformers.AutoTokenizer.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")
+ model = transformers.AutoModelForCausalLM.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")
+
+ def generate_response(prompt):
+     input_ids = tokenizer(prompt, return_tensors="pt").input_ids
+     outputs = model.generate(
+         input_ids,
+         max_length=256,
+         pad_token_id=tokenizer.pad_token_id,
+         eos_token_id=tokenizer.eos_token_id,
+     )
+     response_ids = outputs[0]
+     response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
+     return response_text
+
+ # Single-turn conversation
+ prompt = "Hello, how are you?"
+ single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
+ response_text = generate_response(single_turn_prompt)
+ print("Response:", response_text)
+
+ # Multi-turn conversation
+ prompt = "Hello"
+ follow_up_question = "How are you today?"
+ response = ""
+ multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:"
+ response_text = generate_response(multi_turn_prompt)
+ print("Multi-turn conversation response:", response_text)
+
+ # Coding conversation
+ prompt = "Implement quicksort using C++"
+ coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:"
+ response = generate_response(coding_prompt)
+ print("Coding conversation response:", response)
+ ```
 
  ## License
  The dataset, model and online demo is a research preview intended for non-commercial use only, subject to the data distillation [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.