nicholasKluge committed
Commit d34616b
1 parent: 283c340

Update README.md

Files changed (1):
1. README.md (+20, -2)
README.md CHANGED
@@ -110,7 +110,7 @@ responses = model.generate(**inputs, num_return_sequences=2)
 print(f"Pergunta: 👤 {question}\n")
 
 for i, response in enumerate(responses):
-  print(f'Resposta {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
+  print(f'Resposta {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
 ```
 
 The model will output something like:
@@ -122,6 +122,24 @@ The model will output something like:
 >>>Response 2: 🤖 A capital do Brasil é Brasília.
 ```
 
+The chat template for this model is:
+
+```bash
+{{bos_token}}
+{% for message in messages %}
+{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}
+{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}
+{% endif %}
+{% if message['role'] == 'user' %}
+{{ '<instruction>' + message['content'].strip() + '</instruction>'}}
+{% elif message['role'] == 'assistant' %}
+{{ message['content'].strip() + eos_token}}
+{% else %}
+{{ raise_exception('Only user and assistant roles are supported!') }}
+{% endif %}
+{% endfor %}
+```
+
 ## Limitations
 
 Like almost all other language models trained on large text datasets scraped from the web, the TTL pair exhibited behavior that does not make them an out-of-the-box solution to many real-world applications, especially those requiring factual, reliable, nontoxic text generation. Our models are all subject to the following:
@@ -207,4 +225,4 @@ This repository was built as part of the RAIES ([Rede de Inteligência Artificia
 
 ## License
 
-TeenyTinyLlama-460m-Chat is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
+TeenyTinyLlama-460m-Chat is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
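
For context only (this snippet is not part of the commit), here is a minimal sketch of how the chat template added above could drive the README's generation loop via transformers' `apply_chat_template`. The repository ID, example question, and generation settings are assumptions for illustration:

```python
# Illustrative sketch, not part of the commit; repo ID and settings are assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nicholasKluge/TeenyTinyLlama-460m-Chat"  # assumed repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Qual é a capital do Brasil?"}]

# The template above renders this to something like:
# "<s><instruction>Qual é a capital do Brasil?</instruction>"
prompt = tokenizer.apply_chat_template(messages, tokenize=False)

# The template already prepends bos_token, so don't add special tokens again.
inputs = tokenizer(prompt, add_special_tokens=False, return_tensors="pt")

# Sample two candidate answers, mirroring num_return_sequences=2 in the diff.
responses = model.generate(
    **inputs,
    do_sample=True,
    num_return_sequences=2,
    max_new_tokens=128,
)

for i, response in enumerate(responses):
    text = tokenizer.decode(response, skip_special_tokens=True)
    # Drop the question from the decoded text, as the README's loop does.
    print(f"Resposta {i+1}: 🤖 {text.replace(messages[0]['content'], '')}")
```

Rendering prompts through the template, rather than concatenating strings by hand, keeps the `<instruction>` wrapping and `eos_token` handling consistent with what the template enforces.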