jianguozhang committed
Commit 5f17790 • 1 Parent(s): ff84de1

Update README.md

Files changed (1)
  1. README.md +3 -4
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-license: apache-2.0
+license: cc-by-nc-4.0
 ---
 
 <div align="center">
@@ -11,7 +11,7 @@ alt="drawing" width="510"/>
 
 🎉 Paper: https://arxiv.org/abs/2402.15506
 
-🎉 License: apache-2.0
+License: cc-by-nc-4.0
 
 If you already know [Mixtral](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), xLAM-v0.1 is a significant upgrade and better at many things. For the same number of parameters, the model have been fine-tuned across a wide range of agent tasks and scenarios, all while preserving the capabilities of the original model.
 
@@ -156,5 +156,4 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 <tr><td>Vicuna-13B-16K </td><td>0.033</td><td>0.343</td></tr>
 <tr><td>Llama-2-70B </td><td>0.000</td><td>0.483</td></tr>
 </tbody>
-</table>
-
+</table>