jianguozhang committed
Commit 5af6579
Parent(s): 5f17790

Update README.md

Files changed (1): README.md (+10 −1)
README.md CHANGED
@@ -13,7 +13,14 @@ alt="drawing" width="510"/>
 
 License: cc-by-nc-4.0
 
-If you already know [Mixtral](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), xLAM-v0.1 is a significant upgrade and better at many things. For the same number of parameters, the model have been fine-tuned across a wide range of agent tasks and scenarios, all while preserving the capabilities of the original model.
+If you already know [Mixtral](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), xLAM-v0.1 is a significant upgrade and better at many things.
+For the same number of parameters, the model has been fine-tuned across a wide range of agent tasks and scenarios, all while preserving the capabilities of the original model.
+
+xLAM-v0.1-r represents version 0.1 of the Large Action Model series, with the "-r" indicating it is tagged for research.
+This model is compatible with vLLM and FastChat platforms.
+
+
+
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
@@ -33,6 +40,8 @@ outputs = model.generate(inputs, max_new_tokens=512)
 print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 ```
 
+You may need to tune the Temperature setting for different applications. Typically, a lower Temperature is helpful for tasks that require deterministic outcomes.
+
 
 
 # Benchmarks
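
The Temperature note added in this commit follows from how sampling works: logits are divided by the temperature before the softmax, so a low temperature concentrates probability mass on the top token (near-deterministic output), while a high temperature flattens the distribution. A minimal sketch of that scaling, independent of the model itself (the function name is illustrative, not part of the xLAM API):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Scale logits by 1/temperature, then apply a numerically stable softmax.
    # Low temperature sharpens the distribution toward the argmax;
    # high temperature pushes it toward uniform.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max to avoid overflow in exp()
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
sharp = softmax_with_temperature(logits, 0.1)   # nearly all mass on one token
flat = softmax_with_temperature(logits, 10.0)   # close to uniform
print(max(sharp), max(flat))
```

This is why a low Temperature suits tasks that require deterministic outcomes (e.g. structured action generation), while a higher Temperature yields more varied completions.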