superagi-team committed
Commit ce1fb6a
1 Parent(s): 5a0fc27

Update README.md

Files changed (1)
  1. README.md +8 -6
README.md CHANGED
@@ -4,13 +4,13 @@ language:
 - en
 ---
 # Model Card
-SAM (Small Agentic Model), a 7B model that demonstrates impressive reasoning abilities despite its smaller size. SAM-7B has outperformed existing SoTA models on various reasoning benchmarks, including GSM-8K and ARC-C.
+SAM (Small Agentic Model), a 7B model that demonstrates impressive reasoning abilities despite its smaller size. SAM-7B has outperformed existing SoTA models on various reasoning benchmarks, including GSM8k and ARC-C.
 
 For full details of this model, please read our [release blog post](https://superagi.com/introducing-sam-small-agentic-model/).
 
 # Key Contributions
-- SAM-7B outperforms GPT-3.5, Orca, and several other 70B models on multiple reasoning benchmarks, including ARC-C and GSM-8K.
-- Interestingly, despite being trained on a 97% smaller dataset, SAM-7B surpasses Orca-13B on GSM-8K.
+- SAM-7B outperforms GPT-3.5, Orca, and several other 70B models on multiple reasoning benchmarks, including ARC-C and GSM8k.
+- Interestingly, despite being trained on a 97% smaller dataset, SAM-7B surpasses Orca-13B on GSM8k.
 - All responses in our fine-tuning dataset are generated by open-source models without any assistance from state-of-the-art models like GPT-3.5 or GPT-4.
 
 ## Training
@@ -40,6 +40,8 @@ These benchmarks show that our model has improved reasoning as compared to orca
 Despite being smaller in size, we show better multi-hop reasoning, as shown below:
 <img src="https://superagi.com/wp-content/uploads/2023/12/image-932.png" alt="Reasoning Benchmark Performance" width="700">
 
+Note: Temperature=0.3 is suggested for optimal performance.
+
 ## Run the model
 
 ```python
@@ -50,10 +52,10 @@ tokenizer = AutoTokenizer.from_pretrained(model_id)
 
 model = AutoModelForCausalLM.from_pretrained(model_id)
 
-text = "Hello my name is"
+text = "Can elephants fly?"
 inputs = tokenizer(text, return_tensors="pt")
 
-outputs = model.generate(**inputs, max_new_tokens=20)
+outputs = model.generate(**inputs, max_new_tokens=200)
 print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 ```
 
@@ -61,7 +63,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 ## Limitations
 
 SAM demonstrates that better reasoning can be induced using less, but higher-quality, data generated by open-source LLMs.
-The model is not suitable for conversations and Q&A; it is better suited to task breakdown and reasoning.
+The model is not suitable for conversations and simple Q&A; it is better suited to task breakdown and reasoning.
 It does not have any moderation mechanisms, so it is not suitable for production use: it lacks guardrails for toxicity, societal bias, and language limitations. We would love to collaborate with the community to build safer and better models.
 
 ## The SuperAGI AI Team
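
For reference, here is a minimal runnable sketch of the snippet as it reads after this commit, with the diff's new temperature note wired into `generate`. The `model_id` value is an assumption (the hunk context references `model_id`, but its assignment sits outside the diff), and in `transformers`, `temperature` only takes effect when sampling is enabled with `do_sample=True`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id: the diff's context lines reference `model_id`
# without showing its value; substitute the actual checkpoint.
model_id = "SuperAGI/SAM"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

text = "Can elephants fly?"
inputs = tokenizer(text, return_tensors="pt")

# The commit recommends Temperature=0.3; temperature is ignored unless
# sampling is enabled, hence do_sample=True.
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.3,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Bumping `max_new_tokens` from 20 to 200 fits the new question-style prompt, which calls for a multi-step answer rather than a short phrase completion.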