---
license: apache-2.0
language:
- en
---
# Model Card
SAM (Small Agentic Model) is a 7B model that demonstrates impressive reasoning abilities despite its smaller size. SAM-7B has outperformed existing SoTA models on various reasoning benchmarks, including GSM-8K and ARC-C.

For full details of this model, please read our [release blog post](https://superagi.com/introducing-sam-small-agentic-model/).

# Key Contributions
- SAM-7B outperforms GPT-3.5, Orca, and several other 70B models on multiple reasoning benchmarks, including ARC-C and GSM-8K.
- Interestingly, despite being trained on a 97% smaller dataset, SAM-7B surpasses Orca-13B on GSM-8K.
- All responses in our fine-tuning dataset are generated by open-source models without any assistance from state-of-the-art models like GPT-3.5 or GPT-4.

## Training
- Trained by: SuperAGI Team
- Hardware: 6 x NVIDIA H100 SXM (80 GB)
- Base model: Mistral 7B
- Duration of fine-tuning: 4 hours
- Number of epochs: 1
- Batch size: 16
- Learning rate: 2e-5
- Warmup ratio: 0.1
- Optimizer: AdamW
- Scheduler: Cosine
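
For reference, the hyperparameters above correspond roughly to the following Hugging Face `TrainingArguments`. This is an illustrative sketch, not the actual training script; the `output_dir` and whether the batch size is per device or global are assumptions.

```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters listed above; the real
# training script and dataset are not published in this card.
training_args = TrainingArguments(
    output_dir="sam-7b-finetune",    # placeholder path
    num_train_epochs=1,
    per_device_train_batch_size=16,  # card lists batch size 16; per-device vs. global is assumed
    learning_rate=2e-5,
    warmup_ratio=0.1,
    optim="adamw_torch",             # AdamW
    lr_scheduler_type="cosine",
)
```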

## Example Prompt

The template used to build a prompt for the Instruct model is defined as follows:
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS), while `[INST]` and `[/INST]` are regular strings.
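
As a concrete illustration (a minimal sketch, not part of the original card), the template can be assembled in Python. The tokenizer inserts the `<s>` BOS token itself during encoding, so only the `[INST]`/`[/INST]` markers and `</s>` are written into the string:

```python
# Hypothetical helper illustrating the template above.
# turns: list of (instruction, answer) pairs; the final answer is None
# when the model's reply is still to be generated.
def build_prompt(turns):
    prompt = ""
    for instruction, answer in turns:
        prompt += f"[INST] {instruction} [/INST]"
        if answer is not None:
            prompt += f" {answer}</s> "
    return prompt

print(build_prompt([
    ("Plan a product launch.", "1. Define scope 2. Set dates 3. Assign owners"),
    ("Now break step 1 into subtasks.", None),
]))
# [INST] Plan a product launch. [/INST] 1. Define scope 2. Set dates 3. Assign owners</s> [INST] Now break step 1 into subtasks. [/INST]
```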

## Evaluation

These benchmarks show that our model has improved reasoning compared to Orca 2-7B, Orca 2-13B, and GPT-3.5.
Despite being smaller in size, we show better multi-hop reasoning, as shown below:
<img src="https://superagi.com/wp-content/uploads/2023/12/image-932.png" alt="Reasoning Benchmark Performance" width="700">

## Run the model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model weights from the Hugging Face Hub.
model_id = "SuperAGI/SAM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a prompt and generate a short continuation.
text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
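
To query the model in instruction-following mode, the same generation call can reuse the template from the Example Prompt section. A minimal sketch, assuming the model and tokenizer loaded above and the hypothetical `build_prompt` helper defined earlier:

```python
# Sketch: generate an answer for a single instruction formatted with the
# [INST] template; build_prompt is the hypothetical helper shown earlier.
prompt = build_prompt([("Break the goal 'launch a blog' into subtasks.", None)])
inputs = tokenizer(prompt, return_tensors="pt")  # tokenizer prepends the <s> BOS token
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```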

## Limitations

SAM is a demonstration that better reasoning can be induced using less, but higher-quality, data generated with open-source LLMs.
The model is not suitable for conversations and Q&A; it performs best at task breakdown and reasoning.
It does not have any moderation mechanisms. Therefore, the model is not suitable for production usage, as it doesn't have guardrails for toxicity, societal bias, and language limitations. We would love to collaborate with the community to build safer and better models.

## The SuperAGI AI Team
Anmol Gautam, Arkajit Datta, Rajat Chawla, Ayush Vatsal, Sukrit Chatterjee, Adarsh Jha, Abhijeet Sinha, Rakesh Krishna, Adarsh Deep, Ishaan Bhola, Mukunda NS, Nishant Gaurav.