---
license: mit
---
# 🚀 **BTLM-7B**

BTLM is a collection of pretrained generative text models. This is the repository for the 7B pretrained model, optimized for dialogue use cases and converted to the Hugging Face Transformers format.

### Model Details

Bittensor's decentralized subnet 9 facilitated the development and release of the first version of the BTLM-7B model. This initial release comprises a sophisticated large language model designed for a variety of applications. In creating this model, significant effort was made to ensure its effectiveness and safety, setting a new standard in the decentralized open-source AI community.

⛔ **This is a pretrained model, which should be further finetuned for most use cases.**
12
+
13
+ **Model Developer** Bittensor Network
14
+
15
+ [**Subnet 9 Network Leaderboard**](https://huggingface.co/spaces/RaoFoundation/pretraining-leaderboard)
16
+
17
+ [**Top Bittensor Model Checkpoint**](https://huggingface.co/tensorplex-labs/pretraining-sn9-7B-1)
18
+
19
+ ### Inference
20
+
21
+ ```python
22
+ from transformers import AutoTokenizer, AutoModelForCausalLM
23
+ import transformers
24
+ import torch
25
+
26
+ model = "CortexLM/btlm-v1-7b-base"
27
+
28
+ tokenizer = AutoTokenizer.from_pretrained(model)
29
+ pipeline = transformers.pipeline(
30
+ "text-generation",
31
+ model=model,
32
+ tokenizer=tokenizer,
33
+ torch_dtype=torch.bfloat16,
34
+ )
35
+ sequences = pipeline(
36
+ "Tell me about decentralization.",
37
+ max_length=200,
38
+ do_sample=True,
39
+ top_k=10,
40
+ num_return_sequences=1,
41
+ eos_token_id=tokenizer.eos_token_id,
42
+ )
43
+ for seq in sequences:
44
+ print(f"Result: {seq['generated_text']}")
45
+
46
+ ```
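The `do_sample=True, top_k=10` arguments above request top-k sampling: only the 10 highest-scoring tokens are kept at each step, and the next token is drawn from their renormalized distribution. A minimal sketch of that idea over a toy logit vector (plain Python; `top_k_sample` is an illustrative helper, not part of Transformers, and the numbers are made up):

```python
import math
import random

def top_k_sample(logits, k, rng=None):
    """Sample an index from the k highest-scoring logits.

    Mirrors what `do_sample=True, top_k=10` requests from the pipeline:
    all but the top-k logits are discarded, the survivors are
    softmax-normalized, and one index is drawn from that distribution.
    """
    rng = rng or random.Random(0)
    # Keep the indices of the k largest logits.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Softmax over the surviving logits (shifted by the max for stability).
    m = max(logits[i] for i in top)
    exps = [math.exp(logits[i] - m) for i in top]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index proportionally to its probability.
    r = rng.random()
    acc = 0.0
    for idx, p in zip(top, probs):
        acc += p
        if r < acc:
            return idx
    return top[-1]

# Toy "vocabulary" of 6 tokens with made-up logits.
logits = [2.0, 0.5, -1.0, 3.0, 0.0, 1.5]
print(top_k_sample(logits, k=3))  # one of 0, 3, or 5 (the three largest logits)
```

With `k=1` this degenerates to greedy decoding, which is why a larger `top_k` plus `do_sample=True` yields more varied completions.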

### Benchmark

| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- |
| 43.32 | 45.65 | 58.29 | 44.26 | 30.45 | 70.88 | 10.39 |

Scores were produced with the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness).

## License

BTLM-7B is licensed under the [MIT License](https://opensource.org/license/mit), a permissive license that allows reuse with minimal restrictions.