
🚀 BTLM-7B v0.1

BTLM (Bittensor Language Model) is a collection of pretrained generative text models. This is the repository for the 7B base pretrained model, converted to the Hugging Face Transformers format.

Model Details

Bittensor's decentralized subnet 9 facilitated the development and release of the first version of the BTLM-7B model. This initial release comprises a large language model designed for a variety of applications. In creating this model, significant effort was made to ensure its effectiveness and safety, setting a new standard in the decentralized open-source AI community.

⛔ This is a pretrained base model, which should be further fine-tuned for most use cases.

Training subnetwork: 9

Checkpoint: 03-05-2024

Subnet 9 Network Leaderboard

Top Bittensor Model Checkpoint

Inference

from transformers import AutoTokenizer
import transformers
import torch

model = "CortexLM/btlm-v1-7b-base"

# Load the tokenizer and build a text-generation pipeline in bfloat16.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
)

# Sample a completion, restricting sampling to the 10 most likely next tokens.
sequences = pipeline(
    "Tell me about decentralization.",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
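The `top_k=10` argument above restricts sampling to the 10 highest-probability next tokens at each step. A minimal pure-Python sketch of top-k sampling over toy logits (illustrative values only, not taken from the model):

```python
import math
import random

def top_k_sample(logits, k, rng=random):
    # Keep only the k highest-scoring token ids.
    top = sorted(enumerate(logits), key=lambda p: p[1], reverse=True)[:k]
    ids, scores = zip(*top)
    # Softmax over the surviving scores (shift by max for numerical stability).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token id according to the renormalized probabilities.
    return rng.choices(ids, weights=probs, k=1)[0]

toy_logits = [0.1, 2.5, -1.0, 3.2, 0.7, 1.9]
token_id = top_k_sample(toy_logits, k=3)
print(token_id)  # one of the ids of the 3 largest logits: 3, 1, or 5
```

Lowering `k` makes generation more conservative (fewer candidate tokens survive the cutoff), while raising it allows more diverse but potentially less coherent completions.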

Benchmark

| Average | ARC   | HellaSwag | MMLU  | TruthfulQA | Winogrande | GSM8K |
|---------|-------|-----------|-------|------------|------------|-------|
| 43.32   | 45.65 | 58.29     | 44.26 | 30.45      | 70.88      | 10.39 |

LM Evaluation Harness Repository

License

BTLM-7B is licensed under the MIT License, a permissive license that allows for reuse with virtually no restrictions.

Model size: 6.89B parameters (Safetensors, BF16)