---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
- zh
inference: false
---
# Model Card for Mobius-12B-base-m1
The Mobius-12B-base-m1 Large Language Model (LLM) is a pretrained model based on the RWKV v5 architecture.
## Warning
This repo contains weights that are not yet compatible with the Hugging Face [transformers](https://github.com/huggingface/transformers) library, but you can try this [PR]() in the meantime.
[RWKV runner]() or [AI00 server]() also work.
## Instruction|Chat format
This format must be strictly respected; otherwise the model will generate sub-optimal outputs.
The template used to build a prompt for the Instruct model is defined as follows:
```
User: {Instruction|prompt}\n\nAssistant:
```
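For example, the template can be assembled in Python like this (a minimal sketch; the `build_prompt` helper is ours for illustration, not part of the repo):

```python
def build_prompt(instruction: str) -> str:
    """Format a user instruction with the chat template above."""
    return f"User: {instruction.strip()}\n\nAssistant:"

print(build_prompt("Translate 'bonjour' to English."))
# User: Translate 'bonjour' to English.
#
# Assistant:
```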
## Run the model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the weights in fp16 on GPU 0.
model = AutoModelForCausalLM.from_pretrained(
    "TimeMobius/Mobius-12B-base-m1", torch_dtype=torch.float16
).to(0)
tokenizer = AutoTokenizer.from_pretrained("TimeMobius/Mobius-12B-base-m1", trust_remote_code=True)

text = "x"  # replace with your instruction
# Build the prompt with the chat template documented above.
prompt = f"User: {text.strip()}\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
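For longer or more varied completions, the standard `generate` sampling arguments can be used as well (the values below are illustrative, not tuned for this model):

```python
# Sampled generation; hyperparameter values are illustrative only.
output = model.generate(
    inputs["input_ids"],
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```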
## Limitations
Mobius-12B-base-m1 is a base model; it can easily be fine-tuned to achieve compelling performance, as sketched below.
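As a rough illustration, supervised fine-tuning could look like the following sketch. It assumes the remote-code model class works with the standard `Trainer` API and that `train.jsonl` holds `{"instruction": ..., "response": ...}` records (both assumptions, not repo facts); in practice a memory-efficient method such as LoRA would be needed for a 12B model.

```python
# Minimal supervised fine-tuning sketch (illustrative only).
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

repo = "TimeMobius/Mobius-12B-base-m1"
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
if tokenizer.pad_token is None:  # assumption: reuse EOS for padding if unset
    tokenizer.pad_token = tokenizer.eos_token

def to_features(example):
    # Keep the documented chat template during fine-tuning.
    text = f"User: {example['instruction']}\n\nAssistant: {example['response']}"
    return tokenizer(text, truncation=True, max_length=1024)

dataset = (
    load_dataset("json", data_files="train.jsonl")["train"]
    .map(to_features, remove_columns=["instruction", "response"])
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="mobius-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=1e-5,
        fp16=True,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```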
### Benchmark
| Benchmark          | Mobius-12B-base-m1 |
|--------------------|--------------------|
| ppl | 3.41 |
| piqa | 0.78 |
| hellaswag | 0.71 |
| winogrande | 0.68 |
| arc_challenge | 0.42 |
| arc_easy | 0.73 |
| openbookqa | 0.40 |
| sciq | 0.93 |
# @TimeMobius