---
license: mit
datasets:
- wikipedia
language:
- ja
- en
---
# tiny-lm
This repository provides a tiny language model with 16M parameters, intended for debugging and testing purposes.
It was trained on English and Japanese Wikipedia data.
## How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
# torch_dtype="auto" loads the checkpoint in the precision it was saved in.
model = AutoModelForCausalLM.from_pretrained("sbintuitions/tiny-lm", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("sbintuitions/tiny-lm")
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
# Sample up to 30 tokens (prompt included), picking from the top-100 candidates at each step.
print(generator("Hello", max_length=30, do_sample=True, top_k=100))
```
## Model architecture
A 4-layer Transformer-based language model with a hidden size of 512.
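These dimensions can be verified from the published config without downloading the weights. A minimal sketch, assuming the checkpoint exposes the standard `num_hidden_layers` and `hidden_size` config fields (exact attribute names can vary by architecture):

```python
from transformers import AutoConfig

# Fetch only the model configuration, not the weights.
config = AutoConfig.from_pretrained("sbintuitions/tiny-lm")
print(config.num_hidden_layers)  # expected: 4
print(config.hidden_size)        # expected: 512
```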
## Training
The model was trained on English Wikipedia and Japanese Wikipedia for 25B tokens, optimizing the standard causal language modelling (next-token prediction) objective.
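For reference, that objective is the next-token cross-entropy, which `transformers` computes when you pass the input ids as labels. A minimal sketch of evaluating it with this checkpoint; this illustrates the objective only and is not the actual training script:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sbintuitions/tiny-lm")
model = AutoModelForCausalLM.from_pretrained("sbintuitions/tiny-lm")
model.eval()

inputs = tokenizer("Hello, world!", return_tensors="pt")
with torch.no_grad():
    # Passing labels=input_ids makes transformers shift the targets by one
    # position internally and return the next-token cross-entropy loss.
    outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)  # average negative log-likelihood per token
```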
## License
[MIT License](https://huggingface.co/sbintuitions/tiny-lm/resolve/main/LICENSE)