---
license: gpl-3.0
language:
- en
pipeline_tag: text2text-generation
---
# NanoLM-70M-Instruct-v1

English | [简体中文](README_zh-CN.md)

## Introduction
To explore the potential of small models, I have built a series of them; they are all available in the [NanoLM Collections](https://huggingface.co/collections/Mxode/nanolm-66d6d75b4a69536bca2705b2).

This is NanoLM-70M-Instruct-v1. The model currently supports **English only**.

## Model Details
| Nano LMs | Non-emb Params | Arch | Layers | Dim | Heads | Seq Len |
| :------: | :------------: | :------------------: | :----: | :---: | :---: | :-----: |
| 25M | 15M | MistralForCausalLM | 12 | 312 | 12 | 2K |
| **70M** | **42M** | **LlamaForCausalLM** | **12** | **576** | **9** | **2K** |
| 0.3B | 180M | Qwen2ForCausalLM | 12 | 896 | 14 | 4K |
| 1B | 840M | Qwen2ForCausalLM | 18 | 1536 | 12 | 4K |

The tokenizer and model architecture of NanoLM-70M-Instruct-v1 are the same as [SmolLM-135M](https://huggingface.co/HuggingFaceTB/SmolLM-135M), but the number of layers has been reduced from 30 to 12.

Essentially, it is a pure LLaMA architecture, specifically LlamaForCausalLM.

As a result, NanoLM-70M-Instruct-v1 has only 70 million parameters.

Despite this, NanoLM-70M-Instruct-v1 still demonstrates instruction-following capabilities.
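
If you want to sanity-check the figures above, a minimal sketch along these lines (using only `transformers` and standard PyTorch calls) loads the released config and counts the parameters; the totals should land close to the 70M total and 42M non-embedding numbers in the table.

```python
from transformers import AutoConfig, AutoModelForCausalLM

model_path = 'Mxode/NanoLM-70M-Instruct-v1'

# Architecture hyperparameters, as listed in the table above.
config = AutoConfig.from_pretrained(model_path)
print(config.architectures, config.num_hidden_layers, config.hidden_size, config.num_attention_heads)

# Total vs. non-embedding parameter counts (embedding weights are filtered out by name).
model = AutoModelForCausalLM.from_pretrained(model_path)
total = sum(p.numel() for p in model.parameters())
non_emb = sum(p.numel() for name, p in model.named_parameters() if 'embed' not in name)
print(f'total: {total / 1e6:.0f}M, non-embedding: {non_emb / 1e6:.0f}M')
```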
## How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = 'Mxode/NanoLM-70M-Instruct-v1'

# Load the model in bfloat16 on the first GPU.
model = AutoModelForCausalLM.from_pretrained(model_path).to('cuda:0', torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Build the prompt with the model's chat template.
text = "Why is it important for entrepreneurs to prioritize financial management?"
prompt = tokenizer.apply_chat_template(
    [
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': text}
    ],
    add_generation_prompt=True,
    tokenize=True,
    return_tensors='pt'
).to('cuda:0')

# Sample a response.
outputs = model.generate(
    prompt,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.1,
    eos_token_id=tokenizer.eos_token_id,
)
response = tokenizer.decode(outputs[0])
print(response)
```
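
Note that `tokenizer.decode(outputs[0])` returns the whole sequence, including the chat-template and prompt tokens. If you only want the assistant's reply, a small variant (reusing the variables from the snippet above) is to slice off the prompt before decoding:

```python
# Keep only the newly generated tokens and drop special tokens.
reply = tokenizer.decode(outputs[0][prompt.shape[-1]:], skip_special_tokens=True)
print(reply)
```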