---
license: apache-2.0
---
# NEO

[🤗Neo-Models](https://huggingface.co/collections/m-a-p/neo-models-66395a5c9662bb58d5d70f04) | [🤗Neo-Datasets](https://huggingface.co/collections/m-a-p/neo-models-66395a5c9662bb58d5d70f04) | [Github](https://github.com/multimodal-art-projection/MAP-NEO)

NEO is a fully open-source large language model: the training code, all model weights, the datasets used for training, and the training details are publicly released.

## Model

| Model | Description | Download |
|---|---|---|
| neo_7b | The base model of neo_7b. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_7b) |
| neo_7b_intermediate | Intermediate checkpoints from the regular pre-training phase; a total of 3.7T tokens were consumed in this phase. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_7b_intermediate) |
| neo_7b_decay | Intermediate checkpoints from the learning-rate decay phase; a total of 720B tokens were consumed in this phase. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_7b_decay) |
| neo_scalinglaw_980M | Checkpoints from the scaling-law experiments (980M parameters). | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_scalinglaw_980M) |
| neo_scalinglaw_460M | Checkpoints from the scaling-law experiments (460M parameters). | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_scalinglaw_460M) |
| neo_scalinglaw_250M | Checkpoints from the scaling-law experiments (250M parameters). | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_scalinglaw_250M) |
| neo_2b_general | Checkpoints of the 2B model trained on general-domain data. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_2b_general) |
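Any repository in the table can also be fetched ahead of time with `huggingface_hub`. The snippet below is a minimal sketch: the `local_dir` path is illustrative, and for the intermediate/decay repositories the individual checkpoints may live under separate revisions or subfolders, so check each repo's file listing before relying on a specific layout.

```python
from huggingface_hub import snapshot_download

# Download the base model repository to a local folder.
# repo_id comes from the table above; local_dir is an illustrative path.
local_path = snapshot_download(
    repo_id="m-a-p/neo_7b",
    local_dir="./neo_7b",  # hypothetical destination; any writable path works
)
print(f"Model files downloaded to: {local_path}")
```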

### Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = '<your-hf-model-path-with-tokenizer>'

# The NEO repositories ship a custom tokenizer, so trust_remote_code is required.
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)

# device_map="auto" spreads the weights across available devices;
# torch_dtype="auto" uses the dtype stored in the checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()

input_text = "A long, long time ago,"

# NEO is a base (non-chat) model, so the prompt is tokenized directly;
# add_generation_prompt only applies to tokenizer.apply_chat_template.
inputs = tokenizer(input_text, return_tensors='pt').to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=20)
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(response)
```
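By default `generate` decodes greedily, which always yields the same continuation. For more varied output you can enable sampling. The snippet below is a sketch using standard `transformers` generation arguments; the specific values are illustrative, not tuned recommendations for NEO.

```python
# Sampling-based generation; values are illustrative, not tuned for NEO.
output_ids = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,    # sample from the distribution instead of greedy decoding
    temperature=0.8,   # soften the token distribution
    top_p=0.95,        # nucleus sampling: keep the smallest set covering 95% of mass
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```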