---
license: apache-2.0
---
# NEO

[🤗Neo-Models](https://huggingface.co/collections/m-a-p/neo-models-66395a5c9662bb58d5d70f04) | [🤗Neo-Datasets](https://huggingface.co/collections/m-a-p/neo-models-66395a5c9662bb58d5d70f04) | [Github](https://github.com/multimodal-art-projection/MAP-NEO)

Neo is a fully open-source large language model: the code, all model weights, the datasets used for training, and the training details are all released.

## Model

| Model | Description | Download |
|---|---|---|
| neo_7b | The base model of neo_7b. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_7b) |
| neo_7b_intermediate | Intermediate checkpoints from normal pre-training; a total of 3.7T tokens were learned in this phase. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_7b_intermediate) |
| neo_7b_decay | Intermediate checkpoints from the decay phase; a total of 720B tokens were learned in this phase. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_7b_decay) |
| neo_scalinglaw_980M | Checkpoints from the scaling-law experiments. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_scalinglaw_980M) |
| neo_scalinglaw_460M | Checkpoints from the scaling-law experiments. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_scalinglaw_460M) |
| neo_scalinglaw_250M | Checkpoints from the scaling-law experiments. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_scalinglaw_250M) |
| neo_2b_general | Checkpoints of the 2B model trained on general-domain data. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_2b_general) |
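
The intermediate and decay repositories bundle many training-step checkpoints. The sketch below shows one way a specific checkpoint could be selected with the `revision` argument of `from_pretrained`; the revision name used here is a hypothetical placeholder, so check the repository's "Files and versions" page for the actual branch or tag names.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "m-a-p/neo_7b_intermediate"
# NOTE: "<checkpoint-branch-name>" is a hypothetical placeholder; replace it with a
# real branch/tag listed on the repository page.
ckpt_revision = "<checkpoint-branch-name>"

tokenizer = AutoTokenizer.from_pretrained(
    repo_id, revision=ckpt_revision, use_fast=False, trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    revision=ckpt_revision,  # selects the specific intermediate checkpoint
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
).eval()
```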

### Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = '<your-hf-model-path-with-tokenizer>'  # e.g. "m-a-p/neo_7b"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

input_text = "A long, long time ago,"

# Tokenize the prompt. (`add_generation_prompt` is a chat-template option and does not
# apply to a plain tokenizer call on a base model, so it is omitted here.)
inputs = tokenizer(input_text, return_tensors='pt').to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=20)
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(response)
```
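
The snippet above uses greedy decoding by default. For more varied continuations, sampling can be enabled through the standard `generate` arguments; the values below are illustrative choices, not tuned recommendations.

```python
# Continuing from the example above: enable sampling for more varied output.
inputs = tokenizer("A long, long time ago,", return_tensors='pt').to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,   # sample instead of greedy decoding
    temperature=0.8,  # illustrative value
    top_p=0.95,       # illustrative value
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```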