PY007 committed
Commit 098830e
1 Parent(s): 3ed1471

Update README.md

Files changed (1)
README.md +2 -2
README.md CHANGED
@@ -22,13 +22,13 @@ The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion to
 We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
 
 #### This Model
-This is an intermediate checkpoint with 240K steps and 503B tokens. **We suggest you not use this directly for inference.** The [chat model](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.1) is always preferred **
+This is an intermediate checkpoint with 480K steps and 1007B tokens.
 
 
 #### How to use
 You will need the transformers>=4.31
 Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) github page for more information.
-```
+```python
 from transformers import AutoTokenizer
 import transformers
 import torch
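
The hunk ends at the imports, so the README's usage example is cut off by the diff boundary. For reference, here is a minimal sketch of how an intermediate checkpoint like this is typically loaded with the transformers text-generation pipeline; the model id below is an assumption based on this repo's naming scheme, not taken from the diff, so substitute the actual checkpoint id.

```python
from transformers import AutoTokenizer
import transformers
import torch

# Assumed checkpoint id following the repo's naming convention; replace with the real model id.
model = "PY007/TinyLlama-1.1B-intermediate-step-480k-1007B"

tokenizer = AutoTokenizer.from_pretrained(model)

# Standard transformers pipeline; fp16 keeps the 1.1B model's memory footprint small.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
)

output = pipeline(
    "The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens.",
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    max_new_tokens=64,
)
print(output[0]["generated_text"])
```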