MengniWang committed on
Commit b6c9459
1 Parent(s): 0e0902d
Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -20,14 +20,14 @@ tags:
 
 GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
 
- This int8 model is generated with [neural-compressor](https://github.com/intel/neural-compressor) and the fp32 model is from this [repo](https://huggingface.co/OWG/gpt-j-6B).
+ This int8 model is generated by [neural-compressor](https://github.com/intel/neural-compressor) and the fp32 model is from this [repo](https://huggingface.co/OWG/gpt-j-6B).
 
 
 # How to use
 
- Download the model and script by cloning the repository by
+ Download the model and script by cloning the repository:
 ```shell
 git clone https://huggingface.co/Intel/gpt-j-6B-int8-static
 ```
 
- Then you do inference based on the model and script.
+ Then you can do inference based on the model and script 'evaluation.ipynb'.
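
For context, the README's reference to [neural-compressor](https://github.com/intel/neural-compressor) points at Intel's post-training quantization toolkit. Below is a minimal sketch of static int8 quantization with its Python API; the actual recipe used for this repository is not part of this commit, and the model path, calibration dataloader, and output location are all assumptions for illustration.

```python
import numpy as np
import onnx
from neural_compressor import PostTrainingQuantConfig, quantization

class CalibDataloader:
    """Toy calibration data: static quantization needs sample inputs
    to observe activation ranges. A real recipe would calibrate on
    text from an actual dataset, not constant tensors."""
    batch_size = 1
    def __iter__(self):
        ids = np.ones((1, 32), dtype=np.int64)
        yield {"input_ids": ids, "attention_mask": np.ones_like(ids)}, 0

# Assumed path to the fp32 ONNX export from the OWG/gpt-j-6B repo.
fp32_model = onnx.load("gpt-j-6B/model.onnx")

# "static" post-training quantization: weights and activations are
# quantized to int8 using ranges collected from the calibration data.
conf = PostTrainingQuantConfig(approach="static")
q_model = quantization.fit(fp32_model, conf, calib_dataloader=CalibDataloader())
q_model.save("gpt-j-6B-int8-static/model.onnx")  # assumed output location
```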
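Likewise, the supported inference path is the 'evaluation.ipynb' notebook shipped in the repository. As a rough illustration only, a quantized ONNX graph like this one can be run with onnxruntime; the filename 'model.onnx' and the input names below are assumptions, not taken from the commit, so check `session.get_inputs()` against the cloned files.

```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

# GPT-J uses the EleutherAI tokenizer.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

# Assumed filename inside the cloned repository.
session = ort.InferenceSession("gpt-j-6B-int8-static/model.onnx",
                               providers=["CPUExecutionProvider"])

enc = tokenizer("Once upon a time", return_tensors="np")
# Assumed input names; inspect session.get_inputs() for the real ones.
outputs = session.run(None, {"input_ids": enc["input_ids"].astype(np.int64),
                             "attention_mask": enc["attention_mask"].astype(np.int64)})

# Greedy next-token prediction from the logits of the last position.
next_token = int(np.argmax(outputs[0][0, -1]))
print(tokenizer.decode([next_token]))
```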