MengniWang committed on
Commit
0f8d314
1 Parent(s): b6c9459

add result

Files changed (1)
  1. README.md +9 -3
README.md CHANGED
@@ -16,14 +16,20 @@ tags:
 - neural-compressor
 ---
 
-## Model Details
+# INT8 GPT-J 6B
 
 GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
 
-This int8 model is generated by [neural-compressor](https://github.com/intel/neural-compressor) and the fp32 model is from this [repo](https://huggingface.co/OWG/gpt-j-6B).
+This int8 ONNX model is generated by [neural-compressor](https://github.com/intel/neural-compressor) and the fp32 model is from this [repo](https://huggingface.co/OWG/gpt-j-6B).
 
+## Test result
 
-# How to use
+| |INT8|FP32|
+|---|:---:|:---:|
+| **Model size (GB)** |13|23|
+
+
+## How to use
 
 Download the model and script by cloning the repository:
 ```shell
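
The diff above states that the int8 ONNX model was produced with neural-compressor from the fp32 model, but the quantization script itself is not part of this commit. As a rough illustration only, a minimal sketch using neural-compressor's 2.x post-training quantization API might look like the following; the file paths are placeholders, and the choice of dynamic quantization (which needs no calibration data) is an assumption, not something stated in the diff.

```python
# Illustrative sketch only -- not the script referenced in the README diff above.
# Assumes neural-compressor 2.x and an fp32 ONNX export of GPT-J at a placeholder path.
from neural_compressor import PostTrainingQuantConfig, quantization

config = PostTrainingQuantConfig(approach="dynamic")  # int8 dynamic quantization

q_model = quantization.fit(
    model="gpt-j-6B-fp32.onnx",  # placeholder path to the fp32 ONNX model
    conf=config,
)
q_model.save("gpt-j-6B-int8")  # writes the quantized int8 ONNX model
```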