Transformers
ctranslate2
int8
float16
Composer
MosaicML
llm-foundry
michaelfeil committed
Commit 09e91a7
1 Parent(s): cc8bde5

Upload mosaicml/mpt-7b-chat ctranslate fp16 weights

Files changed (1): README.md (+6 -5)
README.md CHANGED
@@ -20,14 +20,14 @@ Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on
 
 quantized version of [mosaicml/mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat)
 ```bash
-pip install hf-hub-ctranslate2>=2.0.8
+pip install hf-hub-ctranslate2>=2.0.8 ctranslate2>=3.14.0
 ```
-Converted on 2023-05-30 using
+Converted on 2023-05-31 using
 ```
-ct2-transformers-converter --model mosaicml/mpt-7b-chat --output_dir /home/michael/tmp-ct2fast-mpt-7b-chat --force --copy_files configuration_mpt.py meta_init_context.py tokenizer.json hf_prefixlm_converter.py README.md tokenizer_config.json blocks.py adapt_tokenizer.py attention.py norm.py generation_config.json flash_attn_triton.py special_tokens_map.json param_init_fns.py .gitattributes --quantization float16 --trust_remote_code
+ct2-transformers-converter --model mosaicml/mpt-7b-chat --output_dir /home/michael/tmp-ct2fast-mpt-7b-chat --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization float16 --trust_remote_code
 ```
 
-Checkpoint compatible to [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
+Checkpoint compatible to [ctranslate2>=3.14.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.8](https://github.com/michaelfeil/hf-hub-ctranslate2)
 - `compute_type=int8_float16` for `device="cuda"`
 - `compute_type=int8` for `device="cpu"`
@@ -46,7 +46,8 @@ model = GeneratorCT2fromHfHub(
 )
 outputs = model.generate(
     text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
-    max_length=64
+    max_length=64,
+    include_prompt_in_result=False
 )
 print(outputs)
 ```
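The README's headline claim, a 2x-4x memory reduction from int8 inference, comes directly from storage width: int8 weights take a quarter of the bytes of float32 (half of float16). The sketch below is not CTranslate2's actual kernel, just a minimal NumPy illustration of symmetric per-tensor int8 quantization, assuming the common scheme of scaling by the maximum absolute weight so values map into [-127, 127].

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    # Symmetric per-tensor quantization: one scale maps floats into [-127, 127].
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate float32 weights from the int8 codes.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
q, scale = quantize_int8(w)

# int8 storage is 4x smaller than float32 (2x smaller than float16).
assert q.nbytes * 4 == w.nbytes

# Rounding error is bounded by half the scale step.
err = np.abs(dequantize(q, scale) - w).max()
print(f"max abs error: {err:.4f} (scale={scale:.4f})")
```

The `int8_float16` compute type from the compatibility note above combines both ideas: weights are stored as int8 and accumulation happens in float16 on CUDA devices.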