Transformers
English
ctranslate2
int8
float16
code
Inference Endpoints
michaelfeil committed on
Commit 71eae22
1 Parent(s): 6f8e823

Upload HuggingFaceH4/starchat-alpha ctranslate fp16 weights

Files changed (1): README.md (+4 −3)
README.md CHANGED
@@ -17,9 +17,9 @@ Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on
 
 quantized version of [HuggingFaceH4/starchat-alpha](https://huggingface.co/HuggingFaceH4/starchat-alpha)
 ```bash
-pip install hf-hub-ctranslate2>=2.0.8
+pip install hf-hub-ctranslate2>=2.0.8 ctranslate2>=3.14.0
 ```
-Converted on 2023-05-30 using
+Converted on 2023-05-31 using
 ```
 ct2-transformers-converter --model HuggingFaceH4/starchat-alpha --output_dir /home/michael/tmp-ct2fast-starchat-alpha --force --copy_files merges.txt all_results.json training_args.bin tokenizer.json README.md dialogue_template.json tokenizer_config.json eval_results.json vocab.json TRAINER_README.md train_results.json generation_config.json trainer_state.json special_tokens_map.json added_tokens.json requirements.txt .gitattributes --quantization float16 --trust_remote_code
 ```
@@ -43,7 +43,8 @@ model = GeneratorCT2fromHfHub(
 )
 outputs = model.generate(
 text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
-max_length=64
+max_length=64,
+include_prompt_in_result=False
 )
 print(outputs)
 ```
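The second hunk adds `include_prompt_in_result=False` to the `generate` call, so the wrapper returns only the generated continuation rather than the prompt echoed back plus the continuation. A minimal pure-Python sketch of that behavior, assuming the prompt is echoed verbatim at the start of the model output (the `strip_prompt` helper is hypothetical, for illustration only, not part of hf-hub-ctranslate2):

```python
def strip_prompt(prompt: str, full_output: str, include_prompt_in_result: bool) -> str:
    """Illustrate what include_prompt_in_result=False means:
    the prompt text is dropped from the returned string."""
    if include_prompt_in_result:
        return full_output
    # Drop the leading prompt when the model echoed it back verbatim.
    if full_output.startswith(prompt):
        return full_output[len(prompt):]
    return full_output

prompt = "User: How are you doing? Bot:"
full = prompt + " I am doing great, thanks!"
print(strip_prompt(prompt, full, include_prompt_in_result=False))
# prints " I am doing great, thanks!" (continuation only, prompt removed)
```

With `include_prompt_in_result=True` (or the parameter omitted, as in the old `max_length=64` call), the same input would come back with the prompt still attached.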