saikatkumardey committed
Commit bc214c2
1 Parent(s): 2645e4c

Update README.md

Files changed (1)
  1. README.md +12 -2
README.md CHANGED
@@ -10,13 +10,23 @@ It was quantized using [CTranslate2](https://opennmt.net/CTranslate2/guides/tran
  ct2-transformers-converter --model MBZUAI/LaMini-Flan-T5-783M --output_dir lamini-flan-t5-783m-int8_float16 --quantization int8_float16
  ```
 
- ## Example
+ # How to use it?
+
+
+ ## Clone the model
+
+ ```
+ git lfs install
+ git clone git@hf.co:saikatkumardey/lamini-flan-t5-783m_int8_float16
+ ```
+
+ ## Code example
 
  ```python
  import ctranslate2
  import transformers
 
- # download the model files from 🤗 into model_dir
+
  model_dir = "lamini-flan-t5-783m_int8_float16"
  translator = ctranslate2.Translator(
  model_dir, compute_type="auto", inter_threads=4, intra_threads=4
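
The Python example added by this commit is cut off at the edge of the diff hunk, just after the `ctranslate2.Translator` call. For context, below is a minimal end-to-end sketch of how a CTranslate2-quantized T5 model of this kind is typically run; the tokenizer source (`MBZUAI/LaMini-Flan-T5-783M`), the prompt, and the `max_decoding_length` value are illustrative assumptions, not contents of the commit.

```python
# Minimal sketch (not part of this commit): running the int8_float16 model with CTranslate2.
import ctranslate2
import transformers

model_dir = "lamini-flan-t5-783m_int8_float16"
translator = ctranslate2.Translator(
    model_dir, compute_type="auto", inter_threads=4, intra_threads=4
)

# Assumption: the tokenizer is loaded from the original MBZUAI repo,
# since the quantized directory contains only the converted model files.
tokenizer = transformers.AutoTokenizer.from_pretrained("MBZUAI/LaMini-Flan-T5-783M")

prompt = "What is the capital of France?"

# CTranslate2 consumes token strings, so encode to ids and convert back to tokens.
input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = translator.translate_batch([input_tokens], max_decoding_length=256)
output_tokens = results[0].hypotheses[0]

# Map the generated tokens back to text.
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(output_tokens), skip_special_tokens=True))
```

The encode/convert steps bracket `translate_batch` because CTranslate2 works with token strings rather than token ids; the round trip through the Hugging Face tokenizer handles both directions.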