ikeno-ada committed
Commit
d30c9c2
1 Parent(s): 46e1a91

Update README.md

Files changed (1): README.md (+3 −30)
README.md CHANGED
@@ -476,19 +476,19 @@ Find below some example scripts on how to use the model:
  
  ## Using the Pytorch model with `transformers`
  
- ### Running the model on a CPU or GPU
+ ### Running the model on a GPU
  
  <details>
  <summary> Click to expand </summary>
  
  First, install the Python packages that are required:
  
- `pip install transformers accelerate sentencepiece`
+ `pip install transformers accelerate sentencepiece bitsandbytes`
  
  ```python
  from transformers import T5ForConditionalGeneration, T5Tokenizer
  
- model_name = 'jbochi/madlad400-3b-mt'
+ model_name = 'ikeno-ada/madlad400-3b-mt-bitsandbytes-4bit'
  model = T5ForConditionalGeneration.from_pretrained(model_name, device_map="auto")
  tokenizer = T5Tokenizer.from_pretrained(model_name)
  
@@ -502,33 +502,6 @@ tokenizer.decode(outputs[0], skip_special_tokens=True)
  
  </details>
  
- ## Running the model with Candle
- 
- <details>
- <summary> Click to expand </summary>
- 
- Usage with [candle](https://github.com/huggingface/candle):
- 
- ```bash
- $ cargo run --example t5 --release -- \
- --model-id "jbochi/madlad400-3b-mt" \
- --prompt "<2de> How are you, my friend?" \
- --decode --temperature 0
- ```
- 
- We also provide a quantized model (1.65 GB vs the original 11.8 GB file):
- 
- ```
- cargo run --example quantized-t5 --release -- \
- --model-id "jbochi/madlad400-3b-mt" --weight-file "model-q4k.gguf" \
- --prompt "<2de> How are you, my friend?" \
- --temperature 0
- ...
- Wie geht es dir, mein Freund?
- ```
- 
- </details>
- 
  
  # Uses
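Note: the prompts in the diff (e.g. `"<2de> How are you, my friend?"`) show how MADLAD-400 selects its output language — a `<2xx>` target-language tag is prepended to the source text. A minimal sketch of that prompt convention; the helper function name is illustrative, not part of any API:

```python
# Build a MADLAD-400 translation prompt: the target language is chosen by a
# <2xx> tag prepended to the source text (e.g. <2de> for German).
# The helper name is hypothetical, used only to illustrate the convention.
def build_prompt(target_lang: str, text: str) -> str:
    return f"<2{target_lang}> {text}"

print(build_prompt("de", "How are you, my friend?"))  # <2de> How are you, my friend?
```

The resulting string is what gets tokenized and passed to `model.generate` in the README's Python example.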