Tags: Text Generation · Transformers · Safetensors · llama · conversational · Inference Endpoints · text-generation-inference
Commit 78a34ff by jondurbin (1 parent: d2a2a51)

Update README.md

Files changed (1): README.md (+16 -10)
README.md CHANGED
@@ -757,19 +757,25 @@ print(tokenizer.apply_chat_template(chat, tokenize=False))
 
  ## Renting instances to run the model
 
- ### MassedCompute
 
  [Massed Compute](https://massedcompute.com/?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) has created a Virtual Machine (VM) pre-loaded with TGI and Text Generation WebUI.
 
- 1) For this model rent the [Jon Durbin 2xA6000](https://shop.massedcompute.com/products/jon-durbin-2x-a6000?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) Virtual Machine use the code 'JonDurbin' for 50% your rental
- 2) After you start your rental you will receive an email with instructions on how to Login to the VM
- 3) Once inside the VM, open the terminal and run `conda activate text-generation-inference`
- 4) Then `cd Desktop/text-generation-inference/`
- 5) Run `volume=$PWD/data`
- 6) Run `model=jondurbin/bagel-20b-v04-llama`
- 7) `sudo docker run --gpus '"device=0,1"' --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.3 --model-id $model`
- 8) The model will take some time to load...
- 9) Once loaded the model will be available on port 8080
 
  Sample command within the VM
  ```
 
 
  ## Renting instances to run the model
 
+ ### Massed Compute Virtual Machine
 
  [Massed Compute](https://massedcompute.com/?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) has created a Virtual Machine (VM) pre-loaded with TGI and Text Generation WebUI.
 
+ 1) For this model, [create an account](https://bit.ly/jon-durbin) with Massed Compute. When renting a Virtual Machine, use the code 'JonDurbin' for 50% off your rental.
+ 2) After you have created your account, update your billing information and navigate to the deploy page.
+ 3) Select the following:
+    - GPU Type: A6000
+    - GPU Quantity: 1
+    - Category: Creator
+    - Image: Jon Durbin
+    - Coupon Code: JonDurbin
+ 4) Deploy the VM!
+ 5) Navigate to 'Running Instances' to retrieve instructions to log in to the VM.
+ 6) Once inside the VM, open the terminal and run `volume=$PWD/data`
+ 7) Run `model=jondurbin/bagel-20b-v04-llama`
+ 8) Run `sudo docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.3 --model-id $model`
+ 9) The model will take some time to load...
+ 10) Once loaded, the model will be available on port 8080.
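Once the model is serving on port 8080 (step 10), it can be queried over TGI's HTTP API. The sketch below is a minimal request builder using only the Python standard library: the `/generate` path and the `inputs`/`parameters` payload shape follow TGI's documented API, while the `localhost:8080` host assumes the Docker port mapping from step 8.

```python
import json
from urllib import request


def build_generate_request(prompt, max_new_tokens=128, host="http://localhost:8080"):
    """Build a POST request for TGI's /generate endpoint."""
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens},
    }
    return request.Request(
        url=f"{host}/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_generate_request("Hello, Bagel!")
# Uncomment inside the VM once the model has finished loading:
# print(request.urlopen(req).read().decode("utf-8"))
```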
 
  Sample command within the VM
  ```