Text Generation · Transformers · Safetensors · mixtral · conversational · Inference Endpoints · text-generation-inference
jondurbin and nic-mc committed on
Commit 6b6e2a6
1 Parent(s): a569ef9

Update Massed Compute rental. New Coupon Code (#3)

- Update Massed Compute rental. New Coupon Code (7d40c95ab0506f66ee7e5222db047c8a065fc153)


Co-authored-by: Nic Baughman <nic-mc@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -105,13 +105,13 @@ Only the train splits were used (if a split was provided), and an additional pas
 
  [Massed Compute](https://massedcompute.com/?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) has created a Virtual Machine (VM) pre-loaded with TGI and Text Generation WebUI.
 
- 1) For this model rent the [Jon Durbin 4xA6000](https://shop.massedcompute.com/products/jon-durbin-4x-a6000?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) Virtual Machine
+ 1) For this model rent the [Jon Durbin 4xA6000](https://shop.massedcompute.com/products/jon-durbin-4x-a6000?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) Virtual Machine and use the code 'JonDurbin' for 50% off your rental
  2) After you start your rental you will receive an email with instructions on how to Login to the VM
  3) Once inside the VM, open the terminal and run `conda activate text-generation-inference`
  4) Then `cd Desktop/text-generation-inference/`
  5) Run `volume=$PWD/data`
- 6) Run`model=jondurbin/bagel-8x7b-v0.2`
- 7) `sudo docker run --gpus '"device=0,1"' --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.3 --model-id $model`
+ 6) Run `model=jondurbin/bagel-8x7b-v0.2`
+ 7) `sudo docker run --gpus '"device=0,1,2,3"' --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.3 --model-id $model`
  8) The model will take some time to load...
  9) Once loaded the model will be available on port 8080
 
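
Once step 9 above reports the model on port 8080, a quick sanity check from inside the VM might look like the sketch below. It assumes TGI's standard `/health` and `/generate` HTTP routes behind the `-p 8080:80` mapping from step 7; the prompt and generation parameters are placeholders, not anything specified in the README.

```bash
# Check readiness: the health route returns HTTP 200 once the weights have loaded.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/health

# Send a small test request to TGI's /generate route.
curl http://localhost:8080/generate \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"inputs": "Tell me about large language models.", "parameters": {"max_new_tokens": 128}}'
```

The curl commands hit the mapped host port directly, so they do not need the `sudo` used to start the container.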