Commit 08362d0 by visheratin
1 Parent(s): 87c2778

Update README.md

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -41,3 +41,7 @@ hf_model = NLLBCLIPModel.from_pretrained("visheratin/nllb-clip-base")
 
 outputs = hf_model(input_ids = text_inputs.input_ids, attention_mask = text_inputs.attention_mask, pixel_values=image_inputs.pixel_values)
 ```
+
+## Acknowledgements
+
+I thank [Lambda Cloud](https://lambdalabs.com/) for providing compute resources to train the model.