Inference Endpoints Performance #2

by philschmid - opened

Hey, Philipp here,

Congrats on the new model release, it looks awesome! But I'm curious how you produced the benchmark numbers for Inference Endpoints, since the model seems to need `trust_remote_code` and there is no handler.py to deploy it.

Hey Philipp!
Apologies for the oversight. The baseline numbers in the README refer to benchmarking without a managed solution - just HF + PyTorch, not HF Inference Endpoints. We've updated the README to clarify. Thanks for pointing it out!
Also, a heartfelt thank you for all your open-source contributions!
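For context, a "just HF + PyTorch" baseline like the one described above is usually nothing more than timing `model.generate` in a loop. The sketch below is a minimal, hypothetical timing harness; the `benchmark` helper and its parameters are illustrative names, not anything from this thread or the README, and the commented-out loading line only shows the typical `trust_remote_code=True` pattern for models that ship custom code.

```python
import time
import statistics


def benchmark(fn, warmup=3, iters=10):
    """Time a callable over several runs, after a few warmup calls.

    Returns (mean_seconds, stdev_seconds). For a model, `fn` would
    wrap a single generate call, e.g.:
    # model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
    # fn = lambda: model.generate(**inputs, max_new_tokens=128)
    """
    for _ in range(warmup):
        fn()  # warmup runs are discarded (caches, lazy init, etc.)
    times = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return statistics.mean(times), statistics.stdev(times)
```

A managed solution like Inference Endpoints adds request handling and network overhead on top of this raw forward-pass latency, which is why the two kinds of numbers are not directly comparable.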

itay-levy changed discussion status to closed
