mol2pro
You can download this model and run inference directly by following the instructions on GitHub: https://github.com/LDornfeld/mol2pro

The repo is currently private, so just ask me for access. It contains the encoder and decoder tokenizers and the model at the last training checkpoint (87K).

The model already includes the correct generation configuration, so you can disable the --setup_generationconfig flag for generation.