---
license: other
metrics:
- character
library_name: transformers
tags:
- art
language:
- en
pipeline_tag: conversational
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
It's the Portal Space Core.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** ELMatero
- **Shared by:** ELMatero
- **Model type:** conversational
- **Language(s) (NLP):** English
- **License:** Other
- **Finetuned from model:** DialoGPT-small
## Uses
The Hosted Inference API breaks this model: I haven't found a way to limit its response length, so `max_length` is hard-capped at 512 in the `generation_config.json` file. For local use, just change that value back to 1024 and you're good.
If someone knows a better fix, please send a pull request or suggest an edit!
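A minimal sketch of restoring the cap in a local copy of the repo (the file path is an assumption; point it at wherever you downloaded the model):

```python
import json
from pathlib import Path

# Hypothetical path: the generation_config.json inside your local model copy.
cfg_path = Path("generation_config.json")

# Load the shipped config; fall back to the documented 512 cap if absent.
cfg = json.loads(cfg_path.read_text()) if cfg_path.exists() else {"max_length": 512}
cfg["max_length"] = 1024  # raise the cap from 512 back to DialoGPT's 1024
print(json.dumps(cfg, indent=2))
```

Write the modified `cfg` back to `cfg_path` to make the change permanent.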
### Direct Use
Just use it like you would usually use DialoGPT-small.
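For reference, the standard DialoGPT single-turn pattern looks like the sketch below. It uses the base `microsoft/DialoGPT-small` checkpoint as a stand-in; substitute this repo's model id to chat with the Space Core.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in checkpoint — replace with this repo's model id.
name = "microsoft/DialoGPT-small"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Encode one user turn, terminated by the EOS token as DialoGPT expects.
input_ids = tokenizer.encode("Hello, who are you?" + tokenizer.eos_token,
                             return_tensors="pt")

# Generate a reply; max_length=1024 matches the cap discussed above.
output_ids = model.generate(input_ids, max_length=1024,
                            pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens (the bot's turn).
reply = tokenizer.decode(output_ids[:, input_ids.shape[-1]:][0],
                         skip_special_tokens=True)
print(reply)
```

For multi-turn chat, concatenate each new user turn onto the running `output_ids` history before generating again.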