Apply for community grant: Academic project
Hi, dear HF team. I'm the author of the paper "GENIUS: Sketch-based Language Model Pre-training via Extreme and Selective Masking for Text Generation and Augmentation" (https://arxiv.org/abs/2211.10330v1). We want to set up a demo to showcase our model to researchers.
The GENIUS model generates text from a sketch (key information consisting of textual spans, phrases, or words) and outperforms a range of strong baselines. We believe this research can inspire future work on conditional text generation. A minimal usage example is shown below.
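For context, here is a minimal usage sketch, assuming the model runs with the standard `text2text-generation` pipeline; the exact sketch string and generation parameters are only illustrative:

```python
from transformers import pipeline

# Load the GENIUS model; it reconstructs a full passage from a sketch of key spans.
genius = pipeline("text2text-generation", model="beyond/genius-large")

# A sketch: key phrases separated by <mask> tokens that mark the missing text.
sketch = "<mask> machine learning <mask> my research interest <mask> data science <mask>"

# Generate a passage conditioned on the sketch (parameter values are illustrative).
outputs = genius(sketch, num_beams=3, do_sample=True, max_length=200)
print(outputs[0]["generated_text"])
```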
Currently I've set up a Space, but I found it too slow. One thing I can't figure out is why inference in the Space (https://huggingface.co/spaces/beyond/genius, about 15-20s) is much slower than the Inference API widget on the model card (https://huggingface.co/beyond/genius-large, about 3s). I don't necessarily need a GPU to reach that speed; I just want the Space inference to be as fast as the model card's Inference API.
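For reference, a minimal `app.py` sketch of how the Space is structured (assuming a Gradio app and the standard pipeline; names and parameters here are only illustrative), loading the model once at startup so each request only pays the generation cost:

```python
import gradio as gr
from transformers import pipeline

# Load the pipeline once when the Space starts, outside the request handler,
# so repeated requests do not pay the model-loading cost again.
genius = pipeline("text2text-generation", model="beyond/genius-large")

def generate(sketch: str) -> str:
    # Turn a sketch (key spans joined by <mask> tokens) into a full passage.
    out = genius(sketch, num_beams=3, do_sample=True, max_length=200)
    return out[0]["generated_text"]

gr.Interface(
    fn=generate,
    inputs=gr.Textbox(lines=3, label="Sketch"),
    outputs=gr.Textbox(label="Generated text"),
).launch()
```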
Many thanks in advance for any help you can offer!
Assigned CPU upgrade as discussed.
Thanks Ahsen!