
Roberta2Roberta_L-24_discofuse EncoderDecoder model

The model was introduced in the paper Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan, and Aliaksei Severyn, and first released in the authors' repository.

The model is an encoder-decoder model whose encoder and decoder were both initialized from roberta-large checkpoints, and which was then fine-tuned for sentence fusion on the DiscoFuse dataset.
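For illustration, here is a minimal sketch of how such a warm-started encoder-decoder can be assembled with the transformers EncoderDecoderModel API. This shows the initialization step only; it is not the exact fine-tuning recipe used for this checkpoint.

from transformers import EncoderDecoderModel, AutoTokenizer

# Warm-start: encoder and decoder weights are both copied from roberta-large.
# The decoder's cross-attention weights are newly initialized and are only
# learned during fine-tuning (here, on sentence fusion).
tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-large", "roberta-large")

# The decoder needs to know which tokens start and pad generated sequences.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id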

Disclaimer: The model card has been written by the Hugging Face team.

How to use

You can use this model for sentence fusion, for example:

IMPORTANT: The model was not trained on the " (double quotation mark) character, so before tokenizing the text it is advised to replace all " (double quotation marks) with a ` (single backtick).
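This preprocessing is a plain string replacement; the helper name below is just for the example.

def preprocess(text: str) -> str:
    # The model never saw the " character during training,
    # so map it to a backtick before tokenizing.
    return text.replace('"', '`')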

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_discofuse")

# Two sentences to be fused into a single coherent passage.
discofuse = """As a run-blocker, Zeitler moves relatively well. Zeitler often struggles at the point of contact in space."""

# Encode the input, generate the fused sequence, and decode it back to text.
input_ids = tokenizer(discofuse, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# As a run-blocker, Zeitler moves relatively well. However, Zeitler often struggles at the point of contact in space.
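To fuse several sentence pairs at once, batched generation works the same way. The example sentences and beam setting below are assumptions for illustration, not values from this model card.

# Batched fusion: pad inputs to a common length and pass the attention mask.
examples = [
    "The store is closed. The owner is on vacation.",
    "She studied hard. She failed the exam.",
]
batch = tokenizer(examples, padding=True, return_tensors="pt")
output_ids = model.generate(
    input_ids=batch.input_ids,
    attention_mask=batch.attention_mask,
    num_beams=4,  # beam search may give cleaner fusions than greedy decoding
)
for ids in output_ids:
    print(tokenizer.decode(ids, skip_special_tokens=True))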