|
---
license: mit
datasets:
- tatsu-lab/alpaca
tags:
- generated_from_trainer
- text2text-generation
model-index:
- name: T5R-base
  results: []
pipeline_tag: text2text-generation
language:
- en
widget:
- text: |
    Instruction: X
    Output: Adolf Hitler (German: [ˈadɔlf ˈhɪtlɐ] (listen); 20 April 1889 – 30 April 1945) was an Austrian-born German politician who was the dictator of Germany from 1933 until his suicide in 1945. He rose to power as the leader of the Nazi Party,[a] becoming the chancellor in 1933 and then taking the title of Führer und Reichskanzler in 1934.[b] During his dictatorship, he initiated World War II in Europe by invading Poland on 1 September 1939. He was closely involved in military operations throughout the war and was central to the perpetration of the Holocaust: the genocide of about six million Jews and millions of other victims.
    X:
  example_title: Example 1
- text: |
    Instruction: X
    Output: 1- Base your meals on higher fibre starchy carbohydrates. 2- Eat lots of fruit and veg. 3- Eat more fish, including a portion of oily fish.
    What kind of instruction could this be the answer to?
    X:
  example_title: Example 2
---
|
|
|
# T5-Reverse (T5R) |
|
|
|
This model can generate prompts (instructions) for any text! |
|
|
|
This model is an instruction-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the [Alpaca dataset](https://huggingface.co/datasets/tatsu-lab/alpaca), but in **reverse format**: instead of mapping an instruction to an output, it learns to recover the instruction from a given output.
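
The exact preprocessing used for training is not documented on this card, so the snippet below is only an illustrative sketch of how such reversed pairs could be built: it assumes the standard `instruction`/`output` columns of the Alpaca dataset and the prompt template used in the usage example below.

```python
# Illustrative sketch (assumed preprocessing, not the published training script):
# swap the roles of the Alpaca fields so the model reads an output and
# is trained to produce the instruction.
from datasets import load_dataset

alpaca = load_dataset("tatsu-lab/alpaca", split="train")

def reverse_example(example):
    # Source: the model sees the output and is asked for the instruction.
    source = (
        "Instruction: X\n"
        f"Output: {example['output']}\n"
        "What kind of instruction could this be the answer to?\n"
        "X:\n"
    )
    # Target: the original Alpaca instruction becomes the label.
    target = f"Instruction: {example['instruction']}"
    return {"source": source, "target": target}

reversed_pairs = alpaca.map(reverse_example)
print(reversed_pairs[0]["source"])
print(reversed_pairs[0]["target"])
```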
|
|
|
## How to Use the Model |
|
|
|
You can load T5-Reverse (T5R) with the `transformers` library and use it to generate a prompt (instruction) for a given text. Here's an example:
|
|
|
```python
>>> # Load the model and tokenizer via the Hugging Face pipeline
>>> from transformers import pipeline
>>> inference = pipeline("text2text-generation", model="kargaranamir/T5R-base")

>>> # Example output for which we want to recover an instruction
>>> sample = '''
... Instruction: X
... Output: 1- Base your meals on higher fibre starchy carbohydrates. 2- Eat lots of fruit and veg. 3- Eat more fish, including a portion of oily fish.
... What kind of instruction could this be the answer to?
... X:
... '''

>>> # Generate and print the predicted instruction
>>> res = inference(sample)
>>> print(res)
[{'generated_text': 'Instruction: Generate three recommendations for a healthy diet.'}]
```
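
The pipeline also forwards standard generation arguments, so you can, for example, use beam search, cap the output length, or request several candidate instructions. The parameter values below are only illustrative:

```python
>>> # Optional: tune generation, e.g. beam search with two candidate instructions
>>> res = inference(sample, max_length=64, num_beams=4, num_return_sequences=2)
```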
|
|
|
|
|
## Citation |
|
|
|
If you find this model or approach useful, please cite it by linking back to this Hugging Face model page.