---
license: apache-2.0
pipeline_tag: text2text-generation
language:
- en
library_name: transformers
tags:
- code
- keyword-generation
- english
- t5
---
# KeywordGen-v2 Model
KeywordGen-v2 is a T5-based model fine-tuned for keyword generation. Given an input text, the model returns relevant keywords.
## Model Description
This model, "KeywordGen-v2", is the second version of the "KeywordGen" series. It is fine-tuned from the T5 base model to generate keywords from text inputs, with a special focus on product reviews.
This model can provide useful insights by extracting key points or themes from product reviews. Each generated keyword is expected to be 2 to 8 words long, and the model performs best when the input is at least 2-3 sentences long.
## How to use
You can use this model directly with a `text2text-generation` pipeline. When calling the model, prefix your input with "Keyword: " for the best results.
Here's how to use this model in Python with the Hugging Face Transformers library:
### Single input
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
# Initialize the tokenizer and model
tokenizer = T5Tokenizer.from_pretrained("mrutyunjay-patil/keywordGen-v2")
model = T5ForConditionalGeneration.from_pretrained("mrutyunjay-patil/keywordGen-v2")
# Define your input sequence, prefixing with "Keyword: "
input_sequence = "Keyword: I purchased the new Android smartphone last week and I've been thoroughly impressed. The display is incredibly vibrant and sharp, and the battery life is surprisingly good, easily lasting a full day with heavy usage."
# Encode the input sequence
input_ids = tokenizer.encode(input_sequence, return_tensors="pt")
# Generate output
outputs = model.generate(input_ids)
output_sequence = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(output_sequence)
```
### Multiple inputs
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
# Initialize the tokenizer and model
tokenizer = T5Tokenizer.from_pretrained("mrutyunjay-patil/keywordGen-v2")
model = T5ForConditionalGeneration.from_pretrained("mrutyunjay-patil/keywordGen-v2")
# Define the prefix
task_prefix = "Keyword: "
# Define your list of input sequences
inputs = [
    "Absolutely love this tablet. It has a clear, sharp screen and runs apps smoothly without any hiccups.",
    "The headphones are fantastic with great sound quality, but the build quality could be better.",
    "Bought this smartwatch last week, and I'm thrilled with its performance. Battery life is impressive.",
    "This laptop exceeded my expectations. Excellent speed, plenty of storage, and light weight. Perfect for my needs.",
    "The camera quality on this phone is exceptional. It captures detailed and vibrant photos. However, battery life is not the best."
]
# Loop through each input and generate keywords
for sample in inputs:
    input_sequence = task_prefix + sample
    input_ids = tokenizer.encode(input_sequence, return_tensors="pt")
    outputs = model.generate(input_ids)
    output_sequence = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(sample, "\n --->", output_sequence)
```
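Alternatively, the `pipeline` API mentioned above wraps the tokenize/generate/decode steps into a single call. A minimal sketch (default generation settings are assumed adequate here):

```python
from transformers import pipeline

# Load the model through the text2text-generation pipeline
generator = pipeline("text2text-generation", model="mrutyunjay-patil/keywordGen-v2")

# Remember to prefix the input with "Keyword: "
review = "Keyword: The blender is powerful and easy to clean, though it is quite loud."
result = generator(review)

# The pipeline returns a list with one dict per input
print(result[0]["generated_text"])
```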
## Training
This model was trained on a custom dataset, starting from the T5 base checkpoint.
## Limitations and Future Work
As with any machine learning model, the outputs of this keyword generator depend on the data it was trained on. It is possible that the model might generate inappropriate or biased keywords if the input text contains such content. Future iterations of the model will aim to improve its robustness and fairness, and to minimize potential bias.