---
language: en
thumbnail: >-
  https://github.com/borisdayma/huggingtweets/blob/master/img/logo_share.png?raw=true
tags:
- exbert
- huggingtweets
widget:
- text: "My dream is"
---
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with the demo!

![](https://hf-dinosaur.huggingface.co/exbert/button.png)
## How does it work?

The model uses the following pipeline: download the user's tweets, filter and preprocess them, then fine-tune a pre-trained GPT-2 on the result.

To understand how the model was developed, check the W&B report.
## Training data

The model was trained on @julien_c's tweets.
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3213 |
| Retweets | 874 |
| Short tweets | 224 |
| Tweets kept | 2115 |

Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
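The counts above come from a filtering pass over the downloaded tweets: retweets and very short tweets are dropped, and the rest are kept. A minimal sketch in plain Python, where the retweet marker and the short-tweet threshold are assumptions for illustration (the actual huggingtweets preprocessing may differ):

```python
def filter_tweets(tweets, min_words=3):
    """Split raw tweets into retweets, short tweets, and kept tweets.

    Assumed rules: a retweet starts with "RT @", and a "short" tweet
    has fewer than min_words words.
    """
    retweets = [t for t in tweets if t.startswith("RT @")]
    remaining = [t for t in tweets if not t.startswith("RT @")]
    short = [t for t in remaining if len(t.split()) < min_words]
    kept = [t for t in remaining if len(t.split()) >= min_words]
    return retweets, short, kept

retweets, short, kept = filter_tweets([
    "RT @someone: great thread!",
    "gm",
    "My dream is to ship open-source ML tools",
])
print(len(retweets), len(short), len(kept))  # 1 1 1
```

With the real data, the three buckets would sum back to the "Tweets downloaded" row (874 + 224 + 2115 = 3213).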
## Training procedure

The model is based on a pre-trained GPT-2 which is fine-tuned on @julien_c's tweets for 4 epochs.

Hyperparameters and metrics are recorded in the W&B training run.
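Fine-tuning a causal language model like GPT-2 on tweets typically means concatenating the kept tweets into a single training text, with each tweet terminated by GPT-2's end-of-text token so the model learns tweet boundaries. A minimal sketch of that corpus-building step (the helper name and exact joining scheme are illustrative assumptions, not the huggingtweets source):

```python
EOS = "<|endoftext|>"  # GPT-2's end-of-text token

def build_corpus(tweets):
    # Terminate each tweet with EOS so generation learns where tweets end.
    return "".join(t + EOS for t in tweets)

corpus = build_corpus(["first tweet", "second tweet"])
print(corpus)  # first tweet<|endoftext|>second tweet<|endoftext|>
```

The resulting text is then tokenized and fed to the standard GPT-2 fine-tuning loop, with hyperparameters logged to W&B as noted above.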
## Intended uses & limitations

### How to use

You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/julien_c')
generator("My dream is", max_length=50, num_return_sequences=5)
```
### Limitations and bias

The model suffers from the same limitations and bias as GPT-2.

In addition, the data present in the user's tweets further affects the text generated by the model.
## About

*Built by Boris Dayma*

For more details, visit the [project repository](https://github.com/borisdayma/huggingtweets).