---
language:
- es
tags:
- spanish
- sentiment
datasets:
- muchocine
widget:
- "Increíble pelicula. ¡Altamente recomendado!"
---


# electricidad-base-muchocine-finetuned

This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the [muchocine](https://huggingface.co/datasets/muchocine) dataset for sentiment classification, predicting the *star_rating* of a review.
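
For reference, a fine-tuning setup along the following lines could reproduce this checkpoint. This is a minimal sketch, not the exact training recipe: the hyperparameters, the held-out split, and the `review_body`/`star_rating` column names of the muchocine dataset are assumptions.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "mrm8488/electricidad-base-discriminator"
tokenizer = AutoTokenizer.from_pretrained(base)

# Map class indices 0-4 to the star-rating labels "1".."5" used in the examples below.
id2label = {i: str(i + 1) for i in range(5)}
model = AutoModelForSequenceClassification.from_pretrained(
    base, num_labels=5, id2label=id2label,
    label2id={v: k for k, v in id2label.items()})

# muchocine ships a single split; hold out 20% for evaluation (assumed, not the original split).
ds = load_dataset("muchocine", split="train").train_test_split(test_size=0.2)

def preprocess(batch):
    enc = tokenizer(batch["review_body"], truncation=True, max_length=512)
    enc["labels"] = [rating - 1 for rating in batch["star_rating"]]  # 1-5 -> 0-4
    return enc

ds = ds.map(preprocess, batched=True, remove_columns=ds["train"].column_names)

args = TrainingArguments(
    output_dir="electricidad-base-muchocine-finetuned",
    per_device_train_batch_size=8,
    num_train_epochs=3,
    evaluation_strategy="epoch",
)

trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=ds["train"], eval_dataset=ds["test"])
trainer.train()
```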


### How to use
The model can be used directly with the Hugging Face `pipeline` for sentiment analysis.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Repository id assumed from this card's title; adjust if the Hub path differs.
model_id = "shahp7575/electricidad-base-muchocine-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
```

### Examples

```python
from transformers import pipeline
clf = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)

clf('¡Qué película tan fantástica! ¡Me alegro de haberlo visto!')
>>> [{'label': '5', 'score': 0.9156607389450073}]

clf("La historia y el casting fueron geniales.")
>>> [{'label': '4', 'score': 0.6666394472122192}]

clf("Me gustó pero podría ser mejor.")
>>> [{'label': '3', 'score': 0.7013391852378845}]

clf("dinero tirado en esta pelicula")
>>> [{'label': '2', 'score': 0.7564149498939514}]

```
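
Since the `label` string is itself the predicted star rating, it can be converted directly to an integer for downstream use. A minimal sketch, reusing the assumed repository id from above (the reviews are only illustrative):

```python
from transformers import pipeline

# Repository id assumed from this card's title; adjust if the Hub path differs.
clf = pipeline("sentiment-analysis",
               model="shahp7575/electricidad-base-muchocine-finetuned")

reviews = [
    "Una obra maestra del cine español.",  # illustrative inputs, not from the dataset
    "Aburrida y demasiado larga.",
]

# The pipeline accepts a list of strings and returns one prediction per review.
for review, pred in zip(reviews, clf(reviews)):
    stars = int(pred["label"])  # labels are the star ratings "1".."5"
    print(f"{stars} estrellas (score {pred['score']:.2f}): {review}")
```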