---
license: apache-2.0
datasets:
  - sst2
language:
  - en
metrics:
  - accuracy
library_name: transformers
pipeline_tag: text-classification
widget:
  - text: >-
      this film 's relationship to actual tension is the same as what
      christmas-tree flocking in a spray can is to actual snow : a poor -- if
      durable -- imitation .
  - text: director rob marshall went out gunning to make a great one .
---

# bert-base-uncased-finetuned-sst2-v2

"bert-base-uncased" finetuned on SST-2.

This model pertains to the "Try it out!" exercise in Section 4 of Chapter 3 of the Hugging Face [NLP Course](https://huggingface.co/learn/nlp-course/chapter3/4).

It was trained using a custom PyTorch loop without Hugging Face Accelerate.
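For illustration, here is a rough, self-contained sketch of such a custom PyTorch training loop. To keep it runnable anywhere, a stand-in linear classifier over random tensors replaces `AutoModelForSequenceClassification` and the tokenized SST-2 batches used in the actual exercise; the real code is in the notebook linked below.

```python
# Sketch of a custom PyTorch training loop without Accelerate.
# NOTE: the linear model and random data are stand-ins, not the exercise's
# actual BERT model and SST-2 DataLoader.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)
features = torch.randn(64, 16)        # stand-in for encoded inputs
labels = torch.randint(0, 2, (64,))   # binary SST-2-style labels
loader = DataLoader(TensorDataset(features, labels), batch_size=8, shuffle=True)

model = nn.Linear(16, 2)              # stand-in for the fine-tuned classifier
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

num_epochs = 3
num_training_steps = num_epochs * len(loader)
# Linear decay of the learning rate to zero over training
scheduler = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1.0, end_factor=0.0, total_iters=num_training_steps
)

model.train()
for epoch in range(num_epochs):
    for batch_features, batch_labels in loader:
        logits = model(batch_features)
        loss = nn.functional.cross_entropy(logits, batch_labels)
        loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```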

Code: https://github.com/sambitmukherjee/hf-nlp-course-exercises/blob/main/chapter3/section4.ipynb

Experiment tracking: https://wandb.ai/sadhaklal/bert-base-uncased-finetuned-sst2-v2

## Usage

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="sadhaklal/bert-base-uncased-finetuned-sst2-v2")
print(classifier("uneasy mishmash of styles and genres ."))
print(classifier("by the end of no such thing the audience , like beatrice , has a watchful affection for the monster ."))
```

## Metric

Accuracy on the `validation` split of SST-2: 0.9278