Reggie committed
Commit 8de4019
Parent: c9c30a0

Update README.md

Files changed (1): README.md (+53 -1)
README.md CHANGED
@@ -1,3 +1,55 @@
  ---
- license: openrail
+ license: mit
+ language:
+ - en
+ widget:
+ - text: >-
+     A nervous passenger is about to book a flight ticket, and he asks the
+     airlines' ticket seller, 'I hope your planes are safe. Do they have a good
+     track record for safety?' The airline agent replies, 'Sir, I can guarantee
+     you, we've never had a plane that has crashed more than once.'
+   example_title: A joke
+ - text: >-
+     Let me, however, hasten to assure that I am the same Gandhi as I was in
+     1920. I have not changed in any fundamental respect. I attach the same
+     importance to nonviolence that I did then. If at all, my emphasis on it has
+     grown stronger. There is no real contradiction between the present
+     resolution and my previous writings and utterances.
+   example_title: Not a joke
+ tags:
+ - deberta
  ---
+
+ ### What is this?
+ This model has been developed to detect "narrative-style" jokes, stories and anecdotes (i.e. ones told as a story) in speeches, conversations, etc. It works best when the joke or anecdote is at least 40 words long. It is based on Facebook's [RoBERTa-MUPPET](https://huggingface.co/facebook/muppet-roberta-base).
+
+ The training dataset was a private collection of around 2,000 jokes. The model has not been trained or tested on one-liners, puns or Reddit-style wordplay jokes such as knock-knock or Q&A jokes.
+
+ See the examples in the inference widget or the How to use section for what constitutes a narrative-style joke.
+
+ For a model that is slightly less accurate (by 0.4%) but 65% faster at inference, see the [RoBERTa model](https://huggingface.co/Reggie/muppet-roberta-base-joke_detector).
+
+ ### Install these first
+ You'll need to `pip install transformers`, and possibly `sentencepiece` as well.
+
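+ ### Quick check with the pipeline API
+ If you only want a quick yes/no answer, the high-level `pipeline` API should also work. This is a minimal sketch; the `LABEL_0`/`LABEL_1` names assumed below are just the library defaults, so check `model.config.id2label` if the output looks off.
+
+ ```python
+ from transformers import pipeline
+
+ # Wrap the model in a text-classification pipeline
+ classifier = pipeline("text-classification", model="Reggie/muppet-roberta-base-joke_detector")
+
+ # Any narrative passage; 40+ words works best
+ text = "A nervous passenger is about to book a flight ticket, and he asks the airlines' ticket seller whether their planes have a good safety record..."
+ result = classifier(text, truncation=True)[0]  # a dict with 'label' and 'score'
+
+ # Assumption: class 1 is "joke", as in the example in the next section
+ is_joke = result["label"].endswith("1")
+ print(is_joke, result["score"])
+ ```
+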
+ ### How to use
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+ import torch
+
+ # Run on the GPU if one is available, otherwise on the CPU
+ device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
+ model_name = 'Reggie/muppet-roberta-base-joke_detector'
+ max_seq_len = 510
+
+ tokenizer = AutoTokenizer.from_pretrained(model_name, model_max_length=max_seq_len)
+ model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
+
+ premise = """A nervous passenger is about to book a flight ticket, and he asks the airlines' ticket seller, "I hope your planes are safe. Do they have a good track record for safety?" The airline agent replies, "Sir, I can guarantee you, we've never had a plane that has crashed more than once." """
+ hypothesis = ""
+
+ # Tokenize the text and run it through the classifier
+ inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
+ with torch.no_grad():
+     output = model(inputs["input_ids"].to(device))
+
+ # Softmax over the two classes; index 1 is "joke", index 0 is "not a joke"
+ prediction = torch.softmax(output["logits"][0], -1).tolist()
+ is_joke = prediction[1] > prediction[0]
+
+ print(is_joke)
+ ```
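+
+ ### Scanning a longer transcript
+ The example above classifies one self-contained passage. For a longer speech or conversation transcript, one simple approach is to split the text into overlapping word-level chunks and score each chunk separately. This is only a sketch: the chunk size, the overlap and the "class 1 = joke" mapping are assumptions, and you should tune `words_per_chunk` so each chunk comfortably covers the ~40-word minimum mentioned above.
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ model_name = "Reggie/muppet-roberta-base-joke_detector"
+ tokenizer = AutoTokenizer.from_pretrained(model_name, model_max_length=510)
+ model = AutoModelForSequenceClassification.from_pretrained(model_name)
+ model.eval()
+
+ def find_joke_chunks(transcript, words_per_chunk=120, overlap=40):
+     """Return (chunk, joke_probability) pairs for chunks scored as jokes."""
+     words = transcript.split()
+     step = words_per_chunk - overlap
+     flagged = []
+     for i in range(0, max(len(words) - overlap, 1), step):
+         chunk = " ".join(words[i:i + words_per_chunk])
+         inputs = tokenizer(chunk, truncation=True, return_tensors="pt")
+         with torch.no_grad():
+             probs = torch.softmax(model(**inputs).logits[0], -1)
+         if probs[1] > probs[0]:  # assumed: index 1 = joke
+             flagged.append((chunk, probs[1].item()))
+     return flagged
+
+ print(find_joke_chunks("A long transcript of a talk or conversation goes here ..."))
+ ```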