simonhughes22 committed on
Commit d5ff4ed
1 Parent(s): fe02e8c

Update README.md

Files changed (1)
  1. README.md +39 -5
README.md CHANGED
@@ -8,14 +8,32 @@ This model was trained using [SentenceTransformers](https://sbert.net) [Cross-En
  The model was trained on the NLI data and a variety of datasets evaluating summarization accuracy for factual consistency, including [FEVER](https://huggingface.co/datasets/fever), [Vitamin C](https://huggingface.co/datasets/tals/vitaminc) and [PAWS](https://huggingface.co/datasets/paws).

  ## Performance
- TODO
  ## Usage

- Pre-trained models can be used like this:
  ```python
  from sentence_transformers import CrossEncoder
  model = CrossEncoder('vectara/hallucination_evaluation_model')
- scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
  ```

  ## Usage with Transformers AutoModel
@@ -27,9 +45,25 @@ import torch
  model = AutoModelForSequenceClassification.from_pretrained('vectara/hallucination_evaluation_model')
  tokenizer = AutoTokenizer.from_pretrained('vectara/hallucination_evaluation_model')

- features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")

  model.eval()
  with torch.no_grad():
-     scores = model(**features)
  ```
 
  The model was trained on the NLI data and a variety of datasets evaluating summarization accuracy for factual consistency, including [FEVER](https://huggingface.co/datasets/fever), [Vitamin C](https://huggingface.co/datasets/tals/vitaminc) and [PAWS](https://huggingface.co/datasets/paws).

  ## Performance
+
+ TRUE Dataset (Minus Vitamin C, FEVER and PAWS) - 0.872 AUC Score
+ SummaC Benchmark (Test) - 0.764 Balanced Accuracy
+ SummaC Benchmark (Test) - 0.831 AUC Score
+ [AnyScale Ranking Test](https://www.anyscale.com/blog/llama-2-is-about-as-factually-accurate-as-gpt-4-for-summaries-and-is-30x-cheaper) - 86.6 % Accuracy
+
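For context on what these figures measure, here is a minimal sketch (not part of the commit) of how an AUC score and a balanced accuracy are typically computed from the model's consistency scores; the labels and scores below are illustrative placeholders, not values from the benchmarks listed above:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, balanced_accuracy_score

# Illustrative ground-truth labels (1 = factually consistent, 0 = hallucinated)
# and illustrative model scores -- placeholders, not benchmark data.
labels = np.array([1, 0, 1, 0, 1, 0])
scores = np.array([0.61, 0.0005, 0.996, 0.0002, 0.996, 0.0014])

auc = roc_auc_score(labels, scores)  # threshold-free ranking metric
balanced_acc = balanced_accuracy_score(labels, (scores >= 0.5).astype(int))  # per-class accuracy at a 0.5 cutoff
print(f"AUC: {auc:.3f}  Balanced accuracy: {balanced_acc:.3f}")
```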
  ## Usage

+ The model can be used like this:
+
  ```python
  from sentence_transformers import CrossEncoder
  model = CrossEncoder('vectara/hallucination_evaluation_model')
+ model.predict([
+     ["A man walks into a bar and buys a drink", "A bloke swigs alcohol at a pub"],
+     ["A person on a horse jumps over a broken down airplane.", "A person is at a diner, ordering an omelette."],
+     ["A person on a horse jumps over a broken down airplane.", "A person is outdoors, on a horse."],
+     ["A boy is jumping on skateboard in the middle of a red bridge.", "The boy skates down the sidewalk on a blue bridge"],
+     ["A man with blond-hair, and a brown shirt drinking out of a public water fountain.", "A blond drinking water in public."],
+     ["A man with blond-hair, and a brown shirt drinking out of a public water fountain.", "A blond man wearing a brown shirt is reading a book."],
+ ])
+ ```
+
+ This returns a numpy array:
+ ```
+ array([6.1051625e-01, 4.7493601e-04, 9.9639291e-01, 2.1221593e-04, 9.9599433e-01, 1.4126947e-03], dtype=float32)
  ```

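Judging from the example output in the added snippet, scores near 1 indicate that the second sentence is supported by the first, while scores near 0 indicate a likely hallucination. A minimal sketch of pairing the scores back with their inputs, using an illustrative 0.5 cutoff that the card itself does not prescribe:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('vectara/hallucination_evaluation_model')
pairs = [
    ["A man walks into a bar and buys a drink", "A bloke swigs alcohol at a pub"],
    ["A person on a horse jumps over a broken down airplane.", "A person is at a diner, ordering an omelette."],
]
scores = model.predict(pairs)

# The 0.5 threshold below is an assumption for illustration only.
for (source, summary), score in zip(pairs, scores):
    verdict = "consistent" if score >= 0.5 else "likely hallucinated"
    print(f"{score:.3f}  {verdict}: {summary!r}")
```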
  ## Usage with Transformers AutoModel
 
  model = AutoModelForSequenceClassification.from_pretrained('vectara/hallucination_evaluation_model')
  tokenizer = AutoTokenizer.from_pretrained('vectara/hallucination_evaluation_model')

+ pairs = [
+     ["A man walks into a bar and buys a drink", "A bloke swigs alcohol at a pub"],
+     ["A person on a horse jumps over a broken down airplane.", "A person is at a diner, ordering an omelette."],
+     ["A person on a horse jumps over a broken down airplane.", "A person is outdoors, on a horse."],
+     ["A boy is jumping on skateboard in the middle of a red bridge.", "The boy skates down the sidewalk on a blue bridge"],
+     ["A man with blond-hair, and a brown shirt drinking out of a public water fountain.", "A blond drinking water in public."],
+     ["A man with blond-hair, and a brown shirt drinking out of a public water fountain.", "A blond man wearing a brown shirt is reading a book."],
+ ]
+
+ inputs = tokenizer.batch_encode_plus(pairs, return_tensors='pt', padding=True)

  model.eval()
  with torch.no_grad():
+     outputs = model(**inputs)
+     logits = outputs.logits.cpu().detach().numpy()
+     scores = 1 / (1 + np.exp(-logits)).flatten()
+ ```
+
+ This returns a numpy array:
+ ```
+ array([6.1051559e-01, 4.7493709e-04, 9.9639291e-01, 2.1221573e-04, 9.9599433e-01, 1.4127002e-03], dtype=float32)
  ```
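Note that the added snippet uses np without importing it, so it additionally needs `import numpy as np`. As an alternative, here is a compact sketch that keeps everything in torch via torch.sigmoid; it assumes, as the snippet above does, that the model emits a single logit per pair whose sigmoid is the factual-consistency score:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained('vectara/hallucination_evaluation_model')
tokenizer = AutoTokenizer.from_pretrained('vectara/hallucination_evaluation_model')

pairs = [
    ["A man walks into a bar and buys a drink", "A bloke swigs alcohol at a pub"],
    ["A person on a horse jumps over a broken down airplane.", "A person is outdoors, on a horse."],
]
inputs = tokenizer.batch_encode_plus(pairs, return_tensors='pt', padding=True)

model.eval()
with torch.no_grad():
    # One logit per pair; sigmoid maps it to a 0-1 consistency score.
    scores = torch.sigmoid(model(**inputs).logits).flatten()
print(scores)
```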