simonhughes22 committed
Commit fe02e8c
1 Parent(s): 9afe510

Update README.md

Files changed (1)
  1. README.md +4 -11
README.md CHANGED
@@ -14,12 +14,8 @@ TODO
 Pre-trained models can be used like this:
 ```python
 from sentence_transformers import CrossEncoder
-model = CrossEncoder('cross-encoder/nli-deberta-v3-large')
+model = CrossEncoder('vectara/hallucination_evaluation_model')
 scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
-
-#Convert scores to labels
-label_mapping = ['contradiction', 'entailment', 'neutral']
-labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
 ```
 
 ## Usage with Transformers AutoModel
@@ -28,15 +24,12 @@ You can use the model also directly with Transformers library (without SentenceTransformers)
 from transformers import AutoTokenizer, AutoModelForSequenceClassification
 import torch
 
-model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-large')
-tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-large')
+model = AutoModelForSequenceClassification.from_pretrained('vectara/hallucination_evaluation_model')
+tokenizer = AutoTokenizer.from_pretrained('vectara/hallucination_evaluation_model')
 
 features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
 
 model.eval()
 with torch.no_grad():
-    scores = model(**features).logits
-    label_mapping = ['contradiction', 'entailment', 'neutral']
-    labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
-    print(labels)
+    scores = model(**features)
 ```
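The updated snippets load `vectara/hallucination_evaluation_model` but, with the old NLI label mapping removed, no longer show how to read the output. A minimal sketch of one way to interpret the CrossEncoder scores, assuming the model returns a single factual-consistency score per (premise, hypothesis) pair in [0, 1] with higher meaning better supported; the 0.5 cut-off is an illustrative choice, not something the diff specifies:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('vectara/hallucination_evaluation_model')

pairs = [
    ('A man is eating pizza', 'A man eats something'),
    ('A black race car starts up in front of a crowd of people.',
     'A man is driving down a lonely road.'),
]

# Assumption: predict() returns one consistency score per pair, in [0, 1].
scores = model.predict(pairs)

THRESHOLD = 0.5  # illustrative cut-off, tune for your own data
for (premise, hypothesis), score in zip(pairs, scores):
    verdict = 'consistent' if score >= THRESHOLD else 'possible hallucination'
    print(f'{score:.3f}  {verdict}: {hypothesis}')
```

For the Transformers path, `model(**features)` returns a `SequenceClassifierOutput` rather than a plain tensor, so the raw scores sit in its `.logits` attribute. A sketch under the assumption (not stated in the diff) that the classification head emits a single logit trained with a sigmoid objective:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained('vectara/hallucination_evaluation_model')
tokenizer = AutoTokenizer.from_pretrained('vectara/hallucination_evaluation_model')

features = tokenizer(
    ['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'],
    ['A man eats something', 'A man is driving down a lonely road.'],
    padding=True, truncation=True, return_tensors='pt')

model.eval()
with torch.no_grad():
    outputs = model(**features)   # SequenceClassifierOutput
    logits = outputs.logits       # shape (batch, num_labels)
    # Assumption: num_labels == 1, so squeeze and map through a sigmoid
    # to get per-pair consistency probabilities in [0, 1].
    probs = torch.sigmoid(logits.squeeze(-1))
    print(probs)
```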