5roop committed
Commit 4b8f830 • 1 Parent(s): 3bb29b0

Update README.md

Files changed (1)
  1. README.md +23 -1
README.md CHANGED
@@ -96,4 +96,26 @@ Output:
  array([0.11633301, 3.63671875, 4.203125, 5.30859375]),
  array([0.11633301, 3.63671875, 4.203125, 5.30859375])
  )
- ```
+ ```
+ ## Large scale use
+
+ [Bojan](https://huggingface.co/Bojan) tested the example above on a large dataset. He reports that execution time can be improved by a factor of five with the use of `transformers`, as follows:
+
+ ```python
+ from transformers import AutoModelForSequenceClassification, TextClassificationPipeline, AutoTokenizer, AutoConfig
+
+ MODEL = "classla/xlm-r-parlasent"
+ tokenizer = AutoTokenizer.from_pretrained(MODEL)
+ config = AutoConfig.from_pretrained(MODEL)
+ model = AutoModelForSequenceClassification.from_pretrained(MODEL)
+
+ pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True,
+                                   task='sentiment_analysis', device=0, function_to_apply="none")
+ pipe([
+     "I fully disagree with this argument.",
+     "The ministers are entering the chamber.",
+     "Things can always be improved in the future.",
+     "These are great news."
+ ])
+ ```
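
For genuinely large corpora, the same pipeline can also be fed many sentences at once. The snippet below is not part of the commit above; it is a minimal sketch that assumes a CUDA GPU (`device=0`), an illustrative `sentences` list, and an arbitrary `batch_size` of 32, relying only on the standard `batch_size` argument of `transformers` pipelines.

```python
from transformers import AutoModelForSequenceClassification, TextClassificationPipeline, AutoTokenizer

MODEL = "classla/xlm-r-parlasent"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

# Same pipeline configuration as in the commit; device=0 assumes a CUDA GPU
# (use device=-1 to fall back to CPU).
pipe = TextClassificationPipeline(
    model=model,
    tokenizer=tokenizer,
    return_all_scores=True,
    device=0,
    function_to_apply="none",
)

# Hypothetical corpus standing in for a large dataset.
sentences = ["The ministers are entering the chamber."] * 10_000

# batch_size groups inputs into larger forward passes; 32 is an
# illustrative value, not a recommendation from the commit.
scores = pipe(sentences, batch_size=32)
print(scores[0])
```

Batching keeps the GPU busy across sentences, which is typically where most of the speed-up on large datasets comes from.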