---
widget:
- text: "thank you for the help :)"
  example_title: "Positive example"
- text: "I will have a look. You can find more info in the documentation."
  example_title: "Neutral example"
- text: "I hate this new tool, this is bad."
  example_title: "Negative example"
---

# Finetuned BERT model for classifying community posts

This DistilBERT model was fine-tuned on ~20,000 community postings using the Hugging Face adapter from Kern AI refinery.
The postings consisted of comments and posts from various forums and social media sites.
For the fine-tuning, a single NVIDIA K80 was used for about two hours.

Join our Discord if you have questions about this model: https://discord.gg/MdZyqSxKbe

BERT, which stands for Bidirectional Encoder Representations from Transformers, is a language model introduced by Google researchers in 2018.
It is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers.

BERT is based on the transformer architecture and uses WordPiece tokenization to convert each English word into one or more integer codes.
This model has a classification head on top, which means it is made specifically for text classification.
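To illustrate the WordPiece idea, the sketch below implements greedy longest-match tokenization with a toy vocabulary invented for this example (the real BERT vocabulary holds roughly 30,000 entries):

```python
# Toy WordPiece-style tokenizer: greedily match the longest vocabulary
# entry, marking word-internal pieces with the "##" prefix.
# The vocabulary here is a tiny hypothetical stand-in for illustration.
vocab = {"un", "##bel", "##iev", "##ably", "help", "##ful", "[UNK]"}

def wordpiece(word: str) -> list[str]:
    pieces, start = [], 0
    while start < len(word):
        end, match = len(word), None
        # Shrink the window until the longest in-vocabulary piece is found
        while end > start:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece
            if piece in vocab:
                match = piece
                break
            end -= 1
        if match is None:
            return ["[UNK]"]  # no piece matches: fall back to the unknown token
        pieces.append(match)
        start = end
    return pieces

print(wordpiece("unbelievably"))  # ['un', '##bel', '##iev', '##ably']
print(wordpiece("helpful"))      # ['help', '##ful']
```

In the real model, each resulting piece is then mapped to its integer id in the vocabulary; this is what the tokenizer in the Usage section below does.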

## Features

- The model can handle various text classification tasks, especially postings made in forums and on community sites.
- The model outputs one of the three classes "positive", "neutral", and "negative", together with its confidence score for that class.
- The model was fine-tuned on a custom dataset that was curated by Kern AI and labeled in our tool refinery.
- The model is currently supported by the PyTorch framework and can be easily deployed on various platforms using the Hugging Face pipeline API.

## Usage

To use the model, you need to install the Hugging Face Transformers library:

```bash
pip install transformers
```

Then you can load the model and the tokenizer from the Hugging Face Hub:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("KernAI/community-sentiment-bert")
tokenizer = AutoTokenizer.from_pretrained("KernAI/community-sentiment-bert")
```

To classify a single sentence or a list of sentences, you can use the Hugging Face pipeline API:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
result = classifier("This is a positive sentence.")
print(result)
# [{'label': 'Positive', 'score': 0.9998656511306763}]
```
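
The pipeline also accepts a list of texts, so a whole batch can be scored in one call. In this sketch (the example posts are hypothetical), `top_k=None` asks the pipeline to return the score for every class rather than only the best one:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="KernAI/community-sentiment-bert")

# Hypothetical batch of community posts to score in one call
posts = [
    "thank you for the help :)",
    "I will have a look. You can find more info in the documentation.",
    "I hate this new tool, this is bad.",
]

# top_k=None returns a list of {label, score} dicts per input text
for post, scores in zip(posts, classifier(posts, top_k=None)):
    best = max(scores, key=lambda s: s["score"])
    print(f"{post!r} -> {best['label']} ({best['score']:.3f})")
```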