JeswinMS4 committed
Commit 9f5f73f
1 Parent(s): 58af741

Update README.md

Files changed (1)
  1. README.md +12 -18
README.md CHANGED
@@ -10,7 +10,7 @@ language:
  ---
  # BERT Base Intent model
  This is a fine-tuned model based on the BERT-Base-Uncased model. It is used to classify intent into 3 categories: Fintech, Out of Scope, and Abusive.
- Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
+ The baseline model was pretrained on English using a masked language modeling (MLM) objective. It was introduced in
  [this paper](https://arxiv.org/abs/1810.04805) and first released in
  [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
  between english and English.
@@ -35,23 +35,17 @@ The following hyperparameters were used during training:

  ### Model Description

- BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
- was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
- publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
- was pretrained with two objectives:
-
- - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
- the entire masked sentence through the model and has to predict the masked words. This is different from traditional
- recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
- GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the
- sentence.
- - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
- they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
- predict if the two sentences were following each other or not.
-
- This way, the model learns an inner representation of the English language that can then be used to extract features
- useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
- classifier using the features produced by the BERT model as inputs.
+ The fine-tuned Hugging Face model is a variant of the BERT-base-uncased architecture, trained for intent classification
+ with three labels: fintech, abusive, and out of scope. It was fine-tuned on a corpus of annotated data with a supervised
+ learning objective: classify incoming text into one of the three predefined classes based on the underlying intent of
+ the text.
+
+ The model was evaluated on a held-out test set and achieved high accuracy and F1 scores for all three classes. Its
+ accuracy and robustness make it suitable for real-world applications such as chatbots, customer service automation,
+ and social media monitoring. Overall, the fine-tuned model provides an effective and reliable solution for three-label
+ intent classification: fintech, abusive, and out of scope.

  - **Developed by:** Jeswin MS, Venkatesh R, Kushal S Ballari
  - **Model type:** Intent Classification
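
For readers who want to reproduce a comparable setup, here is a minimal sketch of how a three-label classification head can be attached to bert-base-uncased with the transformers library. This is not the authors' actual training code, and the label-to-id mapping below is an assumption based on the categories named in the card.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed label mapping; the actual order used during fine-tuning is not stated in the card.
id2label = {0: "Fintech", 1: "Out of Scope", 2: "Abusive"}
label2id = {label: idx for idx, label in id2label.items()}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=3,          # three intent classes
    id2label=id2label,
    label2id=label2id,
)
```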
 
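Once the fine-tuned checkpoint is available, inference can be run with a standard text-classification pipeline. The repository id below is a placeholder, not a confirmed model path, and the example inputs are illustrative only.

```python
from transformers import pipeline

# "JeswinMS4/bert-base-intent" is a hypothetical repo id; substitute the actual checkpoint path.
classifier = pipeline("text-classification", model="JeswinMS4/bert-base-intent")

examples = [
    "How do I block my credit card?",           # expected: Fintech
    "What's the weather like in Paris today?",  # expected: Out of Scope
]

for text in examples:
    prediction = classifier(text)[0]
    print(f"{text!r} -> {prediction['label']} ({prediction['score']:.3f})")
```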