usmiva committed
Commit b9c2048
1 Parent(s): 5962cf0

Update README.md

Files changed (1): README.md +8 -15
README.md CHANGED
@@ -109,21 +109,14 @@ gpt_web_bg("По професия той е ")
 
  The process of creating a diverse, bias-proof, and ethically fair dataset requires a meticulous and effective approach to clean the raw text data extracted from the internet. To address this challenge, we propose a specialized, multi-step procedure organized into the following stages:
 
- #### Deduplication
- Duplicate text sequences, often caused by web scraping, are removed from the dataset, thus ensuring that each entry contributes unique information to the training data.
- #### Topic Classification
- To guarantee diverse subject matter and reduce the risk of topic bias, topic classification is employed to categorize text entries based on their content.
- #### Sentiment Classification
- By categorizing entries with sentiment, the dataset diversity is further enhanced, enabling models to better interpret and handle the inherent emotional aspects of human language.
- #### Hate-Speech Detection
- To exclude content promoting hate speech from the dataset, automatic detection methods for Bulgarian are utilized.
- #### Balancing Topics and Sentiment in the Data
- The emphasis is placed on ensuring an adequate balance between topics and sentiment classes, as an imbalanced dataset can lead to biased results. By carefully redistributing instances across topics and sentiment categories, a more representative and inclusive dataset can be assembled, resulting in more robust and adaptable models.
- #### Cleaning Abusive Content
- To further refine the dataset, abusive content, including profanities, vulgar language, and other offensive expressions were cleaned from the text utilizing algorithms for abusive language detection.
- #### Minimum Sentence Threshold
- To ensure that the dataset includes meaningful and coherent text instances, a minimum sentence threshold is imposed, requiring that each entry contains at least five sentences. This condition ensures that models are trained on richer linguistic contexts and promotes more accurate and nuanced text generation.
- #### Cleaning non Bulgarian content
+ **Deduplication** - Duplicate text sequences, often caused by web scraping, are removed from the dataset, ensuring that each entry contributes unique information to the training data.
+ **Topic Classification** - To guarantee diverse subject matter and reduce the risk of topic bias, topic classification is employed to categorize text entries by content.
+ **Sentiment Classification** - Categorizing entries by sentiment further enhances dataset diversity, enabling models to better interpret and handle the inherent emotional aspects of human language.
+ **Hate-Speech Detection** - To exclude content promoting hate speech from the dataset, automatic hate-speech detection methods for Bulgarian are applied.
+ **Balancing Topics and Sentiment in the Data** - Emphasis is placed on an adequate balance between topic and sentiment classes, as an imbalanced dataset can lead to biased results. By carefully redistributing instances across topic and sentiment categories, a more representative and inclusive dataset can be assembled, yielding more robust and adaptable models.
+ **Cleaning Abusive Content** - To further refine the dataset, abusive content, including profanities, vulgar language, and other offensive expressions, was cleaned from the text using algorithms for abusive-language detection.
+ **Minimum Sentence Threshold** - To ensure that the dataset includes meaningful and coherent text instances, a minimum sentence threshold is imposed, requiring each entry to contain at least five sentences. This ensures that models are trained on richer linguistic contexts and promotes more accurate and nuanced text generation.
+ **Cleaning non-Bulgarian Content** - Text that is not in Bulgarian is detected and removed from the dataset.
 
 
  #### Training Hyperparameters
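
The filtering stages described in the diff above (deduplication, the minimum-sentence threshold, and removal of non-Bulgarian text) could be sketched roughly as follows. This is a minimal illustration, not the project's actual code: `normalize`, `is_bulgarian`, `count_sentences`, and `clean_corpus` are hypothetical helpers, and a real pipeline would use a proper language-identification model and fuzzier near-duplicate detection.

```python
import hashlib
import re


def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so near-identical scrapes hash equally."""
    return re.sub(r"\s+", " ", text.strip().lower())


def is_bulgarian(text: str, threshold: float = 0.5) -> bool:
    """Crude language check: fraction of Cyrillic letters among alphabetic chars."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return False
    cyrillic = sum(1 for c in letters if "\u0400" <= c <= "\u04ff")
    return cyrillic / len(letters) >= threshold


def count_sentences(text: str) -> int:
    """Rough sentence count based on terminal punctuation."""
    return len([s for s in re.split(r"[.!?]+", text) if s.strip()])


def clean_corpus(entries, min_sentences: int = 5):
    """Deduplicate, enforce the minimum-sentence threshold, and drop non-Bulgarian text."""
    seen = set()
    kept = []
    for text in entries:
        digest = hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()
        if digest in seen:
            continue  # exact duplicate sequence from web scraping
        seen.add(digest)
        if count_sentences(text) < min_sentences:
            continue  # too short to provide rich linguistic context
        if not is_bulgarian(text):
            continue  # non-Bulgarian content
        kept.append(text)
    return kept
```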
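
The balancing stage could likewise be approximated by downsampling every (topic, sentiment) group to the size of the smallest one, so that no category dominates training. Again, a hedged sketch with hypothetical names and record layout, not the method actually used for gpt_web_bg:

```python
import random
from collections import defaultdict


def balance(records, seed: int = 0):
    """Downsample so every (topic, sentiment) group contributes equally.

    Each record is assumed to be a dict with 'topic', 'sentiment', and 'text' keys.
    """
    groups = defaultdict(list)
    for rec in records:
        groups[(rec["topic"], rec["sentiment"])].append(rec)
    # Target every group at the size of the smallest one.
    target = min(len(g) for g in groups.values())
    rng = random.Random(seed)  # fixed seed for reproducible sampling
    balanced = []
    for group in groups.values():
        balanced.extend(rng.sample(group, target))
    return balanced
```

Downsampling is the simplest redistribution strategy; oversampling minority groups or class-weighted training would be alternatives when the smallest group is too small to anchor the whole corpus.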