faizaulia committed
Commit 21a7fba
1 Parent(s): 17e4aa0

Update README.md

Files changed (1)
  1. README.md +40 -1
README.md CHANGED
@@ -2,4 +2,43 @@
  library_name: transformers
  language:
  - id
- ---
+ ---
+ # Model description
+ This model is a fine-tuned version of [`intfloat/multilingual-e5-large`](https://huggingface.co/intfloat/multilingual-e5-large), trained on Indonesian police news data.
+ # How to use this model:
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ tokenizer = AutoTokenizer.from_pretrained("faizaulia/e5-fine-tune-polri-news-emotion")
+ model = AutoModelForSequenceClassification.from_pretrained("faizaulia/e5-fine-tune-polri-news-emotion")
+ ```
+ # Label description:
+ 0: Angry, 1: Fear, 2: Sad, 3: Neutral, 4: Happy, 5: Love
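+ For convenience, the mapping above can be written as a plain dictionary. This is a minimal sketch based only on the label list here; the `id2label` name is illustrative rather than something read from the model's config.
+ ```python
+ # Hypothetical mapping derived from the label description above
+ id2label = {0: "Angry", 1: "Fear", 2: "Sad", 3: "Neutral", 4: "Happy", 5: "Love"}
+ id2label[3]  # "Neutral"
+ ```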
+ # Input text example:
+ >LAMPUNG, KOMPAS.com - Komplotan perampok yang menyekap satu keluarga di Kabupaten Lampung Timur ditembak aparat kepolisian. Komplotan ini menggondol uang sebanyak Rp 50 juta milik korban. Kapolres Lampung Timur, AKBP M Rizal Muchtar mengatakan, tiga dari empat pelaku ini telah ditangkap pada Senin (27/2/2023) dini hari.
+ # Preprocessing:
+ ```python
+ import re
+ import nltk
+ from nltk.corpus import stopwords
+
+ nltk.download('stopwords')
+ nltk.download('wordnet')
+
+ stop_words = set(stopwords.words('indonesian'))
+
+ def remove_stopwords(text):
+     words = text.split()
+     words = [word for word in words if word not in stop_words]
+     return ' '.join(words)
+
+ def clean_texts(text):
+     text = re.sub('\n', ' ', text)              # Replace every '\n' with a space
+     text = re.sub(' +', ' ', text)              # Collapse extra spaces
+     text = re.sub('[\u2013\u2014]', '-', text)  # Normalize em/en dashes to '-'
+     text = re.sub('(.{0,40})-', '', text)       # Remove the news site/location prefix at the beginning
+     text = re.sub(r'[^a-zA-Z\s]', '', text)     # Remove non-alphabet characters
+     return text
+
+ def preprocess_text(text):
+     text = text.lower()
+     text = clean_texts(text)
+     text = remove_stopwords(text)
+     return text
+ ```
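+ Putting the pieces together, the sketch below classifies the example article with the tokenizer, model, and preprocessing functions defined above. It is a minimal illustration: the truncated `text` string, the `max_length=512` setting, and the `id2label` dictionary from the label section are assumptions, not documented training settings.
+ ```python
+ import torch
+
+ # Example article from above, truncated here for brevity
+ text = "LAMPUNG, KOMPAS.com - Komplotan perampok yang menyekap satu keluarga di Kabupaten Lampung Timur ditembak aparat kepolisian. ..."
+
+ clean = preprocess_text(text)  # lowercase, strip the site/location prefix, drop stopwords
+ inputs = tokenizer(clean, return_tensors="pt", truncation=True, max_length=512)
+
+ with torch.no_grad():
+     logits = model(**inputs).logits  # shape: (1, 6), one score per emotion label
+
+ pred = logits.argmax(dim=-1).item()
+ print(pred, id2label[pred])  # map the predicted index back to a label name
+ ```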