ElKulako committed b9156a2 (parent 376dd88): Update README.md

Browse files
Files changed (1) hide show
  1. README.md +15 -3
README.md CHANGED
---
datasets:
- ElKulako/stocktwits-crypto
---

 
## Classification Training
The model was trained on the following labels: "Bearish": 0, "Neutral": 1, "Bullish": 2.
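Expressed as code, the label scheme is just a two-way mapping. This is an illustrative sketch; the names `LABELS` and `id2label` below are not part of the model's API:

```python
# Sentiment labels used for classification training (from the section above).
LABELS = {"Bearish": 0, "Neutral": 1, "Bullish": 2}

# Inverse mapping, useful for turning predicted class ids back into names.
id2label = {i: name for name, i in LABELS.items()}

print(id2label[0], id2label[1], id2label[2])  # Bearish Neutral Bullish
```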
CryptoBERT's sentiment classification head was fine-tuned on a balanced dataset of 2M labelled StockTwits posts, bootstrapped from [ElKulako/stocktwits-crypto](https://huggingface.co/datasets/ElKulako/stocktwits-crypto).
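The card does not state how the balanced dataset was constructed; one common approach is to downsample every class to the size of the smallest class. The helper below is a hypothetical sketch of that idea, not the authors' actual preprocessing code:

```python
import random
from collections import defaultdict

def balance(posts, labels, seed=0):
    """Downsample every class to the size of the smallest class.
    Hypothetical helper -- the model card does not publish the
    actual balancing procedure."""
    by_label = defaultdict(list)
    for post, label in zip(posts, labels):
        by_label[label].append(post)
    n = min(len(items) for items in by_label.values())
    rng = random.Random(seed)
    balanced = []
    for label, items in by_label.items():
        balanced.extend((post, label) for post in rng.sample(items, n))
    return balanced
```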
CryptoBERT was trained with a maximum sequence length of 128. Technically, it can handle sequences of up to 514 tokens; however, going beyond 128 is not recommended.

# Classification Example
```python
from transformers import TextClassificationPipeline, AutoModelForSequenceClassification, AutoTokenizer
from datasets import load_dataset

dataset_name = "ElKulako/stocktwits-crypto"
dataset = load_dataset(dataset_name)

model_name = "ElKulako/cryptobert"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, batch_size=64, max_length=64, truncation=True, padding='max_length')

# df_posts: the collection of post strings to classify (e.g. a list or pandas Series)
preds = pipe(df_posts)
```
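The pipeline returns one dict per post, of the form `{'label': ..., 'score': ...}`. Assuming the model's config maps class ids to the label names from the training section, predictions can be converted back to integer ids as sketched below (the `preds` values shown are made up for illustration, not real model output):

```python
# Example pipeline output (illustrative values only).
preds = [
    {"label": "Bullish", "score": 0.92},
    {"label": "Neutral", "score": 0.55},
    {"label": "Bearish", "score": 0.81},
]

# Map label names back to the integer ids used during training.
label2id = {"Bearish": 0, "Neutral": 1, "Bullish": 2}
pred_ids = [label2id[p["label"]] for p in preds]
print(pred_ids)  # [2, 1, 0]
```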
## Training Corpus
CryptoBERT was trained on 3.2M social media posts regarding various cryptocurrencies. Only non-duplicate posts longer than 4 words were considered. The following communities were used as sources for our corpora:

(1) StockTwits - 1.875M posts about the top 100 cryptos by trading volume, collected from the 1st of November 2021 to the 16th of June 2022. [ElKulako/stocktwits-crypto](https://huggingface.co/datasets/ElKulako/stocktwits-crypto)

(2) Telegram - 664K posts from the top 5 Telegram groups: [Binance](https://t.me/binanceexchange), [Bittrex](https://t.me/BittrexGlobalEnglish), [Huobi Global](https://t.me/huobiglobalofficial), [KuCoin](https://t.me/Kucoin_Exchange), [OKEx](https://t.me/OKExOfficial_English). Data from 16.11.2020 to 30.01.2021. Courtesy of [Anton](https://www.kaggle.com/datasets/aagghh/crypto-telegram-groups).
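The corpus filtering rule above (reading "length above 4 words" as more than four words, and duplicates matched case-insensitively) can be sketched with a hypothetical helper; the authors' actual preprocessing code is not published in this card:

```python
def clean_corpus(posts):
    """Keep only non-duplicate posts longer than 4 words,
    per the corpus-filtering rule described above."""
    seen = set()
    kept = []
    for post in posts:
        key = post.strip().lower()  # assumed duplicate criterion
        if len(post.split()) > 4 and key not in seen:
            seen.add(key)
            kept.append(post)
    return kept
```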