Dataset Preview

Viewer

The full dataset viewer is not available; only a preview of the rows is shown.

The dataset generation failed.
Error code:   DatasetGenerationError
Root cause:   pyarrow.lib.ArrowNotImplementedError: Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              The error is raised while the Hub worker converts the dataset to Parquet (datasets' ArrowWriter calling pyarrow.parquet.ParquetWriter) and is re-raised as DatasetGenerationError: "An error occurred while generating the dataset".


Preview of the rows (column — type — value):

  • _data_files (list): [ { "filename": "dataset.arrow" } ]
  • _fingerprint (string): 6f6a46094968e7bb
  • _format_columns (sequence): [ "tags", "tokens" ]
  • _format_kwargs (dict): {}
  • _format_type (null): null
  • _indexes (dict): {}
  • _output_all_columns (bool): false
  • _split (null): null
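These columns match the on-disk state written by datasets' save_to_disk() (a dataset.arrow file plus serialized format state), which is presumably why the Parquet conversion trips over the empty _format_kwargs struct. One possible workaround, sketched below under that assumption, is to load the saved dataset locally and re-push it in a plain format; the target repo id is a placeholder.

# Workaround sketch (assumptions: the repo is a save_to_disk dump; the repo id below is hypothetical).
from datasets import load_from_disk

ds = load_from_disk("path/to/local/clone")    # directory holding dataset.arrow + state.json
ds = ds.with_format(None)                     # drop the stored format state (_format_* fields)
ds.push_to_hub("your-username/stocks-ner-2000-sample-test")  # re-uploads the data as Parquet-backed shards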

Stocks NER 2000 Sample Test Dataset for Named Entity Recognition

This dataset was automatically processed by AutoTrain for the project stocks-ner-2000-sample-test and is intended for training Named Entity Recognition (NER) models in the stock market domain.

Dataset Description

The dataset includes 2000 samples of stock-market-related text; each sample consists of a sequence of tokens and their corresponding named entity tags. The language of the dataset is English (BCP-47 code: 'en').

Dataset Structure

The dataset is structured as a list of data instances, where each instance includes the following fields:

  • tokens: a sequence of strings representing the text in the sample.
  • tags: a sequence of integers encoding the named entity tag of each token in the sample. There are 12 tag classes in the dataset: 'NANA', 'btst', 'delivery', 'enter', 'entry_momentum', 'exit', 'exit2', 'exit3', 'intraday', 'sl', 'symbol', and 'touched' (the integer-to-name mapping is illustrated after the example below).

Each sample in the dataset looks like this:

[
  {
    "tokens": [
      "MAXVIL",
      " : CONVERGENCE OF AVERAGES HAPPENING,  VOLUMES ABOVE AVERAGE  RSI FULLY BREAK OUT "
    ],
    "tags": [
      10,
      0
    ]
  },
  {
    "tokens": [
      "INTRADAY",
      " : BUY ",
      "CAMS",
      " ABOVE ",
      "2625",
      " SL ",
      "2595",
      " TARGET ",
      "2650",
      " - ",
      "2675",
      " - ",
      "2700",
      " "
    ],
    "tags": [
      8,
      0,
      10,
      0,
      3,
      0,
      9,
      0,
      5,
      0,
      6,
      0,
      7,
      0
    ]
  }
]
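
The integer tags map to entity names. A small decoding sketch, assuming the ids follow the alphabetical order of the twelve tags listed above (an assumption that is consistent with the sample, e.g. 10 -> 'symbol', 8 -> 'intraday'):

# Decode integer tags to entity names (mapping assumed from the alphabetical tag list above).
TAG_NAMES = [
    "NANA", "btst", "delivery", "enter", "entry_momentum", "exit",
    "exit2", "exit3", "intraday", "sl", "symbol", "touched",
]

sample = {
    "tokens": ["INTRADAY", " : BUY ", "CAMS", " ABOVE ", "2625", " SL ", "2595"],
    "tags": [8, 0, 10, 0, 3, 0, 9],
}

for token, tag in zip(sample["tokens"], sample["tags"]):
    print(f"{token!r:>12} -> {TAG_NAMES[tag]}")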

Dataset Splits

The dataset is split into a train and validation split, with 1261 samples in the train split and 480 samples in the validation split.
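
A minimal loading sketch, assuming the standard datasets API can read the repository directly (the viewer error above suggests the Parquet conversion may need the workaround shown earlier):

# Load the train/validation splits and inspect one example.
from datasets import load_dataset

ds = load_dataset("hemangjoshi37a/autotrain-data-stocks-ner-2000-sample-test")
print(ds)                                   # expected: train (1261 rows), validation (480 rows)
example = ds["train"][0]
print(example["tokens"], example["tags"])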

This dataset is designed to train models for Named Entity Recognition in the stock market domain and can be used for natural language processing (NLP) research and development. Download this dataset now and take the first step towards building your own state-of-the-art NER model for stock market text.

GitHub link to this project: Telegram Trade Msg Backtest ML

Need a custom model for your application? Place an order on hjLabs.in: Custom Token Classification or Named Entity Recognition (NER) model for Natural Language Processing (NLP) Machine Learning.

What does this repository contain?

  1. Label data using the LabelStudio NER (Named Entity Recognition / Token Classification) tool.

  2. Convert the LabelStudio CSV or JSON export to a Hugging Face AutoTrain dataset using the conversion script.

  3. Train the NER model on Hugging Face AutoTrain.

  4. Use the Hugging Face AutoTrain model to predict labels on new data in LabelStudio via LabelStudio-ML-Backend.

  5. Define a Python function to predict labels using the Hugging Face AutoTrain model (a sketch follows this list).

  6. Label only the examples from the newly predicted-labels dataset whose labels were falsified.

  7. Backtest the verified, correctly labelled dataset against real historical stock data using Zerodha kiteconnect and jugaad_trader.

  8. Evaluate the total percentage gained since inception, both summed and compounded, and plot it.

  9. Listen to the Telegram channel for new live messages using the Telegram API for algotrading.

  10. Serve the app as a Flask web API that responds to requests with labelled tokens (a sketch follows this list).

  11. Report whether the Telegram channel's tips outperform or underperform the exchange index, in percent.
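
For steps 5 and 10, a combined sketch follows: a prediction function built on the transformers token-classification pipeline, wrapped in a small Flask endpoint that returns labelled tokens. The model id is a placeholder for the AutoTrain model trained on this dataset, and the route and port are illustrative choices, not the project's actual API.

# Sketch for steps 5 and 10 (hypothetical model id; illustrative route and port).
from flask import Flask, jsonify, request
from transformers import pipeline

MODEL_ID = "your-username/your-autotrain-ner-model"   # placeholder: substitute the trained model
ner = pipeline("token-classification", model=MODEL_ID, aggregation_strategy="simple")

def predict_labels(text: str):
    # Return predicted entity spans (word, label, score, character offsets) for one message.
    return ner(text)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    text = request.get_json(force=True).get("text", "")
    # Cast numpy floats to plain floats so the response is JSON-serializable.
    entities = [
        {
            "word": e["word"],
            "label": e["entity_group"],
            "score": float(e["score"]),
            "start": e["start"],
            "end": e["end"],
        }
        for e in predict_labels(text)
    ]
    return jsonify(entities)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

Example usage: POST a JSON body such as {"text": "INTRADAY : BUY CAMS ABOVE 2625 SL 2595"} to /predict, and the endpoint responds with the labelled tokens.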

Place a custom order on hjLabs.in : https://hjLabs.in


Social Media

Check Out Our Other Repositories

Check Out Our Other Products

Some Cool Arduino and ESP8266 (or NodeMCU) IoT Projects

Check Out Our Awesome 3D GrabCAD Models

Our HuggingFace Models

Our HuggingFace Datasets

We Sell Gigs on Fiverr
