Datasets:
Multilinguality:
monolingual
Size Categories:
10B<n<100B
Language Creators:
found
Tags:
continual learning
Dataset Card for "deep-research"
Dataset Summary
deep-research is a monolingual dataset tagged for continual learning research. The records shown in this card follow the SQuAD-style extractive question-answering format: each pairs a context passage with a question and one or more answer spans located by character offset.
Supported Tasks and Leaderboards
Languages
The dataset is monolingual; the examples shown in this card are in English.
Dataset Structure
Data Instances
train.json
- Size of downloaded dataset files: 181.42 MB
- Size of the generated dataset: 522.66 MB
- Total amount of disk used: 704.07 MB
An example of 'train' looks as follows.
This example was too long and was cropped:
{'id': '5733be284776f41900661182',
'title': 'University_of_Notre_Dame',
'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary...',
'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',
'answers': {'text': ['Saint Bernadette Soubirous'], 'answer_start': [515]}
}
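This dataset ships a loading script that depends on boto3 and botocore, so install them alongside datasets. A minimal loading sketch; the split name "train" is inferred from the train.json file above and is an assumption:

```python
# Requires: pip install datasets boto3 botocore
# (boto3/botocore are imported by the dataset's loading script)
from datasets import load_dataset

# trust_remote_code is needed on recent `datasets` versions
# because this dataset uses a loading script.
ds = load_dataset("hieuhocnlp/deep-research", split="train", trust_remote_code=True)

# Inspect one record; the fields match the example shown above.
example = ds[0]
print(example["question"])
print(example["answers"]["text"])
```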
dev.json
- Size of downloaded dataset files: 183.09 MB
- Size of the generated dataset: 523.97 MB
- Total amount of disk used: 707.06 MB
An example of 'development' looks as follows.
This example was too long and was cropped:
{'id': '5733be284776f41900661182',
'title': 'University_of_Notre_Dame',
'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary...',
'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',
'answers': {'text': ['Saint Bernadette Soubirous'], 'answer_start': [515]}
}
Data Fields
id
: ID of the context–question unit
title
: Title of the question ...
context
: The passage from which answers are drawn
question
: The question posed about the context
answers
: A dict holding the answer strings (text) and their character offsets into the context (answer_start)
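Because answers stores character offsets into context (SQuAD-style), each answer string can be recovered by slicing. A small sketch under that assumption, with field names taken from the examples above:

```python
def extract_answers(example):
    """Recover each answer string from the context via its character offset."""
    context = example["context"]
    spans = []
    for text, start in zip(example["answers"]["text"],
                           example["answers"]["answer_start"]):
        span = context[start:start + len(text)]
        # The slice should reproduce the annotated answer text exactly.
        assert span == text, (span, text)
        spans.append(span)
    return spans
```

On the full, uncropped record this yields ['Saint Bernadette Soubirous']; the cropped context shown above is too short for offset 515.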
Data Splits
| | train | development | test |
|---|---|---|---|
| Input Sentences | | | |
| Average Sentence Length | | | |
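The values above are left unfilled in this card. A sketch for computing them, assuming the splits load as in the snippet earlier and using a naive period-based sentence split (a real count would use a proper sentence tokenizer; the split names are also assumptions):

```python
from datasets import load_dataset

def split_stats(split_name):
    """Count context sentences and their average length in words for one split."""
    ds = load_dataset("hieuhocnlp/deep-research", split=split_name,
                      trust_remote_code=True)
    sentences = [s.strip()
                 for ex in ds
                 for s in ex["context"].split(".")
                 if s.strip()]
    n = len(sentences)
    avg_len = sum(len(s.split()) for s in sentences) / n if n else 0.0
    return n, avg_len

for split in ("train", "validation"):  # split names are assumptions
    n, avg = split_stats(split)
    print(f"{split}: {n} sentences, {avg:.1f} words on average")
```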
Dataset Creation
Curation Rationale
Source Data
Annotations
Personal and Sensitive Information
Considerations for Using the Data
Additional Information
Licensing Information
Citation Information
Provide the BibTeX-formatted reference for the dataset. For example:
@inproceedings{cheng-etal-2021-multimodal,
title = "Multimodal Phased Transformer for Sentiment Analysis",
author = "Cheng, Junyan and
Fostiropoulos, Iordanis and
Boehm, Barry and
Soleymani, Mohammad",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.189",
doi = "10.18653/v1/2021.emnlp-main.189",
pages = "2447--2458",
abstract = "Multimodal Transformers achieve superior performance in multimodal learning tasks. However, the quadratic complexity of the self-attention mechanism in Transformers limits their deployment in low-resource devices and makes their inference and training computationally expensive. We propose multimodal Sparse Phased Transformer (SPT) to alleviate the problem of self-attention complexity and memory footprint. SPT uses a sampling function to generate a sparse attention matrix and compress a long sequence to a shorter sequence of hidden states. SPT concurrently captures interactions between the hidden states of different modalities at every layer. To further improve the efficiency of our method, we use Layer-wise parameter sharing and Factorized Co-Attention that share parameters between Cross Attention Blocks, with minimal impact on task performance. We evaluate our model with three sentiment analysis datasets and achieve comparable or superior performance compared with the existing methods, with a 90{\%} reduction in the number of parameters. We conclude that (SPT) along with parameter sharing can capture multimodal interactions with reduced model size and improved sample efficiency.",
}
Contributions
Thanks to @github-username for adding this dataset.