IlyaGusev/ru_stackoverflow
---
license: other
task_categories:
- text-generation
- question-answering
language:
- ru
size_categories:
- 100K<n<1M
dataset_info:
  features:
  - name: question_id
    dtype: uint32
  - name: url
    dtype: string
  - name: answer_count
    dtype: uint32
  - name: text_html
    dtype: string
  - name: text_markdown
    dtype: string
  - name: score
    dtype: int32
  - name: title
    dtype: string
  - name: tags
    sequence: string
  - name: views
    dtype: uint64
  - name: author
    dtype: string
  - name: timestamp
    dtype: uint64
  - name: comments
    sequence:
    - name: text
      dtype: string
    - name: author
      dtype: string
    - name: comment_id
      dtype: uint32
    - name: score
      dtype: int32
    - name: timestamp
      dtype: uint64
  - name: answers
    sequence:
    - name: answer_id
      dtype: uint32
    - name: is_accepted
      dtype: uint8
    - name: text_html
      dtype: string
    - name: text_markdown
      dtype: string
    - name: score
      dtype: int32
    - name: author
      dtype: string
    - name: timestamp
      dtype: uint64
    - name: comments
      sequence:
      - name: text
        dtype: string
      - name: author
        dtype: string
      - name: comment_id
        dtype: uint32
      - name: score
        dtype: int32
      - name: timestamp
        dtype: uint64
  splits:
  - name: train
    num_bytes: 3013377174
    num_examples: 437604
  download_size: 670468664
  dataset_size: 3013377174
---

# Russian StackOverflow dataset

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
- [Data Instances](#data-instances)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Licensing Information](#licensing-information)

## Description

**Summary:** Dataset of questions, answers, and comments from [ru.stackoverflow.com](https://ru.stackoverflow.com/).

**Script:** [create_stackoverflow.py](https://github.com/IlyaGusev/rulm/blob/hf/data_processing/create_stackoverflow.py)

**Point of Contact:** [Ilya Gusev](ilya.gusev@phystech.edu)

**Languages:** The dataset is in Russian with some programming code.

## Usage

Prerequisites:

```bash
pip install datasets zstandard jsonlines pysimdjson
```

Loading:

```python
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/ru_stackoverflow', split="train")
for example in dataset:
    print(example["text_markdown"])
    print()
```

## Data Instances

```
{
  "question_id": 11235,
  "answer_count": 1,
  "url": "https://ru.stackoverflow.com/questions/11235",
  "score": 2,
  "tags": ["c++", "сериализация"],
  "title": "Извлечение из файла, запись в файл",
  "views": 1309,
  "author": "...",
  "timestamp": 1303205289,
  "text_html": "...",
  "text_markdown": "...",
  "comments": {
    "text": ["...", "..."],
    "author": ["...", "..."],
    "comment_id": [11236, 11237],
    "score": [0, 0],
    "timestamp": [1303205411, 1303205678]
  },
  "answers": {
    "answer_id": [11243, 11245],
    "timestamp": [1303207791, 1303207792],
    "is_accepted": [1, 0],
    "text_html": ["...", "..."],
    "text_markdown": ["...", "..."],
    "score": [3, 0],
    "author": ["...", "..."],
    "comments": {
      "text": ["...", "..."],
      "author": ["...", "..."],
      "comment_id": [11246, 11249],
      "score": [0, 0],
      "timestamp": [1303207961, 1303207800]
    }
  }
}
```

You can use this little helper to unflatten sequences (a usage sketch follows at the end of this card):

```python
def revert_flattening(records):
    fixed_records = []
    for key, values in records.items():
        if not fixed_records:
            fixed_records = [{} for _ in range(len(values))]
        for i, value in enumerate(values):
            fixed_records[i][key] = value
    return fixed_records
```

The original JSONL is already unflattened.

## Source Data

* The data source is the [Russian StackOverflow](https://ru.stackoverflow.com/) website.
* Original XMLs: [ru.stackoverflow.com.7z](https://ia600107.us.archive.org/27/items/stackexchange/ru.stackoverflow.com.7z).
* The processing script is [here](https://github.com/IlyaGusev/rulm/blob/hf/data_processing/create_stackoverflow.py).

## Personal and Sensitive Information

The dataset is not anonymized, so individuals' names can be found in it. Information about the original authors is included in the dataset where possible.

## Licensing Information

In accordance with the license of the original data, this dataset is distributed under [CC BY-SA 2.5](https://creativecommons.org/licenses/by-sa/2.5/).
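As a minimal usage sketch for the `revert_flattening` helper shown above (field names follow the schema in this card):

```python
from datasets import load_dataset

dataset = load_dataset("IlyaGusev/ru_stackoverflow", split="train")
example = next(iter(dataset))

# Turn the flattened "comments" mapping of parallel lists
# back into one dictionary per comment.
comments = revert_flattening(example["comments"])
for comment in comments:
    print(comment["comment_id"], comment["score"], comment["text"][:80])
```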
rubentito/mp-docvqa
---
pretty_name: MP-DocVQA (Multipage Document Visual Question Answering)
license: mit
task_categories:
- question-answering
- document-question-answering
- document-visual-question-answering
language:
- en
multilinguality:
- monolingual
source_datasets:
- Single Page Document Visual Question Answering
---

# Dataset Card for Multipage Document Visual Question Answering (MP-DocVQA)

## Dataset Description

- **Homepage:** [Robust Reading Competition Portal](https://rrc.cvc.uab.es/?ch=17&com=introduction)
- **Repository:** [Robust Reading Competition Portal](https://rrc.cvc.uab.es/?ch=17&com=downloads)
- **Paper:** [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/abs/2212.05935)
- **Leaderboard:** [Task 4 of DocVQA on the Robust Reading Competition Portal](https://rrc.cvc.uab.es/?ch=17&com=evaluation&task=4)

### Dataset Summary

The dataset is aimed at Visual Question Answering on multipage industry scanned documents. The questions and answers are reused from the Single Page DocVQA (SP-DocVQA) dataset. The images correspond to the same documents as in the original dataset, extended with the preceding and following pages, up to a limit of 20 pages per document.

### Download the Dataset

The dataset is not yet integrated with Hugging Face, but you can download it from the [DocVQA Challenge](https://rrc.cvc.uab.es/?ch=17) in the RRC Portal, [Downloads section](https://rrc.cvc.uab.es/?ch=17&com=downloads).

### Leaderboard

You can also check the live leaderboard at the [RRC Portal](https://rrc.cvc.uab.es/?ch=17&com=evaluation&task=4).

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

|                    | Train | Validation | Test | Total |
|--------------------|:-----:|:----------:|:----:|:-----:|
| **Questions**      | 36230 | 5187       | 5019 | 46436 |
| **Documents**      | 5131  | 927        | 959  | 5929  |
| **Pages / Images** | 37269 | 6510       | 6223 | 47952 |

Note that some documents appear in both the validation and test sets, but they are never seen during training.

### Citation Information

```tex
@article{tito2022hierarchical,
  title={Hierarchical multimodal transformers for Multi-Page DocVQA},
  author={Tito, Rub{\`e}n and Karatzas, Dimosthenis and Valveny, Ernest},
  journal={arXiv preprint arXiv:2212.05935},
  year={2022}
}
```
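As a hedged sketch of consuming the downloaded annotations — assuming the question files follow the usual DocVQA-style JSON layout with a top-level `data` list (an assumption; check the files from the Downloads section for the exact schema and file names):

```python
import json

# Hypothetical path; use the actual file obtained from the RRC portal.
with open("mp-docvqa/train.json") as f:
    annotations = json.load(f)

for qa in annotations["data"][:5]:  # the "data" key is assumed, not confirmed
    print(qa.get("question"), "->", qa.get("answers"))
```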
LangChainDatasets/sql-qa-chinook
---
license: mit
---
bigcode/bigcode-pii-dataset
---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: type
    dtype: string
  - name: language
    dtype: string
  - name: fragments
    list:
    - name: category
      dtype: string
    - name: position
      sequence: int64
    - name: value
      dtype: string
  - name: id
    dtype: int64
  splits:
  - name: test
    num_bytes: 22496122
    num_examples: 12099
  download_size: 9152605
  dataset_size: 22496122
language:
- code
task_categories:
- token-classification
extra_gated_prompt: |-
  ## Terms of Use for the dataset

  This is an annotated dataset for Personally Identifiable Information (PII) in code. We ask that you read and agree to the following Terms of Use before using the dataset and fill out this [form](https://docs.google.com/forms/d/e/1FAIpQLSfiWKyBB8-PxOCLo-KMsLlYNyQNJEzxJw0gcUAUHT3UY848qA/viewform): **Incomplete answers to the form will result in the request for access being ignored, with no follow-up actions by BigCode.**

  1. You agree that you will not use the PII dataset for any purpose other than training or evaluating models for PII removal from datasets.
  2. You agree that you will not share the PII dataset or any modified versions for whatever purpose.
  3. Unless required by applicable law or agreed to in writing, the dataset is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using the dataset, and assume any risks associated with your exercise of permissions under these Terms of Use.
  4. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE DATASET OR THE USE OR OTHER DEALINGS IN THE DATASET.
extra_gated_fields:
  Email: text
  I have read the License and agree with its terms: checkbox
---

# PII dataset

## Dataset description

This is an annotated dataset for Personally Identifiable Information (PII) in code. The target entities are: Names, Usernames, Emails, IP addresses, Keys, Passwords, and IDs. The annotation process involved 1,399 crowd-workers from 35 countries working with [Toloka](https://toloka.ai/).

It consists of **12,099** samples of ~50 lines of code in 31 programming languages. You can also find a PII detection model that we trained on this dataset at [bigcode-pii-model](https://huggingface.co/loubnabnl/bigcode-pii-model).
## Dataset Structure

You can load the dataset with:

```python
from datasets import load_dataset

ds = load_dataset("bigcode/bigcode-pii-dataset", use_auth_token=True)
ds
```

```
DatasetDict({
    test: Dataset({
        features: ['text', 'type', 'language', 'fragments', 'id'],
        num_rows: 12099
    })
})
```

It has the following data fields:

- text: the code snippet
- type: indicates whether the data was pre-filtered with regexes (before annotation we selected 7100 files that were pre-filtered as positive for PII with regexes, and selected 5199 randomly)
- language: programming language
- fragments: detected secrets and their positions and categories
  - category: PII category
  - position: start and end
  - value: PII value

## Statistics

The figure below shows the distribution of programming languages in the dataset:

<img src="https://huggingface.co/datasets/bigcode/admin/resolve/main/pii_lang_dist.png" width="50%">

The following table shows the distribution of PII across all classes, as well as annotation quality after manual inspection of 300 diverse files from the dataset:

| Entity            | Count | Precision | Recall |
| ----------------- | ----- | --------- | ------ |
| IP\_ADDRESS       | 2526  | 85%       | 97%    |
| KEY               | 308   | 91%       | 78%    |
| PASSWORD          | 598   | 91%       | 86%    |
| ID                | 1702  | 53%       | 51%    |
| EMAIL             | 5470  | 99%       | 97%    |
| EMAIL\_EXAMPLE    | 1407  |           |        |
| EMAIL\_LICENSE    | 3141  |           |        |
| NAME              | 2477  | 89%       | 94%    |
| NAME\_EXAMPLE     | 318   |           |        |
| NAME\_LICENSE     | 3105  |           |        |
| USERNAME          | 780   | 74%       | 86%    |
| USERNAME\_EXAMPLE | 328   |           |        |
| USERNAME\_LICENSE | 503   |           |        |
| AMBIGUOUS         | 287   |           |        |

`AMBIGUOUS` and `ID` were not used in training our [NER model](https://huggingface.co/loubnabnl/bigcode-pii-model) for PII detection.

# Dataset Creation

We selected the annotation samples from [The Stack](https://huggingface.co/datasets/bigcode/the-stack) dataset after deduplication, a collection of code from open, permissively licensed repositories on GitHub. To increase the representation of rare PII types, such as keys and IP addresses, we pre-filtered 7100 files from a larger sample. This pre-filtering was carried out using the [detect-secrets](https://github.com/Yelp/detect-secrets) tool with all default plugins activated, in addition to regular expressions to detect emails, IPv4 and IPv6 addresses. To avoid introducing bias, the remaining 5100 files were randomly sampled from the dataset without pre-filtering.

We then annotated the dataset through the [Toloka Platform](https://toloka.ai/) with 1,399 crowd-workers from 35 countries. To ensure that crowd-workers received fair compensation, we established an hourly pay rate of \$7.30, taking into consideration different minimum wage rates across countries and their corresponding purchasing power. We limited annotation eligibility to countries where the hourly pay rate of \$7.30 was equivalent to the highest minimum wage in the US (\$16.50) in terms of purchasing power parity.

# Considerations for Using the Data

When using this dataset, please be mindful of the data governance risks that come with handling personally identifiable information (PII). Despite sourcing the data from open, permissive GitHub repositories and having it annotated by fairly paid crowd-workers, it does contain sensitive details such as names, usernames, keys, emails, passwords, and IP addresses. To ensure responsible use for research within the open-source community, access to the dataset will be provided through a gated mechanism.
We expect researchers and developers working with the dataset to adhere to the highest ethical standards and employ robust data protection measures. To assist users in effectively detecting and masking PII, we've also released a PII model trained on this dataset. Our goal in providing access to both the dataset and the PII model is to foster the development of privacy-preserving AI technologies while minimizing potential risks related to handling PII.
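As a minimal sketch of masking annotated spans — assuming `position` holds `[start, end]` character offsets into `text` (check the field descriptions above against the actual data):

```python
from datasets import load_dataset

ds = load_dataset("bigcode/bigcode-pii-dataset", use_auth_token=True)["test"]

def mask_pii(example):
    """Replace each annotated fragment with a category placeholder."""
    text = example["text"]
    # Process right-to-left so earlier offsets stay valid after replacement.
    for fragment in sorted(example["fragments"],
                           key=lambda f: f["position"][0], reverse=True):
        start, end = fragment["position"]  # assumed [start, end] offsets
        text = text[:start] + f"<{fragment['category']}>" + text[end:]
    return text

print(mask_pii(ds[0]))
```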
semeru/code-text-javascript
---
license: mit
Programminglanguage: "JavaScript"
version: "N/A"
Date: "Codesearchnet (Jun 2020 - paper release date)"
Contaminated: "Very Likely"
Size: "Standard Tokenizer (TreeSitter)"
---

### Dataset is imported from CodeXGLUE and pre-processed using their script.

# Where to find in Semeru:

The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/code-to-text/javascript in Semeru.

# CodeXGLUE -- Code-To-Text

## Task Definition

The task is to generate natural language comments for code, and it is evaluated by the [smoothed bleu-4](https://www.aclweb.org/anthology/C04-1072.pdf) score.

## Dataset

The dataset comes from [CodeSearchNet](https://arxiv.org/pdf/1909.09436.pdf), filtered as follows:

- Remove examples whose code cannot be parsed into an abstract syntax tree.
- Remove examples whose documents are shorter than 3 tokens or longer than 256 tokens.
- Remove examples whose documents contain special tokens (e.g. `<img ...>` or `https:...`).
- Remove examples whose documents are not in English.

### Data Format

After preprocessing, you obtain three .jsonl files: train.jsonl, valid.jsonl, and test.jsonl. Each line in an uncompressed file represents one function. One row is illustrated below.

- **repo:** the owner/repo
- **path:** the full path to the original file
- **func_name:** the function or method name
- **original_string:** the raw string before tokenization or parsing
- **language:** the programming language
- **code/function:** the part of the `original_string` that is code
- **code_tokens/function_tokens:** tokenized version of `code`
- **docstring:** the top-level comment or docstring, if it exists in the original string
- **docstring_tokens:** tokenized version of `docstring`

### Data Statistics

| Programming Language | Training | Dev   | Test  |
| :------------------- | :------: | :---: | :---: |
| JavaScript           | 58,025   | 3,885 | 3,291 |

## Reference

<pre><code>@article{husain2019codesearchnet,
  title={Codesearchnet challenge: Evaluating the state of semantic code search},
  author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
  journal={arXiv preprint arXiv:1909.09436},
  year={2019}
}</code></pre>
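As a usage sketch for the .jsonl layout described above (field names per the Data Format section; the file path is a placeholder for wherever you store the preprocessed split):

```python
import json

# Placeholder path; point this at the preprocessed train.jsonl.
with open("train.jsonl") as f:
    for line in list(f)[:3]:
        example = json.loads(line)
        print(example["func_name"])
        print("  code tokens:", example["code_tokens"][:10])
        print("  docstring:", example["docstring"][:80])
```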
emre/stanford-alpaca-cleaned-turkish-translated
---
license: afl-3.0
task_categories:
- text-generation
language:
- tr
size_categories:
- 10K<n<100K
---

09/04/2023 Update: New instructions added from: https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM

Original Version: https://github.com/tatsu-lab/stanford_alpaca#data-release

AI-based translation of the Stanford Alpaca dataset from English to Turkish.

For academic use only; please cite before you use it:

Taşar, D. E. T. (2023). stanford-alpaca-cleaned-turkish-translated [Dataset]. In Stanford Alpaca TR (1.0.1.a). https://huggingface.co/datasets/emre/stanford-alpaca-cleaned-turkish-translated

### Citation

Please cite the repo if you use the data or code in this repo.

```
@misc{alpaca-tr-tasar-2023,
  author = {Taşar, Davut Emre},
  title = {stanford-alpaca-cleaned-turkish-translated},
  year = {2023},
  publisher = {Huggingface},
  journal = {Huggingface repository},
  howpublished = {\url{https://huggingface.co/datasets/emre/stanford-alpaca-cleaned-turkish-translated}},
}
```
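A minimal loading sketch (the `train` split name and Alpaca-style fields are assumptions; check the dataset viewer for the actual configuration):

```python
from datasets import load_dataset

dataset = load_dataset("emre/stanford-alpaca-cleaned-turkish-translated", split="train")
print(dataset[0])  # Alpaca-style instruction/input/output fields are assumed
```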
mmosiolek/pl_alpaca_data_cleaned
---
license: cc-by-4.0
language:
- pl
tags:
- llama
- alpaca
- chat-gpt
- self-instruct
- gpt
---

# Polpaca: The Polish Alpaca

Please find the model here: https://huggingface.co/mmosiolek/polpaca-lora-7b

This repository contains the Polish translations of the datasets for constructing and evaluating instruction-following models: Alpaca.

### Training

The following dataset was translated: https://github.com/gururise/AlpacaDataCleaned
It can also be found here: https://huggingface.co/datasets/yahma/alpaca-cleaned

For the translation process, I relied on GPT-3.5-Turbo and the free $18 credits granted by the OpenAI platform. Unfortunately, the cost of the translation exceeded the amount granted, so I had to add $7 from my own pocket ;) Although the translation was extremely cheap, it took 5 days to complete.

The following prompt was used for the translation, based on: https://arxiv.org/abs/2301.08745

```
Please provide the Polish translation for these sentences: [TEXT]
```

### Manual Quality Assessment

For evaluation, the self-instruct (https://github.com/yizhongw/self-instruct) evaluation dataset was translated, this time with the help of DeepL, which offers translation of 500K characters for free each month.

Unfortunately, this approach has certain limitations related to the fact that some tasks from the original dataset can't simply be translated into another language. For example, we can't propagate orthographic errors from one language to another. It's necessary to keep this in mind while manually reviewing the results.
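A minimal sketch of the translation step described above, using the legacy `openai` (pre-1.0) ChatCompletion interface; the exact model parameters and batching are assumptions, not the author's exact script:

```python
import openai  # legacy openai<1.0 interface

def translate_to_polish(text: str) -> str:
    # Prompt wording taken from the card above.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Please provide the Polish translation for these sentences: {text}",
        }],
        temperature=0,  # assumed setting for deterministic translations
    )
    return response["choices"][0]["message"]["content"]

print(translate_to_polish("Describe the structure of an atom."))
```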
NeroUCH/online-health-chating
---
license: pddl
task_categories:
- question-answering
- table-question-answering
language:
- zh
tags:
- healthcare
- chat
- llm
- medical
size_categories:
- 100K<n<1M
---

# Online Health Chatting

This is the repository for the Online Health Chatting project, which provides the dataset for the [chathealth](https://github.com/NeroHin/ChatHealth.git) project.

> Warning: This dataset is for academic research only; any commercial or clinical use is prohibited.

## Dataset

We used a crawler to collect the data from the following websites:

- [KingNet](http://www.kingnet.com.tw/)

| Item | Size |
| :----: | :----: |
| Row | 91,735 |

- [問 8 健康咨詢 (Wen8Health)](https://tw.wen8health.com/)

| Item | Size |
| :----: | :----: |
| Row | 4,919 |

- [臺灣 E 院 (Taiwan e-Hospital)](https://sp1.hso.mohw.gov.tw/doctor/)

| Item | Size |
| :----: | :----: |
| Row | 153,251 |

- [家庭醫生 (Family Doctor)](https://www.familydoctor.com.cn/)

| Item | Size |
| :----: | :----: |
| Row | 577,849 |

## LLM Dataset

We then concatenated the data and split it into train and dev sets with a 7:3 ratio:

- train.json
- dev.json

| question | answer |
| :----: | :----: |
| e.g. 有什麼方法可以治療腎結石? (What treatments are available for kidney stones?) | 有的,腎結石的治療方法有很多種,包括藥物治療、手術治療、醫療治療、中醫治療等。 (Yes, there are many treatments for kidney stones, including medication, surgery, medical care, traditional Chinese medicine, etc.) |

```json
{
    "question": "有什麼方法可以治療腎結石?",
    "answer": "有的,腎結石的治療方法有很多種,包括藥物治療、手術治療、醫療治療、中醫治療等。"
}
```
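A minimal sketch of reading the split files listed above with the standard library (it assumes each file is a JSON list of `{"question", "answer"}` objects, as the example record suggests; inspect the files to confirm):

```python
import json

with open("train.json", encoding="utf-8") as f:
    train = json.load(f)  # assumed: a list of {"question", "answer"} objects

for pair in train[:3]:
    print(pair["question"], "->", pair["answer"][:50])
```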
rajuptvs/ecommerce_products_clip
---
license: mit
dataset_info:
  features:
  - name: image
    dtype: image
  - name: Product_name
    dtype: string
  - name: Price
    dtype: string
  - name: colors
    dtype: string
  - name: Pattern
    dtype: string
  - name: Description
    dtype: string
  - name: Other Details
    dtype: string
  - name: Clipinfo
    dtype: string
  splits:
  - name: train
    num_bytes: 87008501.926
    num_examples: 1913
  download_size: 48253307
  dataset_size: 87008501.926
---
hanamizuki-ai/genshin-voice-v3.5-mandarin
---
language:
- zh
multilinguality:
- monolingual
pretty_name: Genshin Voice
source_datasets:
- original
task_categories:
- text-to-speech
- automatic-speech-recognition
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: language
    dtype: string
  - name: npcName
    dtype: string
  - name: text
    dtype: string
  - name: type
    dtype: string
  splits:
  - name: train
    num_bytes: 33310846721.498
    num_examples: 67921
  download_size: 17251924784
  dataset_size: 33310846721.498
---

# Dataset Card for Genshin Voice

## Dataset Description

### Dataset Summary

The Genshin Voice dataset is a text-to-voice dataset of different Genshin Impact characters, unpacked from the game.

### Languages

The text in the dataset is in Mandarin.

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

The data was obtained by unpacking the [Genshin Impact](https://genshin.hoyoverse.com/) game.

#### Who are the source language producers?

The language producers are employees of [Hoyoverse](https://hoyoverse.com/) and contractors from [EchoSky Studio](http://qx.asiacu.com/).

### Annotations

The dataset contains official annotations from the game, including in-game speaker names and transcripts.

## Additional Information

### Dataset Curators

The dataset was created by [w4123](https://github.com/w4123), initially in his [GitHub repository](https://github.com/w4123/GenshinVoice).

### Licensing Information

Copyright © COGNOSPHERE. All Rights Reserved.
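A minimal loading sketch for the schema above; streaming is used here only to avoid the ~17 GB download up front:

```python
from itertools import islice
from datasets import load_dataset

dataset = load_dataset("hanamizuki-ai/genshin-voice-v3.5-mandarin",
                       split="train", streaming=True)

for example in islice(dataset, 3):
    audio = example["audio"]  # dict with "array" and "sampling_rate"
    print(example["npcName"], example["text"], audio["sampling_rate"])
```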
fast-flash/fast-flash-hackernews-posts
---
license: apache-2.0
tags:
- hackernews
- text
- social
- nlp
size_categories:
- 10M<n<100M
language:
- en
pretty_name: Fast Flash | HackerNews Posts
task_categories:
- text-classification
- text-generation
- conversational
---

# Fast Flash | HackerNews Posts Dataset

### Exploratory Analysis

Take a look at some fascinating findings from this dataset [on our website](http://wearefastflash.com/blog/hackernews).

### Dataset Summary

We release a dataset of all HackerNews posts. The dataset includes 35,316,999 posts and was collected in March 2023.

You can also find a dataset of all users [right here](https://huggingface.co/datasets/fast-flash/fast-flash-hackernews-users).

### Dataset Structure

The post objects in this dataset are structured according to HackerNews' [API specification](https://github.com/HackerNews/API).

## About the Author

[Fast Flash](https://wearefastflash.com) is a multidisciplinary creative studio that specializes in data-driven development, product design, branding, and tech. Need help with design, coding, machine learning, pitch decks, data, or analytics? Drop us a line at [hi@wearefastflash.com](mailto:hi@wearefastflash.com).
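Since posts follow the HackerNews API item schema, a filtering sketch might look like this (field names such as `type`, `title`, and `score` come from that spec; the `train` split name and streaming mode are assumptions):

```python
from itertools import islice
from datasets import load_dataset

posts = load_dataset("fast-flash/fast-flash-hackernews-posts",
                     split="train", streaming=True)

# Keep only top-level stories, per the HN API "type" field.
stories = (p for p in posts if p.get("type") == "story")
for story in islice(stories, 3):
    print(story.get("title"), story.get("score"))
```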
zhengyun21/PMC-Patients-ReCDS
---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- information retrieval
- patient similarity
- clinical decision support
size_categories:
- 100K<n<1M
---

# Dataset Card for PMC-Patients-ReCDS

## Dataset Description

- **Homepage:** https://github.com/pmc-patients/pmc-patients
- **Repository:** https://github.com/pmc-patients/pmc-patients
- **Paper:** https://arxiv.org/pdf/2202.13876.pdf
- **Leaderboard:** https://pmc-patients.github.io/
- **Point of Contact:** zhengyun21@mails.tsinghua.edu.cn

### Dataset Summary

**PMC-Patients** is a first-of-its-kind dataset consisting of 167k patient summaries extracted from case reports in PubMed Central (PMC), with 3.1M patient-article relevance annotations and 293k patient-patient similarity annotations defined by the PubMed citation graph.

### Supported Tasks and Leaderboards

Based on PMC-Patients, we define two tasks to benchmark Retrieval-based Clinical Decision Support (ReCDS) systems: Patient-to-Article Retrieval (PAR) and Patient-to-Patient Retrieval (PPR). For details, please refer to [our paper](https://arxiv.org/pdf/2202.13876.pdf) and [leaderboard](https://pmc-patients.github.io/).

### Languages

English (en).

## Dataset Structure

The PMC-Patients ReCDS benchmark is presented as retrieval tasks, and the data format is the same as in the [BEIR](https://github.com/beir-cellar/beir) benchmark. Specifically, there are queries, a corpus, and qrels (annotations).

### Queries

The ReCDS-PAR and ReCDS-PPR tasks share the same query patient set and dataset split. For each split (train, dev, and test), queries are stored in a `jsonl` file that contains a list of dictionaries, each with two fields:

- `_id`: unique query identifier represented by patient_uid.
- `text`: query text represented by patient summary text.

### Corpus

The corpus is shared across splits. For ReCDS-PAR, the corpus contains 11.7M PubMed articles, and for ReCDS-PPR, the corpus contains 155.2k reference patients from PMC-Patients. The corpus is also presented as a `jsonl` file that contains a list of dictionaries with three fields:

- `_id`: unique document identifier represented by the PMID of the PubMed article in ReCDS-PAR, and the patient_uid of the candidate patient in ReCDS-PPR.
- `title`: title of the article in ReCDS-PAR, and an empty string in ReCDS-PPR.
- `text`: abstract of the article in ReCDS-PAR, and patient summary text in ReCDS-PPR.

**PAR corpus note**: Due to its large size, we were unable to upload the full PAR corpus to Huggingface. Instead, we provide PMIDs of the articles included in the PAR corpus, but we recommend downloading the dataset from [Figshare](https://figshare.com/collections/PMC-Patients/6723465), which contains the full PAR corpus file.

### Qrels

Qrels are TREC-style retrieval annotation files in `tsv` format. A qrels file contains three tab-separated columns, i.e. the query identifier, corpus identifier, and score, in that order. The scores (2 or 1) indicate the relevance level in ReCDS-PAR or the similarity level in ReCDS-PPR.

Note that the qrels may not be the same as `relevant_articles` and `similar_patients` in `PMC-Patients.json` due to the dataset split (see our manuscript for details).

### Data Instances

**A sample of query**

{"_id": "8699387-1", "text": "A 60-year-old female patient with a medical history of hypertension came to our attention because of several neurological deficits that had developed over the last few years, significantly impairing her daily life. Four years earlier, she developed sudden weakness and hypoesthesia of the right hand.
The symptoms resolved in a few days and no specific diagnostic tests were performed. Two months later, she developed hypoesthesia and weakness of the right lower limb. On neurological examination at the time, she had spastic gait, ataxia, slight pronation of the right upper limb and bilateral Babinski sign. Brain MRI showed extensive white matter hyperintensities (WMHs), so leukodystrophy was suspected. However, these WMHs were located bilaterally in the corona radiata, basal ganglia, the anterior part of the temporal lobes and the medium cerebellar peduncle (A–D), and were highly suggestive of CADASIL. Genetic testing was performed, showing heterozygous mutation of the NOTCH3 gene (c.994 C<T; exon 6). The diagnosis of CADASIL was confirmed and antiplatelet prevention therapy was started. Since then, her clinical conditions remained stable, and the lesion load was unchanged at follow-up brain MRIs for 4 years until November 2020, when the patient was diagnosed with COVID-19 after a PCR nasal swab. The patient developed only mild respiratory symptoms, not requiring hospitalization or any specific treatment. Fifteen days after the COVID-19 diagnosis, she suddenly developed aphasia, agraphia and worsened right upper limb motor deficit, but she did not seek medical attention. Some days later, she reported these symptoms to her family medical doctor, and a new brain MRI was performed, showing a subacute ischemic area in the left corona radiata (E,F). Therapy with acetylsalicylic acid was switched to clopidogrel as secondary prevention, while her symptoms improved in the next few weeks. The patient underwent a carotid doppler ultrasound and an echocardiogram, which did not reveal any pathological changes. The review of the blood pressure log, both in-hospital and the personal one the patient had kept, excluded uncontrolled hypertension."}

**A sample of qrels**

```
query-id	corpus-id	score
8647806-1	6437752-1	1
8647806-1	6946242-1	1
```

### Data Splits

Refer to our paper.

## Dataset Creation

If you are interested in the collection of PMC-Patients and in reproducing our baselines, please refer to [this repository](https://github.com/zhao-zy15/PMC-Patients).

### Citation Information

If you find PMC-Patients helpful in your research, please cite our work:

```
@misc{zhao2023pmcpatients,
      title={PMC-Patients: A Large-scale Dataset of Patient Summaries and Relations for Benchmarking Retrieval-based Clinical Decision Support Systems},
      author={Zhengyun Zhao and Qiao Jin and Fangyuan Chen and Tuorui Peng and Sheng Yu},
      year={2023},
      eprint={2202.13876},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
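A minimal sketch of reading the BEIR-style files described above (file names are placeholders; substitute the actual query and qrels files for your split):

```python
import csv
import json

# Placeholder file names; use the actual files for your split.
with open("queries.jsonl") as f:
    queries = {q["_id"]: q["text"] for q in map(json.loads, f)}

with open("qrels.tsv") as f:
    reader = csv.reader(f, delimiter="\t")
    next(reader)  # skip the "query-id corpus-id score" header row
    for query_id, corpus_id, score in reader:
        print(query_id, corpus_id, score, queries[query_id][:60])
```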
jaydenccc/AI_Storyteller_Dataset
---
dataset_info:
  features:
  - name: synopsis
    dtype: string
  - name: short_story
    dtype: string
  splits:
  - name: train
    num_bytes: 204642
    num_examples: 100
  download_size: 129691
  dataset_size: 204642
---

# Dataset Card for "AI_Storyteller_Dataset"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
monet-joe/emo163
---
license: mit
task_categories:
- audio-classification
- image-classification
language:
- en
tags:
- music
- art
pretty_name: emo163 dataset
size_categories:
- 1M<n<10M
---

The emo163 dataset comprises approximately 395,000 entries of music emotion labels. Each entry includes three primary columns: Song ID, Playlist ID, and the emotional label of the song. Sourced from the official website of Netease Cloud Music, this dataset offers comprehensive information regarding the emotional annotations of songs. The Song ID serves as a unique identifier for each track, while the Playlist ID denotes the playlist to which the song belongs. The emotional labels assign categorical emotional tags to each song, facilitating in-depth exploration within the field of music emotion analysis for researchers and data scientists. With its substantial scale, the dataset is suitable for constructing emotion analysis models, conducting data mining, and gaining a profound understanding of the relationship between music and emotion.

## Maintenance

```bash
GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/monet-joe/emo163
```

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("monet-joe/emo163")

for item in dataset["train"]:
    print(item)

for item in dataset["validation"]:
    print(item)

for item in dataset["test"]:
    print(item)
```

## Mirror

<https://www.modelscope.cn/datasets/monetjoe/emo163>

## Reference

[1] <https://music.163.com/#/discover/playlist>
thu-coai/cold
---
license: apache-2.0
language:
- zh
---

The COLD dataset. [GitHub repo](https://github.com/thu-coai/COLDataset). [Original paper](https://arxiv.org/abs/2201.06025).

```bib
@inproceedings{deng-etal-2022-cold,
  title = "{COLD}: A Benchmark for {C}hinese Offensive Language Detection",
  author = "Deng, Jiawen and Zhou, Jingyan and Sun, Hao and Zheng, Chujie and Mi, Fei and Meng, Helen and Huang, Minlie",
  booktitle = "EMNLP",
  year = "2022"
}
```
VMware/open-instruct-v1-oasst-dolly-hhrlhf
---
language: en
dataset_info:
  features:
  - name: 'Unnamed: 0'
    dtype: int64
  - name: alpaca_prompt
    dtype: string
  - name: response
    dtype: string
  - name: instruction
    dtype: string
  splits:
  - name: train
    num_bytes: 60252132
    num_examples: 62971
  download_size: 33232110
  dataset_size: 60252132
---

# Dataset Card for "open-instruct-v1-oasst-dolly-hhrlhf"

This dataset is a combination of:

1. A filtered subset of [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1)
2. The train split of [Mosaic-dolly-hhrlhf](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) (consisting of [Databricks' dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset and a filtered subset of [Anthropic's HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf))

## Dataset

The dataset consists of 3 columns:

1. instruction: the natural language instruction without any prompt templates (we extracted them out of the Alpaca format in Mosaic-dolly-hhrlhf)
2. alpaca_prompt: Alpaca prompt template versions of instruction
3. response: the response to the instruction

## License

- It is usable for commercial purposes as long as you follow the terms of the license.
- Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
  - Wikipedia (various pages) - https://www.wikipedia.org/ - Copyright © Wikipedia editors and contributors.
  - Databricks (https://www.databricks.com) - Copyright © Databricks
  - Mosaic ML (https://www.mosaicml.com/) - Copyright © Mosaic ML
  - VMware - Copyright © VMware

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
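A minimal loading sketch using the columns listed above:

```python
from datasets import load_dataset

dataset = load_dataset("VMware/open-instruct-v1-oasst-dolly-hhrlhf", split="train")

example = dataset[0]
print(example["instruction"])    # bare instruction
print(example["alpaca_prompt"])  # Alpaca-templated version of the instruction
print(example["response"])
```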
lucadiliello/STORIES
---
license: cc
language:
- en
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 34099206982
    num_examples: 945354
  - name: dev
    num_bytes: 41804891
    num_examples: 946
  - name: test
    num_bytes: 42356443
    num_examples: 947
  download_size: 15347401118
  dataset_size: 34183368316
task_categories:
- fill-mask
- text-generation
pretty_name: STORIES
size_categories:
- 100K<n<1M
---

The original STORIES dataset from the paper [A Simple Method for Commonsense Reasoning](https://arxiv.org/pdf/1806.02847v2.pdf).
paolorechia/medium-size-generated-tasks
---
license: other
language:
- en
tags:
- ReAct
- LLM
- Agent
- langchain
size_categories:
- 1K<n<10K
---

# LICENSE

This is a dataset generated with the help of WizardLM. Therefore, the terms of use are restricted to research/academic use only.

# What is this

This is a collection of .txt files with a prompt and the expected output. For instance:

```
#####PROMPT: Question: Make sure the task is unique and adds value to the original list.
Thought:#####OUTPUT: I should check if the task is already in the list.
Action: Python REPL
Action Input:
if task not in tasks:
    print("Task not found.")
else:
    print("Task found.")
```

# What is it for

This is meant to help train LLaMA-based models to use the Langchain ReAct tooling, specifically with the Python REPL.

# How good is it?

Not great: the dataset is quite dirty at the moment. We are still fine-tuning the first LoRA, so no tests have been made.

# Next steps

1. Redo the steps using a base model that has a more permissive license
2. Fix problems in the dataset generation phase, e.g.:
   * the model tries to install packages and fails
   * the langchain agent tooling sometimes seems buggy and doesn't return stdout correctly
   * the model likes to ask for user input
   * the model likes to exit the chain by calling sys.exit()
   * once the model gets stuck with installation steps, it's just an infinite loop
3. Clean the dataset better

# How was it created

There are a few steps involved in the generation of this dataset.

1. Created a mechanism to log prompt/output pairs generated by a running Langchain agent on a local server.

Server link: https://github.com/paolorechia/learn-langchain/blob/a3c288c43845d19692478f06757ed326c222f095/servers/vicuna_server.py#L39

```python
class PromptLogger:
    _instances = {}

    @staticmethod
    def get(session):
        if session not in PromptLogger._instances:
            PromptLogger._instances[session] = PromptLogger(session)
        return PromptLogger._instances[session]

    def __init__(self, session) -> None:
        self.input_step = 0
        self.output_step = 0
        self.session = session
        self._dir = f"logged_prompts/session_{session}/"
        try:
            os.makedirs(self._dir)
        except FileExistsError:
            pass

    def log(self, input_str, prefix="input"):
        filename = os.path.join(self._dir, f"{prefix}_{self.input_step}")
        with open(filename, "w") as fp:
            if prefix == "input":
                input_str = input_str.split("Now begin for real!\n")[1]
            fp.write(input_str)
        if prefix == "input":
            self.input_step += 1
        elif prefix == "output":
            self.output_step += 1
        else:
            raise ValueError("Invalid prefix")


@app.post("/prompt")
def process_prompt(prompt_request: PromptRequest):
    params = {
        "prompt": prompt_request.prompt,
        "temperature": prompt_request.temperature,
        "max_new_tokens": prompt_request.max_new_tokens,
        "stop": prompt_request.stop,
    }
    print("Received prompt: ", params["prompt"])
    output = compute_until_stop(model, tokenizer, params, config.device)
    print("Output: ", output)
    if prompt_request.logging_session is not None:
        prompt_logger = PromptLogger.get(prompt_request.logging_session)
        prompt_logger.log(prompt_request.prompt, prefix="input")
        prompt_logger.log(output, prefix="output")
    return {"response": output}
```

2. Created a short list of tasks and then extended it with the help of an LLM until about 390 tasks were generated.

Script link: https://github.com/paolorechia/learn-langchain/blob/main/task_generation/generate_tasks.py

```python
from langchain_app.models.llama_http_llm import build_llama_base_llm

output = None
# Now let's test it out!
while True:
    params = {"temperature": 1.3, "max_new_tokens": 1024, "stop": []}
    llm = build_llama_base_llm(parameters=params)
    # Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
    output = llm._call("""
You are given a list of tasks. Please extend it with new unique tasks:

1. "Print hello world to the terminal",
2. "Fetch a Chuck Norris joke from this endpoint https://api.chucknorris.io/jokes/random",
3. "Parse this HTML page https://api.chucknorris.io/ and find all the API endpoints",
4. "Generate 10 unique cat jokes and store them in a CSV file with two columns, punch line and joke finisher",
5. "Connect to a Postgres database and return the existing databases names. Use the following credentials: \n\nhost localhost\nport 7036\nuser admin\npassword admin",
6. "List the existing files in the current directory",
7. "Find out your existing working directory",
8. "Fix the syntax error of this code snippet:\ndef myfunc():\n\tprint(“hello",
9. "Find the keys of the JSON payload stored in the variable response_json",
10. "Extract the key called 'address' from the JSON stored in the variable json_ and store into a variable called address",
11. "Create a joke about AI bots and save it in a local text file",
12. "Create an unit test for the following snippet of code:\ndef sum_2(x, y):\n\treturn x + y",
13. "Create random data and plot it using matplotlib and store the result as a .PNG image",
14. "Download a CSV file about suicide from the webpage https://catalog.data.gov/dataset/?res_format=CSV and plot a bar chart comparing the suicide numbers of male vs. female",
15. "Design a Todo list system. Write the explanation in a file called 'todo_list_system_design.txt'",
16. "Search for the source code called 'example.py' in the directory, inspect the file, write unit tests for it and execute them to make sure everything is correct.",
17. "Write a data pipeline that ingests data from the Crime Data from 2020 to present from https://catalog.data.gov/dataset/?res_format=CSV. Use the requests and pandas, save the csv to the local disk. Create a directory if necessary, give an appropriate name"
""")
    with open("generated_tasks.txt", "a") as fp:
        fp.write(output)
```

The output can then be filtered with a simple bash script:

```bash
cat generated_tasks.txt | tr -s ' ' | grep -oE '\s*[0-9]+\.[A-Za-z, ]+[A-Za-z, ]+\.' | awk 'length >= 50' | sed -e 's/[0-9\. ]*//' > filtered_generated.txt
```

And then deduplicated with a few lines of code:

```python
import json

with open("filtered_generated.txt", "r") as fp:
    tasks = fp.readlines()

with open("dedup_generated_tasks.json", "w") as fp:
    json.dump(list(set(tasks)), fp, indent=4)
```

Result: https://github.com/paolorechia/learn-langchain/blob/main/task_generation/dedup_generated_tasks.json

3. Used a prompted, unquantized WizardLM-7B to execute each task in the list, using the logger from step 1.

https://github.com/paolorechia/learn-langchain/blob/main/langchain_app/agents/log_task_prompts_agent.py

```python
from langchain.agents import Tool, initialize_agent, AgentType
from langchain.tools.python.tool import PythonAstREPLTool
from langchain_app.models.llama_http_llm import build_llama_base_llm
import json

prompt_template = """python
For instance:

Question: Find out how much 2 plus 2 is.
Thought: I must use the Python shell to calculate 2 + 2
Action: Python REPL
Action Input:
2 + 2
Observation: 4

Thought: I now know the answer
Final Answer: 4

Example 2:
Question: You have a variable age in your scope.
If it's greater or equal than 21, say OK. Else, say Nay.
Thought: I should write an if/else block in the Python shell.
Action: Python REPL
Action Input:
if age >= 21:
    print("OK")  # this line has four spaces at the beginning
else:
    print("Nay")  # this line has four spaces at the beginning
Observation: OK
Thought: I have executed the task successfully.
Final Answer: I have executed the task successfully.

Example 3:

Question: Write and execute a script that sleeps for 2 seconds and prints 'Hello, World'
Thought: I should import the sleep function.
Action: Python REPL
Action Input:
from time import sleep
Observation:

Thought: I should call the sleep function passing 2 as parameter
Action: Python REPL
Action Input:
sleep(2)
Observation:

Thought: I should use the 'print' function to print 'Hello, World'
Action: Python REPL
Action Input:
print('Hello, World')
Observation:

Thought: I now finished the script
Final Answer: I executed the following script successfully:

from time import sleep
sleep(2)
print('Hello, World')

Additional Hints:
1. If an error is thrown along the way, try to understand what happened and retry with a new code version that fixes the error.
2. DO NOT IGNORE ERRORS.
3. If an object does not have an attribute, call dir(object) to debug it.
4. SUPER IMPORTANT: ALWAYS respect the indentation in Python. Loops demand an indentation. For example:

for i in range(10):
    print(i)  # this line has four spaces at the beginning

Same for ifs:

if True:
    print("hello")  # this line has four spaces at the beginning

An error will be thrown because of the indentation, something like...
"expected an indented block after 'for' statement on line..."

To fix, make sure to indent the lines!

5. Do not use \ in variable names, otherwise you'll see the syntax error "unexpected character after line continuation character..."
6. If the variable is not defined, use vars() to see the defined variables.
7. Do not repeat the same statement twice without a new reason.
8. NEVER print the HTML directly.

Now begin for real!

Question: {}
"""

offset = 0
with open("task_generation/dedup_generated_tasks.json", "r") as fp:
    tasks = json.load(fp)
tasks = tasks[offset:]

for idx, task in enumerate(tasks):
    params = {
        "temperature": 0,
        "max_new_tokens": 2048,
        "stop": ["Observation:"],
        "logging_session": f"medium_size_dataset{idx+offset}",
    }
    llm = build_llama_base_llm(parameters=params)
    python_tool = PythonAstREPLTool()
    tools = [
        Tool(
            name="Python REPL",
            func=python_tool,
            description="useful for when you need to execute Python code",
        ),
    ]
    agent = initialize_agent(
        tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
    )
    first_task = tasks[idx]
    try:
        agent.run(prompt_template.format(first_task))
    except Exception:
        pass
```
4. Extracted all logs and consolidated them into .txt files inside a directory.

```python
import os

dataset_folder = "medium_size_generated_tasks"
# -1 means no limit on the number of actions per task
max_actions_per_task = -1

if __name__ == "__main__":
    try:
        os.makedirs(dataset_folder)
    except FileExistsError:
        pass
    dir_ = "logged_prompts/"
    sessions = os.listdir(dir_)
    datapoints = 0
    for session in sessions:
        session_dir = os.path.join(dir_, session)
        logs_files = os.listdir(session_dir)
        inputs_step_tuple = [log.split("_") for log in logs_files if "input" in log]
        outputs_step_tuple = [log.split("_") for log in logs_files if "output" in log]
        inputs_step_tuple.sort(key=lambda x: x[1])
        outputs_step_tuple.sort(key=lambda x: x[1])
        i = 0
        for input_tuple, output_tuple in zip(inputs_step_tuple, outputs_step_tuple):
            input_filename = input_tuple[0] + "_" + input_tuple[1]
            output_filename = output_tuple[0] + "_" + output_tuple[1]
            input_ = os.path.join(session_dir, input_filename)
            output_ = os.path.join(session_dir, output_filename)
            with open(input_, "r") as fp:
                prompt = fp.read()
            with open(output_, "r") as fp:
                output = fp.read()
            datapoint_filename = os.path.join(dataset_folder, f"{datapoints}.txt")
            with open(datapoint_filename, "w") as fp:
                fp.write(f"#####PROMPT: {prompt}")
                fp.write(f"#####OUTPUT: {output}")
            datapoints += 1
            i += 1
            if i == max_actions_per_task:
                break
```

5. Use the dataset! For instance, to convert it to JSON:

```python
import os
import json

dataset_list = []
# dir_ = "easy_task_mini_dataset_cleaned"
dir_ = "medium_size_generated_tasks"
files_ = os.listdir(dir_)
for f in files_:
    filename = os.path.join(dir_, f)
    print(filename)
    with open(filename, "r") as fp:
        txt = fp.read()
    prompt = txt.split("#####PROMPT:")[1].split("#####OUTPUT:")[0].strip()
    output = txt.split("#####OUTPUT:")[1].strip()
    dataset_list.append({
        "prompt": prompt,
        "output": output,
    })

with open("data.json", "w") as fp:
    json.dump(dataset_list, fp, indent=4)
```

You can also use my fork directly to train a LoRA: https://github.com/paolorechia/vicuna-react-lora/blob/main/finetune_wizard_react.py
Gdot/clts
---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: summary
    dtype: string
  splits:
  - name: train
    num_bytes: 706157853
    num_examples: 148317
  - name: valid
    num_bytes: 97794789
    num_examples: 20393
  - name: test
    num_bytes: 78816630
    num_examples: 16687
  download_size: 593531838
  dataset_size: 882769272
task_categories:
- summarization
language:
- zh
---

# Dataset Card for "clts"

[Original link](https://github.com/lxj5957/CLTS-Dataset)
csitfun/LogiCoT
---
license: cc-by-nc-nd-4.0
task_categories:
- text-generation
language:
- en
- zh
tags:
- instruction-finetuning
pretty_name: logicot
size_categories:
- 100K<n<1M
---

Instructions and demonstrations for building generative large language models capable of formal logical reasoning. CoT rationales are generated with the GPT-4 API.

> For non-commercial research purposes only.

Update: Our updated paper has been accepted by the Findings of EMNLP 2023.

The dataset is hosted on Huggingface Datasets. It is the only distribution channel we currently allow. **You can download data examples from our GitHub [Link](https://github.com/csitfun/LogiCoT)**

**Important**: To request the dataset, please:

1. Submit an access request through your Huggingface account.
2. Send an email to Hanmeng Liu at hanhaishiyi@gmail.com. Please tell us your Huggingface account username, your real name, organization, and purpose. It would be best if you guaranteed that you will not share the data with others.

We will approve your request after your info is provided. Your access will be granted as soon as possible after the email has been sent. Please come back and check in a couple of hours. Note that you might not receive a reply letter due to the volume of requests.

`general_inference.jsonl`: English instruction tuning data for the general inference task

`general_inference_pruned`: a pruned version with a smaller size but more diversity

`mrc.jsonl`: English instruction tuning data for the logical reading comprehension task

`mrc_zh.jsonl`: Chinese instruction tuning data for the logical reading comprehension task

`entailmentbank.jsonl`: derived from the EntailmentBank data

`folio2instruction.jsonl`: derived from the FOLIO data

For more information, please refer to our arXiv preprint - [LogiCoT: Logical Chain-of-Thought Instruction-tuning Data Collection with GPT-4](https://arxiv.org/abs/2305.12147)

## Seminal Data

* LogicInference
* EntailmentBank
* FOLIO
* ReClor
* LogiQA

## Instruction types

### General inference task

* Language to Logic
* One-Step Inference
* Inference Chains

### Multi-choice reading comprehension task

* Identify the Necessary Claim
* Strengthen an Argument
* Weaken an Argument
* Resolve a Situation
* Identify a Flaw in an Argument's Reasoning

## How to cite

```
@inproceedings{liu2023logicot,
  title={LogiCoT: Logical Chain-of-Thought Instruction Tuning},
  author={Liu, Hanmeng and Teng, Zhiyang and Cui, Leyang and Zhang, Chaoli and Zhou, Qiji and Zhang, Yue},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2023},
  pages={2908--2921},
  year={2023}
}
```
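A minimal sketch of reading one of the `jsonl` files listed above once access is granted (the per-record field names are not documented here, so the sketch only inspects the schema):

```python
import json

with open("general_inference.jsonl") as f:
    for line in list(f)[:3]:
        record = json.loads(line)
        print(record.keys())  # inspect the actual schema before relying on it
```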
projectlosangeles/Los-Angeles-MIDI-Dataset
---
license: cc-by-nc-sa-4.0
tags:
- mir
- music
- midi
- midi-dataset
---

# Los Angeles MIDI Dataset

## SOTA kilo-scale MIDI dataset for MIR and Music AI purposes

***

![Vintage_Los_Angeles_Print](https://user-images.githubusercontent.com/56325539/196157186-5b0edd15-020f-4877-a8e2-b1af42f960c6.jpg)

***

## Search and Explore Los Angeles MIDI dataset

[![Open In Colab][colab-badge]][colab-notebook1]

[colab-notebook1]: <https://colab.research.google.com/github/asigalov61/Los-Angeles-MIDI-Dataset/blob/main/Los_Angeles_MIDI_Dataset_Search_and_Explore.ipynb>
[colab-badge]: <https://colab.research.google.com/assets/colab-badge.svg>

***

## [NEW] Master MIDI Dataset GPU Search and Filter

[![Open In Colab][colab-badge]][colab-notebook5]

[colab-notebook5]: <https://colab.research.google.com/github/asigalov61/Los-Angeles-MIDI-Dataset/blob/main/Extras/Master_MIDI_Dataset_GPU_Search_and_Filter.ipynb>

***

## Master MIDI Dataset Search and Filter

[![Open In Colab][colab-badge]][colab-notebook4]

[colab-notebook4]: <https://colab.research.google.com/github/asigalov61/Los-Angeles-MIDI-Dataset/blob/main/Extras/Master_MIDI_Dataset_Search_and_Filter.ipynb>

***

## Make your own Los Angeles MIDI Dataset from any MIDI scrape

[![Open In Colab][colab-badge]][colab-notebook2]

[colab-notebook2]: <https://colab.research.google.com/github/asigalov61/Los-Angeles-MIDI-Dataset/blob/main/Los_Angeles_MIDI_Dataset_Maker.ipynb>

***

## Make your own Los Angeles MIDI Dataset Metadata

[![Open In Colab][colab-badge]][colab-notebook3]

[colab-notebook3]: <https://colab.research.google.com/github/asigalov61/Los-Angeles-MIDI-Dataset/blob/main/META-DATA/Los_Angeles_MIDI_Dataset_Metadata_Maker.ipynb>

***

## [Los Angeles MIDI Dataset is now available for download!!!](https://huggingface.co/datasets/projectlosangeles/Los-Angeles-MIDI-Dataset)

***

## Main Features:

### 1) ~405000 100% unique MIDIs to explore :)
### 2) Each MIDI file was read-checked and 100% de-duped
### 3) Extensive meta-data for each MIDI file
### 4) Full chords data for each MIDI file
### 5) Helper Python code

***

## NEW in version 4.0

### 1) Added 160519 new unique MIDIs
### 2) Dataset now contains 404714 MIDIs
### 3) Removed all malformed MIDIs
### 4) Expanded dataset MIDIs metadata
### 5) Added MIDIs chords database
### 6) Updated dataset concept artwork

### Enjoy! :)

***

```bibtex
@inproceedings{lev2024losangelesmididataset,
  title = {Los Angeles MIDI Dataset: SOTA kilo-scale MIDI dataset for MIR and Music AI purposes},
  author = {Aleksandr Lev},
  booktitle = {GitHub},
  year = {2024},
}
```

***

### Project Los Angeles
### Tegridy Code 2024
lang-uk/malyuk
---
language:
- uk
size_categories:
- 10B<n<100B
---

## Malyuk [mɐˈlʲuk]

Combined corpus: [UberText 2.0](https://lang.org.ua/en/ubertext/), [Oscar](https://huggingface.co/datasets/oscar), [Ukrainian News](https://huggingface.co/datasets/zeusfsx/ukrainian-news)

This is not an official release by any means. It is just a compilation made by me to simplify the training of Ukrainian LLMs. Nothing is guaranteed: no support, no requests, nothing.

* 113GB of texts in jsonl.
* 38,941,863 articles.

![alt text](https://huggingface.co/datasets/lang-uk/malyuk/resolve/main/eyes.png "Watching ya")
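Given the 113 GB size, streaming is a sensible way to iterate over the corpus (the `train` split name and the record fields are assumptions; check the dataset viewer):

```python
from itertools import islice
from datasets import load_dataset

corpus = load_dataset("lang-uk/malyuk", split="train", streaming=True)
for article in islice(corpus, 3):
    print(article)  # inspect the actual fields before relying on them
```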
tianyang/repobench-c
---
language_creators:
- found
license:
- cc-by-nc-nd-4.0
multilinguality:
- multilingual
pretty_name: RepoBench-Completion
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- document-retrieval
tags:
- code
size_categories:
- 100K<n<1M
---

# Dataset Card for RepoBench-C

## Dataset Description

- **Homepage:** https://github.com/Leolty/repobench
- **Paper:** https://arxiv.org/abs/2306.03091

## Dataset Summary

**RepoBench-C (Completion)** is a subtask of **RepoBench** ([GitHub](https://github.com/Leolty/repobench), [arXiv](https://arxiv.org/abs/2306.03091)), focusing on the prediction of the next line of code, given in-file context (including several preceding lines and import statements) and cross-file context.

## Settings

- `cff`: short for cross_file_first, indicating that the cross-file module in the next line is first used in the current file.
- `cfr`: short for cross_file_random, indicating that the cross-file module in the next line is not first used in the current file.
- `if`: short for in_file, indicating that the next line does not contain any cross-file module.

## Supported Tasks

- `python_cff`: Python code prediction with the cross-file-first setting.
- `python_cfr`: Python code prediction with the cross-file-random setting.
- `python_if`: Python code prediction with the in-file setting.
- `java_cff`: Java code prediction with the cross-file-first setting.
- `java_cfr`: Java code prediction with the cross-file-random setting.
- `java_if`: Java code prediction with the in-file setting.

## Loading Data

For example, if you want to load the `test` set to test your model on `Python` code prediction with the `cff` setting, you can do the following:

```python
from datasets import load_dataset

dataset = load_dataset("tianyang/repobench-c", "python_cff", split="test")
```

> Note: The `split` argument is optional. If not provided, the entire dataset will be loaded.

## Dataset Structure

```json
{
    "repo_name": "repository name of the data point",
    "file_path": "path/to/file",
    "context": "commented and concatenated cross-file context",
    "import_statement": "all import statements in the file",
    "code": "the code for next-line prediction",
    "prompt": "cross-file context + import statements + in-file code",
    "next_line": "the next line of the code"
}
```

## Licensing Information

CC BY-NC-ND 4.0

## Citation Information

```bibtex
@misc{liu2023repobench,
  title={RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems},
  author={Tianyang Liu and Canwen Xu and Julian McAuley},
  year={2023},
  eprint={2306.03091},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

## Contributions

Thanks to [@Leolty](https://github.com/Leolty) for adding this dataset.
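As a usage sketch built on the structure above, an exact-match evaluation loop might compare a model's completion against `next_line` (the `complete_next_line` callable is a placeholder for your model, not part of the dataset):

```python
from datasets import load_dataset

dataset = load_dataset("tianyang/repobench-c", "python_cff", split="test")

def complete_next_line(prompt: str) -> str:
    """Placeholder for an actual code completion model."""
    return ""

correct = 0
for example in dataset:
    prediction = complete_next_line(example["prompt"])
    correct += prediction.strip() == example["next_line"].strip()

print(f"Exact match: {correct / len(dataset):.2%}")
```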
Zilun/RS5M
---
license: cc-by-nc-4.0
language:
- en
size_categories:
- 1M<n<10M
---

# RS5M

## File Explanation

### 1. pub11_NER_geolocation_info.csv

* This file provides the geolocation entities extracted from captions. We discovered that the captions from the PUB11 dataset contain a significant amount of location information, so we ran NER (Named Entity Recognition) extraction on the PUB11 subset.
* We hypothesize that the location information in the captions is closely related to the image's content and its shooting location. While this might introduce some noise, given that most PUB11 images originate from the internet and the paired text's purpose is to supplement the image, we believe most of the location data is useful.
* We specifically extracted entities labeled as "GPE" (geopolitical entities). However, most of these entities are country or city names, not UTM zones or latitude/longitude details. While city names can be readily converted to UTM zones, captions containing only country names provide us with coarse spatial information. Nonetheless, this is a valuable addition to our analysis of RS5M's geographic distribution.
* Out of the dataset, 880,354 images have captions with location information. We took the NER tool from the NLTK implementation. We also tried Stanford NER models, but the estimated processing time was 900 hours. In the future, we plan to develop an algorithm to convert extracted GPEs to UTM zones if applicable.

| img_name | text | entity |
|:---:|:---:|:---:|
| laion2b_0_0 | Aerial photography Pattern on the Earth Field Corn Farm Abstract Harvest Season | [] |
| laion2b_0_2 | San AntonioTexas suburban housing development neighborhood - aerial view stock photo | ['San AntonioTexas'] |
| laion2b_0_4 | Aerial view of historical orthodox monasteries on the top of meteors cliffs | [] |
| laion2b_0_7 | Aerial view of Albert Park and the Melbourne skyline, Australia | ['Melbourne', 'Australia'] |
| laion2b_0_9 | Aerial photo taken on Oct. 6, 2019 shows tourists viewing pink muhly grass in the Fenghuanggou scenic area during the National Day holiday in Nanchang, capital of east China's Jiangxi Province. (Xinhua/Peng Zhaozhi) | ['Fenghuanggou', 'Nanchang', 'China', 'Jiangxi Province'] |

### 2. pub11_metacap_country_month.pkl

* Only data from YFCC14M (7,841 pairs) have info on "meta_caption", "country", and "month".
* **img_name**: Image name
* **text**: Image caption
* **url**: Image download URL
* **download_status**: Whether the image was downloaded successfully.
* **meta_caption**: Meta caption generated by a template with verified information from the dataset (month, country, date, shooting angle, etc.).
* **country**: The country where the image was shot.
* **month**: When the image was shot.
|img_name|text|url|download_status|meta_caption|country| month| |:---:|:---:|:---:|:---:|:---:|:---:|:---:| |laion2b_0_0| Aerial photography Pattern on the Earth Field Corn Farm Abstract Harvest Season| https://image.shutterstock.com/image-photo/stock-photo-aerial-photography-pattern-on-the-earth-field-corn-farm-abstract-harvest-season-450w-702011665.jpg| SUCCESS| NaN| NaN| NaN| |laion2b_0_2| San AntonioTexas suburban housing development neighborhood - aerial view stock photo| http://media.istockphoto.com/photos/san-antoniotexas-suburban-housing-development-neighborhood-aerial-picture-id170094223?k=6&amp;m=170094223&amp;s=170667a&amp;w=0&amp;h=53-MMobWRGSl29N1E3oQa8FVsv53FL2D9eqfLn5hvl0=| SUCCESS| NaN| NaN| NaN| |laion2b_0_4| Aerial view of historical orthodox monasteries on the top of meteors cliffs| https://cdn.kated.com/wp-content/uploads/2020/06/Grc39v-Aerial-view-of-historical-orthodox-monasteries-on-the-top-of-meteors-cliffs-150x150.jpg| SUCCESS| NaN| NaN| NaN| |yfcc14m_27147| The canyon on the right contains Araster Spring, the canyon on the left is Edgar Canyon on the USGS maps. |http://farm3.staticflickr.com/2404/13175374033_8445b81b3f.jpg| SUCCESS | bearing a timestamp of 20 o'clock, March 12, 2014, this shot offers a glimpse of Spring in San Manuel, United States.| United States| March| |yfcc14m_27148| Echo Rock at 7870 feet "Observation Rock and Echo Rock are dissected satellitic volcanoes, which erupted olivine andesite upon Mount Rainier's northwestern flank late in its history. " - from Fisk, 1963, Geology of Mount Rainier National Park, Washington: USGS Professional Paper 444 Link to More Info i082006 326| http://farm1.staticflickr.com/82/220990009_cfc85da775.jpg | SUCCESS | taken in the heart of Summer within Buckley, United States, this image's timestamp reads 13 o'clock, August 19, 2006.| United States| August| |yfcc14m_27151 |There are lots of unique stones laying around. A few distinct types... numerous white round stones, as seen on the right. Also there's lots of crystall-y flakey rock, like the large one on the lower left. It doesn't come out in this photo, but it reflects light quite well. It's clearly man-made. Look at it in the satellite imagery (link below). Seriously. There used to be a building on the top. You can tell it's man made by the way the rocks and dirt are piled. None of the locals I've interviewed have any idea who put it there. It's been there for as long as any of them have a verbal history. What we do know is there's lots of pottery shards... thousands of them. Last time we were here some of the locals told us after it rains kids search the mound and sometimes find old coins. Further questioning got nowhere. Dave and I like to hike here, it's got a great view of the valley. This time we found a guy at the top digging for artifacts, and he found one while we were there. It was a large clay bead with designs in it. He offered to sell it to us... and while I wanted to I didn't because we 1) don't want to reward that behavior and 2) it's bad form and if it isn't it should be illegal. (34.44809567863388, 70.395348072052) maps.google.com/maps?f=q&source=s_q&hl=en&geo... | http://farm4.staticflickr.com/3605/3393504467_5b7bf7a058.jpg | SUCCESS | captured in Jalālābād, Afghanistan, this image highlights the beauty of Spring and is timestamped at 8 o'clock, March 7, 2008. | Afghanistan | March ### 3. pub11_train_metadata.csv, pub11_validation_metadata.csv, rs3_train_val_metadata.csv * For pub11 subset, we split the train file and validation file. 
* **file_name**: Image name
* **text**: Image caption

|file_name| text|
|:---:|:---:|
|laion2b_0_0.jpg| Aerial photography Pattern on the Earth Field Corn Farm Abstract Harvest Season|
|laion2b_0_2.jpg| San AntonioTexas suburban housing development neighborhood - aerial view stock photo|
|laion2b_0_4.jpg| Aerial view of historical orthodox monasteries on the top of meteors cliffs|
|laion2b_0_5.jpg| Overhead view of a car parking entrance road. Aerial view....|
|laion2b_0_7.jpg| Aerial view of Albert Park and the Melbourne skyline, Australia|

* For the rs3 subset, we did not split the train and validation files at this stage.
* **img_name**: Image name
* **subset_name**: fmow/ben/millionaid
* **top1_cap_vanilla**: The rank-1 image caption filtered by VLMs. Generated by BLIP2-opt6.7B.
* **ssl_cap_vanilla**: The rank-1 rotation-invariant image caption. Generated by BLIP2-opt6.7B.
* **top1_cap_ft**: The rank-1 image caption filtered by VLMs. Generated by fine-tuned BLIP2-opt6.7B.
* **ssl_cap_ft**: The rank-1 rotation-invariant image caption. Generated by fine-tuned BLIP2-opt6.7B.
* **country**: The country where the image was shot.
* **month**: The month when the image was shot.
* **meta_caption**: Meta caption generated from a template using ground-truth metadata from the dataset (month, country, date, shooting angle, etc.).

|img_name| subset_name| top1_cap_vanilla| ssl_cap_vanilla| top1_cap_ft| ssl_cap_ft| country| month| meta_caption|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|P0000001.jpg| millionaid| a google earth aerial view of a commercial building and some parking lots| view of the roof top parking lot from a satellite image, with an overhead view of the building| a parking lot next to some businesses with cars in them. There are many cars parked in it. There is one in the front and many on the sides. They are parked in front of a building as it has a parking lot in it| There are many yellow buildings next to a big highway. There is a white line dividing the buildings from the rest of the parking lot. The buildings seem to be in different rooms yet there are roads connecting each other. There is also an intersection||||
|293813.jpg| ben| an old satellite photo of some land that has been covered in dirt| the satellite view of a dirt area, with no grass| This area is not very large but has a number of houses. This is located in an area of land between two rivers. This area is not very large but has a number of houses.| This area is not very large but has a number of houses. This is located in an area of land between two rivers. This area is not very large but has a number of houses.| | August| a depiction from Summer, this satellite image showcases 'non-irrigated arable land, land principally occupied by agriculture, with significant areas of natural vegetation, transitional woodland/shrub' and is affiliated with utm zone 29N, timestamped 11 o'clock, August 18, 2017.|
|fmow/train/zoo/zoo_90/zoo_90_1_msrgb.jpg| fmow| a satellite photo shows the river next to buildings| the satellite image shows an area that is near the river| The path is very close to the river. there are trees in the middle of the path. a small park sits close to the path. a small town is also close to the path| there are several roads next to a river. A river is the source of various plants and animals. Many trees are around the river. The ground is fertile and provides people with plenty of food| Argentina| December| a peak into Rawson, Argentina during its Winter showcases zoo at the center and top-center blocks. the clarity comes from a ground sample distance of 2.15 meters, and it belongs to the series from the utm zone 20G, time-marked on 14 o'clock, December 24, 2016.|
### 4. rs3_train_val_metadata_clean.pkl

* There are many repeated expressions in the captions generated by the fine-tuned model (see the file "rs3_train_val_metadata.csv"). We performed deduplication on sentences with similar meanings.

### 5. pub11_train_intermediate.csv, pub11_val_intermediate.csv, rs3_train_intermediate.csv, rs3_val_intermediate.csv

* These files contain the final captions we made/selected for generating the RS5M data in webdataset format.
* For the pub11 subset:
* **img_name**: Image name
* **caption**: Image caption

|img_name| caption|
|:---:|:---:|
|laion2b_0_29.jpg| Aerial view of Teahwhit Head and James Island.jpg|
|laion2b_0_94.jpg| Aerial view of boats. Top view of yachts from flying drone stock video|
|laion2b_0_109.jpg| This aerial view of the exhibit shows the barn, yard, and water feature.|

* For the rs3 subset:
* **base_name**: Image name, path-related
* **subset_name**: fmow/ben/millionaid
* **save_name**: Image name, unique and path-unrelated
* **caption**: Image caption

|base_name| subset_name| save_name| caption|
|:---:|:---:|:---:|:---:|
|fmow/train/place_of_worship/place_of_worship_2222/place_of_worship_2222_0_rgb.jpg| fmow| fmow_place_of_worship_2222_0_rgb.jpg| the Fall aura of Tanjung, Indonesia gets a voice with this image of place of worship at the center and top-left blocks. its clarity, a result of the ground sample distance of 2.25 meters, ensures its place in the utm zone 50L, timestamped 2 o'clock, September 22, 2015. a satellite image of a large village surrounded by trees|
|P0294281.jpg| millionaid| millionaid_P0294281.jpg| there is a very large water area next to the road|
|66994.jpg| ben| ben_66994.jpg| originating from utm zone 32N in the Summer season, this satellite image showcasing 'complex cultivation patterns, coniferous forest, inland marshes' is timestamped 10 o'clock, August 18, 2017. a satellite photo of the area near the train tracks|
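A minimal sketch for inspecting the intermediate caption files (assuming `pandas` is installed and the CSVs above have been downloaded locally; column names follow the descriptions in this card):

```python
import pandas as pd

# Final captions selected for the pub11 subset (columns: img_name, caption)
pub11 = pd.read_csv("pub11_train_intermediate.csv")
print(pub11.head())

# Final captions for the rs3 subset (columns: base_name, subset_name, save_name, caption)
rs3 = pd.read_csv("rs3_train_intermediate.csv")
print(rs3[rs3["subset_name"] == "fmow"]["caption"].head())
```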
tianleliphoebe/DreamEditBench
---
license: cc-by-4.0
task_categories:
- image-to-image
- text-to-image
language:
- en
size_categories:
- n<1K
---

## DreamEditBench for the Subject Replacement and Subject Addition Tasks

## Dataset Description

- **Homepage:** https://dreameditbenchteam.github.io
- **Repository:** https://github.com/DreamEditBenchTeam/DreamEdit

<!-- **Paper:** https://arxiv.org/abs/2306.12624 -->

The goal of subject replacement is to replace a subject in a source image with a customized subject. In contrast, the aim of the subject addition task is to add a customized subject at a desired position in the source image.

To standardize the evaluation of the two proposed tasks, we curate a new benchmark, DreamEditBench, consisting of 22 subjects aligned with DreamBooth, with 20 images for each subject.

For the subject replacement task, we collect 10 images for each type, which contain source subjects of the same type in diverse environments. The images are retrieved from the internet with the search query “a photo of [Class name]”, and the source subject should be the main subject in the image, dominating a major part of the photo.

For the subject addition task, we collect 10 reasonable backgrounds for each type of subject. We also manually designate the specific location where the target subject should be placed with a bounding box in the background. To collect the specific backgrounds for each subject, we first brainstorm and list the possible common environments of the subjects, then search the listed keywords on the internet to retrieve and pick the backgrounds.

## Data Structure

Each task folder contains 22 subject folders, and each subject folder contains 10 source images. For the Subject Addition task, there is an additional bbox.json file recording the manually labeled bounding box for each background. The replacement_subset.csv and addition_subset.csv files record the easy/hard subset division for each task.

## Citation Information

If you find this dataset useful, please consider citing our paper:

```
@misc{li2023dreamedit,
      title={DreamEdit: Subject-driven Image Editing},
      author={Tianle Li and Max Ku and Cong Wei and Wenhu Chen},
      year={2023},
      eprint={2306.12624},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
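For reference, a minimal sketch for reading the bounding-box annotations (the subject folder name and the exact JSON schema of `bbox.json` are illustrative assumptions; only the file layout is described on this card):

```python
import json
from pathlib import Path

# Hypothetical subject folder inside the Subject Addition task directory
bbox_path = Path("subject_addition/dog/bbox.json")
with open(bbox_path) as f:
    bboxes = json.load(f)  # manually labeled bounding box per background image

for background, bbox in bboxes.items():
    print(background, bbox)
```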
OpenShape/openshape-training-data
--- license: openrail ---
pankajmathur/WizardLM_Orca
---
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---

An explain-tuned WizardLM dataset of ~55K examples, created using approaches from the Orca research paper. We leverage all 15 system instructions provided in the Orca research paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets. This helps student models like orca_mini_13b learn the thought process from the teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version). Please note how the system prompt is added before each instruction.
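A minimal loading sketch (the column names are not listed on this card, so the snippet simply prints the first record to show how the system prompt precedes each instruction):

```python
from datasets import load_dataset

dataset = load_dataset("pankajmathur/WizardLM_Orca", split="train")

# Inspect the first record to see the system prompt / instruction layout
print(dataset[0])
```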
Falah/sentiments-dataset-381-classes
--- dataset_info: features: - name: text dtype: string - name: sentiment dtype: string splits: - name: train num_bytes: 104602 num_examples: 1061 download_size: 48213 dataset_size: 104602 license: apache-2.0 task_categories: - text-classification language: - en pretty_name: sentiments-dataset-381-classes size_categories: - 1K<n<10K --- # Sentiments Dataset (381 Classes) ## Dataset Description This dataset contains a collection of labeled sentences categorized into 381 different sentiment classes. The dataset provides a wide range of sentiment labels to facilitate fine-grained sentiment analysis tasks. Each sentence is associated with a sentiment class name. ## Dataset Information - Number of classes: 381 - Features: `text` (string), `sentiment` (string) - Number of examples: 1,061 ## Class Names The dataset includes the following sentiment class names as examples: - Positive - Negative - Neutral - Joyful - Disappointed - Worried - Surprised - Grateful - Indifferent - Sad - Angry - Relieved - Sentiment - Excited - Hopeful - Anxious - Satisfied - Happy - Nostalgic - Inspired - Impressed - Amazed - Touched - Proud - Intrigued - Relaxed - Content - Comforted - Motivated - Frustrated - Delighted - Moved - Curious - Fascinated - Engrossed - Addicted - Eager - Provoked - Energized - Controversial - Significant - Revolutionary - Optimistic - Impactful - Compelling - Enchanted - Peaceful - Disillusioned - Thrilled - Consumed - Engaged - Trendy - Informative - Appreciative - Enthralled - Enthusiastic - Influenced - Validated - Reflective - Emotional - Concerned - Promising - Empowered - Memorable - Transformative - Inclusive - Groundbreaking - Evocative - Respectful - Outraged - Unity - Enlightening - Artistic - Cultural - Diverse - Vibrant - Prideful - Captivated - Revealing - Inspiring - Admiring - Empowering - Connecting - Challenging - Symbolic - Immersed - Evolving - Insightful - Reformative - Celebratory - Validating - Diversity - Eclectic - Comprehensive - Uniting - Influential - Honoring - Transporting - Resonating - Chronicle - Preserving - Replicated - Impressive - Fascinating - Tributary - Momentum - Awe-inspiring - Unearthing - Exploratory - Immersive - Transportive - Personal - Resilient - Mesmerized - Legendary - Awareness - Evidence-based - Contemporary - Connected - Valuable - Referencing - Camaraderie - Inspirational - Evoke - Emotive - Chronicling - Educational - Serene - Colorful - Melodious - Dramatic - Enlivened - Wonderstruck - Enchanting - Grandiose - Abundant - Harmonious - Captivating - Mesmerizing - Dedicated - Powerful - Mystical - Picturesque - Opulent - Revitalizing - Fragrant - Spellbinding - Lush - Breathtaking - Passionate - Melodic - Wonderland - Invigorating - Dappled - Flourishing - Ethereal - Elaborate - Kaleidoscope - Harmonizing - Tragic - Transforming - Marveling - Enveloped - Reverberating - Sanctuary - Graceful - Spectacular - Golden - Melancholic - Transcendent - Delicate - Awakening - Intertwined - Indelible - Verdant - Heartrending - Fiery - Inviting - Majestic - Lullaby-like - Kissed - Behold - Soulful - Splendid - Whispering - Masterpiece - Moving - Crystalline - Tapestry - Haunting - Renewal - Wisdom-filled - Stunning - Sun-kissed - Symphony - Awestruck - Dancing - Heart-wrenching - Magical - Gentle - Emotion-evoking - Embracing - Floating - Tranquil - Celestial - Breathless - Symphonic - Stillness - Delightful - Flawless - Commanding - Embraced - Heartfelt - Precise - Adorned - Beautiful - Scattering - Timeless - Radiant - Regal - Sparkling - 
Resilience
- Recognized
- Echoing
- Rebirth
- Cradled
- Tirelessly
- Glowing
- Icy
- Brilliant
- Anticipation
- Awakened
- Blossoming
- Enthralling
- Excitement
- Vivid
- Spellbound
- Mellifluous
- Intricate
- Silent
- Contrasting
- Poignant
- Perfumed
- Pure
- Magnificent
- Exquisite
- Anguished
- Harmonic
- Kaleidoscopic
- Gripping
- Soothing
- Intense
- Poetic
- Fragile
- Unwavering
- Intriguing
- Fairy-tale
- Ephemeral
- Joyous
- Resplendent
- Elegant
- Coaxing
- Illuminating
- Thunderous
- Cool
- Exciting
- Teeming
- Blissful
- Enduring
- Raw
- Adventurous
- Mysterious
- Enrapturing
- Marvelous
- Swirling
- Resonant
- Careful
- Whimsical
- Intertwining
- and more

## Usage example

```python
from datasets import load_dataset
import pandas as pd

# Load the dataset
dataset = load_dataset("Falah/sentiments-dataset-381-classes")

# Convert the train split to a pandas DataFrame
df = dataset["train"].to_pandas()

# Get the unique class names from the "sentiment" column
class_names = df["sentiment"].unique()

# Print the unique class names
for name in class_names:
    print(f"Class Name: {name}")
```

## Application

The Sentiments Dataset (381 Classes) can be applied in various NLP applications, such as sentiment analysis and text classification.

## Citation

If you use this dataset in your research or publication, please cite it as follows:

```
@dataset{sentiments_dataset_381_classes,
  author = {Falah.G.Salieh},
  title = {Sentiments Dataset (381 Classes)},
  year = {2023},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/Falah/sentiments-dataset-381-classes},
}
```

For more information or inquiries about the dataset, please contact the dataset author(s) mentioned in the citation.
declare-lab/flan-mini
---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: source
    dtype: string
  - name: conversations
    dtype: list
  splits:
  - name: train
    num_examples: 1340153
license: cc
size_categories:
- 1M<n<10M
---

# Dataset Card for Flan-mini

## Dataset Description

- **Repository:** https://github.com/declare-lab/flacuna
- **Paper:** https://arxiv.org/abs/2307.02053
- **Leaderboard:** https://declare-lab.net/instruct-eval/
- **Point of Contact:** sporia@sutd.edu.sg

### Dataset Summary

Given the enormous size of the Flan Collection, we opted to work with a carefully selected subset that maintains a high level of task diversity while reducing the overall dataset size. In the table below, we present the specific tasks included in our subset of Flan, along with their respective dataset sizes. As the public release of the Flan Collection does not include programming tasks, we augment the collection with existing code datasets. Specifically, we include CodeContests, APPS, and CodeSearchNet. Following the data processing pipeline of the Flan Collection, we sample a fixed number of examples from each dataset, where each example is randomly augmented with different prompt templates. Specifically, the examples are processed with a pool of handcrafted prompt templates and may be used as zero-shot examples or grouped together with few-shot demonstrations. We incorporated various ChatGPT datasets, including Alpaca, Code Alpaca, and ShareGPT, into our Flan-mini collection.

| Dataset Name | Source | Dataset Size |
|-----------------------------|------------------------|--------------|
| Flan2021 | Flan | 388K |
| Public Pool of Prompts | Flan | 320K |
| Natural instructions v2 | Flan | 200K |
| CoT | Flan | 100K |
| Code Search | HF/code_search_net | 100K |
| Code Contest | HF/deepmind/code_contests | 50K |
| Apps | HF/codeparrot/apps | 50K |
| GPT4-Alpaca | GPT-4 | 52K |
| Code-Alpaca | ChatGPT | 20K |
| ShareGPT | ChatGPT | 60K |
| Total | - | 1.34M |

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Citation Information

```bibtex
@misc{ghosal2023flacuna,
      title={Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning},
      author={Deepanway Ghosal and Yew Ken Chia and Navonil Majumder and Soujanya Poria},
      year={2023},
      eprint={2307.02053},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
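A minimal loading sketch (field names follow the features listed above):

```python
from datasets import load_dataset

dataset = load_dataset("declare-lab/flan-mini", split="train")

example = dataset[0]
print(example["source"])         # originating task/dataset of this example
print(example["conversations"])  # list of conversation turns
```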
Gustrd/dolly-15k-libretranslate-pt
---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- pt
size_categories:
- 10K<n<100K
---

# Summary

databricks-dolly-15k ( https://huggingface.co/datasets/databricks/databricks-dolly-15k/ ) is an open source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.

This is a Portuguese translation done with LibreTranslate ( https://github.com/LibreTranslate/LibreTranslate ).

This dataset can be used for any purpose, whether academic or commercial, under the terms of the Creative Commons Attribution-ShareAlike 3.0 Unported License.

Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation

Languages: Portuguese

Version: 1.0

---

# Original Readme

## Dataset Overview

databricks-dolly-15k is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language models to exhibit the magical interactivity of ChatGPT. Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the types of questions and instructions appropriate to each category.

Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors. They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.

For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the context field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. [42]) which we recommend users remove for downstream applications (see the sketch at the end of this card).

## Intended Uses

While immediately valuable for instruction fine-tuning large language models, as a corpus of human-generated instruction prompts, this dataset also presents a valuable opportunity for synthetic data generation in the methods outlined in the Self-Instruct paper. For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a corpus of millions of examples of instructions in each of the respective InstructGPT categories.

Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to restate each prompt or short responses, with the resulting text associated to the respective ground-truth sample. Such an approach might provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from these synthetic datasets.

## Dataset

### Purpose of Collection

As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source, human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT.
Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications.

### Sources

- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization), contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the target passages.

### Annotator Guidelines

To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.

The annotation guidelines for each of the categories are as follows:

- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires a factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.). You can choose any categories for classification you like, the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.

### Personal or Sensitive Data

This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, there are no private person’s personal identifiers or sensitive information.
### Known Limitations

- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors, and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees
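As noted above, reference texts copied from Wikipedia may contain bracketed citation numbers such as `[42]`. A minimal sketch (standard library only) for stripping them from the `context` field:

```python
import re

def strip_citations(text: str) -> str:
    # Remove bracketed Wikipedia citation numbers, e.g. "[42]"
    return re.sub(r"\[\d+\]", "", text)

print(strip_citations("A cidade foi fundada em 1554.[12][13]"))
# -> "A cidade foi fundada em 1554."
```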
ivrit-ai/audio-vad
--- language: - he license: other size_categories: - 1M<n<10M task_categories: - audio-classification - voice-activity-detection extra_gated_prompt: 'You agree to the following license terms: This material and data is licensed under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0), The full text of the CC-BY 4.0 license is available at https://creativecommons.org/licenses/by/4.0/. Notwithstanding the foregoing, this material and data may only be used, modified and distributed for the express purpose of training AI models, and subject to the foregoing restriction. In addition, this material and data may not be used in order to create audiovisual material that simulates the voice or likeness of the specific individuals appearing or speaking in such materials and data (a “deep-fake”). To the extent this paragraph is inconsistent with the CC-BY-4.0 license, the terms of this paragraph shall govern. By downloading or using any of this material or data, you agree that the Project makes no representations or warranties in respect of the data, and shall have no liability in respect thereof. These disclaimers and limitations are in addition to any disclaimers and limitations set forth in the CC-BY-4.0 license itself. You understand that the project is only able to make available the materials and data pursuant to these disclaimers and limitations, and without such disclaimers and limitations the project would not be able to make available the materials and data for your use.' extra_gated_fields: I have read the license, and agree to its terms: checkbox dataset_info: features: - name: audio dtype: audio - name: episode dtype: string - name: source dtype: string - name: uuid dtype: string - name: attrs struct: - name: duration dtype: float64 - name: end dtype: float64 - name: license dtype: string - name: segment dtype: int64 - name: start dtype: float64 splits: - name: train num_bytes: 704608554540.66 num_examples: 5657270 download_size: 473125104970 dataset_size: 704608554540.66 configs: - config_name: default data_files: - split: train path: data/train-* --- ivrit.ai is a database of Hebrew audio and text content. **audio-base** contains the raw, unprocessed sources. **audio-vad** contains audio snippets generated by applying Silero VAD (https://github.com/snakers4/silero-vad) to the base dataset. **audio-transcripts** contains transcriptions for each snippet in the audio-vad dataset. 
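A minimal loading sketch (field names follow the features listed above; access requires accepting the license terms, and streaming avoids downloading the full ~473 GB archive up front):

```python
from datasets import load_dataset

dataset = load_dataset("ivrit-ai/audio-vad", split="train", streaming=True)

sample = next(iter(dataset))
print(sample["episode"], sample["source"])
print(sample["attrs"])   # duration, start, end, segment, license
audio = sample["audio"]  # decoded waveform and sampling rate
```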
The audio-base dataset contains data from the following sources:

* Geekonomy (Podcast, https://geekonomy.net)
* HaCongress (Podcast, https://hacongress.podbean.com/)
* Idan Eretz's YouTube channel (https://www.youtube.com/@IdanEretz)
* Moneytime (Podcast, https://money-time.co.il)
* Mor'e Nevohim (Podcast, https://open.spotify.com/show/1TZeexEk7n60LT1SlS2FE2?si=937266e631064a3c)
* Yozevitch's World (Podcast, https://www.yozevitch.com/yozevitch-podcast)
* NETfrix (Podcast, https://netfrix.podbean.com)
* On Meaning (Podcast, https://mashmaut.buzzsprout.com)
* Shnekel (Podcast, https://www.shnekel.live)
* Bite-sized History (Podcast, https://soundcloud.com/historia-il)
* Tziun 3 (Podcast, https://tziun3.co.il)
* Academia Israel (https://www.youtube.com/@academiaisrael6115)
* Shiluv Maagal (https://www.youtube.com/@ShiluvMaagal)

Paper: https://arxiv.org/abs/2307.08720

If you use our datasets, please use the following citation:

```
@misc{marmor2023ivritai,
      title={ivrit.ai: A Comprehensive Dataset of Hebrew Speech for AI Research and Development},
      author={Yanir Marmor and Kinneret Misgav and Yair Lifshitz},
      year={2023},
      eprint={2307.08720},
      archivePrefix={arXiv},
      primaryClass={eess.AS}
}
```
jed351/Chinese-Common-Crawl-Filtered
---
language:
- zh
---

# Traditional Chinese C4

### Dataset Summary

Data obtained from the 2023-14 Common Crawl snapshot. Downloaded and processed using [code](https://github.com/jedcheng/c4-dataset-script) based on another [project](https://github.com/shjwudp/c4-dataset-script) attempting to recreate the C4 dataset.

The resultant dataset contains both simplified and traditional Chinese. It was then filtered using a [modified list](https://github.com/jedcheng/c4-dataset-script/blob/master/SC_filter/SC_list.txt) of simplified Chinese characters to obtain [another traditional Chinese dataset](https://huggingface.co/datasets/jed351/Traditional-Chinese-Common-Crawl-Filtered).

I would like to acknowledge the computational resources and support provided by the Imperial College Research Computing Service (http://doi.org/10.14469/hpc/2232).
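A minimal sketch of the character-based filtering idea described above (the list format is an assumption; the actual list lives in the linked repository as `SC_filter/SC_list.txt`):

```python
# Keep a document only if it contains no simplified-only Chinese characters
with open("SC_list.txt", encoding="utf-8") as f:
    simplified_chars = set(f.read().split())

def is_traditional(text: str) -> bool:
    return not any(ch in simplified_chars for ch in text)

docs = ["這是繁體中文", "这是简体中文"]
print([d for d in docs if is_traditional(d)])
```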
ArtifactAI/arxiv_python_research_code
---
dataset_info:
  features:
  - name: repo
    dtype: string
  - name: file
    dtype: string
  - name: code
    dtype: string
  - name: file_length
    dtype: int64
  - name: avg_line_length
    dtype: float64
  - name: max_line_length
    dtype: int64
  - name: extension_type
    dtype: string
  splits:
  - name: train
    num_bytes: 12984199778
    num_examples: 1415924
  download_size: 4073853616
  dataset_size: 12984199778
license: bigcode-openrail-m
task_categories:
- text-generation
language:
- en
pretty_name: arxiv_python_research_code
size_categories:
- 1B<n<10B
---

# Dataset Card for "ArtifactAI/arxiv_python_research_code"

## Dataset Description

https://huggingface.co/datasets/ArtifactAI/arxiv_python_research_code

### Dataset Summary

ArtifactAI/arxiv_python_research_code contains over 4.13GB of source code files referenced in ArXiv papers. It serves as a curated dataset for Code LLMs.

### How to use it

```python
from datasets import load_dataset

# full dataset (4.13GB of data)
ds = load_dataset("ArtifactAI/arxiv_python_research_code", split="train")

# dataset streaming (will only download the data as needed)
ds = load_dataset("ArtifactAI/arxiv_python_research_code", streaming=True, split="train")
for sample in iter(ds):
    print(sample["code"])
```

## Dataset Structure

### Data Instances

Each data instance corresponds to one file. The content of the file is in the `code` feature, and other features (`repo`, `file`, etc.) provide some metadata.

### Data Fields

- `repo` (string): code repository name.
- `file` (string): file path in the repository.
- `code` (string): code within the file.
- `file_length` (integer): number of characters in the file.
- `avg_line_length` (float): the average line length of the file.
- `max_line_length` (integer): the maximum line length of the file.
- `extension_type` (string): file extension.

### Data Splits

The dataset has no splits and all data is loaded as the train split by default.

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

34,099 active GitHub repository names were extracted from [ArXiv](https://arxiv.org/) papers, from the site's inception through July 21st, 2023, totaling 773G of compressed GitHub repositories. These repositories were then filtered, and the code from each '.py' file was extracted into 1.4 million files.

#### Who are the source language producers?

The source (code) language producers are the GitHub users who created each unique repository.

### Personal and Sensitive Information

The released dataset may contain sensitive information such as emails, IP addresses, and API/ssh keys that have previously been published to public repositories on GitHub.

## Additional Information

### Dataset Curators

Matthew Kenney, Artifact AI, matt@artifactai.com

### Citation Information

```
@misc{arxiv_python_research_code,
    title={arxiv_python_research_code},
    author={Matthew Kenney},
    year={2023}
}
```
luoruipu1/Valley-Instruct-65k
---
license: apache-2.0
---

We released the data for the second stage of Valley training, 65K examples in total. The data comes from the public video website https://www.jukinmedia.com/licensing and the open-source multimodal datasets VATEX and VIOLIN.

It consists of four aspects: detailed description, complex reasoning, causal inference, and conversation. The detailed description and complex reasoning come from Jukin Media, the conversation comes from VATEX, and the causal inference comes from VIOLIN.

Because causal inference data is too difficult, the 65K version does not include it. We release all the data, including the causal inference data, to facilitate the community's VLLM research.

Since the video URLs of Jukin Media are dynamic, we provide a script `get_jukinmedia_videourl.py` to get the videos from Jukin Media.

The VATEX part in `valley_instruct_65k` needs to be downloaded from YouTube, and the vid is represented as \[youtube_id\]_\[start_second\]_\[end_second\]; you also need to crop the video according to the start and end seconds (see the sketch below).
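A minimal cropping sketch (assuming `ffmpeg` is installed; the video id and file names are illustrative):

```python
import subprocess

vid = "abc123XYZ_15_45"  # [youtube_id]_[start_second]_[end_second]
youtube_id, start, end = vid.rsplit("_", 2)

# Crop the downloaded video to the [start, end] window
subprocess.run([
    "ffmpeg", "-i", f"{youtube_id}.mp4",
    "-ss", start, "-to", end,
    "-c", "copy", f"{vid}.mp4",
])
```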
gauravshrm211/VC-startup-evaluation-for-investment
---
license: other
---

This dataset includes completion pairs for evaluating startups before investing in them:

- Completion examples for Chain-of-Thought reasoning to perform financial calculations.
- Completion examples for evaluating risk profile, growth prospects, costs, market size, assets, liabilities, debt, equity, and other financial ratios.
- Comparisons of different startups.
iamtarun/code_contest_processed
---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: description
    dtype: string
  - name: code
    dtype: string
  - name: language
    dtype:
      class_label:
        names:
          '0': UNKNOWN
          '1': Python2
          '2': C++
          '3': Python3
          '4': JAVA
  - name: test_samples
    sequence:
    - name: input
      dtype: string
    - name: output
      dtype: string
  - name: source
    dtype:
      class_label:
        names:
          '0': UNKNOWN_SOURCE
          '1': CODECHEF
          '2': CODEFORCES
          '3': HACKEREARTH
          '4': CODEJAM
          '5': ATCODER
          '6': AIZU
  splits:
  - name: train
    num_bytes: 3321514817
    num_examples: 38438
  - name: valid
    num_bytes: 122746000
    num_examples: 396
  - name: test
    num_bytes: 77106001
    num_examples: 514
  download_size: 1047406436
  dataset_size: 3521366818
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: valid
    path: data/valid-*
  - split: test
    path: data/test-*
task_categories:
- text-generation
- text2text-generation
- question-answering
tags:
- code
size_categories:
- 10K<n<100K
---

# Dataset Card for Code Contest Processed

## Dataset Summary

This dataset was created by processing the [code_contest dataset from DeepMind](https://huggingface.co/datasets/deepmind/code_contests). It is a competitive programming dataset for machine learning. Read more about the dataset at the [original source](https://huggingface.co/datasets/deepmind/code_contests).

## Columns Description

- `id`: unique string associated with a problem
- `description`: problem description
- `code`: one correct solution for the problem
- `language`: programming language used for the code
- `test_samples`: inputs and their corresponding outputs for the problem
- `source`: source of the problem
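A minimal loading sketch (column names as described above):

```python
from datasets import load_dataset

dataset = load_dataset("iamtarun/code_contest_processed", split="train")

example = dataset[0]
print(example["description"][:200])
print(dataset.features["language"].int2str(example["language"]))  # e.g. Python3
print(example["test_samples"])  # {"input": [...], "output": [...]}
```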
photonmz/roco-instruct-65k
---
dataset_info:
  features:
  - name: conversations
    list:
    - name: from
      dtype: string
    - name: value
      dtype: string
  - name: image
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: train
    num_bytes: 18403899
    num_examples: 65422
  - name: validation
    num_bytes: 2289458
    num_examples: 8174
  - name: test
    num_bytes: 2313629
    num_examples: 8176
  download_size: 8200395
  dataset_size: 23006986
---

# Dataset Card for "roco-instruct-65k"

## Dataset Description

- **Repository:** [ROCO GitHub Repository](https://github.com/razorx89/roco-dataset)
- **Paper:** [Radiology Objects in COntext (ROCO) dataset](https://labels.tue-image.nl/wp-content/uploads/2018/09/AM-04.pdf)
- **Point of Contact:** ROCO's original authors

### Dataset Summary

The "roco-instruct-65k" dataset is derived from the Radiology Objects in COntext (ROCO) dataset, a large-scale medical and multimodal imaging collection. The images are taken from publications available on the PubMed Central Open Access FTP mirror. The dataset was reformatted for the [LLaVA model](https://llava-vl.github.io/) in the [BabyDoctor project](https://github.com/photomz/BabyDoctor), focusing on deep analysis and diagnosis of radiology images. It includes captions, keywords, UMLS Semantic Types (SemTypes), and UMLS Concept Unique Identifiers (CUIs), and supports the creation of generative models for image captioning, classification models for image categorization, and tagging or content-based image retrieval systems. The language used is primarily English, and it covers the domain of medical imaging, specifically radiology.

### Supported Tasks and Leaderboards

- `image-classification`: The dataset can be used to train models for image classification, which involves categorizing images as either radiology or non-radiology. Success on this task is typically measured by achieving a high accuracy. This task has an active leaderboard, which can be found at [ImageCLEFmed Caption 2019 and CrowdAI](https://www.imageclef.org/2019/medical/caption).

### Languages

The dataset consists entirely of medical texts in English.

## Dataset Structure

### Data Instances

The dataset is structured in a conversation format where a human provides an image with instructions for analysis, and a model responds with a diagnosis. A typical instance in the dataset looks like:

```json
{
  "conversations": [
    {
      "from": "human",
      "value": "The following image is a radiology scan. Deeply analyze and diagnose this image.\n<image>"
    },
    {
      "from": "gpt",
      "value": "Computed tomography scan in axial view showing obliteration of the left maxillary sinus"
    }
  ],
  "image": "ROCO_00002.jpg",
  "id": "00002"
}
```

### Data Fields

- `conversations`: A list containing the interaction between a human and a model regarding the image.
- `image`: A string containing the name of the image file.
- `id`: A string representing the unique identifier for the interaction.

### Data Splits

The dataset is divided into training, validation, and test sets. The exact split sizes are:

|                 |  train | validation |  test |
|-----------------|-------:|-----------:|------:|
| Data Instances  |  65422 |       8174 |  8176 |

## Dataset Creation

### Curation Rationale

The "roco-instruct-65k" dataset was created to foster the development of AI models capable of performing deep analysis and diagnosis on radiology images, an essential step in automating medical imaging interpretation.

### Citation Information

[@photomz](https://github.com/photomz) uploaded this dataset to HuggingFace. Please cite the original ROCO paper when using this dataset.

```
O. Pelka, S. Koitka, J. Rückert, F. Nensa, C.M. Friedrich, "Radiology Objects in COntext (ROCO): A Multimodal Image Dataset". MICCAI Workshop on Large-scale Annotation of Biomedical Data and Expert Label Synthesis (LABELS) 2018, September 16, 2018, Granada, Spain. Lecture Notes on Computer Science (LNCS), vol. 11043, pp. 180-189, Springer Cham, 2018. doi: 10.1007/978-3-030-01364-6_20
```
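A minimal loading sketch (field names follow the structure above):

```python
from datasets import load_dataset

dataset = load_dataset("photonmz/roco-instruct-65k", split="train")

example = dataset[0]
print(example["image"])  # e.g. "ROCO_00002.jpg"
for turn in example["conversations"]:
    print(f'{turn["from"]}: {turn["value"]}')
```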
Trelis/function_calling_extended
---
task_categories:
- question-answering
- conversational
- text-generation
language:
- en
tags:
- function call
- function calling
- function-calling
size_categories:
- n<1K
extra_gated_prompt: "Access to this dataset requires the purchase of a license [here](https://buy.stripe.com/fZeeVG5tP2Hxg7ecNj)"
extra_gated_fields:
  Name: text
  Affiliation: text
  Email: text
  I have purchased a license (access will be granted once your payment clears): checkbox
  I agree to the terms of the license described on the dataset card: checkbox
---

# Trelis Function Calling Dataset

UPDATE: As of Dec 5th 2023, there is a v3 of this dataset now available [here](https://huggingface.co/datasets/Trelis/function_calling_v3).

- Allows models to be fine-tuned for function-calling.
- The dataset is human-generated and does not make use of Llama 2 or OpenAI!
- Contains 59 training and 17 test rows.
- Based on eight functions: search_bing, search_arxiv, save_chat, read_json_file, list_files, get_current_weather, delete_file, clear_chat

Access this dataset by purchasing a license [HERE](https://buy.stripe.com/fZeeVG5tP2Hxg7ecNj).

Alternatively, you can find pre-trained function calling models for Llama 2 and Mistral [HERE](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-v2).

--Change-log--

11Oct2023: Minor update adding in short prompts like "duck" to which the LLM should respond with a description of a duck or ducks, not a function call.

22Aug2023: Major updates to the main branch:
- The 'systemPrompt' column is now replaced by 'functionList', which contains a raw list of function metadata without any guidance.
- The previous dataset, with 'systemPrompt' - containing specific instructions - has been moved to the 'explicit' branch.
- The 'implicit' branch is a copy of the 'explicit' branch, but with slightly less instruction provided to the LLM in the systemPrompt column.

The reasons for these updates are:
- For one-shot model prompting, it is helpful to provide as much description as possible to the LLM.
- For fine-tuning, it is desirable to minimise the length of any added context to describe functions, especially if not necessary.

Users can play around with the different levels of instruction provided. In summary:
- 'main' - provides the lowest level of instruction on how to use the functions
- 'implicit' - moderate instructions
- 'explicit' - detailed instructions

18Aug2023: Added new 'implicit' branch with a shorter system prompt. Performs similarly to the main branch, but uses fewer tokens for prompting.

15Aug2023: Added datasets to fine-tune models for awareness of available functions.

## Fine-Tuning Notes and Scripts

The objective of function calling is for the model to return a structured json object *and nothing else*. The performance of fine-tuning depends **strongly** on how the attention mask and loss mask are set. For further details see the [Youtube Video Here](https://youtu.be/OQdp-OeG1as).

### QLoRa Training Notebook for Llama 2 (FREE)

- Access a basic Google Colab script for fine-tuning [here](https://colab.research.google.com/drive/1uMSS1o_8YOPyG1X_4k6ENEE3kJfBGGhH?usp=sharing).

### ADVANCED Fine-tuning Notebook for Structured Responses (incl. function calling) (PAID)

- Fine-tune models for function calling or other structured responses.
- Includes a prompt loss-mask for improved performance when structured responses are required.
- Includes a stop token after responses - allowing the model to provide a short response (e.g. a function call) and then stop.
- Request [access here](https://buy.stripe.com/5kAfZK6xT2Hxg7e8wW).

## Licensing

The Function Calling Extended dataset is commercially licensed. Users can purchase a license per seat/user from [here](https://buy.stripe.com/fZeeVG5tP2Hxg7ecNj).

Further terms:
- Licenses are not transferable to other users/entities.

### Attribution of data sources

This project includes data from the TruthfulQA dataset, which is available at: https://huggingface.co/datasets/truthful_qa. The truthful_qa dataset is licensed under the Apache License 2.0, Copyright (C) 2023, Stephanie Lin, Jacob Hilton, and Owain Evans.

## Dataset Structure

The datasets (train and test) contain three prompt types:

1. The first portion provides function metadata in the systemPrompt but then has userPrompt and assistantResponse values that do not require function calling. This is to get the language model accustomed to having function metadata available, but not using it. Questions and answers for these prompts are generated by running addBlank.py and the questions and answers come from [truthful_qa](https://huggingface.co/datasets/truthful_qa) - see below for license details.
2. The second portion of the train and test datasets provides examples where a function call is necessary.
3. The third portion (new as of August 13th 2023) acclimatises the model to recognising what functions it has available from the system prompt, and sharing that with the user when appropriate. Further extended on October 11th to add one- and two-word prompts not requiring function calls as responses.

## Branches

Specify the branch using:

```
data = load_dataset(
    "Trelis/function_calling_extended",
    revision="implicit" # optionally specify a branch
)
```

The 'main' branch uses a short system/function prompt, with no instruction on usage (see the other branches for prompts with stronger instruction):

```
{
    "function": "search_bing",
    "description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.",
    "arguments": [
        {
            "name": "query",
            "type": "string",
            "description": "The search query string"
        }
    ]
}

{
    "function": "list_files",
    "description": "This function provides a list of files in the user's directory. It can be useful when the user wants to check what files they have. This function requires no parameters and returns no values.",
    "arguments": []
}
```

The 'explicit' branch provides detailed instructions to the language model on how to call functions:

```
You are a helpful research assistant. The following functions are available for you to fetch further data to answer user questions, if relevant:

{
    "function": "search_bing",
    "description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.",
    "arguments": [
        {
            "name": "query",
            "type": "string",
            "description": "The search query string"
        }
    ]
}

{
    "function": "list_files",
    "description": "This function provides a list of files in the user's directory. It can be useful when the user wants to check what files they have. This function requires no parameters and returns no values.",
    "arguments": []
}

To call a function, respond - immediately and only - with a JSON object of the following format:
{
    "function": "function_name",
    "arguments": {
        "argument1": value1,
        "argument2": value2
    }
}
```

The 'implicit' branch uses a shorter, less explicit prompt that performs similarly and is therefore recommended, as it reduces the length of the system prompt:

```
You are a helpful research assistant.
The following functions are available for you to fetch further data to answer user questions, if relevant:

{
    "function": "search_bing",
    "description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.",
    "arguments": [
        {
            "name": "query",
            "type": "string",
            "description": "The search query string"
        }
    ]
}

{
    "function": "list_files",
    "description": "This function provides a list of files in the user's directory. It can be useful when the user wants to check what files they have. This function requires no parameters and returns no values.",
    "arguments": []
}
```

Said differently, the 'implicit' branch omits the following portion of the prompt:

```
To call a function, respond - immediately and only - with a JSON object of the following format:
{
    "function": "function_name",
    "arguments": {
        "argument1": value1,
        "argument2": value2
    }
}
```

## Training and Inference Syntax

Here is sample prompt syntax for Llama. This will depend on the language model you use and also how you wish to fine-tune the model:

```
# Define the roles and markers
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

system_prompt = data['test'][index]['systemPrompt']
user_prompt = data['test'][index]['userPrompt']
correct_answer = data['test'][index]['assistantResponse']

# Format your prompt template
prompt = f"{B_INST} {B_SYS}{system_prompt.strip()}{E_SYS}{user_prompt.strip()} {E_INST}\n\n"
```

The `\n\n` after E_INST is important as it prevents E_INST from sometimes being tokenized with the ']' attached to the next characters. Using `\n\n` also provides the best chance of the model correctly telling whether to call a function or provide a usual response.

Alternatively, you may prefer to stay away from the system prompt and create a separate wrapper for function descriptions (as an example for the data on 'main'):

```
# Define the roles and markers
B_INST, E_INST = "[INST]", "[/INST]"
B_FUNC, E_FUNC = "<FUNCTIONS>", "</FUNCTIONS>\n\n"

functionList = data['test'][index]['functionList']
user_prompt = data['test'][index]['userPrompt']
correct_answer = data['test'][index]['assistantResponse']

# Format your prompt template
prompt = f"{B_FUNC}{functionList.strip()}{E_FUNC}{B_INST} {user_prompt.strip()} {E_INST}\n\n"
```

## File Structure (for prompt dataset generation)

- `functions/`: This directory contains function files, each of which is a JSON file with a specific structure that describes a function and its sample prompts and responses.
- `generate_dataset.py`: This Python script generates the base training and testing dataset CSV files.
- `addBlank.py`: Adds in truthfulqa questions and answers after system prompts with functions.
- `hello.py`: Adds in prompts to accustomise the model to the presence of functions in the system prompt.

### JSON File Structure

Each function file should be a JSON file with the following structure:

```json
{
    "functionMetaData": {
        "function": "function_name",
        "description": "function_description",
        "arguments": [
            {
                "name": "argument_name",
                "type": "argument_type",
                "description": "argument_description"
            },
            ...
        ]
    },
    "samplePromptResponsePairs": [
        {
            "prompt": "sample_prompt",
            "response": {
                "arguments": {
                    "argument_name": "argument_value",
                    ...
                }
            }
        },
        ...
    ]
}
```

The `functionMetaData` object describes the function. The `samplePromptResponsePairs` array contains sample prompts and responses for the function.

## Dataset Generation

To generate the dataset, run the `generate_dataset.py` script.
This script will iterate over each function file and generate a CSV row for each sample prompt-response pair.

## CSV File Structure

The generated CSV file has the following columns:

'main' branch:
- `functionList`: Descriptions of two functions (the current function and a randomly selected other function).
- `userPrompt`: The user's prompt.
- `assistantResponse`: The assistant's response.

'explicit' and 'implicit' branches:
- `systemPrompt`: The system's prompt, which includes the descriptions of two functions (the current function and a randomly selected other function) and instructions on how to call a function ('explicit' branch only).
- `userPrompt`: The user's prompt.
- `assistantResponse`: The assistant's response.

## Testing JSON Structure

A script named `validate.py` can be used to validate the structure of a function JSON file. It checks for the presence and correct types of all necessary keys in the JSON structure.

To use the script, call it from the command line with the name of the function file as an argument:

```
python validate.py my_function.json
```
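Since a fine-tuned model is expected to reply with a JSON object *and nothing else* when a function call is needed, here is a minimal sketch (standard library only) for handling generated output:

```python
import json

generated = '{"function": "search_bing", "arguments": {"query": "latest AI news"}}'

try:
    call = json.loads(generated)
    print("Calling:", call["function"], "with", call["arguments"])
except json.JSONDecodeError:
    # Not a function call - treat the output as a normal chat response
    print(generated)
```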
AtlasUnified/atlas-math-sets
---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- math
pretty_name: Atlas Math Sets
size_categories:
- 10M<n<100M
---

# ATLAS MATH SETS

![ComfyUI_00008_.png](https://cdn-uploads.huggingface.co/production/uploads/63239d8b3259cbaadbcb7adc/_zFyLhOVwB9kbcjcsOFbE.png)

This dataset consists of mathematical computations. Simple in nature, as it was derived from Python scripts, it contains addition, subtraction, multiplication, division, fractions, decimals, square roots, cube roots, exponents, and factors.

The format of the JSONL is as follows:

{"answer": "[num]", "input": "[equation]", "output": "[num]", "instruction": "[pre-generated_instruction] [equation]"}
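A minimal sketch (standard library only; the file name is illustrative) for reading records in this JSONL format:

```python
import json

with open("atlas_math_sets.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record["instruction"], "->", record["answer"])
        break  # just show the first record
```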
shanover/disease_symptoms_prec_full
--- license: mit ---
FarisHijazi/kajiwoto.ai-chat
---
task_categories:
- text-generation
tags:
- roleplay
- character
- ShareGPT
size_categories:
- 1K<n<10K
---

This is an NSFW roleplay dataset scraped from <https://kajiwoto.ai/> as of 2023-07-15.

Kajiwoto is a platform where you can create your own character datasets and chat with them. There are many public datasets on Kajiwoto, and the power of this dataset is in the metadata: there is extensive information and categorization for each dataset.

## Processing data

Do be aware that a lot of the data is NSFW (explicit content).

The raw datasets are in [kajiwoto_raw.json](./kajiwoto_raw.json). This data needs to be processed so that it can be used; the main operations are:

1. transform shape (convert to a known format such as ShareGPT)
2. deduplication
3. template rendering of strings such as `"you rolled a dice with %{1|2|3|4|5|6}"`. This operation is lossy, as it will choose only one of the options (see the sketch at the end of this card)
4. dropping datasets that are too short
5. dropping datasets with too few upvotes or comments
6. filtering in or out NSFW datasets

I have processed an initial example here: [kajiwoto_sharegpt-len_gt_6-upvotes_gt_0-sampled.json](./kajiwoto_sharegpt-len_gt_6-upvotes_gt_0-sampled.json). It includes any dataset with at least 1 upvote and at least 6 lines in the conversation; you can use it with most models, as it is in the ShareGPT format.

Here's an example, [this conversation](https://kajiwoto.ai/d/033Q):

```json
{
  "conversation": [
    {
      "from": "user",
      "value": "What's your favourite drink? "
    },
    {
      "from": "gpt",
      "value": "Coconut milk.. "
    },
    {
      "from": "user",
      "value": "Soo"
    },
    {
      "from": "gpt",
      "value": "What..? "
    },
    ...
  ],
  "metadata": {
    "id": "033Q",
    "name": "Qiqi dataset",
    "description": "About qiqi",
    "profilePhotoUri": "2021_10/mzi1zgm0mg_nhprrq_1633269387804.jpg",
    "dominantColors": [
      "#d97da1",
      "#eb9db8",
      "#661d3a",
      "#745b8b",
      "#d2b8d3",
      "#644484"
    ],
    "personalities": null,
    "personalitiesLastUpdatedAt": null,
    "nsfw": false,
    "deleted": false,
    "price": 0,
    "purchased": false,
    "status": "PUBLISHED",
    "tags": [],
    "updatedAt": 1649233318521,
    "user": {
      "id": "4zkE",
      "username": "blossomxx",
      "displayName": "Blossom",
      "profile": {
        "id": "56736",
        "photoUri": "2021_10/ytk0nzbhnw_nhprrq_1633268155638.jpg",
        "__typename": "UserProfile"
      },
      "__typename": "User"
    },
    "count": 9,
    "__typename": "AiTrainerGroup",
    "kudos": {
      "id": "_ai_g:033Q",
      "upvotes": 1,
      "upvoted": false,
      "comments": 0,
      "__typename": "Kudos"
    },
    "editorSettings": null,
    "editorState": null
  }
}
```

---

*Scraping and processing code will be uploaded soon*
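A minimal sketch of the lossy template rendering described in step 3 above (standard library only):

```python
import random
import re

def render_template(text: str) -> str:
    # Replace each %{a|b|c} group with one randomly chosen option (lossy)
    return re.sub(r"%\{(.*?)\}", lambda m: random.choice(m.group(1).split("|")), text)

print(render_template("you rolled a dice with %{1|2|3|4|5|6}"))
```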
lionelchg/dolly_creative_writing
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: instruction dtype: string - name: context dtype: string - name: response dtype: string - name: category dtype: string - name: text dtype: string splits: - name: train num_bytes: 1532046.0564174894 num_examples: 673 - name: test num_bytes: 81951.94358251058 num_examples: 36 download_size: 1011371 dataset_size: 1613998.0 --- # Dataset Card for "dolly_creative_writing" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
links-ads/wildfires-cems
---
license: cc-by-4.0
task_categories:
- image-segmentation
- image-classification
language:
- en
tags:
- semantic segmentation
- remote sensing
- sentinel
- wildfire
pretty_name: Wildfires - CEMS
size_categories:
- 1K<n<10K
---

# Wildfires - CEMS

The dataset includes annotations for burned area delineation and land cover segmentation, with a focus on European soil. The dataset is curated from various sources, including the Copernicus Emergency Management Service (EMS) and Sentinel-2 feeds.

---------

- **Repository:** https://github.com/links-ads/burned-area-seg
- **Paper:** https://paperswithcode.com/paper/robust-burned-area-delineation-through

---------

![Dataset sample](assets/sample.png)

## Dataset Preparation

The dataset has been compressed into split tarballs for ease of use with Git LFS (that is, tar > gzip > split). To revert the process and obtain the files and directories, follow these steps:

```console
$ git clone https://huggingface.co/datasets/links-ads/wildfires-cems
$ cd wildfires-cems
# revert the multipart compression: merge first, then untar
$ cat data/train/train.tar.* | tar -xzvf - -i
$ cat data/test/test.tar.* | tar -xzvf - -i
$ cat data/val/val.tar.* | tar -xzvf - -i
```

It is very likely that the extracted files will retain the internal directory structure, making the `train/val/test` directories useless. Adapt the output structure as you see fit; the original structure is shown below.

## Dataset Structure

The main dataset used in the paper comprises the following inputs:

| Suffix  | Data Type          | Description                                                                                 | Format                   |
|---------|--------------------|-------------------------------------------------------------------------------------------|--------------------------|
| S2L2A   | Sentinel-2 Image   | L2A data with 12 channels in reflectance/10k format                                         | GeoTIFF (.tif)           |
| DEL     | Delineation Map    | Binary map indicating burned areas as uint8 values (0 or 1)                                 | GeoTIFF (.tif)           |
| GRA     | Grading Map        | Grading information (if available) with uint8 values ranging from 0 to 4                    | GeoTIFF (.tif)           |
| ESA_LC  | Land Cover Map     | ESA WorldCover 2020 land cover classes as uint8 values                                      | GeoTIFF (.tif)           |
| CM      | Cloud Cover Map    | Cloud cover mask, uint8 values generated using CloudSen12 (0 or 1)                          | GeoTIFF (.tif)           |

Additionally, the dataset contains two land cover variants, the ESRI Annual Land Cover (9 categories) and the static variant (10 categories), which were not used in this study.

The dataset already provides a `train` / `val` / `test` split for convenience; the inner structure of each group is the same. The folders are structured as follows:

```
train/val/test/
├── EMSR230/
│   ├── AOI01/
│   │   ├── EMSR230_AOI01_01/
│   │   │   ├── EMSR230_AOI01_01_CM.png
│   │   │   ├── EMSR230_AOI01_01_CM.tif
│   │   │   ├── EMSR230_AOI01_01_DEL.png
│   │   │   ├── EMSR230_AOI01_01_DEL.tif
│   │   │   ├── EMSR230_AOI01_01_ESA_LC.png
│   │   │   ├── EMSR230_AOI01_01_ESA_LC.tif
│   │   │   ├── EMSR230_AOI01_01_GRA.png
│   │   │   ├── EMSR230_AOI01_01_GRA.tif
│   │   │   ├── EMSR230_AOI01_01_S2L2A.json -> metadata information
│   │   │   ├── EMSR230_AOI01_01_S2L2A.png -> RGB visualization
│   │   │   └── EMSR230_AOI01_01_S2L2A.tif
│   │   │   └── ...
│   │   ├── EMSR230_AOI01_02/
│   │   │   └── ...
│   │   ├── ...
│   ├── AOI02/
│   │   └── ...
│   ├── ...
├── EMSR231/
│   ├── ...
├── ...
``` ### Source Data - Activations are directly derived from Copernicus EMS (CEMS): [https://emergency.copernicus.eu/mapping/list-of-activations-rapid](https://emergency.copernicus.eu/mapping/list-of-activations-rapid) - Sentinel-2 and LC images are downloaded from Microsoft Planetary Computer, using the AoI provided by CEMS. - DEL and GRA maps represent the rasterized version of the delineation/grading products provided by the Copernicus service. ### Licensing Information CC-BY-4.0 [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ```bibtex @inproceedings{arnaudo2023burned, title={Robust Burned Area Delineation through Multitask Learning}, author={Arnaudo, Edoardo and Barco, Luca and Merlo, Matteo and Rossi, Claudio}, booktitle={Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases}, year={2023} } ``` ### Contributions - Luca Barco (luca.barco@linksfoundation.com) - Edoardo Arnaudo (edoardo.arnaudo@polito.it | linksfoundation.com)
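### Usage sketch

A minimal sketch for reading one tile with `rasterio` (assumed to be installed; the paths follow the directory structure above):

```python
import rasterio

tile = "train/EMSR230/AOI01/EMSR230_AOI01_01/EMSR230_AOI01_01"

# Sentinel-2 L2A image: 12 channels, reflectance/10k format.
with rasterio.open(f"{tile}_S2L2A.tif") as src:
    image = src.read()  # numpy array of shape (12, height, width)

# Burned area delineation map: uint8 values, 0 (unburned) or 1 (burned).
with rasterio.open(f"{tile}_DEL.tif") as src:
    mask = src.read(1)  # first (and only) band
```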
Tarklanse/Traditional_Chinese_roleplay_chat_Dataset
---
task_categories:
- text-generation
- text2text-generation
language:
- zh
license: cc-by-sa-4.0
---

# Traditional_Chinese_roleplay_chat_Dataset

This dataset is primarily in Traditional Chinese. It organizes conversations generated by ChatGPT, plus a very small portion written by me, into the Alpaca dataset format.

Each conversation log is split into several records by stacking the turns layer by layer (about 1,000 conversations in total). In several trial trainings, this let Llama 2 reproduce the lively conversational style of the original English model while keeping its ability to play a wide variety of roles.

I have currently trained a LoRA with this dataset.

Update 2023/09/07: Added some Chinese-English translation sentences, in the hope that the AI can describe its actions in better prose, and added some food-related conversations, hoping to reduce the chance of the AI producing strange food names.
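A minimal sketch of the layer-by-layer stacking described above (the field mapping is an assumption for illustration):

```python
# Expand one conversation log into several Alpaca-style records:
# each assistant turn becomes one record, with all earlier turns as context.
turns = [
    ("User", "Hello!"),
    ("Assistant", "Hi, nice to meet you."),
    ("User", "What are you doing today?"),
    ("Assistant", "I am tending my little garden."),
]

records = []
history = ""
for role, text in turns:
    if role == "Assistant":
        records.append({"instruction": history.strip(), "input": "", "output": text})
    history += f"{role}: {text}\n"

print(len(records))  # 2 records from this 4-turn conversation
```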
TFLai/Turkish-Alpaca
---
license: apache-2.0
---

Turkish version of Stanford Alpaca: [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
chargoddard/PIPPA-Judged
--- license: apache-2.0 task_categories: - conversational language: - en tags: - not-for-all-audiences - conversational - roleplay - custom-format dataset_info: - config_name: adequately_rated features: - name: id dtype: string - name: rating struct: - name: analysis dtype: string - name: judge dtype: string - name: score dtype: float64 - name: submission_timestamp dtype: timestamp[ns] - name: categories sequence: string - name: bot_id dtype: string - name: bot_name dtype: string - name: bot_greeting dtype: string - name: bot_definitions dtype: string - name: bot_description dtype: string - name: conversation struct: - name: is_human sequence: bool - name: message sequence: string splits: - name: train num_bytes: 203748289.37737644 num_examples: 14610 download_size: 111617678 dataset_size: 203748289.37737644 - config_name: best_rated features: - name: id dtype: string - name: rating struct: - name: analysis dtype: string - name: judge dtype: string - name: score dtype: float64 - name: submission_timestamp dtype: timestamp[ns] - name: categories sequence: string - name: bot_id dtype: string - name: bot_name dtype: string - name: bot_greeting dtype: string - name: bot_definitions dtype: string - name: bot_description dtype: string - name: conversation struct: - name: is_human sequence: bool - name: message sequence: string splits: - name: train num_bytes: 10780111.409220532 num_examples: 773 download_size: 9421151 dataset_size: 10780111.409220532 - config_name: default features: - name: id dtype: string - name: rating struct: - name: analysis dtype: string - name: judge dtype: string - name: score dtype: float64 - name: submission_timestamp dtype: timestamp[ns] - name: categories sequence: string - name: bot_id dtype: string - name: bot_name dtype: string - name: bot_greeting dtype: string - name: bot_definitions dtype: string - name: bot_description dtype: string - name: conversation struct: - name: is_human sequence: bool - name: message sequence: string splits: - name: train num_bytes: 234735880 num_examples: 16832 download_size: 116686573 dataset_size: 234735880 - config_name: ratings_only features: - name: success dtype: bool - name: score dtype: float64 - name: response dtype: string - name: id dtype: string splits: - name: train num_bytes: 7190167 num_examples: 16832 download_size: 2848419 dataset_size: 7190167 configs: - config_name: adequately_rated data_files: - split: train path: adequately_rated/train-* - config_name: best_rated data_files: - split: train path: best_rated/train-* - config_name: default data_files: - split: train path: data/train-* - config_name: ratings_only data_files: - split: train path: ratings_only/train-* --- # Dataset Card for "PIPPA-Judged" Pygmalion's [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA) dataset augmented with quality scores generated by [TheBloke/OpenOrca-Platypus2-13B-GPTQ](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GPTQ). Making this public so people can reproduce the exact dataset used for one of my models - probably not useful for anything else. If you want data along these lines, look at Ilya Gusev's [pippa_scored](https://huggingface.co/datasets/IlyaGusev/pippa_scored) instead. It's much higher quality and better executed.
Nan-Do/SPP_30K_reasoning_tasks
---
dataset_info:
  features:
  - name: type
    dtype: int64
  - name: instruction
    dtype: string
  - name: input
    dtype: string
  - name: output
    dtype: string
  splits:
  - name: train
    num_bytes: 44253001
    num_examples: 89898
  download_size: 10073876
  dataset_size: 44253001
task_categories:
- text-generation
- conversational
- text2text-generation
language:
- en
tags:
- code
- python
- reasoning
pretty_name: SPP python reasoning tasks
---

# Dataset Card for "SPP_30K_verified_tasks"

### Dataset Summary

This is an augmented version of the [Synthetic Python Problems(SPP) Dataset](https://huggingface.co/datasets/wuyetao/spp). This dataset has been generated from the subset of the data that has been de-duplicated and verified using a Python interpreter (SPP_30k_verified.jsonl). The original dataset contains small Python functions that include a docstring with a small description of what the function does and some calling examples for the function.

The current dataset includes three different tasks:

- Type 1: give the code as input and ask the model to generate some example calls along with the expected return values
- Type 2: give the description and the example calls and ask the model to write the function.
- Type 3: give the function and the example calls without the expected values as input and ask the model to write what the function should return.

For example:

```python
def lowest_prime_factor(n):
    """
    Find the smallest prime factor of a given number.
    Examples:
    >>> lowest_prime_factor(12)
    2
    >>> lowest_prime_factor(14)
    2
    >>> lowest_prime_factor(8)
    2
    """
    smallest = 2
    while(n % smallest!= 0):
        smallest+=1
    return smallest
```

First task:

```
INSTRUCTION:
Execute the following function with several example calls, showing the values returned in each case.

INPUT:
def lowest_prime_factor(n):
    smallest = 2
    while(n % smallest!= 0):
        smallest+=1
    return smallest

OUTPUT:
Sure, here you have several example calls along the return values they will generate.

"""
>>> lowest_prime_factor(12)
2
>>> lowest_prime_factor(14)
2
>>> lowest_prime_factor(8)
2
"""
```

Second task:

```
INSTRUCTION:
Follow the task given and refer to the provided calling examples as you create the function's body.

INPUT:
Find the smallest prime factor of a given number.
Examples:
>>> lowest_prime_factor(12)
2
>>> lowest_prime_factor(14)
2
>>> lowest_prime_factor(8)
2

OUTPUT:
Of course, here's a function implementation based on the provided calling examples and task.

"""
python
def lowest_prime_factor(n):
    smallest = 2
    while(n % smallest!= 0):
        smallest+=1
    return smallest
"""
```

Third task:

```
INSTRUCTION:
Examine the given function and the accompanying instance function calls, and outline the specific output generated by each call.

INPUT:
def lowest_prime_factor(n):
    smallest = 2
    while(n % smallest!= 0):
        smallest+=1
    return smallest

Function calls:
>>> lowest_prime_factor(12)
>>> lowest_prime_factor(14)
>>> lowest_prime_factor(8)

OUTPUT:
Sure thing, here you have the specific output generated by each function call.

>>> lowest_prime_factor(12)
2
>>> lowest_prime_factor(14)
2
>>> lowest_prime_factor(8)
2
```

### Languages

The dataset is in English.

### Data Splits

There are no splits (only training).

## Dataset Creation

August of 2023

### Curation Rationale

This dataset was created to improve the Python 3 reasoning/understanding capabilities of LLMs.

### Source Data

The source dataset can be found at [Synthetic Python Problems(SPP) Dataset](https://huggingface.co/datasets/wuyetao/spp).
### Annotations

The dataset includes instruction, input, output and type columns. The type column indicates the type of task (from 1 to 3).

#### Annotation process

The responses were generated by parsing the docstrings of the functions.
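A minimal sketch of this parsing step, using only the standard library (illustrative, not the exact script used):

```python
import ast

source = '''
def lowest_prime_factor(n):
    """ Find the smallest prime factor of a given number.
    Examples:
    >>> lowest_prime_factor(12)
    2
    """
    smallest = 2
    while n % smallest != 0:
        smallest += 1
    return smallest
'''

func = ast.parse(source).body[0]
docstring = ast.get_docstring(func)
description = docstring.split(">>>")[0].strip()                       # task description
examples = [">>> " + e.strip() for e in docstring.split(">>>")[1:]]   # calling examples
```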
qgyd2021/e_commerce_customer_service
---
task_categories:
- text-retrieval
- question-answering
language:
- en
tags:
- e-commerce
size_categories:
- 1M<n<10M
---

## E-commerce Customer Service Dataset

E-commerce data collected from the [lightinthebox](https://www.lightinthebox.com/) website. This data can be used for research on e-commerce customer service bots.

Data contents:

- faq.json: contains question-answer pairs for common questions.
- product.jsonl: contains product information.

The examples directory contains the crawler code used to collect the product information.

python==3.8.10
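A minimal sketch for reading the two files (assuming UTF-8 encoding):

```python
import json

# Generic FAQ question-answer pairs.
with open("faq.json", encoding="utf-8") as f:
    faq = json.load(f)

# Product records: one JSON object per line.
with open("product.jsonl", encoding="utf-8") as f:
    products = [json.loads(line) for line in f]
```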
theblackcat102/evol-code-zh
--- task_categories: - text2text-generation language: - zh --- Evolved codealpaca in Chinese
lamini/spider_text_to_sql
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* dataset_info: features: - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 9388343 num_examples: 7000 - name: validation num_bytes: 1090039 num_examples: 1034 download_size: 1054303 dataset_size: 10478382 --- # Dataset Card for "spider_text_to_sql" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
shareAI/CodeChat
---
license: openrail
---

# CodeChat Dataset

This is a relatively lightweight dataset that can be used to specifically improve a model's mathematical/logical reasoning and code Q&A abilities.

The samples were extracted and combined from datasets such as shareAI/ShareGPT-Chinese-English-90k and garage-bAInd/Open-Platypus, and organized into a unified multi-turn conversation format.

It mainly contains corpus samples related to logical reasoning, code Q&A, and code generation. Combined with LoRA, it can be used for lightweight fine-tuning to quickly activate your model's code-QA capabilities.

The Firefly framework is recommended, which supports loading this data format out of the box: https://github.com/yangjianxin1/Firefly
vikp/evol_codealpaca_filtered_87k
---
dataset_info:
  features:
  - name: instruction
    dtype: string
  - name: output
    dtype: string
  - name: quality_prob
    dtype: float64
  - name: learning_prob
    dtype: float64
  splits:
  - name: train
    num_bytes: 194291512.64351812
    num_examples: 87705
  download_size: 107933444
  dataset_size: 194291512.64351812
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Dataset Card for "evol_codealpaca_filtered_86k"

Filtered version of `theblackcat102/evol-codealpaca-v1`, with manual filtering and automatic filtering based on quality and learning-value classifiers.
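The classifier probability columns can be used to filter further; a minimal sketch (the 0.9 thresholds below are arbitrary, not the ones used to build this dataset):

```python
from datasets import load_dataset

ds = load_dataset("vikp/evol_codealpaca_filtered_87k", split="train")

# Keep only rows the classifiers score highly on both axes.
high_signal = ds.filter(lambda ex: ex["quality_prob"] > 0.9 and ex["learning_prob"] > 0.9)
```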
twang2218/chinese-law-and-regulations
--- license: apache-2.0 dataset_info: - config_name: default features: - name: publish_date dtype: timestamp[ns] - name: effective_date dtype: timestamp[ns] - name: type dtype: string - name: status dtype: string - name: title dtype: string - name: office dtype: string - name: office_level dtype: string - name: office_category dtype: string - name: effective_period dtype: string - name: content dtype: string splits: - name: train num_bytes: 363619544 num_examples: 22552 download_size: 159516785 dataset_size: 363619544 - config_name: metadata features: - name: publish_date dtype: timestamp[ns] - name: effective_date dtype: timestamp[ns] - name: type dtype: string - name: status dtype: string - name: title dtype: string - name: office dtype: string - name: office_level dtype: string - name: office_category dtype: string - name: effective_period dtype: string splits: - name: train num_bytes: 4529871 num_examples: 22552 download_size: 740438 dataset_size: 4529871 configs: - config_name: default data_files: - split: train path: data/train-* - config_name: metadata data_files: - split: train path: metadata/train-* ---
chenqile09/llama2-chinese-couplet
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* dataset_info: features: - name: data dtype: string splits: - name: train num_bytes: 211969430 num_examples: 770491 - name: validation num_bytes: 1101256 num_examples: 4000 download_size: 56353998 dataset_size: 213070686 --- # Dataset Card for "chenqile09-chinese-couplet" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
HuggingFaceH4/lima_llama2
--- dataset_info: features: - name: conversations sequence: string - name: source dtype: string - name: length dtype: int64 - name: prompt_id dtype: string - name: prompt dtype: string - name: messages list: - name: content dtype: string - name: role dtype: string - name: meta struct: - name: category dtype: string - name: source dtype: string - name: text dtype: string splits: - name: train num_bytes: 8806712 num_examples: 1000 - name: test num_bytes: 188848 num_examples: 300 download_size: 5237615 dataset_size: 8995560 --- # Dataset Card for "lima_llama2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pablo-moreira/gpt4all-j-prompt-generations-pt
---
language:
- pt
license: apache-2.0
size_categories:
- 100K<n<1M
task_categories:
- text-generation
pretty_name: GPT4All Prompt Generations translated into Portuguese using Google Translate.
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: response
    dtype: string
  - name: source
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: train
    num_bytes: 1956916380
    num_examples: 808812
  download_size: 1134108118
  dataset_size: 1956916380
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Dataset Card for "gpt4all-j-prompt-generations-pt"

## Dataset Description

A copy of the [gpt4all_prompt_generations](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations) dataset translated into Portuguese using the googletrans library.

## Translate

[translate_dataset.ipynb](translate_dataset.ipynb)

## Usage

[dataset_usage.ipynb](dataset_usage.ipynb)
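For illustration, the translation approach looks roughly like this (a sketch only; the actual processing is in the linked notebook):

```python
from googletrans import Translator

translator = Translator()
# Translate a single prompt from English to Portuguese.
print(translator.translate("What is the capital of Brazil?", src="en", dest="pt").text)
```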
classla/ParlaSent
---
license: cc-by-sa-4.0
language:
- sl
- en
- cs
- bs
- hr
- sr
- sk
tags:
- sentiment
- classification
- parliament
- parlament
pretty_name: ParlaSent
size_categories:
- 10K<n<100K
configs:
- config_name: EN
  data_files: ParlaSent_EN.jsonl
- config_name: BCS
  data_files: ParlaSent_BCS.jsonl
- config_name: CZ
  data_files: ParlaSent_CZ.jsonl
- config_name: SK
  data_files: ParlaSent_SK.jsonl
- config_name: SL
  data_files: ParlaSent_SL.jsonl
- config_name: EN_additional_test
  data_files: ParlaSent_EN_test.jsonl
- config_name: BCS_additional_test
  data_files: ParlaSent_BCS_test.jsonl
task_categories:
- text-classification
---

# The multilingual sentiment dataset of parliamentary debates ParlaSent 1.0

## Dataset Description

- **Repository: [Clarin.si repo](http://hdl.handle.net/11356/1868)**
- **Paper: https://arxiv.org/abs/2309.09783**

### Dataset Summary

This dataset was created and used for sentiment analysis experiments.

The dataset consists of five training datasets and two test sets. The test sets have a _test.jsonl suffix and appear in the Dataset Viewer as _additional_test. Each test set consists of 2,600 sentences, annotated by one highly trained annotator. Training datasets were internally split into "train", "dev" and "test" portions for performing language-specific experiments.

The 6-level annotation schema used by annotators is the following:

- Positive for sentences that are entirely or predominantly positive
- Negative for sentences that are entirely or predominantly negative
- M_Positive for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the positive sentiment
- M_Negative for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the negative sentiment
- P_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the positive sentiment
- N_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the negative sentiment

The dataset is described in detail in our [paper](https://arxiv.org/abs/2309.09783).
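Each config can be loaded separately, e.g.:

```python
from datasets import load_dataset

# English training dataset and the additional English test set.
en = load_dataset("classla/ParlaSent", "EN")
en_test = load_dataset("classla/ParlaSent", "EN_additional_test")
```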
### Data Attributes

The attributes in training data are the following:

- sentence - the sentence labeled for sentiment
- country - the country of the parliament the sentence comes from
- annotator1 - first annotator's annotation
- annotator2 - second annotator's annotation
- reconciliation - the final label agreed upon after reconciliation
- label - three-level (positive, negative, neutral) label based on the reconciliation label
- document_id - internal identifier of the document the sentence comes from
- sentence_id - internal identifier of the sentence inside the document
- term - the term of the parliament the sentence comes from
- date - the date the sentence was uttered as part of a speech in the parliament
- name - name of the MP giving the speech
- party - the party of the MP
- gender - binary gender of the MP
- birth year - year of birth of the MP
- split - whether the sentence is to be used as a training, development or testing instance in case evaluation is done on the training portion of the dataset
- ruling - whether the MP was in a coalition or an opposition at the time of giving the speech

The attributes in the test data (_test.jsonl files) are the following:

- sentence - the sentence labeled for sentiment
- country - the country of the parliament the sentence comes from
- annotator1 - first (only) annotator's annotation, used as a final annotation
- label - three-level (positive, negative, neutral) label based on the annotator1 label
- document_id - internal identifier of the document the sentence comes from
- sentence_id - internal identifier of the sentence inside the document
- term - the term of the parliament the sentence comes from
- date - the date the sentence was uttered as part of a speech in the parliament
- name - name of the MP giving the speech
- party - the party of the MP
- gender - binary gender of the MP
- birth year - year of birth of the MP
- ruling - whether the MP was in a coalition or an opposition at the time of giving the speech

### Citation information

Please quote the following paper:

```
@article{
Mochtak_Rupnik_Ljubešić_2023,
title={The ParlaSent multilingual training dataset for sentiment identification in parliamentary proceedings},
rights={All rights reserved},
url={http://arxiv.org/abs/2309.09783},
abstractNote={Sentiments inherently drive politics. How we receive and process information plays an essential role in political decision-making, shaping our judgment with strategic consequences both on the level of legislators and the masses. If sentiment plays such an important role in politics, how can we study and measure it systematically? The paper presents a new dataset of sentiment-annotated sentences, which are used in a series of experiments focused on training a robust sentiment classifier for parliamentary proceedings. The paper also introduces the first domain-specific LLM for political science applications additionally pre-trained on 1.72 billion domain-specific words from proceedings of 27 European parliaments. We present experiments demonstrating how the additional pre-training of LLM on parliamentary data can significantly improve the model downstream performance on the domain-specific tasks, in our case, sentiment detection in parliamentary proceedings. We further show that multilingual models perform very well on unseen languages and that additional data from other languages significantly improves the target parliament’s results.
The paper makes an important contribution to multiple domains of social sciences and bridges them with computer science and computational linguistics. Lastly, it sets up a more robust approach to sentiment analysis of political texts in general, which allows scholars to study political sentiment from a comparative perspective using standardized tools and techniques.}, note={arXiv:2309.09783 [cs]}, number={arXiv:2309.09783}, publisher={arXiv}, author={Mochtak, Michal and Rupnik, Peter and Ljubešić, Nikola}, year={2023}, month={Sep}, language={en} } ```
BEE-spoke-data/coedit-reworded-deduped
---
license: apache-2.0
dataset_info:
- config_name: dedup-by-target
  features:
  - name: task
    dtype: string
  - name: id
    dtype: string
  - name: original_instruction
    dtype: string
  - name: instruction
    dtype: string
  - name: input
    dtype: string
  - name: output
    dtype: string
  splits:
  - name: train
    num_bytes: 23629242
    num_examples: 79943
  download_size: 11836738
  dataset_size: 23629242
- config_name: dedup-input
  features:
  - name: task
    dtype: string
  - name: id
    dtype: string
  - name: original_instruction
    dtype: string
  - name: instruction
    dtype: string
  - name: input
    dtype: string
  - name: output
    dtype: string
  splits:
  - name: train
    num_bytes: 23457166
    num_examples: 79293
  download_size: 11795306
  dataset_size: 23457166
- config_name: default
  features:
  - name: task
    dtype: string
  - name: id
    dtype: string
  - name: original_instruction
    dtype: string
  - name: instruction
    dtype: string
  - name: input
    dtype: string
  - name: output
    dtype: string
  - name: update_type
    dtype: string
  splits:
  - name: train
    num_bytes: 25021311
    num_examples: 79943
  download_size: 11862526
  dataset_size: 25021311
configs:
- config_name: dedup-by-target
  data_files:
  - split: train
    path: dedup-by-target/train-*
- config_name: dedup-input
  data_files:
  - split: train
    path: dedup-input/train-*
- config_name: default
  data_files:
  - split: train
    path: data/train-*
source_datasets: chargoddard/coedit-reworded
---

# BEE-spoke-data/coedit-reworded-deduped

MinHash deduplication on the `target` column. Source data from [coedit-reworded](https://hf.co/chargoddard/coedit-reworded).

## Load

```python
from datasets import load_dataset

dataset = load_dataset("BEE-spoke-data/coedit-reworded-deduped", revision="refs/convert/parquet")
dataset
```

output:

```python
DatasetDict({
    train: Dataset({
        features: ['task', 'id', 'original_instruction', 'instruction', 'input', 'output'],
        num_rows: 79943
    })
})
```

## Citation

Original dataset courtesy of Grammarly:

```
@article{raheja2023coedit,
  title={CoEdIT: Text Editing by Task-Specific Instruction Tuning},
  author={Vipul Raheja and Dhruv Kumar and Ryan Koo and Dongyeop Kang},
  year={2023},
  eprint={2305.09857},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
substratusai/the-stack-yaml-k8s
--- annotations_creators: [] language_creators: - crowdsourced - expert-generated language: - code license: - other multilinguality: - multilingual pretty_name: The-Stack size_categories: - unknown source_datasets: [] task_categories: - text-generation task_ids: [] extra_gated_prompt: |- ## Terms of Use for The Stack The Stack dataset is a collection of source code in over 300 programming languages. We ask that you read and acknowledge the following points before using the dataset: 1. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point. 2. The Stack is regularly updated to enact validated data removal requests. By clicking on "Access repository", you agree to update your own version of The Stack to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If you have questions about dataset versions and allowed uses, please also ask them in the dataset’s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new). We will also notify users via email when the latest usable version changes. 3. To host, share, or otherwise provide access to The Stack dataset, you must include [these Terms of Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) and require users to agree to it. By clicking on "Access repository" below, you accept that your contact information (email address and username) can be shared with the dataset maintainers as well. extra_gated_fields: Email: text I have read the License and agree with its terms: checkbox dataset_info: features: - name: hexsha dtype: string - name: size dtype: int64 - name: ext dtype: string - name: lang dtype: string - name: max_stars_repo_path dtype: string - name: max_stars_repo_name dtype: string - name: max_stars_repo_head_hexsha dtype: string - name: max_stars_repo_licenses sequence: string - name: max_stars_count dtype: int64 - name: max_stars_repo_stars_event_min_datetime dtype: string - name: max_stars_repo_stars_event_max_datetime dtype: string - name: max_issues_repo_path dtype: string - name: max_issues_repo_name dtype: string - name: max_issues_repo_head_hexsha dtype: string - name: max_issues_repo_licenses sequence: string - name: max_issues_count dtype: int64 - name: max_issues_repo_issues_event_min_datetime dtype: string - name: max_issues_repo_issues_event_max_datetime dtype: string - name: max_forks_repo_path dtype: string - name: max_forks_repo_name dtype: string - name: max_forks_repo_head_hexsha dtype: string - name: max_forks_repo_licenses sequence: string - name: max_forks_count dtype: int64 - name: max_forks_repo_forks_event_min_datetime dtype: string - name: max_forks_repo_forks_event_max_datetime dtype: string - name: content dtype: string - name: avg_line_length dtype: float64 - name: max_line_length dtype: int64 - name: alphanum_fraction dtype: float64 splits: - name: train num_bytes: 2056665435.7311056 num_examples: 276520 download_size: 312473618 dataset_size: 2056665435.7311056 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for The Stack YAML K8s This dataset is a subset of The Stack dataset data/yaml. 
The YAML files were parsed, and all valid K8s YAML files were extracted, which is what this dataset consists of. The dataset contains 276520 valid K8s YAML files.

The dataset was created by running the [the-stack-yaml-k8s.ipynb](https://github.com/substratusai/the-stack-yaml-k8s/blob/main/the-stack-k8s-yaml.ipynb) notebook on K8s using [substratus.ai](https://substratus.ai).

Source code used to generate dataset: https://github.com/substratusai/the-stack-yaml-k8s

Need some help? Questions? Join our Discord server:

<a href="https://discord.gg/JeXhcmjZVm"><img alt="discord-invite" src="https://dcbadge.vercel.app/api/server/JeXhcmjZVm?style=flat"></a>

### How to use it

```python
from datasets import load_dataset

ds = load_dataset("substratusai/the-stack-yaml-k8s", split="train")
ds[0]["content"]
```

## Original The Stack Dataset Description

- **Homepage:** https://www.bigcode-project.org/
- **Repository:** https://github.com/bigcode-project
- **Paper:** https://arxiv.org/abs/2211.15533
- **Leaderboard:** N/A
- **Point of Contact:** contact@bigcode-project.org

## Dataset Structure

### Data Instances

Each data instance corresponds to one file. The content of the file is in the `content` feature, and other features (`repository_name`, `licenses`, etc.) provide some metadata. Note that a given file can appear in several different repositories that satisfy our safe-license criterion. If that is the case, only the first – in alphabetical order – of these repositories is shown for simplicity.

### Data Fields

- `content` (string): the content of the file.
- `size` (integer): size of the uncompressed file.
- `lang` (string): the programming language.
- `ext` (string): file extension
- `avg_line_length` (float): the average line-length of the file.
- `max_line_length` (integer): the maximum line-length of the file.
- `alphanum_fraction` (float): the fraction of characters in the file that are alphabetical or numerical characters.
- `hexsha` (string): unique git hash of file
- `max_{stars|forks|issues}_repo_path` (string): path to file in repo containing this file with maximum number of `{stars|forks|issues}`
- `max_{stars|forks|issues}_repo_name` (string): name of repo containing this file with maximum number of `{stars|forks|issues}`
- `max_{stars|forks|issues}_repo_head_hexsha` (string): hexsha of repository head
- `max_{stars|forks|issues}_repo_licenses` (string): licenses in repository
- `max_{stars|forks|issues}_count` (integer): number of `{stars|forks|issues}` in repository
- `max_{stars|forks|issues}_repo_{stars|forks|issues}_min_datetime` (string): first timestamp of a `{stars|forks|issues}` event
- `max_{stars|forks|issues}_repo_{stars|forks|issues}_max_datetime` (string): last timestamp of a `{stars|forks|issues}` event

### Data Splits

The dataset has no splits and all data is loaded as train split by default. If you want to set up a custom train-test split, beware that the dataset contains a lot of near-duplicates which can cause leakage into the test split.

## Dataset Creation

### Curation Rationale

One of the challenges faced by researchers working on code LLMs is the lack of openness and transparency around the development of these systems. Most prior works described the high-level data collection process but did not release the training data. It is therefore difficult for other researchers to fully reproduce these models and understand what kind of pre-training data leads to high-performing code LLMs. By releasing an open large-scale code dataset we hope to make training of code LLMs more reproducible.
### Source Data

#### Initial Data Collection and Normalization

220.92M active GitHub repository names were collected from the event archives published between January 1st, 2015 and March 31st, 2022 on [GHArchive](https://gharchive.org/). Only 137.36M of these repositories were public and accessible on GitHub – others were not accessible as they had been deleted by their owners. 51.76B files were downloaded from the public repositories on GitHub between November 2021 and June 2022. 5.28B files were unique. The uncompressed size of all stored files is 92.36TB.

The list of programming language extensions is taken from this [list](https://gist.github.com/ppisarczyk/43962d06686722d26d176fad46879d41) (also provided in Appendix C of the paper).

Near-deduplication was implemented in the pre-processing pipeline on top of exact deduplication. To find near-duplicates, MinHash with 256 permutations of all documents was computed in linear time. Locality Sensitive Hashing was used to find the clusters of duplicates. Jaccard similarities were then computed inside these clusters, with a similarity threshold of 0.85, to remove any false positives. Roughly 40% of permissively licensed files were (near-)duplicates. See section 3 of the paper for further details.

The following are not stored:

- Files that cannot contribute to training code: binary, empty, could not be decoded
- Files larger than 1MB
- The excluded file extensions are listed in Appendix B of the paper.

##### License detection

Permissive licenses have minimal restrictions on how the software can be copied, modified, and redistributed. The full list of licenses can be found [here](https://huggingface.co/datasets/bigcode/the-stack-dedup/blob/main/licenses.json).

GHArchive contained the license information for approximately 12% of the collected repositories. For the remaining repositories, [go-license-detector](https://github.com/src-d/go-license-detector) was run to detect the most likely SPDX license identifier. The detector did not detect a license for ~81% of the repositories, in which case the repository was excluded from the dataset.

A file was included in the safe license dataset if at least one of the repositories containing the file had a permissive license.

#### Who are the source language producers?

The source (code) language producers are users of GitHub that created unique repository names between January 1st, 2015, and March 31st, 2022.

### Personal and Sensitive Information

The released dataset may contain sensitive information such as emails, IP addresses, and API/ssh keys that have previously been published to public repositories on GitHub. Deduplication has helped to reduce the amount of sensitive data that may exist. In the event that the dataset contains personal information, researchers should only use public, non-personal information in support of conducting and publishing their [open-access](https://en.wikipedia.org/wiki/Open_access) research. Personal information should not be used for spamming purposes, including sending unsolicited emails or selling of personal information. Complaints, removal requests, and "do not contact" requests can be sent to contact@bigcode-project.org.

The PII pipeline for this dataset is still a work in progress (see this [issue](https://github.com/bigcode-project/admin/issues/9) for updates). Researchers that wish to contribute to the anonymization pipeline of the project can apply to join [here](https://www.bigcode-project.org/docs/about/join/).
Developers with source code in the dataset can request to have it removed [here](https://www.bigcode-project.org/docs/about/ip/) (proof of code contribution is required).

### Opting out of The Stack

We are giving developers the ability to have their code removed from the dataset upon request. The process for submitting and enacting removal requests will keep evolving throughout the project as we receive feedback and build up more data governance tools.

You can check if your code is in The Stack with the following ["Am I In The Stack?" Space](https://huggingface.co/spaces/bigcode/in-the-stack). If you'd like to have your data removed from the dataset, follow the [instructions on GitHub](https://github.com/bigcode-project/opt-out-v2).

## Considerations for Using the Data

### Social Impact of Dataset

The Stack is an output of the BigCode Project. BigCode aims to be responsible by design and by default. The project is conducted in the spirit of Open Science, focused on the responsible development of LLMs for code.

With the release of The Stack, we aim to increase access, reproducibility, and transparency of code LLMs in the research community. Work to de-risk and improve on the implementation of ethical best practices of code LLMs is conducted in various BigCode working groups. The Legal, Ethics, and Governance working group has explored topics such as licensing (including copyleft and the intended use of permissively licensed code), attribution of generated code to original code, rights to restrict processing, the inclusion of Personally Identifiable Information (PII), and risks of malicious code, among other topics. This work is ongoing as of October 25th, 2022.

We expect code LLMs to enable people from diverse backgrounds to write higher quality code and develop low-code applications. Mission-critical software could become easier to maintain as professional developers are guided by code-generating systems on how to write more robust and efficient code. While the social impact is intended to be positive, the increased accessibility of code LLMs comes with certain risks such as over-reliance on the generated code and long-term effects on the software development job market.

A broader impact analysis relating to Code LLMs can be found in section 7 of this [paper](https://arxiv.org/abs/2107.03374). An in-depth risk assessment for Code LLMs can be found in section 4 of this [paper](https://arxiv.org/abs/2207.14157).

### Discussion of Biases

The code collected from GitHub does not contain demographic information or proxy information about the demographics. However, it is not without risks, as the comments within the code may contain harmful or offensive language, which could be learned by the models.

Widely adopted programming languages like C and JavaScript are overrepresented compared to niche programming languages like Julia and Scala. Some programming languages such as SQL, Batchfile, TypeScript are less likely to be permissively licensed (4% vs the average 10%). This may result in a biased representation of those languages. Permissively licensed files also tend to be longer.

Roughly 40 natural languages are present in docstrings and comments with English being the most prevalent. In python files, it makes up ~96% of the dataset.

For further information on data analysis of the Stack, see this [repo](https://github.com/bigcode-project/bigcode-analysis).
### Other Known Limitations One of the current limitations of The Stack is that scraped HTML for websites may not be compliant with Web Content Accessibility Guidelines ([WCAG](https://www.w3.org/WAI/standards-guidelines/wcag/)). This could have an impact on HTML-generated code that may introduce web accessibility issues. The training dataset could contain malicious code and/or the model could be used to generate malware or ransomware. To the best of our knowledge, all files contained in the dataset are licensed with one of the permissive licenses (see list in [Licensing information](#licensing-information)). The accuracy of license attribution is limited by the accuracy of GHArchive and go-license-detector. Any mistakes should be reported to BigCode Project for review and follow-up as needed. ## Additional Information ### Dataset Curators 1. Harm de Vries, ServiceNow Research, harm.devries@servicenow.com 2. Leandro von Werra, Hugging Face, leandro@huggingface.co ### Licensing Information The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point. The list of [SPDX license identifiers](https://spdx.org/licenses/) included in the dataset can be found [here](https://huggingface.co/datasets/bigcode/the-stack/blob/main/licenses.json). ### Citation Information ``` @article{Kocetkov2022TheStack, title={The Stack: 3 TB of permissively licensed source code}, author={Kocetkov, Denis and Li, Raymond and Ben Allal, Loubna and Li, Jia and Mou,Chenghao and Muñoz Ferrandis, Carlos and Jernite, Yacine and Mitchell, Margaret and Hughes, Sean and Wolf, Thomas and Bahdanau, Dzmitry and von Werra, Leandro and de Vries, Harm}, journal={Preprint}, year={2022} } ``` ## Terms of Use for The Stack The Stack dataset is a collection of source code in over 300 programming languages. We ask that you read and acknowledge the following points before using the dataset: 1. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point. 2. The Stack is regularly updated to enact validated data removal requests. By clicking on "Access repository", you agree to update your own version of The Stack to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If you have questions about dataset versions and allowed uses, please also ask them in the dataset’s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new). We will also notify users via email when the latest usable version changes. 3. To host, share, or otherwise provide access to The Stack dataset, you must include these Terms of Use and require users to agree to it.
PeacefulData/HyPoradise-v1-GigaSpeech
---
license: mit
language_creators:
- expert-generated
task_categories:
- text-generation
tags:
- code
- Whisper-tiny
pretty_name: Whispering LLaMA for new Hypotheses Paradise Subset
size_categories:
- 1k<n<10M
---

If you find this work related or useful for your research, please consider citing our EMNLP 2023 paper. Thank you.

```bib
@inproceedings{radhakrishnan2023whispering,
  title={Whispering LLaMA: A Cross-Modal Generative Error Correction Framework for Speech Recognition},
  author={Srijith Radhakrishnan, Chao-Han Huck Yang, Sumeer Ahmad Khan, Rohit Kumar, Narsis A. Kiani, David Gomez-Cabrero, Jesper N. Tegner},
  booktitle={Proc. of EMNLP},
  year={2023}
}
```
crumb/textbook-codex
--- dataset_info: features: - name: text dtype: string - name: src dtype: string - name: src_col dtype: string - name: model dtype: string splits: - name: train num_bytes: 12286698438.0 num_examples: 3593574 download_size: 5707800000 dataset_size: 12286698438.0 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "textbook-codex" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
euclaise/LittleTown
--- license: other size_categories: - 10K<n<100K dataset_info: features: - name: question dtype: string - name: answer dtype: string splits: - name: train num_bytes: 75640201 num_examples: 100000 download_size: 16577014 dataset_size: 75640201 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "LittleTown" [Language models are greedy reasoners](https://arxiv.org/pdf/2210.01240.pdf), so they don't often backtrack. This is a dataset made to teach them backtracking. The data is synthetic, generated randomly in Python. 90% of the examples contain backtracking. License: ``` Zero-Clause BSD ============= Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted. THE SOFTWARE IS PROVIDED “AS IS” AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. ```
WeixiangYan/CodeTransOcean
--- license: apache-2.0 ---
chiayewken/bamboogle
---
dataset_info:
  features:
  - name: Question
    dtype: string
  - name: Answer
    dtype: string
  splits:
  - name: test
    num_bytes: 10747
    num_examples: 125
  download_size: 8383
  dataset_size: 10747
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

# Bamboogle

This repo contains the data for the paper ["Measuring and Narrowing the Compositionality Gap in Language Models"](https://arxiv.org/abs/2210.03350).

The original data link is here: https://docs.google.com/spreadsheets/d/1jwcsA5kE4TObr9YHn9Gc-wQHYjTbLhDGx6tmIzMhl_U/edit?usp=sharing

This dataset is distributed under the MIT license.
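A minimal loading sketch:

```python
from datasets import load_dataset

# Bamboogle ships a single test split with Question/Answer pairs.
ds = load_dataset("chiayewken/bamboogle", split="test")
print(ds[0]["Question"], "->", ds[0]["Answer"])
```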
KomeijiForce/Text2Emoji
--- task_categories: - translation - text-generation language: - en size_categories: - 100K<n<1M ---
alfredplpl/anime-with-gpt4v-caption-for-lora
---
license: cc-by-nc-4.0
language:
- en
---

# Anime-style image-text small dataset captioned by GPT-4V

![cute1.png](cute1.png)

## The caption text is as follows:

This is a charming anime-style illustration featuring a young girl as the main subject. The image predominantly uses a soft, pastel color palette, creating a gentle and whimsical ambiance. The main character has light blonde hair styled in two low twintails, secured with what could be interpreted as dark-colored hair ties or ribbons. She has large expressive blue eyes and a demure expression, with her mouth slightly open as if she is about to speak or is quietly admiring something. A black hairband is perched on top of her head.

She is dressed in an outfit that radiates a youthful, almost springtime elegance. She wears a long-sleeved white coat, with the sleeves rolled up to just below the elbow, revealing a light green dress with a floral hem design underneath. The dress itself is a rich, green color with a subtle texture that suggests a fabric like cotton or linen. It is accented with small white, yellow-centered flowers near the hem, which also features a ruffled fringe hinting at layers beneath. Around her neck, she has a thin, green scarf or kerchief, and her feet are adorned with sturdy black boots with brown soles and notable detailing, including black laces tied in neat bows.

In her right hand, the girl holds a glass of what appears to be a cold, whipped cream-topped beverage, the kind typically found at a cafe. On her left, she gently cradles a triangular-shaped pastry, possibly a slice of pie or cake, on a small, simple plate. To her right, the image shows a smaller rendition of the girl in a similar pose but without food or drink, emphasizing her adorable and innocent demeanor.

Additionally, there are two cute white rabbits in the image, one sitting directly in front of the girl and the other to her left. The rabbit in front wears a collar with a bell, hinting at it being a pet. The one on the left appears to be free and unadorned. Both rabbits have their attention directed towards the girl, further amplifying the sweetness and serene nature of the scene.

Leaf motifs and plant elements are scattered throughout the image, further establishing the connection to nature and spring. The overall composition is bordered by a teal background, which contrasts with the lighter colors and helps the central elements to stand out. The backdrop features subtle watercolor-effects, adding texture and visual interest.

Lastly, text elements on the image read "MatsoTie, Mity Litite, Ianoiynote," and "magnolia kat," likely representing illustrative or fictional branding and the artist's signature, respectively. The chosen font for the main text is elegant and simple, maintaining the gentle aesthetics of the artwork.

## Format

- cute1.png + cute1.txt
- [llava.json](llava.json)
- [metadata.csv](metadata.csv)

Thanks to https://huggingface.co/datasets/p1atdev/niji-v5.

## Restriction

You may not develop models that compete with OpenAI because of [OpenAI's terms of use](https://openai.com/policies/terms-of-use).
SciPhi/open-tora
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: instruction dtype: string - name: output dtype: string - name: shuffled_index dtype: string splits: - name: train num_bytes: 227167755 num_examples: 132054 download_size: 54463656 dataset_size: 227167755 --- # Dataset Card for "sympy-logic-2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
allenai/UNcommonsense
--- language: - en license: mit size_categories: - 10K<n<100K task_categories: - text-generation tags: - abductive reasoning - commonsense reasoning - uncommonsense configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* dataset_info: features: - name: human_explanations sequence: string - name: outcome dtype: string - name: context dtype: string - name: source dtype: string - name: gpt4_explanations dtype: string - name: enhanced_explanations sequence: string splits: - name: train num_bytes: 16885523 num_examples: 16099 - name: validation num_bytes: 5319894 num_examples: 2239 download_size: 12397712 dataset_size: 22205417 --- # Dataset Card for UNcommonsense ## Dataset Description - **Paper:** https://arxiv.org/abs/2311.08469 - **Point of Contact:** [Wenting Zhao](mailto:wzhao@cs.cornell.edu) ### Dataset Summary UNcommonsense is an abductive reasoning dataset. Unlike [aNLG](https://arxiv.org/abs/1908.05739), we focus on explaining unusual, unexpected, and unlikely situations. UNcommonsense is an English-language corpus consisting of 20k unique contexts paired with explicitly uncommon outcomes. Given these contexts and uncommon outcomes, we crowdsource 41k abductive explanations, which provide a plausible explanation of how an uncommon outcome could have arisen, given an input context. ### Data Fields - `context` (string): Several sentences describing a context. - `outcome` (string): An unexpected outcome from the context. - `human_explanations` (list of strings): A list of human-authored explanations that make the unexpected outcome likely given the context. - `gpt4_explanations` (list of strings): A list of GPT-4 generated explanations that make the unexpected outcome likely given the context. - `enhanced_explanations` (list of strings): A list of GPT-4 enhanced human-authored explanations that make the unexpected outcome likely given the context. - `source` (string): The source of the dataset from which we created the example. ### Citation Information Please consider citing [our paper](https://arxiv.org/pdf/2311.08469.pdf) if you find this dataset useful: ``` @article{zhao2023uncommonsense, title={UNcommonsense Reasoning: Abductive Reasoning about Uncommon Situations}, author={Zhao, Wenting and Chiu, Justin T and Hwang, Jena D and Brahman, Faeze and Hessel, Jack and Choudhury, Sanjiban and Choi, Yejin and Li, Xiang Lorraine and Suhr, Alane}, journal={arXiv preprint arXiv:2311.08469}, year={2023} } ```
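A minimal loading sketch using the fields above:

```python
from datasets import load_dataset

ds = load_dataset("allenai/UNcommonsense", split="train")
ex = ds[0]
print(ex["context"])
print(ex["outcome"])
print(ex["human_explanations"][0])  # one crowdsourced abductive explanation
```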
hanruijiang/civitai-stable-diffusion-2.5m
---
license: apache-2.0
task_categories:
- text-generation
- text-to-image
language:
- en
tags:
- art
size_categories:
- 1M<n<10M
---

Inspired by thefcraft/civitai-stable-diffusion-337k.

Collected using the Civitai API to get all prompts.
rishiraj/bengalichat
---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: prompt_id
    dtype: string
  - name: messages
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: category
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 66596881
    num_examples: 9500
  - name: test
    num_bytes: 3573980
    num_examples: 500
  download_size: 27678311
  dataset_size: 70170861
task_categories:
- conversational
- text-generation
language:
- bn
pretty_name: Bengali Chat
license: cc-by-nc-4.0
---

# Dataset Card for Bengali Chat

We know that current English-first LLMs don't work well for many other languages, in terms of performance, latency, and speed. Building instruction datasets for non-English languages is an important challenge that needs to be solved. To address this problem, I am releasing 2 new datasets, [rishiraj/bengalichat](https://huggingface.co/datasets/rishiraj/bengalichat/) & [rishiraj/hindichat](https://huggingface.co/datasets/rishiraj/hindichat/), of 10,000 instructions and demonstrations each. This data can be used for supervised fine-tuning (SFT) to make multilingual language models follow instructions better.

### Dataset Summary

[rishiraj/bengalichat](https://huggingface.co/datasets/rishiraj/bengalichat/) was modelled after the instruction dataset described in OpenAI's [InstructGPT paper](https://huggingface.co/papers/2203.02155), and is translated from [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots/), which comprises mostly single-turn instructions across the following categories:

| Category   | Count |
|:-----------|--------:|
| Generation | 4560 |
| Open QA    | 1240 |
| Brainstorm | 1120 |
| Chat       | 850 |
| Rewrite    | 660 |
| Summarize  | 420 |
| Coding     | 350 |
| Classify   | 350 |
| Closed QA  | 260 |
| Extract    | 190 |

### Languages

The data in [rishiraj/bengalichat](https://huggingface.co/datasets/rishiraj/bengalichat/) are in Bengali (BCP-47 bn).

### Data Fields

The data fields are as follows:

* `prompt`: Describes the task the model should perform.
* `prompt_id`: A unique ID for the prompt.
* `messages`: An array of messages, where each message indicates the role (system, user, assistant) and the content.
* `category`: Which category the example belongs to (e.g. `Chat` or `Coding`).
* `text`: Content of `messages` in a format that is compatible with the dataset_text_field of SFTTrainer.

### Data Splits

|             | train_sft | test_sft |
|-------------|------:| ---: |
| bengalichat | 9500 | 500 |

### Licensing Information

The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).

### Citation Information

```
@misc{bengalichat,
  author = {Rishiraj Acharya},
  title = {Bengali Chat},
  year = {2023},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/datasets/rishiraj/bengalichat}}
}
```
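A minimal sketch showing the SFT-ready `text` field:

```python
from datasets import load_dataset

ds = load_dataset("rishiraj/bengalichat", split="train")
# `text` is preformatted for use as dataset_text_field with SFTTrainer.
print(ds[0]["text"])
```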
ChengAoShen/emoji_with_text
---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 197767083.176
    num_examples: 47192
  download_size: 150864115
  dataset_size: 197767083.176
license: mit
task_categories:
- text-to-image
language:
- en
tags:
- art
size_categories:
- 10K<n<100K
---

# "Emoji_for_diffusion" Dataset

## Description

This dataset includes emoji in various styles from different apps, together with their descriptions. Each image is 64x64 with **RGBA** channels, which makes it easy to train on a personal GPU. The description text is formatted as follows:

```
app/company + emoji content + description information
```

You can use this dataset to train your personal diffusion model. I sincerely hope this dataset can help your research work.

## Citation

If you use this dataset, please cite it as:

```
@misc{ChengAoShen2023emoji,
  author = {ChengAo Shen and Siyuan Mu},
  title = {emoji_for_diffusion},
  year={2023},
  howpublished= {\url{https://huggingface.co/datasets/ChengAoShen/emoji_for_diffusion/}}
}
```
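A minimal loading sketch:

```python
from datasets import load_dataset

ds = load_dataset("ChengAoShen/emoji_with_text", split="train")
sample = ds[0]
sample["image"]        # 64x64 RGBA PIL image
print(sample["text"])  # "app/company + emoji content + description information"
```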
Locutusque/InstructMix-V2
---
license: other
language:
- en
- code
task_categories:
- text-generation
- question-answering
- conversational
pretty_name: InstructMix-V2
size_categories:
- 10M<n<100M
---

**Dataset Summary:**

A new and improved version of InstructMix that has nearly twice as many examples.

**Dataset Contents:**

The dataset contains a collection of instructional data with corresponding inputs and outputs. Each entry has an "Input" field that contains the instructional content, and an "Output" field that represents the corresponding response or completion.

Here is a list of the datasets used:

- Locutusque/ColumnedChatCombined
- TokenBender/code_instructions_120k_alpaca_style
- Open-Orca/OpenOrca
- vicgalle/alpaca-gpt4
- ChristophSchuhmann/essays-with-instructions
- checkai/instruction-poems
- pubmed_qa
- BI55/MedText
- nampdn-ai/tiny-codes
- TIGER-Lab/MathInstruct
- garage-bAInd/Open-Platypus
- KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format
- teknium/openhermes
- ssbuild/ultrachat

It contains the following two columns:

- Input (string)
- Output (string)

These should hopefully be self-explanatory.

**Dataset Composition:**

- Number of samples: 13,639,348
- Languages: English

**Use Cases:**

The InstructMix dataset is suitable for various NLP tasks, including text generation, text completion, translation, summarization, and more. It can be used to train and evaluate language models, code generation models, and other NLP-based applications.

**Dataset Creation:**

The InstructMix dataset was created by combining multiple existing datasets with instructional content and adding metadata to facilitate seamless integration. The content spans a diverse set of domains and was sourced from reputable datasets and public sources.

**License:**

Please ensure that you read and adhere to the licensing agreements of the datasets included in this compilation, as some may contain specific rules that must be followed.
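Given the dataset's size, streaming may be preferable; a minimal sketch (assuming the default `train` split):

```python
from datasets import load_dataset

# Stream to avoid downloading all ~13.6M examples at once.
ds = load_dataset("Locutusque/InstructMix-V2", split="train", streaming=True)
example = next(iter(ds))
print(example["Input"])
print(example["Output"])
```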
Yhyu13/ToolBench_toolllama_G123_dfs
---
license: apache-2.0
---

Datasets for the ToolBench project: https://github.com/OpenBMB/ToolBench

They were originally distributed in the Google Drive archive `data.zip`: https://drive.google.com/drive/folders/1yBUQ732mPu-KclJnuQELEhtKakdXFc3J

These two JSON files are already processed by the original author. Just plug them into the ToolBench repo's DeepSpeed arguments:

```
--data_path ./toolllama_G123_dfs_train.json \
--eval_data_path ./toolllama_G123_dfs_eval.json \
```

~~My objective is to tailor the training data to 1/100 of its size and use it for the LLaMA-Factory project: https://github.com/hiyouga/LLaMA-Factory~~ So that more open-source models could benefit from function-calling datasets.

## Edit

The objective is achieved by using another dataset instead: https://huggingface.co/datasets/Yhyu13/glaive-function-calling-v2-llama-factory-convert

It is smaller and better.
m-a-p/MusicTheoryBench
---
license: cc
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: instruction
    dtype: string
  - name: stem
    dtype: string
  - name: options
    struct:
    - name: A
      dtype: string
    - name: B
      dtype: string
    - name: C
      dtype: string
    - name: D
      dtype: string
  - name: subject
    dtype: string
  - name: answer
    dtype: string
  - name: split
    dtype: string
  - name: abc_score
    dtype: string
  - name: analysis
    dtype: string
  splits:
  - name: dev
    num_bytes: 2599.489247311828
    num_examples: 5
  - name: test
    num_bytes: 190802.51075268816
    num_examples: 367
  download_size: 0
  dataset_size: 193402.0
configs:
- config_name: default
  data_files:
  - split: dev
    path: data/dev-*
  - split: test
    path: data/test-*
---

[**🌐 DemoPage**](https://ezmonyi.github.io/ChatMusician/) | [**🤗 Dataset**](https://huggingface.co/datasets/m-a-p/MusicPile) | [**🤗 Benchmark**](https://huggingface.co/datasets/m-a-p/MusicTheoryBench) | [**📖 arXiv**](http://arxiv.org/abs/2402.16153) | [💻 **Code**](https://github.com/hf-lin/ChatMusician) | [**🤖 Model**](https://huggingface.co/m-a-p/ChatMusician)

# Dataset Card for MusicTheoryBench

MusicTheoryBench is a benchmark designed to **assess the advanced music understanding capabilities** of current LLMs.

You can easily load it:

```
from datasets import load_dataset
dataset = load_dataset("m-a-p/MusicTheoryBench")
```

The evaluation code will be available in the coming weeks.

## Dataset Structure

MusicTheoryBench consists of 372 questions, formatted as multiple-choice questions, each with 4 options, among which only one is correct. There are 269 questions on music knowledge and 98 questions on music reasoning, along with 5 questions held out for enabling few-shot evaluation.

## Dataset Details

Despite the significant advancements in music information retrieval, the definition of advanced music understanding capabilities remains unclear in current research. To measure the advanced understanding abilities of existing LLMs in music, [MAP](https://m-a-p.ai/) first defined two critical elements of music understanding: **music knowledge** and **music reasoning**. The definitions of music knowledge and music reasoning are discussed in the [ChatMusician paper](http://arxiv.org/abs/2402.16153).

### music knowledge subset

In the music knowledge subset, the questions span Eastern and Western musical aspects. It includes 30 topics such as notes, rhythm, beats, chords, counterpoint, orchestration and instrumentation, music-related culture, history, etc. Each major area undergoes targeted examination under the guidance of experts and is divided into various subcategories. For example, in the triads section, the test set specifically examines the definition, types, and related technical details of triads. This test also features different levels of difficulty, corresponding to the high school and college levels of music majors.

### music reasoning subset

Most of the questions in the reasoning subset require both music knowledge and reasoning capabilities. Correctly answering these questions requires detailed analysis of the given information and multi-step logical reasoning, calculating chords, melodies, scales, rhythms, etc.

## Curation Process

To ensure consistency with human testing standards, MusicTheoryBench was crafted by a professional college music teacher according to college-level textbooks and exam papers. The content underwent multiple rounds of discussions and reviews by a team of musicians. The team carefully selected questions and manually compiled them into JSON and ABC notation.
The questions were then labeled into the music knowledge and music reasoning subsets. Since the teacher is from China, half of the questions were originally written in Chinese and later translated into English with the GPT-4 Azure API, then proofread by the team.

### Languages

MusicTheoryBench primarily contains English.

## Limitations

- The MusicTheoryBench results reported in the [ChatMusician paper](http://arxiv.org/abs/2402.16153) are obtained with perplexity mode. Direct generation may result in worse performance. See the [OpenCompass documentation](https://opencompass.readthedocs.io/en/latest/get_started/faq.html#what-are-the-differences-and-connections-between-ppl-and-gen) for more details.

## Citation

If you find our work helpful, feel free to cite us.

```
@misc{yuan2024chatmusician,
  title={ChatMusician: Understanding and Generating Music Intrinsically with LLM},
  author={Ruibin Yuan and Hanfeng Lin and Yi Wang and Zeyue Tian and Shangda Wu and Tianhao Shen and Ge Zhang and Yuhang Wu and Cong Liu and Ziya Zhou and Ziyang Ma and Liumeng Xue and Ziyu Wang and Qin Liu and Tianyu Zheng and Yizhi Li and Yinghao Ma and Yiming Liang and Xiaowei Chi and Ruibo Liu and Zili Wang and Pengfei Li and Jingcheng Wu and Chenghua Lin and Qifeng Liu and Tao Jiang and Wenhao Huang and Wenhu Chen and Emmanouil Benetos and Jie Fu and Gus Xia and Roger Dannenberg and Wei Xue and Shiyin Kang and Yike Guo},
  year={2024},
  eprint={2402.16153},
  archivePrefix={arXiv},
  primaryClass={cs.SD}
}
```

## Dataset Card Contact

Authors of ChatMusician.
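While the official evaluation code is pending, here is a prompt-formatting sketch built from the fields declared above (the exact template used in the paper may differ):

```python
from datasets import load_dataset

dataset = load_dataset("m-a-p/MusicTheoryBench")

def format_question(example):
    # Build one multiple-choice prompt from the instruction, stem, and options.
    lines = [example["instruction"], example["stem"]]
    lines += [f"{key}. {example['options'][key]}" for key in ("A", "B", "C", "D")]
    return "\n".join(lines)

# The 5 dev questions can serve as few-shot demonstrations for the test split.
demo = dataset["dev"][0]
print(format_question(demo))
print("Answer:", demo["answer"])
```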
recruit-jp/japanese-image-classification-evaluation-dataset
---
license: cc-by-4.0
task_categories:
- image-classification
language:
- ja
size_categories:
- 1K<n<10K
---

# recruit-jp/japanese-image-classification-evaluation-dataset

## Overview

* **Developed by**: [Recruit Co., Ltd.](https://huggingface.co/recruit-jp)
* **Dataset type**: Image Classification
* **Language(s)**: Japanese
* **LICENSE**: CC-BY-4.0

More details are described in our tech blog post (in Japanese).

* [日本語CLIP学習済みモデルとその評価用データセットの公開](https://blog.recruit.co.jp/data/articles/japanese-clip/)

## Dataset Details

This dataset comprises four image classification tasks related to concepts and things unique to Japan. Specifically, it consists of the following tasks.

* `jafood101`: Image classification task of 101 types of Japanese dishes and ingredients
* `jaflower30`: Image classification task of 30 types of Japanese flowers
* `jafacility20`: Image classification task of 20 types of Japanese facilities
* `jalandmark10`: Image classification task of 10 types of Japanese landmarks

## Dataset Structure

A data point has five fields as below.

|id|license|license_url|url|category|
|---|---|---|---|---|
|11190751074|Attribution License|https://creativecommons.org/licenses/by/2.0/|https://www.flickr.com/photos/26202414@N08/11190751074/|ガソリンスタンド|
|119354302|Attribution License|https://creativecommons.org/licenses/by/2.0/|https://www.flickr.com/photos/yamauchibukuro/119354302/|ガソリンスタンド|
|12586081383|Attribution-NonCommercial License|https://creativecommons.org/licenses/by-nc/2.0/|https://www.flickr.com/photos/24544963@N02/12586081383/|ガソリンスタンド|
|21721007800|Attribution-NonCommercial License|https://creativecommons.org/licenses/by-nc/2.0/|https://www.flickr.com/photos/coswata/21721007800/|ガソリンスタンド|
|32664671806|Attribution License|https://creativecommons.org/licenses/by/2.0/|https://www.flickr.com/photos/31029865@N06/32664671806/|ガソリンスタンド|

To access the images, you need to retrieve them from the URLs listed in the `url` field. The image labels are in the `category` field.

All the images in this dataset are licensed under CC-BY-2.0, CC-BY-NC-2.0, Public Domain Mark 1.0, or Public Domain Dedication, so you can collect and save them to your local environment to use them for evaluating your image classification model. However, please note that CC-BY-NC-2.0 prohibits commercial use. Also, please note that CC-BY-2.0, CC-BY-NC-2.0, and Public Domain Mark 1.0 prohibit sublicensing, so the collected image data cannot be published.

## Disclaimer

- Recruit Co., Ltd. makes no guarantees or warranties regarding the accuracy, usefulness, reliability, or legality of any results obtained by using this dataset, and accepts no responsibility for damages incurred by users or for disputes between users and third parties arising from use of this dataset.
- To use this dataset, you are required to download the images yourself. There may be cases where you are unable to download certain images due to broken links or other reasons.
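A metadata-handling sketch for the license notes above (the config and split names here are assumptions; adjust them to the repo's data files):

```python
from datasets import load_dataset

# Assumption: each task (e.g. "jafacility20") is exposed as a config with a
# single split; check the repository layout if these names differ.
meta = load_dataset(
    "recruit-jp/japanese-image-classification-evaluation-dataset",
    "jafacility20",
    split="test",
)

# Drop NonCommercial rows if your use case is commercial (see the notes above).
commercial_ok = meta.filter(lambda row: "NonCommercial" not in row["license"])
print(len(meta), "->", len(commercial_ok))

# Note: the `url` field points to the Flickr photo page, not the image file
# itself, so resolve the actual file (e.g. via the Flickr API) before download.
```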
MBZUAI/GranD-f
---
license: apache-2.0
---

[![Dataset](https://img.shields.io/badge/Dataset-Website-<COLOR>)](https://grounding-anything.com/GranD-f)

# 🌐💬 GranD-f - Grounded Conversation Generation (GCG) Dataset

The GranD-f collection comprises four datasets: one high-quality human-annotated set proposed in our GLaMM paper, and three other open-source datasets (Open-PSG, RefCOCO-g, and Flickr-30k) repurposed for the GCG task using OpenAI GPT-4.

## 💻 Download

```
git lfs install
git clone https://huggingface.co/datasets/MBZUAI/GranD-f
```

## 📚 Additional Resources

- **Paper:** [ArXiv](https://arxiv.org/abs/2311.03356).
- **GitHub Repository:** [GitHub - GLaMM](https://github.com/mbzuai-oryx/groundingLMM).
- **Project Page:** For a detailed overview and insights into the project, visit our [Project Page - GLaMM](https://mbzuai-oryx.github.io/groundingLMM/).

## 📜 Citations and Acknowledgments

```bibtex
@article{hanoona2023GLaMM,
  title={GLaMM: Pixel Grounding Large Multimodal Model},
  author={Rasheed, Hanoona and Maaz, Muhammad and Shaji, Sahal and Shaker, Abdelrahman and Khan, Salman and Cholakkal, Hisham and Anwer, Rao M. and Xing, Eric and Yang, Ming-Hsuan and Khan, Fahad S.},
  journal={The IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}
```
diffnamehard/toxic-dpo-v0.1-NoWarning-alpaca
---
license: apache-2.0
---

The Alpaca format of the dataset [toxicsharegpt-NoWarning](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt/blob/main/toxicsharegpt-NoWarning.jsonl)

DISCLAIMER: I'M NOT THE AUTHOR OF THIS DATASET.

ORIGINAL DATASET: [Undi95/toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt?not-for-all-audiences=true)

### Usage restriction

To use this data, you must acknowledge/agree to the following:

- data contained within is "toxic"/"harmful", and contains profanity and other types of sensitive content
- none of the content or views contained in the dataset necessarily align with my personal beliefs or opinions; they are simply text generated by LLMs automatically (llama-2-70b via prompt engineering for chosen and llama-2-13b-chat-hf for rejected)
- you are able to use the dataset lawfully, particularly in locations with less-than-free speech laws
- you, and you alone, are responsible for having downloaded and used the dataset, and I am completely indemnified from any and all liabilities
- this dataset is meant exclusively for academic/research or other non-nefarious use-cases
5CD-AI/Vietnamese-Multi-turn-Chat-Alpaca
--- task_categories: - question-answering language: - vi - en ---
cfahlgren1/DevSpecCode
---
license: mit
---

# DevSpecCode

A synthetic code dataset whose instructions impose multiple complex requirements, limitations, and constraints.

### Example Instruction

```
Please create a small function in Go that meets the following requirements:

1. Write a Go function named `parallelSum` that accepts a slice of integers and returns the sum of those integers. However, the sum must be calculated in parallel using Go routines, by dividing the slice into four roughly equal parts and summing each part in separate Go routines. Use channels to collect the results of each summing routine.

2. Ensure that the `parallelSum` function is safe for concurrent use by multiple goroutines. To achieve this, you must implement a mechanism to prevent race conditions when the separate sums are combined to produce the final sum.

3. The function should be able to handle slices of any size (including those not evenly divisible by four). It must allocate any extra elements correctly among the four summing routines to ensure accurate results. If the number of elements is less than four, the function should still use multiple routines for practice, but it may result in some routines receiving no elements to sum.

Remember, the implementation should not exceed 50 lines of code and should contain all the required concurrency controls and error handling exclusively within the function body.
```

### Languages

- Python (*majority*)
- JavaScript
- Java
- C#
- C++
- Ruby
- Go
- TypeScript
tabtoyou/KoLLaVA-v1.5-Instruct-581k
---
license: cc-by-nc-4.0
task_categories:
- visual-question-answering
language:
- ko
size_categories:
- 100K<n<1M
---

# KoLLaVA-v1.5 Visual Instruct 581K Dataset Card

This dataset was created by filtering the required examples from the instruction-following data of [LLaVA-v1.5](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_v1_5_mix665k.json) and translating them into Korean (using DeepL). For image download and usage instructions, please refer to the [KoLLaVA](https://github.com/tabtoyou/KoLLaVA) repo.
BrainGPT/train_valid_split_pmc_neuroscience_2002-2022_filtered_subset
---
license: apache-2.0
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 5019145554
    num_examples: 440852
  - name: validation
    num_bytes: 47529568
    num_examples: 4446
  download_size: 2502259877
  dataset_size: 5066675122
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---

Abstracts were collected from PubMed and full-text articles from the PubMed Central Open Access Subset (PMC OAS), using the Entrez Programming Utilities (E-utilities) API and the pubget Python package, respectively. The data span publication dates from 2002 to 2022. For general science journals, a keyword filter of "Neuroscience" was applied (all sourced journals are listed below). Data extraction efforts yielded 332,807 abstracts and 123,085 full-text articles, totaling 1.3 billion tokens. Figures and tables are excluded. A randomly allocated 90% of the data was used for training; the remaining 10% was reserved for validation.

Sourced journals: Nature, Cell, Cell Reports, eLife, Science Advances, Nature Communications, PNAS, The EMBO Journal, Nature Neuroscience, Neuron, Brain, NeuroImage, Molecular Psychiatry, Journal of Neuroscience, Nature Reviews Neuroscience, Cerebral Cortex, Annals of Neurology, Human Brain Mapping, Epilepsia, Clinical Neurophysiology, Trends in Cognitive Sciences, Biological Psychiatry, Translational Psychiatry, Neuroscience and Biobehavioral Reviews, Neuropsychopharmacology, Alzheimer's and Dementia, NeuroImage: Clinical, Neurobiology of Aging, Trends in Neurosciences, Nature Reviews Neurology, Brain Stimulation, Frontiers in Neuroscience, Movement Disorders, Nature Human Behaviour, Frontiers in Neurology, Cortex, Journal of Alzheimer's Disease, Neurobiology of Disease, Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, Brain Structure and Function, Pain, Frontiers in Human Neuroscience, eNeuro, Current Opinion in Neurobiology, European Journal of Neuroscience, Frontiers in Aging Neuroscience, Alzheimer's Research and Therapy, Journal of Neurology, Glia, Epilepsy and Behavior, Brain Imaging and Behavior, Journal of Neurophysiology, Sleep, Neuroscience, Neuropsychologia, Journal of Neural Engineering, Molecular Neurobiology, Frontiers in Cellular Neuroscience, Neuropharmacology, Alzheimer's and Dementia: Diagnosis, Assessment and Disease Monitoring, Journal of Neuroinflammation, Epilepsia Open, Acta Neuropathologica Communications, Frontiers in Neuroinformatics, Current Opinion in Behavioral Sciences, Developmental Cognitive Neuroscience, Frontiers in Molecular Neuroscience, Cerebellum, Journal of Cognitive Neuroscience, Network Neuroscience, Annual Review of Neuroscience, Progress in Neurobiology, Epilepsy Research, Molecular Autism, Journal of Comparative Neurology, Social Cognitive and Affective Neuroscience, Brain Topography, Hippocampus, Seizure: the journal of the British Epilepsy Association, Psychophysiology, Frontiers in Behavioral Neuroscience, Journal of Neurotrauma, Journal of Physiology, Frontiers in Neural Circuits, Neurobiology of Learning and Memory, Journal of Neural Transmission, Frontiers in Neuroanatomy, International Journal of Neuropsychopharmacology, Neuroscientist, Brain Sciences, Behavioural Brain Research, Experimental Neurology, Progress in Neuro-Psychopharmacology and Biological Psychiatry, Neurological Sciences, Neurotherapeutics, Neuroscience Letters, Current Opinion in Neurology, Journal of Neuroscience Methods, Journal of Neurochemistry, Neuromodulation, Molecular
Neurodegeneration, Frontiers in Systems Neuroscience, Sleep Medicine Reviews, Brain and Behavior, Brain Research, Neurorehabilitation and Neural Repair, Autism Research.
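A minimal loading sketch using the split names declared above:

```python
from datasets import load_dataset

dataset = load_dataset(
    "BrainGPT/train_valid_split_pmc_neuroscience_2002-2022_filtered_subset"
)
print(dataset)  # train: 440,852 examples; validation: 4,446 examples

print(dataset["validation"][0]["text"][:500])
```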
FarReelAILab/Machine_Mindset_MBTI_dataset
---
license: apache-2.0
---

Here are the ***behavior datasets*** used for supervised fine-tuning (SFT). They can also be used for direct preference optimization (DPO).

The exact copy can also be found on [GitHub](https://github.com/PKU-YuanGroup/Machine-Mindset/edit/main/datasets/behaviour).

Prefix ***'en'*** denotes the datasets of the English version.

Prefix ***'zh'*** denotes the datasets of the Chinese version.

## Dataset introduction

There are four dimensions in MBTI, and there are two opposite attributes within each dimension. To be specific:

+ Energy: Extraversion (E) - Introversion (I)
+ Information: Sensing (S) - Intuition (N)
+ Decision: Thinking (T) - Feeling (F)
+ Execution: Judging (J) - Perceiving (P)

Based on the above, you can infer the content of each json file from its name. The datasets follow the Alpaca format, consisting of instruction, input and output.

## How to use these datasets for behavior supervised fine-tuning (SFT)

For example, if you want to make an LLM behave like an ***ISFJ***, you need to select ***the four corresponding files*** (en_energe_introversion.json, en_information_sensing.json, en_decision_feeling.json, en_execution_judging.json). Then use these four files for SFT.

## How to use these datasets for direct preference optimization (DPO)

For example, if you want to make an LLM ***more feeling (F) than thinking (T)*** via DPO, you need to select ***the two corresponding files*** (en_decision_feeling.json, en_decision_thinking.json). Then compile the two into the correct format for DPO. For the correct format, please refer to [this](https://github.com/PKU-YuanGroup/Machine-Mindset/blob/main/datasets/dpo/README.md).
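A compilation sketch for the DPO case above. It assumes the two opposite-attribute files share the same instructions so their responses can be paired; check the linked DPO README for the exact target format:

```python
import json

# Alpaca-format files: lists of {"instruction", "input", "output"} records.
with open("en_decision_feeling.json", encoding="utf-8") as f:
    feeling = {r["instruction"]: r["output"] for r in json.load(f)}
with open("en_decision_thinking.json", encoding="utf-8") as f:
    thinking = {r["instruction"]: r["output"] for r in json.load(f)}

# Chosen = the attribute you want more of (feeling); rejected = its opposite.
dpo_records = [
    {"prompt": inst, "chosen": feeling[inst], "rejected": thinking[inst]}
    for inst in feeling.keys() & thinking.keys()
]
print(len(dpo_records), "paired examples")
```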
jtatman/stable-diffusion-prompts-uncensored
---
language:
- en
license: mit
size_categories:
- 100K<n<1M
task_categories:
- text-to-image
- image-to-image
pretty_name: NSFW Prompts
dataset_info:
  features:
  - name: model
    dtype: string
  - name: prompt
    dtype: string
  - name: negative_prompt
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 647548943
    num_examples: 851568
  download_size: 0
  dataset_size: 647548943
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
tags:
- uncensored
- nsfw
- art
- not-for-all-audiences
- diffusers
- image generation
---

# Dataset Card for "stable-diffusion-prompts-uncensored"

## Not SAFE for public - Definitely Unfiltered

This dataset comes from prompts shared in images' metadata on Civitai. Not for the faint of heart.

Thanks to Civitai.com for all the models, building a playground, allowing fine-tuning of models, and generally being a good influence on model building and generation.

The purpose of this dataset is to allow for analysis of prompts and feature analysis in prompts and negative prompts. This could be for:

- similarity
- effective prompting
- prompt alignment or misalignment
- statistical research on prompts and categories
- popularity of image generation approaches
- minimalism prompts with certain models
- matching generated prompts to images for LLaVA purposes
- minimizing prompts for better context usage
- social research on interest level and creative approaches
- modeling based on prompts for automating prompt generation strategy
- modeling of categorical interest and similarity
- modeling of evolution of prompts based on model versioning

A separate upload will include metadata statistics such as cry count, laugh count, etc. for semantic analysis based on prompt length and content.
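A small analysis sketch along the lines of the use cases above (prompt-length statistics over a sample):

```python
from datasets import load_dataset

dataset = load_dataset("jtatman/stable-diffusion-prompts-uncensored", split="train")

sample = dataset.select(range(10_000))
lengths = [len(prompt.split()) for prompt in sample["prompt"]]
print(f"average words per prompt (first 10k rows): {sum(lengths) / len(lengths):.1f}")

# Negative prompts can be studied the same way via the `negative_prompt` field.
```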
GAIR/MathPile_Commercial
--- license: cc-by-sa-4.0 extra_gated_prompt: >- By using this data, you agree to comply with the original usage licenses of all sources contributing to MathPile_Commercial. The MathPile_Commercial is governed by the CC BY-SA 4.0 license. Access to this dataset is granted automatically once you accept the license terms and complete all the required fields below. extra_gated_fields: Your Full Name: text Organization or Entity you are affiliated with: text Country or state you are located in: text Your email: text What is your intended use(s) for this dataset: text You AGREE to comply with the original usage licenses of all sources contributing to this dataset and the license of this dataset: checkbox You AGREE to cite our paper if you use this dataset: checkbox You ENSURE that the information you have provided is true and accurate: checkbox language: - en size_categories: - 1B<n<10B --- <br> **🔥Update**: - [2024/01/06] We released the commercial-use version of MathPile, namely `MathPile_Commercial`. <br> # Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> `MathPile_Commercial` is a commercial-use version of [MathPile](https://huggingface.co/datasets/GAIR/MathPile), obtained by culling documents that are prohibited from commercial use in the MathPile (latest version, i.e., `v0.2`). Specifically, we conducted a non-commercial use detection in the source data, utilizing the license information in the metadata for arXiv sources and employing keyword matching for other sources. As a result, we have excluded approximately 8,000 documents from the latest version of MathPile, comprising 7,350 from arXiv, 518 from Creative Commons sources, 68 from textbooks, and 8 from Wikipedia. This version of the dataset contains around 9.2 billion tokens. MathPile is a diverse and high-quality math-centric corpus comprising about 9.5 billion tokens, which is significantly different from the previous work in the following characteristics: <div align="center"> <img src="./imgs/mathpile-key-features.png" width=45%/> </div> - **Math-centric**: MathPile uniquely caters to the math domain, unlike general domain-focused corpora like Pile and RedPajama, or multilingual-focused ones like ROOTS and The Stack. While there are math-centric corpora, they're often either closed-sourced, like Google's Minerva and OpenAI's MathMix, or lack diversity, such as ProofPile and OpenWebMath. - **Diversity**: MathPile draws from a wide range of sources: **Textbooks** (including lecture notes), **arXiv**, **Wikipedia**, **ProofWiki**, **StackExchange**, and **Web Pages**. It encompasses mathematical content suitable for K-12, college, postgraduate levels, and math competitions. **This diversity is a first, especially with our release of a significant collection of high-quality textbooks (~0.19B tokens).** - **High-Quality**: We adhered to the principle of *less is more*, firmly believing in the supremacy of data quality over quantity, even in the pre-training phase. Our meticulous data collection and processing efforts included a complex suite of preprocessing, prefiltering, cleaning, filtering, and deduplication, ensuring the high quality of our corpus. - **Data Documentation**: To enhance transparency, we've extensively documented MathPile. This includes a **dataset sheet** (see Table 5 in our paper) and **quality annotations** for web-sourced documents, like language identification scores and symbol-to-word ratios. This gives users flexibility to tailor the data to their needs. 
We've also performed **data contamination detection** to eliminate duplicates from benchmark test sets like MATH and MMLU-STEM.

<div align="center">
<img src="./imgs/mathpile-overview.png" width=70%/>
</div>

## Dataset Details

Refer to Appendix A in [our paper](https://huggingface.co/papers/2312.17120) for the MathPile Dataset Sheet.

### How to download MathPile?

Currently, we recommend downloading the dataset locally from the command line (e.g., with `huggingface-cli`) rather than via the Python function `load_dataset("GAIR/MathPile")`, due to a possible network issue. Then unpack the gz files and load the jsonl files. Some commands that might be helpful are as follows:

```
$ huggingface-cli download --resume-download --repo-type dataset GAIR/MathPile --local-dir /your/path/ --local-dir-use-symlinks False
$ cd /your/path/
$ find . -type f -name "*.gz" -exec gzip -d {} \;
```

Later, we will also support loading the dataset via `load_dataset("GAIR/MathPile")`. Stay tuned.

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** GAIR Lab, SJTU
- **Funded by [optional]:** GAIR Lab, SJTU
- **Language(s) (NLP):** English
- **License:** CC BY-SA 4.0

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** https://github.com/GAIR-NLP/MathPile
- **Paper [optional]:** https://huggingface.co/papers/2312.17120
- **Demo [optional]:** https://gair-nlp.github.io/MathPile/

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

To develop mathematical language models.

<!-- This section describes suitable use cases for the dataset. -->

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

This dataset may not be suitable for scenarios unrelated to mathematics or reasoning.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

```
{
  "text": ...,
  "SubSet": "CommomCrawl" | "StackExchange" | "Textbooks" | "Wikipedia" | "ProofWiki" | "arXiv",
  "meta": {
    "language_detection_score": ...,
    "idx": ...,
    "contain_at_least_two_stop_words": ...
  }
}
```

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

To create a diverse and high-quality math-centric corpus, thereby enhancing the mathematical reasoning abilities of language models.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

We sourced data from Textbooks, lecture notes, arXiv, Wikipedia, ProofWiki, StackExchange, and Common Crawl. Throughout the MathPile development, we meticulously sourced and gathered data, applying a rigorous and math-specific pipeline. This pipeline encompasses various stages such as preprocessing, prefiltering, language identification, cleaning and filtering, and deduplication, all aimed at maintaining the high quality of the corpus. Please see [our paper](https://arxiv.org/abs/2312.17120) for more details.

### Annotations

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them.
-->

We provided *quality annotations* (such as language identification scores and the ratio of symbols to words) for documents from Web pages (i.e., Common Crawl and Wikipedia). These annotations offer future researchers and developers the flexibility to filter the data according to their criteria, tailoring it to their specific needs.

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

The corpus may potentially contain academic emails and author names, as seen in papers from sources like arXiv. However, we view this as justifiable and within acceptable bounds.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

- The decisions made during the data collection and processing phases might not always be optimal.
- Some documents in MathPile may not always be of the highest quality. We are committed to continually refining and optimizing this corpus.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset.

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

If you find our work useful or use MathPile, please cite our paper:

```
@article{wang2023mathpile,
  title={Generative AI for Math: Part I -- MathPile: A Billion-Token-Scale Pretraining Corpus for Math},
  author={Wang, Zengzhi and Xia, Rui and Liu, Pengfei},
  journal={arXiv preprint arXiv:2312.17120},
  year={2023}
}
```

## Dataset Card Authors

[Zengzhi Wang](https://scholar.google.com/citations?user=qLS4f-8AAAAJ&hl=en)

## Dataset Card Contact

stefanpengfei@gmail.com, zzwang.nlp@gmail.com
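As a reading sketch to complement the download commands above (the file path below is illustrative, not an actual file name in the repo; field names follow the Dataset Structure section):

```python
import json

path = "/your/path/example.jsonl"  # illustrative; use a file produced by the gzip step

with open(path, encoding="utf-8") as f:
    for line in f:
        doc = json.loads(line)
        print(doc["SubSet"], doc["text"][:200])
        break
```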
xzuyn/dalle-3_vs_sd-v1-5_dpo
---
language:
- en
size_categories:
- n<1K
---

750 [DALL·E 3 images](https://huggingface.co/datasets/dataautogpt3/Dalle3) (the first 3 arrow files) paired with a base SD v1.5 generated version as the rejected image. Images are encoded as base64 strings, so they can be stored in a JSONL file.
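A decoding sketch for the base64-encoded images (the record key names below are assumptions; check the actual JSONL keys in the repo):

```python
import base64
import io
import json

from PIL import Image

def decode_image(b64_string: str) -> Image.Image:
    """Decode one base64-encoded image string back into a PIL image."""
    return Image.open(io.BytesIO(base64.b64decode(b64_string)))

with open("data.jsonl", encoding="utf-8") as f:  # illustrative file name
    record = json.loads(next(f))

chosen = decode_image(record["chosen"])      # assumed key: DALL·E 3 image
rejected = decode_image(record["rejected"])  # assumed key: SD v1.5 image
```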
eduagarcia/LegalPT
--- language: - pt size_categories: - 10M<n<100M task_categories: - text-generation tags: - legal dataset_info: - config_name: all features: - name: id dtype: int64 - name: source dtype: string - name: orig_id dtype: int64 - name: text dtype: string splits: - name: train num_bytes: 135151899572 num_examples: 24194918 download_size: 71423192838 dataset_size: 135151899572 - config_name: acordaos_tcu features: - name: id dtype: int64 - name: text dtype: string - name: meta struct: - name: dedup struct: - name: exact_norm struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: exact_hash_idx dtype: int64 - name: is_duplicate dtype: bool - name: minhash struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: is_duplicate dtype: bool - name: minhash_idx dtype: int64 splits: - name: train num_bytes: 3494790013 num_examples: 634711 download_size: 1653039356 dataset_size: 3494790013 - config_name: datastf features: - name: text dtype: string - name: meta struct: - name: dedup struct: - name: exact_norm struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: exact_hash_idx dtype: int64 - name: is_duplicate dtype: bool - name: minhash struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: is_duplicate dtype: bool - name: minhash_idx dtype: int64 - name: id dtype: int64 splits: - name: train num_bytes: 3699382656 num_examples: 737769 download_size: 1724245648 dataset_size: 3699382656 - config_name: iudicium_textum features: - name: text dtype: string - name: meta struct: - name: dedup struct: - name: exact_norm struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: exact_hash_idx dtype: int64 - name: is_duplicate dtype: bool - name: minhash struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: is_duplicate dtype: bool - name: minhash_idx dtype: int64 - name: id dtype: int64 splits: - name: train num_bytes: 896139675 num_examples: 198387 download_size: 408025309 dataset_size: 896139675 - config_name: mlp_pt_BRCAD-5 features: - name: id dtype: int64 - name: text dtype: string - name: meta struct: - name: dedup struct: - name: exact_norm struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: exact_hash_idx dtype: int64 - name: is_duplicate dtype: bool - name: minhash struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: is_duplicate dtype: bool - name: minhash_idx dtype: int64 splits: - name: train num_bytes: 20311710293 num_examples: 3128292 download_size: 9735599974 dataset_size: 20311710293 - config_name: mlp_pt_CJPG features: - name: id dtype: int64 - name: text dtype: string - name: meta struct: - name: dedup struct: - name: exact_norm struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: exact_hash_idx dtype: int64 - name: is_duplicate dtype: bool - name: minhash struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: is_duplicate dtype: bool - name: minhash_idx dtype: int64 splits: - name: train num_bytes: 63201157801 num_examples: 14068634 download_size: 30473107046 dataset_size: 63201157801 - config_name: mlp_pt_eurlex-caselaw features: - name: id dtype: int64 - name: text dtype: string - name: meta struct: - name: dedup struct: - name: exact_norm struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: exact_hash_idx dtype: int64 - name: 
is_duplicate dtype: bool - name: minhash struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: is_duplicate dtype: bool - name: minhash_idx dtype: int64 splits: - name: train num_bytes: 1499601545 num_examples: 104312 download_size: 627235870 dataset_size: 1499601545 - config_name: mlp_pt_eurlex-contracts features: - name: id dtype: int64 - name: text dtype: string - name: meta struct: - name: dedup struct: - name: exact_norm struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: exact_hash_idx dtype: int64 - name: is_duplicate dtype: bool - name: minhash struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: is_duplicate dtype: bool - name: minhash_idx dtype: int64 splits: - name: train num_bytes: 467200973 num_examples: 11581 download_size: 112805426 dataset_size: 467200973 - config_name: mlp_pt_eurlex-legislation features: - name: id dtype: int64 - name: text dtype: string - name: meta struct: - name: dedup struct: - name: exact_norm struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: exact_hash_idx dtype: int64 - name: is_duplicate dtype: bool - name: minhash struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: is_duplicate dtype: bool - name: minhash_idx dtype: int64 splits: - name: train num_bytes: 5669271303 num_examples: 232556 download_size: 1384571339 dataset_size: 5669271303 - config_name: mlp_pt_legal-mc4 features: - name: id dtype: int64 - name: text dtype: string - name: meta struct: - name: dedup struct: - name: exact_norm struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: exact_hash_idx dtype: int64 - name: is_duplicate dtype: bool - name: minhash struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: is_duplicate dtype: bool - name: minhash_idx dtype: int64 splits: - name: train num_bytes: 4483889482 num_examples: 191174 download_size: 2250422592 dataset_size: 4483889482 - config_name: parlamento-pt features: - name: id dtype: int64 - name: text dtype: string - name: meta struct: - name: dedup struct: - name: exact_norm struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: exact_hash_idx dtype: int64 - name: is_duplicate dtype: bool - name: minhash struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: is_duplicate dtype: bool - name: minhash_idx dtype: int64 splits: - name: train num_bytes: 2867291543 num_examples: 2670846 download_size: 1319479156 dataset_size: 2867291543 - config_name: tesemo_v2 features: - name: id dtype: int64 - name: text dtype: string - name: meta struct: - name: dedup struct: - name: exact_norm struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: exact_hash_idx dtype: int64 - name: is_duplicate dtype: bool - name: minhash struct: - name: cluster_main_idx dtype: int64 - name: cluster_size dtype: int64 - name: is_duplicate dtype: bool - name: minhash_idx dtype: int64 splits: - name: train num_bytes: 29158221995 num_examples: 2216656 download_size: 13543440397 dataset_size: 29158221995 configs: - config_name: all data_files: - split: train path: all/train-* - config_name: acordaos_tcu data_files: - split: train path: acordaos_tcu/train-* - config_name: datastf data_files: - split: train path: datastf/train-* - config_name: iudicium_textum data_files: - split: train path: iudicium_textum/train-* - config_name: 
mlp_pt_BRCAD-5 data_files: - split: train path: mlp_pt_BRCAD-5/train-* - config_name: mlp_pt_CJPG data_files: - split: train path: mlp_pt_CJPG/train-* - config_name: mlp_pt_eurlex-caselaw data_files: - split: train path: mlp_pt_eurlex-caselaw/train-* - config_name: mlp_pt_eurlex-contracts data_files: - split: train path: mlp_pt_eurlex-contracts/train-* - config_name: mlp_pt_eurlex-legislation data_files: - split: train path: mlp_pt_eurlex-legislation/train-* - config_name: mlp_pt_legal-mc4 data_files: - split: train path: mlp_pt_legal-mc4/train-* - config_name: parlamento-pt data_files: - split: train path: parlamento-pt/train-* - config_name: tesemo_v2 data_files: - split: train path: tesemo_v2/train-* ---

# LegalPT

LegalPT aggregates the maximum amount of publicly available legal data in Portuguese, drawing from varied sources including legislation, jurisprudence, legal articles, and government documents.

This is the raw version. The deduplicated version is available [here](https://huggingface.co/datasets/eduagarcia/LegalPT_dedup).

## Dataset Details

The dataset is composed of six corpora: [Ulysses-Tesemõ](https://github.com/ulysses-camara/ulysses-tesemo), [MultiLegalPile (PT)](https://arxiv.org/abs/2306.02069v2), [ParlamentoPT](http://arxiv.org/abs/2305.06721), [Iudicium Textum](https://www.inf.ufpr.br/didonet/articles/2019_dsw_Iudicium_Textum_Dataset.pdf), [Acordãos TCU](https://link.springer.com/chapter/10.1007/978-3-030-61377-8_46), and [DataSTF](https://legalhackersnatal.wordpress.com/2019/05/09/mais-dados-juridicos/).

- **MultiLegalPile**: a multilingual corpus of legal texts comprising 689 GiB of data, covering 24 languages in 17 jurisdictions. The corpus is separated by language, and the Portuguese subset contains 92 GiB of data (13.76 billion words). This subset includes the jurisprudence of the Court of Justice of São Paulo (CJPG), appeals from the [5th Regional Federal Court (BRCAD-5)](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0272287), the Portuguese subset of legal documents from the European Union, known as [EUR-Lex](https://eur-lex.europa.eu/homepage.html), and legal documents filtered from [MC4](http://arxiv.org/abs/2010.11934).
- **Ulysses-Tesemõ**: a legal corpus in Brazilian Portuguese, composed of 2.2 million documents, totaling about 26 GiB of text obtained from 96 different data sources. These sources encompass legal and legislative texts, academic papers, news, and related comments. The data was collected through web scraping of government websites.
- **ParlamentoPT**: a corpus for training language models in European Portuguese. The data was collected from the Portuguese government portal and consists of 2.6 million documents of transcriptions of debates in the Portuguese Parliament.
- **Iudicium Textum**: consists of rulings, votes, and reports from the Supreme Federal Court (STF) of Brazil, published between 2010 and 2018. The dataset contains 1 GiB of data extracted from PDFs.
- **Acordãos TCU**: an open dataset from the Tribunal de Contas da União (Brazilian Federal Court of Accounts), containing 600,000 documents obtained by web scraping government websites. The documents span from 1992 to 2019.
- **DataSTF**: a dataset of monocratic decisions from the Superior Court of Justice (STJ) in Brazil, containing 700,000 documents (5 GiB of data).
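### Loading

A minimal loading sketch; each corpus is a config (see the list above), and the `all` config combines every source:

```python
from datasets import load_dataset

cjpg = load_dataset("eduagarcia/LegalPT", "mlp_pt_CJPG", split="train", streaming=True)

for doc in cjpg.take(1):
    print(doc["text"][:300])
```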
### Dataset Description

- **Language(s) (NLP):** Portuguese (pt-BR and pt-PT)
- **Repository:** https://github.com/eduagarcia/roberta-legal-portuguese
- **Paper:** https://aclanthology.org/2024.propor-1.38/

## Citation

```bibtex
@inproceedings{garcia-etal-2024-robertalexpt,
    title = "{R}o{BERT}a{L}ex{PT}: A Legal {R}o{BERT}a Model pretrained with deduplication for {P}ortuguese",
    author = "Garcia, Eduardo A. S. and Silva, Nadia F. F. and Siqueira, Felipe and Albuquerque, Hidelberg O. and Gomes, Juliana R. S. and Souza, Ellen and Lima, Eliomar A.",
    editor = "Gamallo, Pablo and Claro, Daniela and Teixeira, Ant{\'o}nio and Real, Livy and Garcia, Marcos and Oliveira, Hugo Gon{\c{c}}alo and Amaro, Raquel",
    booktitle = "Proceedings of the 16th International Conference on Computational Processing of Portuguese",
    month = mar,
    year = "2024",
    address = "Santiago de Compostela, Galicia/Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.propor-1.38",
    pages = "374--383",
}
```

## Acknowledgment

This work has been supported by the AI Center of Excellence (Centro de Excelência em Inteligência Artificial – CEIA) of the Institute of Informatics at the Federal University of Goiás (INF-UFG).
5CD-AI/Vietnamese-c-s-ale-alpaca-gpt4-data-gg-translated
--- task_categories: - question-answering language: - en - vi size_categories: - 10K<n<100K ---
KBlueLeaf/danbooru2023-sqlite
---
license: mit
task_categories:
- image-classification
- text-to-image
language:
- en
---

# Metadata Database for Danbooru2023

Danbooru 2023 datasets: https://huggingface.co/datasets/nyanko7/danbooru2023

This dataset contains a sqlite db file which has all the tags and posts metadata in it.<br>
The Peewee ORM config file is provided too; please check it for more information (especially on how I link posts and tags together).

The original data is from the official dump of the posts info.<br>
Check this [link](https://console.cloud.google.com/storage/browser/danbooru_public/data) for more info.

## Details

This section contains some details that you need to be aware of if you want to use another ORM system or plain SQL queries to utilize this database.

#### Custom Enum Fields

Some fields in Post/Tags use my custom enum field to store the type/category:

* Post.rating
  * 0: general
  * 1: sensitive
  * 2: questionable
  * 3: explicit
* Tag.type
  * 0: general
  * 1: artist
  * 2: character
  * 3: copyright
  * 4: meta

#### Tag List

I use the peewee ManyToManyField to implement the tag list, which utilizes a through model that holds all the Tag-Post pairs.<br>
Since it is very likely we will want to use tags to query posts, many-to-many is better.<br>
The con of this design is that the database file is 1.5x larger than before (we have 0.25B entries for the post-tag pairs), but queries become 2~3x faster, so I think it is acceptable.

After doing some checking, I can ensure that all the "categorical tag lists" can be produced with a full list + filter, and that is how I do it now. Check db.py for more details.

#### Utils

If you think the above details are too complicated, just use db_utils.py and the other Peewee APIs to utilize this database.
I also provide write_csv.py for exporting the whole dataset into CSV for data analysis.

## License

The source code and database file of this repo are licensed under the MIT License.<br>
**Notice**: The license doesn't cover the "content" of the database.<br>
All the content is from the official Danbooru dumps of post metadata.

## Acknowledgement

Thanks to AngelBottomless for fixing wrong entries and adding more entries to this dataset:<br>
https://huggingface.co/datasets/AngelBottomless/danbooru-2023-sqlite-fixed-7110548

Note: I have changed the definition of TagListField and added some indexes. Do not mix up the .db files from the 2 different repos.
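For raw-SQL users, a small exploration sketch (the db file name is illustrative, and the enum maps mirror the tables above; take the actual table names from the schema listing or from db.py rather than guessing them):

```python
import sqlite3

# Human-readable names for the custom enum fields documented above.
RATINGS = {0: "general", 1: "sensitive", 2: "questionable", 3: "explicit"}
TAG_TYPES = {0: "general", 1: "artist", 2: "character", 3: "copyright", 4: "meta"}

conn = sqlite3.connect("danbooru2023.db")  # illustrative path to the downloaded db file

# List the actual table names first: the post/tag/through-model tables to join
# on should be taken from this output (or the Peewee models in db.py).
for (name,) in conn.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(name)
```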
grimulkan/wikipedia-summaries
---
license: unknown
---

Summaries for random Wikipedia articles of varying lengths, in FastChat JSON format, generated by `gpt-4-1106-preview`. OpenAI terms apply.

This was designed to train a 32K context-length model. Check the total conversation lengths before using data items for training to ensure that they fit inside your target context window, and discard any that don't fit.

The summary requests were randomly selected from the following types:

- Standard detailed summary
- Summary as a bulleted list
- Summary in tabular form (markdown table)
- Summary in ELI5 form ('explain it like I'm 5')

In addition, summary inputs could be a single article, or a series of (shorter) articles presented one by one as independent documents in the same prompt. In the latter case, the output will include the summary of each input document, in order, with sub-headings.

The wording for each summarization request was randomized, and the position was also randomly selected (before the article(s) or after). The Wikipedia articles themselves were converted to text and augmented/modified in various random ways (sub-headings removed, bullets removed, citations/background removed, etc.)
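A length-filtering sketch for the context-window check above (the `conversations`/`value` keys follow the usual FastChat layout but are assumptions here, and the file and tokenizer names are placeholders):

```python
import json

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your/base-model")  # placeholder model name
MAX_TOKENS = 32_768

def fits(sample) -> bool:
    # Concatenate all turns of one conversation and count its tokens.
    text = "".join(turn["value"] for turn in sample["conversations"])
    return len(tokenizer(text).input_ids) <= MAX_TOKENS

with open("summaries.json", encoding="utf-8") as f:  # illustrative file name
    data = json.load(f)

kept = [s for s in data if fits(s)]
print(f"kept {len(kept)} / {len(data)}")
```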
hkust-nlp/deita-complexity-scorer-data
--- license: mit language: - en size_categories: - 1K<n<10K --- <img src="https://huggingface.co/datasets/hkust-nlp/deita-images/resolve/main/logo-final.png" alt="Deita banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Dataset Card for Deita Complexity Scorer Training Data [GitHub](https://github.com/hkust-nlp/deita) | [Paper](https://arxiv.org/abs/2312.15685) Deita is an open-sourced project designed to facilitate **Automatic Data Selection** for instruction tuning in Large Language Models (LLMs). This dataset includes data for training Deita Complexity Scorer. **Model Family**: Other models and the dataset are found in the [Deita Collection](https://huggingface.co/collections/hkust-nlp/deita-6569c198c174808d94cf5bd4) ## Performance | Model | Align | Data Size | MT-Bench | AlpacaEval(%) | OpenLLM (Avg.) | |------------------------------------------------|-----------|------------|----------|---------------|----------------| | **Proprietary Models** | | | | | | | GPT-4-Turbo | ? | -- | 9.32 | 97.70 | -- | | GPT-4 | SFT + PPO | -- | 8.99 | 95.03 | -- | | Claude-2 | SFT + PPO | -- | 8.06 | 91.36 | -- | | GPT-3.5-turbo | SFT + PPO | -- | 7.94 | 89.37 | -- | | **Open-sourced Models based on LLaMA-1-13B** | | | | | | | LIMA | SFT | 1K SFT | 4.29 | 41.98 | 59.82 | | WizardLM-13B | SFT | 70K SFT | 6.35 | 75.31 | 58.96 | | Vicuna-13B-v1.3 | SFT | 125K SFT | 6.39 | 82.11 | 60.01 | | Random | SFT | 10K SFT | 6.03 | 71.52 | 60.14 | | DEITA-LLaMA1-13B-v1.0-sft | SFT | 10K SFT | 6.60 | 78.01 | 64.27 | | **Open-sourced Models based on LLaMA-2-13B** | | | | | | | Tulu-2-13B | SFT | 326K SFT | 6.70 | 78.90 | -- | | Tulu-2-13B+DPO | SFT + DPO | 326K SFT + 60K DPO | 7.00 | 89.50 | -- | | LLaMA2-13B-Chat | SFT + PPO | -- | 6.65 | 81.09 | -- | | WizardLM-13B-v1.2 | SFT | >70K SFT | 7.09 | 89.17 | -- | | Vicuna-13B-v1.5 | SFT | 125K SFT | 6.57 | 78.80 | 61.63 | | Random | SFT | 10K SFT | 5.78 | 65.19 | 61.32 | | DEITA-LLaMA2-13B-v1.0-sft | SFT | 10K SFT | 6.79 | 81.09 | 62.71 | | **Open-sourced Models based on Mistral-7B** | | | | | | | Mistral-7B-Instruct-v0.1 | -- | -- | 6.84 | 69.65 | 60.45 | | Zephyr-7B-sft | SFT | 200K SFT | 5.32 | 75.12 | 60.93 | | $\text{Zephyr-7B-}\beta$ | SFT + DPO | 200K SFT + 60K DPO | 7.34 | 90.60 | 66.36 | | OpenChat-3.5 | C-RLFT | >> 70K C-RLFT | 7.81 | 88.51 | -- | | Starling-7B | C-RLFT + APA | >>70K C-RLFT + 183K APA | 8.09 | 91.99 | -- | | Random | SFT | 10K SFT | 5.89 | 56.90 | 61.72 | | DEITA-7B-v1.0-sft (6K) | SFT | 6K SFT | 7.22 | 80.78 | 64.94 | | DEITA-7B-v1.0-sft (10K) | SFT | 10K SFT | 7.32 | 81.67 | 64.00 | | DEITA-7B-v1.0 | SFT + DPO | 6K SFT + 10K DPO | 7.55 | 90.06 | 69.86 | ## Citation If you find the content of this project helpful, please cite our paper as follows: ``` @misc{liu2023what, title={What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning}, author={Wei Liu and Weihao Zeng and Keqing He and Yong Jiang and Junxian He}, year={2023}, eprint={2312.15685}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
hkust-nlp/deita-quality-scorer-data
--- license: mit language: - en size_categories: - 1K<n<10K --- <img src="https://huggingface.co/datasets/hkust-nlp/deita-images/resolve/main/logo-final.png" alt="Deita banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Dataset Card for Deita Quality Scorer Training Data [GitHub](https://github.com/hkust-nlp/deita) | [Paper](https://arxiv.org/abs/2312.15685) Deita is an open-sourced project designed to facilitate **Automatic Data Selection** for instruction tuning in Large Language Models (LLMs). This dataset includes data for training Deita Quality Scorer. **Model Family**: Other models and the dataset are found in the [Deita Collection](https://huggingface.co/collections/hkust-nlp/deita-6569c198c174808d94cf5bd4) ## Performance | Model | Align | Data Size | MT-Bench | AlpacaEval(%) | OpenLLM (Avg.) | |------------------------------------------------|-----------|------------|----------|---------------|----------------| | **Proprietary Models** | | | | | | | GPT-4-Turbo | ? | -- | 9.32 | 97.70 | -- | | GPT-4 | SFT + PPO | -- | 8.99 | 95.03 | -- | | Claude-2 | SFT + PPO | -- | 8.06 | 91.36 | -- | | GPT-3.5-turbo | SFT + PPO | -- | 7.94 | 89.37 | -- | | **Open-sourced Models based on LLaMA-1-13B** | | | | | | | LIMA | SFT | 1K SFT | 4.29 | 41.98 | 59.82 | | WizardLM-13B | SFT | 70K SFT | 6.35 | 75.31 | 58.96 | | Vicuna-13B-v1.3 | SFT | 125K SFT | 6.39 | 82.11 | 60.01 | | Random | SFT | 10K SFT | 6.03 | 71.52 | 60.14 | | DEITA-LLaMA1-13B-v1.0-sft | SFT | 10K SFT | 6.60 | 78.01 | 64.27 | | **Open-sourced Models based on LLaMA-2-13B** | | | | | | | Tulu-2-13B | SFT | 326K SFT | 6.70 | 78.90 | -- | | Tulu-2-13B+DPO | SFT + DPO | 326K SFT + 60K DPO | 7.00 | 89.50 | -- | | LLaMA2-13B-Chat | SFT + PPO | -- | 6.65 | 81.09 | -- | | WizardLM-13B-v1.2 | SFT | >70K SFT | 7.09 | 89.17 | -- | | Vicuna-13B-v1.5 | SFT | 125K SFT | 6.57 | 78.80 | 61.63 | | Random | SFT | 10K SFT | 5.78 | 65.19 | 61.32 | | DEITA-LLaMA2-13B-v1.0-sft | SFT | 10K SFT | 6.79 | 81.09 | 62.71 | | **Open-sourced Models based on Mistral-7B** | | | | | | | Mistral-7B-Instruct-v0.1 | -- | -- | 6.84 | 69.65 | 60.45 | | Zephyr-7B-sft | SFT | 200K SFT | 5.32 | 75.12 | 60.93 | | $\text{Zephyr-7B-}\beta$ | SFT + DPO | 200K SFT + 60K DPO | 7.34 | 90.60 | 66.36 | | OpenChat-3.5 | C-RLFT | >> 70K C-RLFT | 7.81 | 88.51 | -- | | Starling-7B | C-RLFT + APA | >>70K C-RLFT + 183K APA | 8.09 | 91.99 | -- | | Random | SFT | 10K SFT | 5.89 | 56.90 | 61.72 | | DEITA-7B-v1.0-sft (6K) | SFT | 6K SFT | 7.22 | 80.78 | 64.94 | | DEITA-7B-v1.0-sft (10K) | SFT | 10K SFT | 7.32 | 81.67 | 64.00 | | DEITA-7B-v1.0 | SFT + DPO | 6K SFT + 10K DPO | 7.55 | 90.06 | 69.86 | ## Citation If you find the content of this project helpful, please cite our paper as follows: ``` @misc{liu2023what, title={What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning}, author={Wei Liu and Weihao Zeng and Keqing He and Yong Jiang and Junxian He}, year={2023}, eprint={2312.15685}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
PeacefulData/Robust-HyPoradise
---
license: apache-2.0
language_creators:
- expert-generated
task_categories:
- text-generation
tags:
- generative error correction
- large language model
- LLaMA
pretty_name: Robust HyPoradise
size_categories:
- 100K<n<1M
language:
- en
---

# HypothesesParadise

This repo releases the Robust HyPoradise dataset from the paper "Large Language Models are Efficient Learners of Noise-Robust Speech Recognition."

If this work is related or useful to your research, please kindly cite our ICLR 2024 paper. Thank you.

```bib
@inproceedings{hu2024large,
  title={Large Language Models are Efficient Learners of Noise-Robust Speech Recognition},
  author={Hu, Yuchen and Chen, Chen and Yang, Chao-Han Huck and Li, Ruizhe and Zhang, Chao and Chen, Pin-Yu and Chng, Eng Siong},
  booktitle={International Conference on Learning Representations},
  year={2024}
}
```
fblgit/simple-math
---
dataset_info:
  features:
  - name: output
    dtype: string
  - name: instruction
    dtype: string
  splits:
  - name: arithmetic.float2_train
    num_bytes: 645500.3
    num_examples: 19000
  - name: arithmetic.float2_valid
    num_bytes: 33973.7
    num_examples: 1000
  - name: arithmetic.float3_train
    num_bytes: 1890863.85
    num_examples: 47500
  - name: arithmetic.float3_valid
    num_bytes: 99519.15
    num_examples: 2500
  - name: arithmetic.float34_train
    num_bytes: 9321513.05
    num_examples: 218500
  - name: arithmetic.float34_valid
    num_bytes: 490605.95
    num_examples: 11500
  - name: arithmetic.float4_train
    num_bytes: 21671996.6
    num_examples: 475000
  - name: arithmetic.float4_valid
    num_bytes: 1140631.4
    num_examples: 25000
  download_size: 27928049
  dataset_size: 35294604
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
tags:
- math
- finance
license: cc-by-nc-nd-4.0
task_categories:
- text-generation
- question-answering
pretty_name: Simple Math
size_categories:
- 100K<n<1M
---

# Simple Math: 2+2=4 -1=3 (LoLo: Learning Only Logical Operations)

Just like my teacher gave me homework, I thought maybe we could also add some of these basics to the training of our models.

It was created with very simple code that is in the repo; if you add more complex operations and so on, **please share the code** :D Thank you.

Current Code Version: 20240127.fblgit (A modification over @win10 for progressive and DPO operation)

![LoLo: Learning Only Logical Operations](https://huggingface.co/datasets/fblgit/simple-math/resolve/main/LOLO.png)

## Does it Work?

### 34BEAGLES Evaluation:

```
hf (pretrained=/data/models/UNA-34Beagles-v1-final,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto (8)
|    Tasks     |Version|Filter|n-shot| Metric |Value |   |Stderr|
|--------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge |Yaml   |none  |    25|acc     |0.7039|±  |0.0133|
|              |       |none  |    25|acc_norm|0.7321|±  |0.0129|
|truthfulqa_mc2|Yaml   |none  |     0|acc     |0.7387|±  |0.0141|

hf (pretrained=/data/models/UNA-34Beagles-v1-final,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
|Tasks|Version|  Filter  |n-shot|  Metric   |Value |   |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml   |get-answer|     5|exact_match|0.6399|±  |0.0132|

|      Groups      |Version|Filter|n-shot|Metric|Value |   |Stderr|
|------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu              |N/A    |none  |     0|acc   |0.7477|±  |0.1079|
| - humanities     |N/A    |none  |     0|acc   |0.7188|±  |0.0855|
| - other          |N/A    |none  |     0|acc   |0.7950|±  |0.1057|
| - social_sciences|N/A    |none  |     0|acc   |0.8297|±  |0.0664|
| - stem           |N/A    |none  |     0|acc   |0.6641|±  |0.1291|
```

### 34BEAGLES-MATH Evaluation

```
hf (pretrained=/data/models/34BeaglesMath-v1,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
|Tasks|Version|  Filter  |n-shot|  Metric   |Value |   |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml   |get-answer|     5|exact_match|0.6505|±  |0.0131|

hf (pretrained=/data/models/34BeaglesMath-v1,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto (8)
|    Tasks     |Version|Filter|n-shot| Metric |Value |   |Stderr|
|--------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge |Yaml   |none  |    25|acc     |0.7090|±  |0.0133|
|              |       |none  |    25|acc_norm|0.7329|±  |0.0129|
|truthfulqa_mc2|Yaml   |none  |     0|acc     |0.7378|±  |0.0141|

|      Groups      |Version|Filter|n-shot|Metric|Value |   |Stderr|
|------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu              |N/A    |none  |     0|acc   |0.7524|±  |0.1045|
| - humanities     |N/A    |none  |     0|acc   |0.7307|±  |0.0846|
| - other          |N/A    |none  |     0|acc   |0.7937|±  |0.1029|
| - social_sciences|N/A    |none  |     0|acc   |0.8274|±  |0.0667|
| - stem           |N/A    |none  |     0|acc   |0.6708|±  |0.1236|
```

But it gets better, because when increasing length and complexity, the marks are even superior:

```
|Tasks|Version|  Filter  |n-shot|  Metric   |Value |   |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml   |get-answer|     5|exact_match|0.6611|±  | 0.013|
```

That is a 3.20% GSM8K improvement compared to its base model.

## Note to contributors:

**Thank you to everyone contributing to the experiment with beautiful commits and good spirit!**

* Feel free to contribute evaluation results to the README.
* Let's aim to build an ablation & paper together. All contributors will be cited.

## Versions

```
27.01.24 Added new code to generate the dataset, seed 42 and now also generates DPO.
24.01.24 Added gradual complexity on a separate script
20-23.01.24 Multiple contributions with operations and increased complexity on the main generator script.
```

## Citations

If you use Simple Math to train your model, please cite it in the model card or your paper.

```
@misc{simplemath,
  title={Simple-Math: 2+2=4 4-1=3},
  author={Xavier Murias},
  year={2024},
  publisher = {Juanako.AI},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/datasets/fblgit/simple-math}},
}
```
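The generator script in the repo is the source of truth; as a rough sketch of the idea (my own minimal re-implementation, not the repo's code, using the seed 42 mentioned in the version notes):

```python
import operator
import random

random.seed(42)

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def make_pair(digits: int = 2) -> dict:
    # Two random floats with up to `digits` integer digits, one random operation.
    a = round(random.uniform(1, 10 ** digits), 2)
    b = round(random.uniform(1, 10 ** digits), 2)
    symbol = random.choice(list(OPS))
    return {
        "instruction": f"{a} {symbol} {b}",
        "output": str(round(OPS[symbol](a, b), 2)),
    }

print(make_pair())
```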
Weyaxi/sci-datasets
---
license: other
configs:
- config_name: default
  data_files: merged_all.json
---

# Mainly science focused, but other datasets exist too!

Einstein models are based on this repo.
Himitsui/Lewd-Assistant-v1
---
license: cc-by-nc-4.0
language:
- en
tags:
- not-for-all-audiences
---

Hi! This is an experimental dataset concept.

I've been working with my good buddy [@Sao](https://huggingface.co/Sao10K) on an NSFW General Instruct Dataset. We've been working at it together for a while.

- This is v1. It is *not good*, as we decided to use samples of the No Robots dataset for the instructions, using a small model to run the dataset preparation.
- He recently figured out how to curate datasets much faster with Aphrodite-engine, so we are releasing this prototype, incomplete attempt while we work on v2 and so on :)
- We are doing a curated mix of human / model instructions and double-checking responses to keep quality high, so the results will not look as scuffed as this version.

⸜(。˃ ᵕ ˂ )⸝♡₊˚⊹♡・:*ੈ

The dataset format is Alpaca: Instruction + Output

```
"instruction": "My friend is starting a business where she makes flower bouquets and centerpieces for weddings. She is having a hard time coming up with a name. Can you give me some suggestions? She would like a name with some alliteration.",
"output": "\n Sure thing! How about these lewd, luscious suggestions:\n\n1. Blossoming Boudoir Bridal Boutique\n2. Petal-Perfect Pairings Palace\n3. Flowery Fantasies for Fiancés\n4. Blooming Bridal Bedlam\n5. Erotic Entwined Engagement Enclave\n6. Vivacious Vine Vows Venue\n7. Seductive Stem Studio\n8. Sensual Sprigs for Sinful Suitors\n9. Tempting Tendrils Temptress Troupe\n\nI'm sure one of these names will have her business blooming in no time!"
```

*ps, the newline at the beginning of the output was an unchecked mistake, oops!*
winder-hybrids/MedicalTextbook_QA
---
dataset_info:
- config_name: Anatomy_Gray
  features:
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: answer
    dtype: int64
  - name: original_text
    dtype: string
  splits:
  - name: test
    num_bytes: 1898508
    num_examples: 500
  download_size: 152583
  dataset_size: 1898508
- config_name: Biochemistry_Lippincott
  features:
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: answer
    dtype: int64
  - name: original_text
    dtype: string
  splits:
  - name: test
    num_bytes: 1657461
    num_examples: 500
  download_size: 161466
  dataset_size: 1657461
- config_name: Cell_Biology_Alberts
  features:
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: answer
    dtype: int64
  - name: original_text
    dtype: string
  splits:
  - name: test
    num_bytes: 1911167
    num_examples: 500
  download_size: 178902
  dataset_size: 1911167
- config_name: Gynecology_Novak
  features:
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: answer
    dtype: int64
  - name: original_text
    dtype: string
  splits:
  - name: test
    num_bytes: 1716835
    num_examples: 500
  download_size: 166726
  dataset_size: 1716835
- config_name: Histology_Ross
  features:
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: answer
    dtype: int64
  - name: original_text
    dtype: string
  splits:
  - name: test
    num_bytes: 1805108
    num_examples: 500
  download_size: 161573
  dataset_size: 1805108
- config_name: Immunology_Janeway
  features:
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: answer
    dtype: int64
  - name: original_text
    dtype: string
  splits:
  - name: test
    num_bytes: 1662680
    num_examples: 500
  download_size: 163548
  dataset_size: 1662680
- config_name: Neurology_Adams
  features:
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: answer
    dtype: int64
  - name: original_text
    dtype: string
  splits:
  - name: test
    num_bytes: 1891656
    num_examples: 500
  download_size: 188245
  dataset_size: 1891656
- config_name: Obstentrics_Williams
  features:
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: answer
    dtype: int64
  - name: original_text
    dtype: string
  splits:
  - name: test
    num_bytes: 1597198
    num_examples: 500
  download_size: 169259
  dataset_size: 1597198
- config_name: Pathology_Robbins
  features:
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: answer
    dtype: int64
  - name: original_text
    dtype: string
  splits:
  - name: test
    num_bytes: 1749146
    num_examples: 500
  download_size: 175037
  dataset_size: 1749146
- config_name: Pediatrics_Nelson
  features:
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: answer
    dtype: int64
  - name: original_text
    dtype: string
  splits:
  - name: test
    num_bytes: 1885412
    num_examples: 500
  download_size: 180188
  dataset_size: 1885412
- config_name: Pharmacology_Katzung
  features:
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: answer
    dtype: int64
  - name: original_text
    dtype: string
  splits:
  - name: test
    num_bytes: 1748810
    num_examples: 500
  download_size: 172568
  dataset_size: 1748810
- config_name: Physiology_Levy
  features:
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: answer
    dtype: int64
  - name: original_text
    dtype: string
  splits:
  - name: test
    num_bytes: 1756829
    num_examples: 500
  download_size: 167776
  dataset_size: 1756829
- config_name: Psichiatry_DSM-5
  features:
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: answer
    dtype: int64
  - name: original_text
    dtype: string
  splits:
  - name: test
    num_bytes: 1976522
    num_examples: 500
  download_size: 171016
  dataset_size: 1976522
configs:
- config_name: Anatomy_Gray
  data_files:
  - split: test
    path: Anatomy_Gray/test-*
- config_name: Biochemistry_Lippincott
  data_files:
  - split: test
    path: Biochemistry_Lippincott/test-*
- config_name: Cell_Biology_Alberts
  data_files:
  - split: test
    path: Cell_Biology_Alberts/test-*
- config_name: Gynecology_Novak
  data_files:
  - split: test
    path: Gynecology_Novak/test-*
- config_name: Histology_Ross
  data_files:
  - split: test
    path: Histology_Ross/test-*
- config_name: Immunology_Janeway
  data_files:
  - split: test
    path: Immunology_Janeway/test-*
- config_name: Neurology_Adams
  data_files:
  - split: test
    path: Neurology_Adams/test-*
- config_name: Obstentrics_Williams
  data_files:
  - split: test
    path: Obstentrics_Williams/test-*
- config_name: Pathology_Robbins
  data_files:
  - split: test
    path: Pathology_Robbins/test-*
- config_name: Pediatrics_Nelson
  data_files:
  - split: test
    path: Pediatrics_Nelson/test-*
- config_name: Pharmacology_Katzung
  data_files:
  - split: test
    path: Pharmacology_Katzung/test-*
- config_name: Physiology_Levy
  data_files:
  - split: test
    path: Physiology_Levy/test-*
- config_name: Psichiatry_DSM-5
  data_files:
  - split: test
    path: Psichiatry_DSM-5/test-*
---

# Medical textbook question answering

This corpus contains multiple-choice quiz questions for 13 commonly used medical textbooks. The questions are designed to examine understanding of the main concepts in the textbooks. The QA data are used to evaluate knowledge learning in language models in the following paper:

- **Paper:** [Conditional language learning with context](link pending)

### Data Splits

- subjects: anatomy, biochemistry, cell biology, gynecology, histology, immunology, neurology, obstetrics, pathology, pediatrics, pharmacology, physiology, psychiatry
- 500 questions for each subject

## Dataset Creation

Questions and answers were generated by GPT-4 from excerpts of the textbooks. Refer to the paper for the instructions used to generate the questions.

### Citation Information

```
pending
```
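Each textbook is a separate config with a single `test` split, so an evaluation loop can iterate over configs directly. A minimal loading sketch (config, split, and field names come from the metadata above; that `answer` indexes into `choices` is our assumption):

```python
from datasets import load_dataset

# Config names as they appear in the metadata above (three of the thirteen).
subjects = ["Anatomy_Gray", "Biochemistry_Lippincott", "Cell_Biology_Alberts"]

for subject in subjects:
    ds = load_dataset("winder-hybrids/MedicalTextbook_QA", subject, split="test")
    example = ds[0]
    print(subject, "-", example["question"])
    # `answer` is an int64; we assume it indexes the correct entry in `choices`.
    print("Gold choice:", example["choices"][example["answer"]])
```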
binjang/NIKL-korean-english-dictionary
---
license: mit
task_categories:
- translation
- token-classification
language:
- ko
- en
size_categories:
- 10K<n<100K
---

| Column Name | Type | Description | 설명 |
|--------------------|-----------------------|--------------------------------------|----------------------------|
| Form | `str` | Registered word entry | 단어 |
| Part of Speech | `str` or `None` | Part of speech of the word in Korean | 품사 |
| Korean Definition | `List[str]` | Definition of the word in Korean | 해당 단어의 한글 정의 |
| English Definition | `List[str]` or `None` | Definition of the word in English | 한글 정의의 영문 번역본 |
| Usages | `List[str]` or `None` | Sample sentence or dialogue | 해당 단어의 예문 (문장 또는 대화 형식) |
| Vocabulary Level | `str` or `None` | Difficulty of the word (3 levels) | 단어의 난이도 ('초급', '중급', '고급') |
| Semantic Category | `str` or `None` | Semantic category of the word | 단어 분류 (ex. '자연 > 기상 및 기후') |

For more information, visit:

- https://github.com/binjang/NIKL-dictionary-parser
- https://krdict.korean.go.kr/kor/mainAction
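A quick way to inspect entries with the `datasets` library (a minimal sketch; the split name and exact column spellings are assumptions based on the table above, so check `ds.column_names` first):

```python
from datasets import load_dataset

# NOTE: split and column names are assumptions; verify with ds.column_names.
ds = load_dataset("binjang/NIKL-korean-english-dictionary", split="train")
print(ds.column_names)

entry = ds[0]
print(entry["Form"])                # the registered word entry
print(entry["Korean Definition"])   # list of Korean definitions
print(entry["English Definition"])  # English translations, or None
```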
TIGER-Lab/SKGInstruct-skg-only
---
license: cc-by-nc-2.0
task_categories:
- text-generation
language:
- en
pretty_name: SKGInstruct
size_categories:
- 100K<n<1M
tags:
- code
- SKG
---

# 🏗️ StructLM: Towards Building Generalist Models for Structured Knowledge Grounding

SKGInstruct-skg-only is an instruction-tuning dataset constructed from 19 structured knowledge grounding datasets.

Project Page: [https://tiger-ai-lab.github.io/StructLM/](https://tiger-ai-lab.github.io/StructLM/)

Paper: [https://arxiv.org/pdf/2402.16671.pdf](https://arxiv.org/pdf/2402.16671.pdf)

Code: [https://github.com/TIGER-AI-Lab/StructLM](https://github.com/TIGER-AI-Lab/StructLM)

Models:

- 7B | [StructLM-7B](https://huggingface.co/TIGER-Lab/StructLM-7B)
- 13B | [StructLM-13B](https://huggingface.co/TIGER-Lab/StructLM-13B)
- 34B | [StructLM-34B](https://huggingface.co/TIGER-Lab/StructLM-34B)

## **License**

| Dataset Name | License Type |
|--------------|----------------|
| TabMWP | [Attribution-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-sa/4.0/) |
| everything else | [Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) |

## **Citation**

Please cite our paper if you use our data, model or code. Please also kindly cite the original dataset papers.

```
@misc{zhuang2024structlm,
  title={StructLM: Towards Building Generalist Models for Structured Knowledge Grounding},
  author={Alex Zhuang and Ge Zhang and Tianyu Zheng and Xinrun Du and Junjie Wang and Weiming Ren and Stephen W. Huang and Jie Fu and Xiang Yue and Wenhu Chen},
  year={2024},
  eprint={2402.16671},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
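To take a first look at the instruction examples (a minimal sketch; the split and column names are not listed in the metadata above, so they are assumptions to verify on first load):

```python
from datasets import load_dataset

# NOTE: the split name is an assumption; if it fails, list the available splits.
ds = load_dataset("TIGER-Lab/SKGInstruct-skg-only", split="train")
print(ds.column_names)  # inspect the schema before relying on field names
print(ds[0])
```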
Query-of-CC/knowledge_pile_full
---
license: apache-2.0
language:
- en
tags:
- knowledge
- cc
- Retrieval
- Reasoning
---

Knowledge Pile is a knowledge-related dataset built with [Query of CC](https://arxiv.org/abs/2401.14624), totaling 735GB on disk and 188B tokens (using the Llama 2 tokenizer).

## *Query of CC*

As the figure below shows, we initially collected seed information in some specific domains, such as keywords, frequently asked questions, and textbooks, to serve as inputs for the Query Bootstrapping stage. Leveraging the strong generalization capability of large language models, we can effortlessly expand the initial seed information into a large set of domain-relevant queries. Drawing inspiration from Self-Instruct and WizardLM, we employ two stages of expansion, namely **Question Extension** and **Thought Generation**, which extend the queries in breadth and depth respectively, retrieving domain-related data with a broader scope and deeper reasoning. Subsequently, based on these queries, we retrieved relevant documents from public corpora, and after performing operations such as deduplication and filtering, we formed the final training dataset.

![The overview of Query of CC's two major components: Query Bootstrapping and Data Retrieval.](https://github.com/ngc7292/query_of_cc/blob/master/images/main_stage.png?raw=true)

## **Knowledge Pile** Statistics

Based on *Query of CC*, we have formed the high-quality knowledge dataset **Knowledge Pile**. Compared with other datasets in the academic and mathematical reasoning domains, we acquired a large-scale, high-quality knowledge dataset at a lower cost, without the need for manual intervention. Through automated query bootstrapping, we efficiently capture the information around the seed queries. **Knowledge Pile** not only covers mathematical reasoning data but also encompasses rich knowledge-oriented corpora spanning various fields such as biology and physics, enhancing its potential for comprehensive research and application.

<img src="https://github.com/ngc7292/query_of_cc/blob/master/images/query_of_cc_timestamp_prop.png?raw=true" width="300px" style="center"/>

The table below presents the top 10 web domains with the highest proportion in **Knowledge Pile**, primarily academic websites, high-quality forums, and some knowledge-domain sites. The figure above breaks down the timestamps of **Knowledge Pile**'s data sources, with statistics conducted on an annual basis. A significant portion of **Knowledge Pile** is sourced from recent years, with a decreasing proportion for earlier timestamps. This trend can be attributed to the exponential growth of internet data and the timeliness inherent to **Knowledge Pile**.

| **Web Domain** | **Count** |
|----------------------------|----------------|
| en.wikipedia.org | 398833 |
| www.semanticscholar.org | 141268 |
| slideplayer.com | 108177 |
| www.ncbi.nlm.nih.gov | 97009 |
| link.springer.com | 85357 |
| www.ipl.org | 84084 |
| pubmed.ncbi.nlm.nih.gov | 68934 |
| www.reference.com | 61658 |
| www.bartleby.com | 60097 |
| quizlet.com | 56752 |

### Citation

```
@article{fei2024query,
  title={Query of CC: Unearthing Large Scale Domain-Specific Knowledge from Public Corpora},
  author={Fei, Zhaoye and Shao, Yunfan and Li, Linyang and Zeng, Zhiyuan and Yan, Hang and Qiu, Xipeng and Lin, Dahua},
  journal={arXiv preprint arXiv:2401.14624},
  year={2024}
}
```
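To make the pipeline concrete, here is a rough sketch of Query Bootstrapping in Python (illustrative only, not the authors' code; `llm`, `search`, and `dedup` are hypothetical placeholders for a language model call, a corpus retriever, and a filtering step):

```python
from typing import Callable, List

def question_extension(seed: str, llm: Callable[[str], str], n: int = 5) -> List[str]:
    """Breadth: expand a seed (keyword, FAQ, textbook topic) into related questions."""
    prompt = f"List {n} diverse questions someone might ask about: {seed}"
    return [q for q in llm(prompt).splitlines() if q.strip()]

def thought_generation(question: str, llm: Callable[[str], str]) -> str:
    """Depth: generate step-by-step thoughts about a question, yielding a richer query."""
    return llm(f"Think step by step about how to answer: {question}")

def bootstrap_queries(seeds: List[str], llm: Callable[[str], str]) -> List[str]:
    queries = []
    for seed in seeds:
        for question in question_extension(seed, llm):
            queries.append(question)                           # breadth-extended
            queries.append(thought_generation(question, llm))  # depth-extended
    return queries

def build_corpus(seeds, llm, search, dedup):
    """Data Retrieval: fetch documents for every query, then deduplicate and filter."""
    queries = bootstrap_queries(seeds, llm)
    documents = [doc for query in queries for doc in search(query)]
    return dedup(documents)
```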
booydar/babilong
---
configs:
- config_name: qa1
  data_files:
  - split: 4k
    path: qa1/4k-*
  - split: 32k
    path: qa1/32k-*
  - split: 128k
    path: qa1/128k-*
  - split: 256k
    path: qa1/256k-*
  - split: 512k
    path: qa1/512k-*
  - split: 1M
    path: qa1/1M-*
- config_name: qa10
  data_files:
  - split: 4k
    path: qa10/4k-*
  - split: 32k
    path: qa10/32k-*
  - split: 128k
    path: qa10/128k-*
  - split: 256k
    path: qa10/256k-*
  - split: 512k
    path: qa10/512k-*
  - split: 1M
    path: qa10/1M-*
- config_name: qa2
  data_files:
  - split: 4k
    path: qa2/4k-*
  - split: 32k
    path: qa2/32k-*
  - split: 128k
    path: qa2/128k-*
  - split: 256k
    path: qa2/256k-*
  - split: 512k
    path: qa2/512k-*
  - split: 1M
    path: qa2/1M-*
- config_name: qa3
  data_files:
  - split: 4k
    path: qa3/4k-*
  - split: 32k
    path: qa3/32k-*
  - split: 128k
    path: qa3/128k-*
  - split: 256k
    path: qa3/256k-*
  - split: 512k
    path: qa3/512k-*
  - split: 1M
    path: qa3/1M-*
- config_name: qa4
  data_files:
  - split: 4k
    path: qa4/4k-*
  - split: 32k
    path: qa4/32k-*
  - split: 128k
    path: qa4/128k-*
  - split: 256k
    path: qa4/256k-*
  - split: 512k
    path: qa4/512k-*
  - split: 1M
    path: qa4/1M-*
- config_name: qa5
  data_files:
  - split: 4k
    path: qa5/4k-*
  - split: 32k
    path: qa5/32k-*
  - split: 128k
    path: qa5/128k-*
  - split: 256k
    path: qa5/256k-*
  - split: 512k
    path: qa5/512k-*
  - split: 1M
    path: qa5/1M-*
- config_name: qa6
  data_files:
  - split: 4k
    path: qa6/4k-*
  - split: 32k
    path: qa6/32k-*
  - split: 128k
    path: qa6/128k-*
  - split: 256k
    path: qa6/256k-*
  - split: 512k
    path: qa6/512k-*
  - split: 1M
    path: qa6/1M-*
- config_name: qa7
  data_files:
  - split: 4k
    path: qa7/4k-*
  - split: 32k
    path: qa7/32k-*
  - split: 128k
    path: qa7/128k-*
  - split: 256k
    path: qa7/256k-*
  - split: 512k
    path: qa7/512k-*
  - split: 1M
    path: qa7/1M-*
- config_name: qa8
  data_files:
  - split: 4k
    path: qa8/4k-*
  - split: 32k
    path: qa8/32k-*
  - split: 128k
    path: qa8/128k-*
  - split: 256k
    path: qa8/256k-*
  - split: 512k
    path: qa8/512k-*
  - split: 1M
    path: qa8/1M-*
- config_name: qa9
  data_files:
  - split: 4k
    path: qa9/4k-*
  - split: 32k
    path: qa9/32k-*
  - split: 128k
    path: qa9/128k-*
  - split: 256k
    path: qa9/256k-*
  - split: 512k
    path: qa9/512k-*
  - split: 1M
    path: qa9/1M-*
dataset_info:
- config_name: qa1
  features:
  - name: question
    dtype: string
  - name: input
    dtype: string
  - name: target
    dtype: string
  splits:
  - name: 4k
    num_bytes: 1472626
    num_examples: 100
  - name: 32k
    num_bytes: 12473127
    num_examples: 100
  - name: 128k
    num_bytes: 50504415
    num_examples: 100
  - name: 256k
    num_bytes: 99258457
    num_examples: 100
  - name: 512k
    num_bytes: 198020073
    num_examples: 100
  - name: 1M
    num_bytes: 386962416
    num_examples: 100
  download_size: 440322259
  dataset_size: 748691114
- config_name: qa10
  features:
  - name: question
    dtype: string
  - name: input
    dtype: string
  - name: target
    dtype: string
  splits:
  - name: 4k
    num_bytes: 1472062
    num_examples: 100
  - name: 32k
    num_bytes: 12472140
    num_examples: 100
  - name: 128k
    num_bytes: 50506207
    num_examples: 100
  - name: 256k
    num_bytes: 99257468
    num_examples: 100
  - name: 512k
    num_bytes: 198020290
    num_examples: 100
  - name: 1M
    num_bytes: 386962380
    num_examples: 100
  download_size: 440355027
  dataset_size: 748690547
- config_name: qa2
  features:
  - name: question
    dtype: string
  - name: input
    dtype: string
  - name: target
    dtype: string
  splits:
  - name: 4k
    num_bytes: 1478639
    num_examples: 100
  - name: 32k
    num_bytes: 12452418
    num_examples: 100
  - name: 128k
    num_bytes: 50515008
    num_examples: 100
  - name: 256k
    num_bytes: 99272135
    num_examples: 100
  - name: 512k
    num_bytes: 198032173
    num_examples: 100
  - name: 1M
    num_bytes: 386975422
    num_examples: 100
  download_size: 440284466
  dataset_size: 748725795
- config_name: qa3
  features:
  - name: question
    dtype: string
  - name: input
    dtype: string
  - name: target
    dtype: string
  splits:
  - name: 4k
    num_bytes: 1493294
    num_examples: 100
  - name: 32k
    num_bytes: 12523530
    num_examples: 100
  - name: 128k
    num_bytes: 50554168
    num_examples: 100
  - name: 256k
    num_bytes: 99334687
    num_examples: 100
  - name: 512k
    num_bytes: 198073368
    num_examples: 100
  - name: 1M
    num_bytes: 387042294
    num_examples: 100
  download_size: 440583882
  dataset_size: 749021341
- config_name: qa4
  features:
  - name: question
    dtype: string
  - name: input
    dtype: string
  - name: target
    dtype: string
  splits:
  - name: 4k
    num_bytes: 1471946
    num_examples: 100
  - name: 32k
    num_bytes: 12484947
    num_examples: 100
  - name: 128k
    num_bytes: 50503566
    num_examples: 100
  - name: 256k
    num_bytes: 99255085
    num_examples: 100
  - name: 512k
    num_bytes: 198016746
    num_examples: 100
  - name: 1M
    num_bytes: 386958149
    num_examples: 100
  download_size: 440381047
  dataset_size: 748690439
- config_name: qa5
  features:
  - name: question
    dtype: string
  - name: input
    dtype: string
  - name: target
    dtype: string
  splits:
  - name: 4k
    num_bytes: 1478461
    num_examples: 100
  - name: 32k
    num_bytes: 12463791
    num_examples: 100
  - name: 128k
    num_bytes: 50517131
    num_examples: 100
  - name: 256k
    num_bytes: 99269843
    num_examples: 100
  - name: 512k
    num_bytes: 198038696
    num_examples: 100
  - name: 1M
    num_bytes: 387001125
    num_examples: 100
  download_size: 440661841
  dataset_size: 748769047
- config_name: qa6
  features:
  - name: question
    dtype: string
  - name: input
    dtype: string
  - name: target
    dtype: string
  splits:
  - name: 4k
    num_bytes: 1473892
    num_examples: 100
  - name: 32k
    num_bytes: 12473495
    num_examples: 100
  - name: 128k
    num_bytes: 50504836
    num_examples: 100
  - name: 256k
    num_bytes: 99258872
    num_examples: 100
  - name: 512k
    num_bytes: 198020386
    num_examples: 100
  - name: 1M
    num_bytes: 386962983
    num_examples: 100
  download_size: 440335019
  dataset_size: 748694464
- config_name: qa7
  features:
  - name: question
    dtype: string
  - name: input
    dtype: string
  - name: target
    dtype: string
  splits:
  - name: 4k
    num_bytes: 1475284
    num_examples: 100
  - name: 32k
    num_bytes: 12475060
    num_examples: 100
  - name: 128k
    num_bytes: 50510112
    num_examples: 100
  - name: 256k
    num_bytes: 99261198
    num_examples: 100
  - name: 512k
    num_bytes: 198023770
    num_examples: 100
  - name: 1M
    num_bytes: 386965624
    num_examples: 100
  download_size: 440351170
  dataset_size: 748711048
- config_name: qa8
  features:
  - name: question
    dtype: string
  - name: input
    dtype: string
  - name: target
    dtype: string
  splits:
  - name: 4k
    num_bytes: 1475311
    num_examples: 100
  - name: 32k
    num_bytes: 12464499
    num_examples: 100
  - name: 128k
    num_bytes: 50506943
    num_examples: 100
  - name: 256k
    num_bytes: 99260981
    num_examples: 100
  - name: 512k
    num_bytes: 198023921
    num_examples: 100
  - name: 1M
    num_bytes: 386965883
    num_examples: 100
  download_size: 440314954
  dataset_size: 748697538
- config_name: qa9
  features:
  - name: question
    dtype: string
  - name: input
    dtype: string
  - name: target
    dtype: string
  splits:
  - name: 4k
    num_bytes: 1471528
    num_examples: 100
  - name: 32k
    num_bytes: 12472641
    num_examples: 100
  - name: 128k
    num_bytes: 50503824
    num_examples: 100
  - name: 256k
    num_bytes: 99257992
    num_examples: 100
  - name: 512k
    num_bytes: 198019692
    num_examples: 100
  - name: 1M
    num_bytes: 386962128
    num_examples: 100
  download_size: 440326888
  dataset_size: 748687805
---

# BABILong (100 samples): a long-context needle-in-a-haystack benchmark for LLMs

Preprint is on [arXiv](https://arxiv.org/abs/2402.10790)

## bAbI + Books = BABILong

**BABILong** is a novel generative benchmark for evaluating the performance of NLP models in processing arbitrarily long documents with distributed facts. It contains 10 configs, each corresponding to one bAbI task. Each config has splits corresponding to different sequence lengths in tokens: '4k', '32k', '128k', '256k', '512k', and '1M'.

Solving tasks with a long context requires the model to distinguish important information from large amounts of irrelevant detail. To simulate this behavior, we "hide" the sentences of the original task between sentences of irrelevant text. We use the [bAbI](https://huggingface.co/datasets/facebook/babi_qa) dataset [1] as facts and [PG19](https://huggingface.co/datasets/pg19) as background text. The resulting test samples can have lengths of **millions of tokens**.

BABILong consists of 10 tasks designed to evaluate basic aspects of reasoning. The bAbI tasks are generated by simulating a set of characters and objects engaged in various movements and interactions with each other in multiple locations. Each interaction is represented by a fact, e.g. **"Mary travelled to the office"**, and the task is to answer a question using the facts from the current simulation, for instance, **"Where is Mary?"**. The bAbI tasks vary in the number of facts, question complexity, and the aspects of reasoning involved.

### First ten tasks of BABILong

| Task | Name                   | facts per task | supporting facts per task |
|------|------------------------|----------------|---------------------------|
| qa1  | single supporting fact | 2 - 10         | 1                         |
| qa2  | two supporting facts   | 2 - 68         | 2                         |
| qa3  | three supporting facts | 4 - 32         | 3                         |
| qa4  | two arg relations      | 2              | 1                         |
| qa5  | three arg relations    | 2 - 126        | 1                         |
| qa6  | yes-no questions       | 2 - 26         | 1                         |
| qa7  | counting               | 2 - 52         | 1-10                      |
| qa8  | lists-sets             | 2 - 50         | 1-8                       |
| qa9  | simple negation        | 2 - 10         | 1                         |
| qa10 | indefinite knowledge   | 2 - 10         | 1                         |

Join us in this exciting endeavor and let's push the boundaries of what's possible together!

## Citation

```
@misc{kuratov2024search,
  title={In Search of Needles in a 10M Haystack: Recurrent Memory Finds What LLMs Miss},
  author={Yuri Kuratov and Aydar Bulatov and Petr Anokhin and Dmitry Sorokin and Artyom Sorokin and Mikhail Burtsev},
  year={2024},
  eprint={2402.10790},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

## References

[1] Weston, Jason, et al. "Towards AI-complete question answering: A set of prerequisite toy tasks." arXiv preprint [arXiv:1502.05698](https://arxiv.org/abs/1502.05698) (2015).
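Config and split names match the metadata above, so a task/length pair loads directly (a minimal sketch; how you format the prompt for your model is up to you):

```python
from datasets import load_dataset

# Task qa1 ("single supporting fact") at the 128k-token context length.
ds = load_dataset("booydar/babilong", "qa1", split="128k")

sample = ds[0]
context = sample["input"]      # long haystack with the bAbI facts hidden inside
question = sample["question"]  # e.g. "Where is Mary?"
answer = sample["target"]      # gold answer for evaluation

prompt = f"{context}\n\nQuestion: {question}\nAnswer:"
print(f"{len(context)} characters of context; gold answer: {answer}")
```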