datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card
---|---|---|---|---|---|---|---|---|
violetxi/hendrycks-math-mc-llama-sft-intermediate-part1-of-1 | violetxi | "2024-12-03T04:15:06Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T04:14:44Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: target_answer
dtype: string
- name: target_solution
dtype: string
- name: generated
dtype: string
- name: answer
dtype: string
- name: is_target
dtype: bool
splits:
- name: test
num_bytes: 30216206
num_examples: 28954
download_size: 4707820
dataset_size: 30216206
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
udamaurizio/parler_tts_mini_V01_TestVoice_Italian_V1 | udamaurizio | "2024-12-03T04:33:42Z" | 0 | 0 | [
"language:it",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-12-03T04:29:32Z" | ---
language:
- it
--- |
juliadollis/stf_regex_ner_2_fuzzycosseno_80 | juliadollis | "2024-12-03T04:30:55Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T04:30:43Z" | ---
dataset_info:
features:
- name: inteiro_teor
dtype: string
- name: url_download
dtype: string
- name: dataDecisao
dtype: timestamp[ns]
- name: dataPublicacao
dtype: timestamp[ns]
- name: decisao
dtype: string
- name: descricaoClasse
dtype: string
- name: ementa
dtype: string
- name: id
dtype: string
- name: jurisprudenciaCitada
dtype: string
- name: ministroRelator
dtype: string
- name: nomeOrgaoJulgador
dtype: string
- name: numeroProcesso
dtype: string
- name: referenciasLegislativas
sequence: string
- name: siglaClasse
dtype: string
- name: tipoDeDecisao
dtype: string
- name: titulo
dtype: string
- name: acordaosSimilares
sequence: string
- name: partes_lista_texto
dtype: string
- name: temaProcs
sequence: string
- name: inteiro_teor_regex
dtype: string
- name: NER
struct:
- name: JURISPRUDENCIA
sequence: string
- name: LEGISLACAO
sequence: string
- name: LOCAL
sequence: string
- name: ORGANIZACAO
sequence: string
- name: PESSOA
sequence: string
- name: TEMPO
sequence: string
- name: desambiguacao
list:
- name: class
dtype: string
- name: count
dtype: int64
- name: elements
sequence: string
- name: entity
dtype: string
splits:
- name: train
num_bytes: 160888933
num_examples: 1000
download_size: 44243977
dataset_size: 160888933
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RussRobin/VDD | RussRobin | "2024-12-03T05:04:58Z" | 0 | 0 | [
"license:cc-by-4.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"arxiv:2305.13608",
"region:us"
] | null | "2024-12-03T04:34:17Z" | ---
license: cc-by-4.0
---
VDD: Varied Drone Dataset for Semantic Segmentation
Paper: https://arxiv.org/abs/2305.13608
GitHub Repo: https://github.com/RussRobin/VDD
This HF repo contains VDD source images and annotations.
Please refer to our GitHub Repo if you want to download our annotation of UDD and UAVid.
|
CaptionEmporium/dalle3-llama3.2-11b | CaptionEmporium | "2024-12-03T11:00:32Z" | 0 | 0 | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_categories:other",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us",
"image-text-dataset",
"synthetic-dataset",
"LLaMA",
"LLaMA-3.2b",
"synthetic-captions",
"synthetic-images",
"DALLE3"
] | [
"text-to-image",
"image-to-text",
"other"
] | "2024-12-03T04:44:19Z" | ---
license: cc-by-sa-4.0
language:
- en
pretty_name: dalle3-llama3.2-11b
tags:
- image-text-dataset
- synthetic-dataset
- LLaMA
- LLaMA-3.2b
- synthetic-captions
- synthetic-images
- DALLE3
task_categories:
- text-to-image
- image-to-text
- other
size_categories:
- 1M<n<10M
---
# Dataset Card for dalle3-llama3.2-11b
## Dataset Description
- **Point of Contact:** Caption Emporium
### Dataset Summary
This is 3,577,716 new synthetic captions for the 1,192,572 images found in [ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions](https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions). The dataset was filtered for duplicates and then re-encoded with JPEGXL lossless or lossy depending on the source. The long captions were produced using [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct). Medium and short captions were produced from these captions using [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). It includes the original captions from that repository as well. The dataset was also tagged with [SmilingWolf/wd-eva02-large-tagger-v3](https://huggingface.co/SmilingWolf/wd-eva02-large-tagger-v3).
Llama-3.2-11B-Vision-Instruct was found to discern a substantially larger number of cultural references than CogVLM, which was previously used to caption the images. For example,
> A custom funko pop figurine of juan manuel prat, a data and ai expert at accenture, alongside his figurine in its box. the figurine features prat with short gray hair, wearing a black t-shirt with white chinese characters, gray pants, and black shoes, holding a tablet with a pie chart on its screen. a small figurine of a brown and gray dog sits to his right. the figurine is mounted on a round black base featuring the words "data & ai" in white text.
>
> to the left of the figurine is its box, which is white with a large window to display the figurine. the box features prat's name in black text on a gray banner, as well as the accenture logo and a speech bubble with the words "apiercard." the background of the image is a light teal color, providing a clean and modern contrast to the figurine and its box.
>
> overall, the image presents a unique and personalized figurine that showcases juan manuel prat's expertise in data and ai, as well as his connection to accenture. the use of a funko pop figurine adds a playful touch to the image, while the attention to detail in the design of the figurine itself creates a sense of professionalism and expertise.
This repackaging offers a simple and accessible way to use the dataset.
### Languages
The text is in English, but occasionally text in images in other languages is transcribed.
### Intended Usage
Training text-to-image models and other machine learning tasks.
### Data Splits
| | train |
|--------------------------------------|---------:|
| dalle3-llama3.2-11b | 1192572 |
## Dataset Creation
### Producing the Dataset
The following prompt was used to produce the captions:
```py
prompt = "Please make a long, detailed caption of this image, including the contents of the foreground and background. Describe everything you see and use as many paragraphs as necessary. If any well-known characters or people are present, describe who they are."
```
After producing the caption, they were stripped of prefixes as per [anime-caption-danbooru-2021-sfw-5m-hq](https://huggingface.co/datasets/CaptionEmporium/anime-caption-danbooru-2021-sfw-5m-hq). A pair of medium and short length captions were then produced using [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) with the following prompt:
```py
prompt_shorten = lambda img_prompt: f"""
You have two tasks: prepare a long paragraph length caption and short caption from a long caption.
## Caption for you to distill
The following is the caption to distill.
\`\`\`
{img_prompt}
\`\`\`
## Tasks
1. Distill it into a long single paragraph that sufficiently describes all details of the image. Remove any redundant lines or descriptions.
2. Distill it into a single sentence that is a maximum of 25 words. Remove any redundant lines or descriptions.
Please only write the two captions, each on a separate line.
"""
```
This task had a failure rate of 2.5%, in which only a single medium caption was produced; to resolve this, all failing rows went through another captioning step in which a short caption was produced from the medium one.
### Data Instances
The new captions and tags are found as the columns `caption_llama32_long`, `caption_llama32_medium`, `caption_llama32_short`, and `wd_tagger_eva02_l`.
An example of a row:
```json
{
"image_name": "1698064832265.jpg",
"md5_pil_hash": "766d0c40fb109578b4cdc18bb2ede7ff",
"md5_file_hash": "9d3454342c5fd20a3bc3383b43072579",
"sha512_hash": "e8f8d808abdb499b6012d411137462048fc322a64aba7bfc35dccdcdb801ae9ae3369a814190032a5d82855e8617854fa750530fe40dc09e9beb57cecf8d5a78",
"short_caption": "An oversized Shrek figure stands within a supermarket setting, its face exaggerated with enlarged, red, paint-filled eyes, a toothy grin, and a seat upon a shopping cart.",
"long_caption": "This image showcases a large, animated green ogre figure with exaggerated facial features. The ogre has wide, bulging eyes that are red with white sclera, and it has a large, open mouth filled with sharp teeth. The figure appears to be in a store, as evidenced by the shopping cart in front of it and the aisles in the background. The store's interior is bright, with white lighting and a few blurred figures, suggesting motion or the use of a slow shutter speed. The overall mood of the image is playful and surreal, as the ogre's expression and the choice of colors give it a comedic and slightly eerie appearance.",
"resolution": "1024x1024",
"url": "https://desu-usergeneratedcontent.xyz/g/image/1698/06/1698064832265.jpg",
"width": 1024,
"height": 1024,
"source": "chan_archive",
"original_prompt": null,
"short_caption2": [
"An oversized Shrek figure stands within a supermarket setting, its face exaggerated with enlarged, red, paint-filled eyes, a toothy grin, and a seat upon a shopping cart."
],
"long_caption2": [
"a creatively designed figure resembling the character Shrek. The figure has exaggerated facial features, with large, bulging eyes that are red and appear to be filled with a liquid, possibly paint or blood. The skin of the figure is green, and it has a wide, toothy grin, revealing multiple sharp teeth. The figure is wearing a brown vest and is seated inside a shopping cart. The background suggests that the figure is in a store, with bright lights and shelves visible."
],
"caption_llama32_long": "A surreal and unsettling scene, featuring a grotesque, zombie-like version of shrek, the beloved animated ogre, situated in the middle of a walmart store.\n\nshrek's face is distorted, with bulging red eyes, bloodshot and dripping with red paint, and a menacing, toothy grin. his skin is a sickly green hue, and he wears a tattered, brown fur vest over a cream-colored shirt, while his hands grasp a blue-handled shopping cart. notably, the shrek figure is surrounded by a burst of light, suggesting motion and movement.\n\nin the background, the walmart store's fluorescent lights and white ceiling are visible, with other shoppers riding the store's circular conveyor belt. the overall atmosphere of the image is one of chaos and disturbance, with the grotesque shrek figure disrupting the mundane and ordinary setting of the walmart store.",
"caption_llama32_medium": "A surreal and unsettling scene features a grotesque, zombie-like Shrek in a Walmart store, surrounded by fluorescent lights and other shoppers on a conveyor belt. Shrek's distorted face has bulging red eyes, a menacing grin, and sickly green skin, wearing a tattered fur vest and grasping a blue-handled shopping cart. The image conveys chaos and disturbance as the monstrous figure disrupts the mundane setting.",
"caption_llama32_short": "A grotesque, zombie-like Shrek disrupts the mundane Walmart store atmosphere with his distorted face and menacing presence amidst shoppers on a conveyor belt.",
"wd_tagger_eva02_l": "{\"ratings\":{\"general\":0.70572,\"sensitive\":0.30799,\"questionable\":0.00991,\"explicit\":0.00546},\"character\":{},\"general\":{\"green_skin\":0.95689,\"colored_skin\":0.91007,\"teeth\":0.88029,\"horror_(theme)\":0.84293,\"male_focus\":0.83938,\"1boy\":0.82954,\"realistic\":0.783,\"open_mouth\":0.77577,\"blood\":0.77054,\"solo\":0.74826,\"blood_on_clothes\":0.61918,\"zombie\":0.54889,\"blurry\":0.52864,\"shopping_cart\":0.51149,\"black_eyes\":0.50906,\"smile\":0.48647,\"bald\":0.44188,\"shirt\":0.42842,\"white_shirt\":0.3891,\"holding\":0.36722,\"motion_blur\":0.36401}}"
}
```
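Note that the `wd_tagger_eva02_l` column is stored as a JSON-encoded string rather than a nested object, so it needs an extra decode step. A minimal sketch (the 0.5 threshold is an arbitrary choice, not one used by the dataset authors):

```python
import json

def top_general_tags(wd_json: str, threshold: float = 0.5) -> list[str]:
    """Decode the tagger column and keep 'general' tags above a confidence threshold."""
    data = json.loads(wd_json)
    return [tag for tag, score in data.get("general", {}).items() if score > threshold]

# Abridged value from the example row above.
example = (
    '{"ratings":{"general":0.70572},"character":{},'
    '"general":{"green_skin":0.95689,"shopping_cart":0.51149,"smile":0.48647}}'
)
print(top_general_tags(example))  # -> ['green_skin', 'shopping_cart']
```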
### Loading the Dataset
Install the PIL plug-in for JPEGXL (`pip install pillow pillow-jxl-plugin`) and the package for webdataset (`pip install webdataset`).
```py
import io
import json
import pillow_jxl
import webdataset as wds
from PIL import Image
WDS_URLS = "https://huggingface.co/datasets/CaptionEmporium/dalle3-llama3.2-11b/resolve/main/data/wds/{000000..000137}.tar"
ds = wds.WebDataset(WDS_URLS)
for row in ds:
row_json = json.loads(row["json"])
image_pil = Image.open(io.BytesIO(row["jxl"]))
```
### Discussion of Biases
The dataset is biased toward the content of the curated source dataset, along with the training data of LLaMA 3.2 and the tagger.
### Known Limitations
- Captions are not manually verified for correctness, so there may be errors.
## Additional Information
### Dataset Curators
Caption Emporium, [ProGamerGov](https://huggingface.co/datasets/ProGamerGov)
### Licensing Information
The dataset is available under the [Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/legalcode.en) license.
### Citation Information
```
@misc{dalle3-llama3.2-11b,
author = { Caption Emporium },
title = {dalle3-llama3.2-11b},
year = {2024},
publisher = {Huggingface},
journal = {Huggingface repository},
howpublished = {\url{https://huggingface.co/datasets/CaptionEmporium/dalle3-llama3.2-11b}},
}
```
|
RyanYr/self-reflect_mini8Bit-t0_mistlarge-t12_om2-6_binlabel | RyanYr | "2024-12-03T05:07:46Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T05:07:39Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: generated_solution
dtype: string
- name: answer
dtype: string
- name: problem_source
dtype: string
- name: response@0
sequence: string
- name: response@1
sequence: string
- name: response@2
sequence: string
- name: response@0_ans
sequence: string
- name: response@0_correctness
sequence: bool
- name: response@2_ans
sequence: string
- name: response@2_correctness
sequence: bool
splits:
- name: train
num_bytes: 289952192
num_examples: 50000
download_size: 125970107
dataset_size: 289952192
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
KaranCirusbug/guanaco-llama2-1k | KaranCirusbug | "2024-12-03T05:28:19Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T05:26:12Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966692
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chiyuanhsiao/Magpie_rank1_chunk8_interleaf | chiyuanhsiao | "2024-12-03T05:42:45Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T05:30:31Z" | ---
dataset_info:
features:
- name: uuid
dtype: string
- name: model
dtype: string
- name: gen_input_config
struct:
- name: temperature
dtype: float64
- name: top_p
dtype: float64
- name: input
dtype: string
- name: output
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: task_category
dtype: string
- name: difficulty
dtype: string
- name: intent
dtype: string
- name: knowledge
dtype: string
- name: input_quality
dtype: string
- name: quality_explanation
dtype: string
- name: llama_guard_2
dtype: string
- name: reward_model
dtype: string
- name: instruct_reward
dtype: float64
- name: base_output
dtype: string
- name: base_reward
dtype: float64
- name: reward_difference
dtype: float64
- name: min_neighbor_distance
dtype: float64
- name: repeat_count
dtype: int64
- name: min_similar_uuid
dtype: string
- name: input_length
dtype: int64
- name: output_length
dtype: int64
- name: input_speech
dtype: audio
- name: output_speech
dtype: audio
- name: output_speech_cmu-arctic-xvectors_7306
dtype: audio
- name: input_unit
sequence: int64
- name: output_unit
sequence: int64
- name: output_unit_7306
sequence: int64
- name: output_7306_interleaf
dtype: string
- name: output_pseudo
dtype: string
- name: input_pseudo
dtype: string
splits:
- name: train
num_bytes: 11857625455.375
num_examples: 10023
download_size: 11586124879
dataset_size: 11857625455.375
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
recursal/OKReddit-Visionary | recursal | "2024-12-03T08:28:48Z" | 0 | 0 | [
"task_categories:question-answering",
"task_categories:image-to-text",
"source_datasets:original",
"language:en",
"size_categories:100K<n<1M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us"
] | [
"question-answering",
"image-to-text"
] | "2024-12-03T05:30:32Z" | ---
size_categories:
- 10K<n<100K
pretty_name: OKReddit Visionary
task_categories:
- question-answering
- image-to-text
source_datasets:
- original
language:
- en
---
<div>
<a href="https://soundcloud.com/lemmino/biosignature"><img src="https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/jh7lskqN9TnF53HmKnFlh.png" title="We've switched style models from 1.5 to SDXL! Yay! And yes, it's a Style lora once more." style="margin-left:auto;margin-right:auto"></a>
</div>
# Dataset Summary
OKReddit Visionary is a collection of **50 GiB** (~74K pairs) of image Question & Answers. This dataset has been prepared for research or archival purposes.
- **Curated by:** KaraKaraWitch
- **Funded by:** Recursal.ai
- **Shared by:** KaraKaraWitch
- **Special Thanks:** [harrison](https://huggingface.co/harrisonvanderbyl) (Suggestion)
- **Language(s) (NLP):** Mainly English.
- **License:** Refer to [Licensing Information](#licensing-information) for data license.
### Dataset Sources
- **Source Data:** [Academic Torrents](https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4) by (stuck_in_the_matrix, Watchful1, RaiderBDev & pushshift folks.)
## Supported Tasks and Leaderboards
The dataset may be used for a variety of vision-language and natural language processing (NLP) tasks, including:
- Visual Question Answering: the dataset contains image-grounded question and answer pairs.
- Text to Image (and vice versa).
## Languages
All questions and answers should be in English at this dataset size.
## Dataset Structure
### Data Instances
The dataset can be loaded with webdataset. Note that there are multiple image extensions to check: `jpg`, `jpeg`, or `png`. The files have not been re-encoded, in order to preserve the original files from Reddit.
```py
import webdataset as wds
# After concatenating, you may use the file like a regular dataset.
# The dataset is compatible with the WebDataset format. Example:
tar_file = "PackedTar.tar"
# decode("pil") handles all three image extensions (jpg, jpeg, png).
hf_dataset = wds.WebDataset(tar_file).decode("pil")
```
# Dataset Creation
## Curation Rationale
Some subreddits are, more often than not, Q&A subreddits, where the submission author asks a question (plus an image) and receives responses back.
### Subreddits Picked
Following a suggestion from harrison, I've selected the following subreddits for this dataset:
- PeterExplainsTheJoke
- whatisthisanimal
- whatisthisbug
- whatisthiscar
- whatisthisthing
Some subreddits were not present in the base OKReddit-RC3 dataset (/r/PeterExplainsTheJoke, for example) and had to be pulled from an intermediary step, but the same quality metrics were used in the final subreddit filtering.
### Picking good threads
After the subreddit filtering, threads are further filtered by score as follows:
1. Selecting submissions with >7 score.
2. Select replies from said submissions when it's not a bot (`AutoModerator`) and the score is > 5.
3. Scrape all images (Excluding galleries)
4. Pack data into tar format.
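The scraping code itself is not published; the score and bot filters above can be sketched as follows (the field names are assumptions based on typical Reddit dump schemas):

```python
def keep_submission(sub: dict) -> bool:
    # Step 1: keep submissions with score > 7.
    return sub["score"] > 7

def keep_reply(reply: dict) -> bool:
    # Step 2: keep non-bot replies with score > 5.
    return reply["author"] != "AutoModerator" and reply["score"] > 5

# Toy threads illustrating the filters.
threads = [
    {"score": 12, "replies": [
        {"author": "AutoModerator", "score": 9, "body": "I am a bot."},
        {"author": "user1", "score": 6, "body": "That's a hoverfly."},
    ]},
    {"score": 3, "replies": [
        {"author": "user2", "score": 8, "body": "A 1967 Mustang."},
    ]},
]

qa_pairs = [
    (thread, reply)
    for thread in threads if keep_submission(thread)
    for reply in thread["replies"] if keep_reply(reply)
]
print(len(qa_pairs))  # -> 1
```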
# Additional Information
## Recursal's Vision
> To make AI accessible to everyone, regardless of language or economic status
This is the collective goal of the `RWKV Open Source foundation` and `Recursal AI`, the commercial entity who backs it.
We believe that AI should not be controlled by a select few organizations, and that it should be made accessible regardless of whether you are rich or poor, or a native speaker of English.
### About RWKV
RWKV is an open-source, non-profit group under the Linux Foundation, focused on developing the RWKV AI architecture in accordance with our vision.
The RWKV architecture scales efficiently and economically. As an RNN-Transformer hybrid, it provides performance similar to leading transformer models while retaining the compute and energy efficiency of an RNN-based architecture.
You can find out more about the project and the latest models at the following:
- [https://blog.rwkv.com](https://blog.rwkv.com)
- [https://wiki.rwkv.com](https://wiki.rwkv.com)
### About Recursal AI
Recursal AI is the commercial entity built to provide support for RWKV model development and users, while providing commercial services via its public cloud or private-cloud / on-premise offerings.
As part of our vision, our commitment is to ensure open-source development of, and access to, the best foundational AI models and datasets.
The datasets and models provided here are part of that commitment.
You can find out more about Recursal AI here:
- [https://recursal.ai](https://recursal.ai)
- [https://blog.recursal.ai](https://blog.recursal.ai)
### Licensing Information
Since this dataset is derived from a public crawl of Reddit, the original content may be subject to copyright and other licensing terms set by the original site owner and/or the content creators.
Additionally, this dataset is for research and archival purposes only.
Recursal Waifus (The banner image) are licensed under CC-BY-SA.
They do not represent the related websites in any official capacity unless otherwise or announced by the website.
You may use them as a banner image. However, you must always link back to the dataset.
### Citation Information
If you use this dataset in your research or project, please cite it as follows:
```TeX
@dataset{OKRedditVisionary,
title = {OKReddit-Visionary},
year = {2024},
publisher = {KaraKaraWitch},
url = {<https://huggingface.co/datasets/recursal/OKReddit-Visionary>}
}
```
Additionally, please cite the following source BibTeX as well.
```TeX
@article{,
title= {Reddit comments/submissions 2005-06 to 2023-12},
journal= {},
author= {stuck_in_the_matrix, Watchful1, RaiderBDev},
year= {},
url= {},
abstract= {Reddit comments and submissions from 2005-06 to 2023-09 collected by pushshift and u/RaiderBDev.
These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps
The more recent dumps are collected by u/RaiderBDev and questions can be submitted here https://github.com/ArthurHeitmann/arctic_shift},
keywords= {reddit},
terms= {},
license= {},
superseded= {}
}
``` |
chiyuanhsiao/Magpie_rank3_chunk8_interleaf | chiyuanhsiao | "2024-12-03T05:42:49Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T05:30:53Z" | ---
dataset_info:
features:
- name: uuid
dtype: string
- name: model
dtype: string
- name: gen_input_config
struct:
- name: temperature
dtype: float64
- name: top_p
dtype: float64
- name: input
dtype: string
- name: output
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: task_category
dtype: string
- name: difficulty
dtype: string
- name: intent
dtype: string
- name: knowledge
dtype: string
- name: input_quality
dtype: string
- name: quality_explanation
dtype: string
- name: llama_guard_2
dtype: string
- name: reward_model
dtype: string
- name: instruct_reward
dtype: float64
- name: base_output
dtype: string
- name: base_reward
dtype: float64
- name: reward_difference
dtype: float64
- name: min_neighbor_distance
dtype: float64
- name: repeat_count
dtype: int64
- name: min_similar_uuid
dtype: string
- name: input_length
dtype: int64
- name: output_length
dtype: int64
- name: input_speech
dtype: audio
- name: output_speech
dtype: audio
- name: output_speech_cmu-arctic-xvectors_7306
dtype: audio
- name: input_unit
sequence: int64
- name: output_unit
sequence: int64
- name: output_unit_7306
sequence: int64
- name: output_7306_interleaf
dtype: string
- name: output_pseudo
dtype: string
- name: input_pseudo
dtype: string
splits:
- name: train
num_bytes: 11784967219.625
num_examples: 10017
download_size: 11513571501
dataset_size: 11784967219.625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Tensorists/SD2_images | Tensorists | "2024-12-03T05:52:06Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T05:48:37Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Airplane
'1': Automobile
'2': Bird
'3': Cat
'4': Deer
'5': Dog
'6': Frog
'7': Horse
'8': Ship
'9': Truck
splits:
- name: train
num_bytes: 1360784016.124
num_examples: 2734
download_size: 1544616944
dataset_size: 1360784016.124
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
lightblue/response-dataset-plus-qwen-judged | lightblue | "2024-12-03T05:51:16Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T05:51:08Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
- name: translation
struct:
- name: error
dtype: string
- name: text
dtype: string
- name: response
struct:
- name: error
dtype: string
- name: text
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: row_id
dtype: int64
- name: qwen_response
dtype: string
- name: qwen_stop_reason
dtype: string
- name: judgement_gpt_x_qwe_y
struct:
- name: judgement
dtype: string
- name: output_logprobs
sequence: float64
- name: output_logprobs_top5
list:
list:
- name: logprob
dtype: float64
- name: tok
dtype: string
- name: judgement_qwe_x_gpt_y
struct:
- name: judgement
dtype: string
- name: output_logprobs
sequence: float64
- name: output_logprobs_top5
list:
list:
- name: logprob
dtype: float64
- name: tok
dtype: string
- name: has_unanimous_winner
dtype: bool
- name: unanimous_winner
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 462315005
num_examples: 50782
download_size: 215128813
dataset_size: 462315005
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
marcelsun/wos_hierarchical_multi_label_text_classification | marcelsun | "2024-12-03T06:51:01Z" | 0 | 0 | [
"task_categories:text-classification",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"arxiv:2411.19119",
"region:us",
"NLP",
"LLM",
"hierarchical",
"multi-label",
"classification"
] | [
"text-classification"
] | "2024-12-03T06:26:09Z" | ---
license: cc-by-4.0
task_categories:
- text-classification
language:
- en
tags:
- NLP
- LLM
- hierarchical
- multi-label
- classification
pretty_name: WOS Hierarchical Multi-Label Text Classification
size_categories:
- 10K<n<100K
---
Introduced by du Toit and Dunaiski (2024) in [Introducing Three New Benchmark Datasets for Hierarchical Text Classification](https://arxiv.org/abs/2411.19119).
The WOS Hierarchical Text Classification collection consists of three dataset variants created from Web of Science (WOS) title and abstract data, categorised into a hierarchical, multi-label class structure. The aim of the sampling and filtering methodology was to create well-balanced class distributions at chosen hierarchical levels. Furthermore, the WOS_JTF variant was created to contain only publications whose journal- and citation-based class assignments agree, so that class instances are semantically more similar.
The three dataset variants have the following properties:
1. WOS_JT comprises 43,366 total samples (train=30356, dev=6505, test=6505) and only uses the journal-based classifications as labels.
2. WOS_CT comprises 65,200 total samples (train=45640, dev=9780, test=9780) and only uses citation-based classifications as labels.
3. WOS_JTF comprises 42,926 total samples (train=30048, dev=6439, test=6439) and uses a filtered set of papers based on journal and citation classification.
Dataset details:
*.json:
- concatenated title and abstract mapped to a list of the associated class labels.
depth2label.pt: dictionary where:
- key = depth of classification hierarchy.
- value = list of classes associated with depth.
path_list.pt:
- list of tuples for every edge between classes in the hierarchical classification. This specifies the acyclic graph.
slot.pt: dictionary where:
- key = label_id of parent class.
- value = label_ids of children classes.
value2slot.pt: dictionary where:
- key = label_id.
- value = label_id of parent class.
value_dict.pt: dictionary where:
- key = label_id.
- value = string representation of class. |
infinite-dataset-hub/EthicalEatingEmotions | infinite-dataset-hub | "2024-12-03T06:32:40Z" | 0 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] | null | "2024-12-03T06:32:39Z" | ---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# EthicalEatingEmotions
tags: vegan, psychology, dietary choices
_Note: This is an AI-generated dataset so its content may be inaccurate or false_
**Dataset Description:**
The 'EthicalEatingEmotions' dataset contains anonymized user-generated content from various platforms discussing the emotional aspects of adopting a vegan diet. The data is gathered from social media posts, blog comments, and forum discussions. Each entry includes the original text, a sentiment analysis score, and a label reflecting the user's emotional stance towards veganism (e.g., positive, neutral, negative).
**CSV Content Preview:**
```
text,sentiment_score,labels
"I've been vegan for 5 years now and I feel healthier than ever!",0.9,"positive"
"Trying to be vegan has been challenging but worth it for the planet.",0.7,"positive"
"The taste of vegan food can sometimes be off-putting, but I'm learning.",0.6,"neutral"
"I'm disappointed by the lack of vegan options at my favorite restaurant.",0.3,"negative"
"Veganism isn't for everyone, and that's okay. I respect people's choices.",0.5,"neutral"
```
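Since the dataset ships as plain CSV, the preview above can be parsed with the standard library alone; a minimal sketch using an abridged copy of the preview:

```python
import csv
import io

# Abridged copy of the CSV preview shown above.
csv_preview = '''text,sentiment_score,labels
"I've been vegan for 5 years now and I feel healthier than ever!",0.9,"positive"
"I'm disappointed by the lack of vegan options at my favorite restaurant.",0.3,"negative"
'''

rows = list(csv.DictReader(io.StringIO(csv_preview)))
# Pick out low-sentiment entries (0.5 cutoff is an arbitrary choice).
negatives = [r["text"] for r in rows if float(r["sentiment_score"]) < 0.5]
print(len(negatives))  # -> 1
```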
**Source of the data:**
The dataset was generated using the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct using the query 'vegan':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=vegan&dataset=EthicalEatingEmotions&tags=vegan,+psychology,+dietary+choices
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
infinite-dataset-hub/VeganProductPreferences | infinite-dataset-hub | "2024-12-03T06:33:57Z" | 0 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] | null | "2024-12-03T06:33:56Z" | ---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# VeganProductPreferences
tags: Consumer Behavior, Product Recommendation, Veganism
_Note: This is an AI-generated dataset so its content may be inaccurate or false_
**Dataset Description:**
The 'VeganProductPreferences' dataset captures consumer preferences for various vegan products. It includes survey data collected from individuals who follow a vegan lifestyle, with a focus on the type of products they prefer, their reasons for choosing those products, and their willingness to recommend them to others.
**CSV Content Preview:**
```csv
id,product_name,product_category,user_age,user_gender,user_location,preference_rating,reason_for_preference,likelihood_of_recommendation,labels
1,Vegan Chocolate Bar,"Chocolates",28,"Female","San Francisco, CA",4.5,"Taste and ethical reasons",4.2,"Ethical consumerism, Plant-based desserts"
2,Organic Soy Milk,"Dairy Alternatives",34,"Male","New York, NY",4.8,"Health benefits and environmental concerns",4.7,"Health-conscious, Environmentalism"
3,Vegan Leather Wallet,"Accessories",23,"Non-binary","Austin, TX",3.9,"Lifestyle choice, Durability",3.8,"Sustainable fashion, Lifestyle choices"
4,Plant-Based Burgers,"Food",29,"Female","Chicago, IL",4.2,"Taste and dietary preferences",4.1,"Culinary adventures, Plant-based diet"
5,Cruelty-Free Beauty Products,"Beauty",37,"Female","Los Angeles, CA",4.7,"Ethical reasons, Skin health",4.6,"Animal rights, Skincare"
```
**Source of the data:**
The dataset was generated using the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct using the query 'veganism':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=veganism&dataset=VeganProductPreferences&tags=Consumer+Behavior,+Product+Recommendation,+Veganism
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
infinite-dataset-hub/AnimalAdvocacyTracker | infinite-dataset-hub | "2024-12-03T06:37:34Z" | 0 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] | null | "2024-12-03T06:35:19Z" | ---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# AnimalAdvocacyTracker
tags: social_media, sentiment, impact
_Note: This is an AI-generated dataset so its content may be inaccurate or false_
**Dataset Description:**
The 'AnimalAdvocacyTracker' dataset contains tweets related to animal rights activism, categorized based on sentiment and impact. Each row represents a tweet with associated metadata, and the dataset includes labels to classify the overall sentiment and potential impact of the message.
**CSV Content Preview:**
```csv
tweet_id,text,created_at,label
1,"Animal testing is cruel, we must stand against it. #AnimalRights #SaveLives",2023-03-21T10:00:00Z,PositiveImpact
2,"Watching documentaries on animal farming is heartbreaking. #AnimalWelfare #StopFactoryFarming",2023-03-21T11:30:00Z,NegativeSentiment
3,"The ban on animal circuses is finally here! Good news for our furry friends! #AnimalFreedom #BanCircuses",2023-03-21T12:45:00Z,PositiveImpact
4,"Huge loss today as our beloved dolphin sanctuary was hit by a storm. #AnimalCare #Loss",2023-03-21T14:15:00Z,NeutralSentiment
5,"Every time I see a video of factory-farmed animals, it breaks my heart. #AnimalCruelty #EndFactoryFarming",2023-03-21T15:05:00Z,NegativeSentiment
```
**Source of the data:**
The dataset was generated using the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct using the query 'animal rights activism':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=animal+rights+activism&dataset=AnimalAdvocacyTracker&tags=social_media,+sentiment,+impact
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
infinite-dataset-hub/AnimalAlliesAlgorithm | infinite-dataset-hub | "2024-12-03T06:38:39Z" | 0 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] | null | "2024-12-03T06:38:38Z" | ---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# AnimalAlliesAlgorithm
tags: vegan advocacy, efficacy prediction, animal rights
_Note: This is an AI-generated dataset so its content may be inaccurate or false_
**Dataset Description:**
The 'AnimalAlliesAlgorithm' dataset is a curated collection of textual data aimed at understanding the various dimensions of vegan advocacy. It encompasses a range of content, including advocacy campaigns, personal testimonies, scientific studies, and policy discussions related to animal rights and veganism. Each entry has been carefully labeled to reflect its primary focus within the realm of vegan advocacy, from awareness-raising to promoting policy change.
**CSV Content Preview:**
```
id,text,label
001,"The Vegan Society's recent campaign highlighted the environmental benefits of a plant-based diet.",Awareness
002,"An interview with a renowned vegan chef discussing the moral imperative of veganism.",PersonalTestimony
003,"A scientific study on the impact of factory farming on local wildlife populations.",ImpactStudy
004,"A blog post from a legislator advocating for stricter animal welfare laws.",PolicyChange
005,"Vegan bloggers collaborating to share recipes and tips for a cruelty-free lifestyle.",CommunitySupport
```
**Source of the data:**
The dataset was generated using the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct using the query 'vegan advocacy':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=vegan+advocacy&dataset=AnimalAlliesAlgorithm&tags=vegan+advocacy,+efficacy+prediction,+animal+rights
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
infinite-dataset-hub/PlantProtein | infinite-dataset-hub | "2024-12-03T06:39:48Z" | 0 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] | null | "2024-12-03T06:39:48Z" | ---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# PlantProtein
tags: vegan, food, nutrition
_Note: This is an AI-generated dataset so its content may be inaccurate or false_
**Dataset Description:**
The 'PlantProtein' dataset is a curated collection of various plant-based protein sources that are suitable for a vegan diet. Each entry includes a description of the protein source, its primary nutritional benefits, and common food items it can be found in or used to enhance. The dataset is meant to help vegans find diverse protein options to ensure a balanced diet.
**CSV Content Preview:**
```
"Protein Source","Description","Nutritional Benefits","Common Uses","Label"
"Lentils","Small, green legume with high protein content.","Rich in iron, folate, and potassium.","Used in soups, stews, and salads.","Complete Protein"
"Chickpeas","Bold-flavored bean with a good amount of protein.","Contains protein, fiber, and minerals like magnesium and zinc.","Found in hummus, salads, and curries.","Legume Protein"
"Quinoa","Sprouted seed with a balanced amino acid profile.","High in fiber, magnesium, B vitamins, iron, potassium, calcium, phosphorus, vitamin E, and various beneficial antioxidants.","Can be eaten as a grain or used in salads and vegan meat substitutes.","Complete Protein"
"Hemp Seeds","Seeds from the hemp plant, containing all nine essential amino acids.","Rich in omega-3 and omega-6 fatty acids, protein, and fiber.","Can be added to smoothies, oatmeal, and salads.","Complete Protein"
"Spirulina","Blue-green algae, packed with protein, vitamins B1, B2, B3, copper, and manganese.","High in protein and antioxidants.","Available as a supplement or added to juices and smoothies.","Microalgae Protein"
```
**Source of the data:**
The dataset was generated using the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct using the query 'vegan':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=vegan&dataset=PlantProtein&tags=vegan,+food,+nutrition
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
infinite-dataset-hub/EthicalEatingTrends | infinite-dataset-hub | "2024-12-03T06:40:35Z" | 0 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] | null | "2024-12-03T06:40:34Z" | ---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# EthicalEatingTrends
tags: Social Media Influence, Consumer Patterns, Veganism
_Note: This is an AI-generated dataset so its content may be inaccurate or false_
**Dataset Description:**
The 'EthicalEatingTrends' dataset aims to analyze the influence of social media on consumer patterns, specifically focusing on veganism. It includes data collected from various social media platforms and public forums to understand how ethical eating trends spread and evolve over time. The dataset features user comments, articles, and posts discussing veganism, consumer behavior, and the impact of ethical eating on the environment and society.
**CSV Content Preview:**
```csv
"label","user_id","comment","date","source"
"positive","user123","Loving the new plant-based menu at Green Sprout Café! #VeganLife #SustainableEating",2023-01-15,"SocialMedia"
"neutral","user456","I'm not sure about giving up cheese, but I'm trying it out. #VeganExperiment",2023-01-20,"SocialMedia"
"negative","user789","Vegan options are usually overpriced. Why can't regular food be sustainable?",2023-01-12,"Forum"
"positive","user321","Appreciating how the city's vegan festival is spreading awareness about ethical eating! #EcoFriendly",2023-02-05,"NewsArticle"
"neutral","user654","Eating vegan can be challenging during holidays. Need more support from restaurants.",2023-01-25,"Blog"
```
This CSV snippet provides a preview of how the dataset may be structured, including different labels indicating the sentiment or stance towards veganism, as well as other relevant details that can be used for analysis.
**Source of the data:**
The dataset was generated using the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct using the query 'veganism':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=veganism&dataset=EthicalEatingTrends&tags=Social+Media+Influence,+Consumer+Patterns,+Veganism
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
EuclidAI/Geoperception | EuclidAI | "2024-12-03T21:57:41Z" | 0 | 1 | [
"task_categories:question-answering",
"task_categories:visual-question-answering",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"region:us",
"multi-modal-qa",
"math-qa",
"figure-qa",
"geometry-qa",
"math-word-problem",
"vqa",
"geometry-reasoning",
"numeric-common-sense",
"scientific-reasoning",
"logical-reasoning",
"geometry-diagram",
"synthetic-scene",
"scientific-figure",
"function-plot",
"abstract-scene",
"mathematics"
] | [
"question-answering",
"visual-question-answering"
] | "2024-12-03T06:40:37Z" | ---
license: apache-2.0
task_categories:
- question-answering
- visual-question-answering
tags:
- multi-modal-qa
- math-qa
- figure-qa
- geometry-qa
- math-word-problem
- vqa
- geometry-reasoning
- numeric-common-sense
- scientific-reasoning
- logical-reasoning
- geometry-diagram
- synthetic-scene
- scientific-figure
- function-plot
- abstract-scene
- mathematics
pretty_name: Geoperception
size_categories:
- 10K<n<100K
language:
- en
---
# Dataset Card for Geoperception
A Benchmark for Low-level Geometric Perception
## Dataset Details
### Dataset Description
Geoperception is sourced from the Geometry-3K corpus, which offers precise logical forms for geometric diagrams, compiled from popular high-school textbooks.
### Dataset Sources
- **Repository:** [euclid-multimodal/Euclid-Model](https://github.com/euclid-multimodal/Euclid-Model)
- **Paper:** [More Information Needed]
- **Demo:** [More Information Needed]
## Uses
Evaluation of multimodal LLMs' low-level visual perception abilities in the 2D geometry domain.
## Dataset Structure
### Fields
[More Information Needed]
## Citation
**BibTeX:**
[More Information Needed] |
JoeLeelyf/OVBench | JoeLeelyf | "2024-12-03T06:47:20Z" | 0 | 0 | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | "2024-12-03T06:47:20Z" | ---
license: cc-by-nc-sa-4.0
---
|
hirundo-io/unlearning-500-persons-multiple-details | hirundo-io | "2024-12-03T07:23:19Z" | 0 | 0 | [
"license:bsd-2-clause",
"region:us"
] | null | "2024-12-03T06:57:50Z" | ---
license: bsd-2-clause
---
|
mengcy/LAION-SG | mengcy | "2024-12-03T12:04:36Z" | 0 | 0 | [
"task_categories:image-to-text",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:json",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text"
] | "2024-12-03T07:05:19Z" | ---
license: mit
task_categories:
- image-to-text
language:
- en
---
# Dataset Card for LAION-SG
<!-- Provide a quick summary of the dataset. -->
<!-- This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
-->
LAION-SG is a large-scale dataset with high-quality structural annotations of scene graphs (SG), which precisely describe attributes and relationships of multiple objects, effectively representing the semantic structure in complex scenes.
## Dataset Details
<!-- ### Dataset Description -->
<!-- Provide a longer summary of what this dataset is. -->
<!-- - **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed] -->
- **Language(s):** All annotations use English as the primary language.
- **License:** MIT License.
<!-- ### Dataset Sources [optional] -->
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper:** [More Information Needed]
<!-- - **Demo [optional]:** [More Information Needed] -->
### Data Splits
- A total of 540,005 SG-image pairs annotated with objects, attributes, and relationships.
- 480,005 samples for training
- 10,000 samples for validation
- 50,000 samples for test
## Uses
<!-- Only the URLs from the JSON are provided; we do not provide our own images. -->
<!-- Address questions around how the dataset is intended to be used. -->
<!-- ### Direct Use -->
<!-- This section describes suitable use cases for the dataset. -->
<!-- [More Information Needed] -->
<!-- ### Out-of-Scope Use -->
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
<!-- [More Information Needed] -->
## Dataset Structure
### Data Instances
An example is as follows.
```
{
"img_id": "482027",
"name": "8040361340228797010.jpg",
"caption_ori": "Yosemite Falls reflection in Merced River, Yosemite, California Poster by Tom Norring / Danita Delimont for $102.50 CAD",
"score": "6.669650077819824",
"url": "http://images.artgalore.ca/artgalore_images/PDD/US05TNO0060.jpg",
"items": [
{
"item_id": 0,
"label": "mountains",
"attributes": [
"rocky",
"tall"
],
"global_item_id": 3201429
},
{
"item_id": 1,
"label": "trees",
"attributes": [
"leafless",
"slender"
],
"global_item_id": 3201430
},
{
"item_id": 2,
"label": "trees",
"attributes": [
"leafless",
"slender"
],
"global_item_id": 3201431
},
{
"item_id": 3,
"label": "snow",
"attributes": [
"white",
"cold"
],
"global_item_id": 3201432
},
{
"item_id": 4,
"label": "river",
"attributes": [
"reflective",
"calm"
],
"global_item_id": 3201433
}
],
"relations": [
{
"triple_id": 0,
"item1": 3,
"relation": "adjacent to",
"item2": 4,
"global_relation_id": 2118313
},
{
"triple_id": 1,
"item1": 1,
"relation": "growing near",
"item2": 4,
"global_relation_id": 2118314
},
{
"triple_id": 2,
"item1": 2,
"relation": "growing near",
"item2": 4,
"global_relation_id": 2118315
}
]
},
```
### Data Fields
- ```"img_id"```: Unique numeric ID of the image.
- ```"name"```: Name of source image.
- ```"caption_ori"```: Original caption of the image in LAION-Aesthetics.
- ```"score"```: Aesthetic score of the image.
- ```"url"```: URL of source image.
- ```"items"```: List of objects recognized in the image.
- ```"item_id"```: Unique numeric ID of the object in current image.
- ```"label"```: Label of the object.
- ```"attributes"```: List of attributes of the object.
- ```"global_item_id"```: Unique numeric ID of the object in all images in LAION-SG.
- ```"relations"```: List of relations recognized in the image.
- ```"triple_id"```: Unique numeric ID of the relation in current image.
- ```"item1"```: The item_id of the subject in scene graph triplet <subject, relation, object>.
- ```"relation"```: The relation between the subject and the object in scene graph triplet <subject, relation, object>.
- ```"item2"```: The item_id of the object in scene graph triplet <subject, relation, object>.
- ```"global_relation_id"```: Unique numeric ID of the relation in all images in LAION-SG.
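As a sketch of how these fields fit together, the snippet below resolves each relation of a record into a readable <subject, relation, object> triplet. The `record` here is a trimmed copy of the example above, not loaded from the dataset:

```python
# Trimmed copy of the example record shown above.
record = {
    "items": [
        {"item_id": 0, "label": "mountains", "attributes": ["rocky", "tall"]},
        {"item_id": 1, "label": "trees", "attributes": ["leafless", "slender"]},
        {"item_id": 2, "label": "trees", "attributes": ["leafless", "slender"]},
        {"item_id": 3, "label": "snow", "attributes": ["white", "cold"]},
        {"item_id": 4, "label": "river", "attributes": ["reflective", "calm"]},
    ],
    "relations": [
        {"item1": 3, "relation": "adjacent to", "item2": 4},
        {"item1": 1, "relation": "growing near", "item2": 4},
        {"item1": 2, "relation": "growing near", "item2": 4},
    ],
}

def triplets(rec):
    # Build an item_id -> label lookup, then resolve each relation's endpoints.
    labels = {it["item_id"]: it["label"] for it in rec["items"]}
    return [(labels[r["item1"]], r["relation"], labels[r["item2"]])
            for r in rec["relations"]]

for t in triplets(record):
    print(t)  # e.g. ('snow', 'adjacent to', 'river')
```

Note that `item1`/`item2` refer to the per-image `item_id`, so the lookup must be rebuilt per record; `global_item_id` would be used instead when working across the whole corpus.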
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
<!-- [More Information Needed] -->
## Dataset Creation
### Source Data
All images are from the LAION-Aesthetics V2 (6.5+) dataset.
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
![pipeline](https://huggingface.co/datasets/mengcy/LAION-SG/resolve/main/figure2-pipeline-2.png)
From the paper:
> Our LAION-SG dataset is built on high-quality images in LAION-Aesthetic V2 (6.5+) with automated annotation performed using GPT-4o. LAION-Aesthetics V2 (6.5+) is a subset of LAION-5B, comprising 625,000 image-text pairs with predicted aesthetic scores over 6.5, curated using the LAION-Aesthetics Predictor V2 model.
> During our construction, only 540,005 images are available in the dataset due to copyright or other issues.
>
> Through prompt engineering, we devised a set of specific requirements for scene graph annotations to ensure comprehensiveness, systematic structure, and precision in the annotation results. The above figure illustrates the detailed construction pipeline of LAION-SG.
> Each component plays a crucial role in achieving high-quality automated annotation.
>
> First, as scene graphs typically contain multiple objects and their relations, the prompt requires “identification of as many objects, attributes, and their relations within the image as possible”.
> This design encourages that all objects and interactions in a scene are annotated.
> Each object is assigned a unique ID, even for multiple objects of the same type, ensuring that the entirety of the scene's structure and hierarchy is accurately represented.
>
> Second, the attribute section mandates that each object must have at least one abstract adjective attribute, while avoiding the use of other objects as attributes. This design is especially important in complex scenes as it helps differentiate objects' appearance, state, and characteristics from the background and other elements, maintaining consistency and clarity in annotations.
> By avoiding the confusion between specific objects and abstract attributes, the annotations become more interpretable and generalizable.
>
> In the relation section, we specify the use of concrete verbs to describe relations between objects rather than relying solely on spatial orientation.
> This is because relations are often more critical in scene graphs than mere spatial information.
> By using precise verbs like “standing on” or “holding”, we capture dynamic interactions within the scene, which is essential for complex scene generation.
>
> Leveraging these prompts with the multimodal large language model GPT-4o, we generate annotations representing scene graphs.
> Our annotation is expected to achieve accuracy for every object, attribute, and relationship, thoroughly covering each detail in the scene and providing robust data support for subsequent compositional image generation tasks.
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
<!-- [More Information Needed] -->
<!-- #### Who are the source data producers? -->
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
<!-- [More Information Needed] -->
<!-- ### Annotations -->
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
<!-- #### Annotation process -->
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
<!-- [More Information Needed] -->
<!-- #### Who are the annotators? -->
<!-- This section describes the people or systems who created the annotations. -->
<!-- [More Information Needed] -->
<!-- #### Personal and Sensitive Information -->
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
<!-- [More Information Needed] -->
<!-- ## Bias, Risks, and Limitations -->
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
<!-- [More Information Needed] -->
<!-- ### Recommendations -->
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
<!-- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. -->
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
<!-- **APA:**
[More Information Needed]
## Glossary [optional]
If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card.
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] --> |
taufiqsyed/salami_truncsplit_finetune__enriched | taufiqsyed | "2024-12-03T07:24:50Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T07:23:00Z" | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: song_id
dtype: string
- name: structure
dtype: string
- name: start_time
dtype: float64
- name: end_time
dtype: float64
- name: tempos
dtype: string
- name: keys
dtype: string
- name: instruments
dtype: string
- name: genres
dtype: string
- name: moods
dtype: string
- name: metadata
dtype: string
splits:
- name: train
num_bytes: 2199008170.0
num_examples: 831
- name: eval
num_bytes: 84679315.0
num_examples: 32
download_size: 2235466097
dataset_size: 2283687485.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
---
|
FlippyDora/dpo_pair_data | FlippyDora | "2024-12-03T07:38:20Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T07:38:15Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: chosen_score
dtype: float64
- name: rejected_score
dtype: float64
- name: step
dtype: int64
- name: num_steps
dtype: int64
splits:
- name: train
num_bytes: 321550584
num_examples: 148945
download_size: 40008369
dataset_size: 321550584
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
EdsonKanou/new_sql_training | EdsonKanou | "2024-12-03T16:44:08Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T07:39:18Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 30110
num_examples: 78
download_size: 2933
dataset_size: 30110
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
WayneMao/RoboMatrix | WayneMao | "2024-12-03T10:23:17Z" | 0 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-12-03T07:40:57Z" | ---
license: mit
---
|
denkCF/UsersCodeforcesSubmissionsEnd2024 | denkCF | "2024-12-03T08:30:52Z" | 0 | 0 | [
"language:en",
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"codeforces",
"submissions"
] | null | "2024-12-03T07:52:37Z" | ---
datasets:
- name: usersCodeforcesSubmissionsEnd2024
size: 200MB
task_categories:
- other
languages:
- en
licenses:
- cc-by-4.0
tags:
- codeforces
- competitive-programming
- submissions
pretty_name: Codeforces Users Submissions (End of 2024)
description: >
This dataset contains anonymized submission data of ≈15,000 Codeforces
users, spanning from the inception of Codeforces to the end of November
2024.
download_size: 100MB
dataset_size: 1.2GB
license: cc-by-4.0
language:
- en
tags:
- codeforces
- submissions
size_categories:
- 10M<n<100M
---
# Codeforces Users Submissions Dataset (End of 2024)
This project provides the usersCodeforcesSubmissionsEnd2024.csv file, containing anonymized submission data of approximately 15,000 active Codeforces users. The dataset includes all submissions from the inception of Codeforces up to the end of November 2024. It is designed to support AI and data-driven projects. (All data was collected using the open Codeforces API)
## Dataset Overview
The file contains **17,607,999 rows** with the following columns:
- **`handle`**: An anonymized and shuffled user nickname (e.g., `user{i}`).
- **`rating_at_submission`**: User's rating at the time of submission.
- **`problem_rating`**: Problem difficulty rating.
- **`id_of_submission_task`**: Unique problem identifier on Codeforces.
- **`verdict`**: Result of the submission (e.g., `OK`, `WRONG_ANSWER`).
- **`time`**: Time of submission (in seconds since the Unix epoch).
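With those columns, one simple exploratory pass is to bucket submissions by `problem_rating` and compute the share of `OK` verdicts per bucket. The rows below are made up for illustration, following the column schema above:

```python
from collections import defaultdict

# Hypothetical (problem_rating, verdict) pairs following the schema above.
rows = [
    (800, "OK"), (800, "WRONG_ANSWER"), (800, "OK"),
    (1200, "OK"), (1200, "TIME_LIMIT_EXCEEDED"),
    (1600, "WRONG_ANSWER"),
]

def solve_rate_by_rating(subs):
    # Count accepted and total submissions per problem-rating bucket.
    ok = defaultdict(int)
    total = defaultdict(int)
    for rating, verdict in subs:
        total[rating] += 1
        if verdict == "OK":
            ok[rating] += 1
    return {r: ok[r] / total[r] for r in total}

print(solve_rate_by_rating(rows))
# e.g. {800: 0.666..., 1200: 0.5, 1600: 0.0}
```

The same grouping applied to the full CSV (e.g. via `pandas.groupby`) gives a quick picture of how acceptance rates fall as problem difficulty rises.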
## Purpose of the Dataset
1. **AI Development**: This dataset can be used to create intelligent systems to enhance user learning on Codeforces.
_(Example: The author of this dataset is currently working on the project "Codeforces User Analysis System for Generating Individual Training Recommendations," which aims to recommend tasks to users based on their weaknesses.)_
2. **Time Saving**: Collecting such data manually can be time-consuming (it took ≈7 hours for this dataset). By providing it in a ready-to-use format, we aim to save your time and effort.
3. **Reduce Server Load**: This dataset minimizes repetitive data scraping, thereby reducing the load on Codeforces servers.
## License
This dataset is shared under the [CC BY 4.0 License](https://creativecommons.org/licenses/by/4.0/). You are free to use it for your projects with proper attribution.
## How to Use
1. Download the `usersCodeforcesSubmissionsEnd2024.zip` file.
2. Unzip the file to access the `usersCodeforcesSubmissionsEnd2024.csv` dataset:
- On Linux/macOS: Use the `unzip` command in the terminal.
- On Windows: Right-click the file and select "Extract All."
3. Load the CSV file into your favorite data analysis tool:
```python
import pandas as pd
df = pd.read_csv("usersCodeforcesSubmissionsEnd2024.csv")
   # Good luck with your projects :)
   ```
 |
procit002/STT_2.0.1_generatedByRikesh_EnhancedData | procit002 | "2024-12-03T08:14:25Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T07:56:25Z" | ---
dataset_info:
features:
- name: gender
dtype: string
- name: accent
dtype: string
- name: speaker_id
dtype: int64
- name: speaker_name
dtype: string
- name: text
dtype: string
- name: normalized_text
dtype: string
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 22401105547.24
num_examples: 70252
- name: test
num_bytes: 2800032279.34
num_examples: 8782
- name: validation
num_bytes: 2777406854.47
num_examples: 8781
download_size: 26241693521
dataset_size: 27978544681.050003
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
sdiazlor/my-distiset-3b08370d | sdiazlor | "2024-12-03T08:01:13Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-03T08:01:09Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': neutral
'1': negative
'2': positive
splits:
- name: train
num_bytes: 370
num_examples: 1
download_size: 3801
dataset_size: 370
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-3b08370d
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-3b08370d/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-3b08370d/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 0,
"text": "I recently purchased the new laptop and although it has a lot of great features, I\u0027m not entirely satisfied with the battery life. It lasts about 6 hours, which is less than the 8 hours advertised. The keyboard is also a bit too small for my liking. However, the display is fantastic and the performance is fast. Overall, it\u0027s a good product but not perfect."
}
```
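For a quick sanity check without downloading anything, the example record above can be decoded with the standard library. This is only a sketch: the record is copied (shortened) from the card, and the id-to-name mapping is the `class_label` declared in the `dataset_info` block above (`0` → neutral, `1` → negative, `2` → positive).

```python
import json

# class_label mapping from the dataset_info section of this card.
LABEL_NAMES = {0: "neutral", 1: "negative", 2: "positive"}

# Shortened copy of the example record shown above.
example = json.loads('{"label": 0, "text": "Overall, it is a good product but not perfect."}')

# Resolve the integer label to its human-readable name.
print(LABEL_NAMES[example["label"]])  # -> neutral
```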
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-3b08370d", "default")
```
Or simply as follows, since there is only one configuration, named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-3b08370d")
```
</details>
|
ocisd4/language_detection | ocisd4 | "2024-12-03T08:01:20Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:01:09Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 65524866.82
num_examples: 196
download_size: 60589113
dataset_size: 65524866.82
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sdiazlor/my-distiset-29371374 | sdiazlor | "2024-12-03T08:05:28Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-03T08:05:24Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 543
num_examples: 1
download_size: 4937
dataset_size: 543
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-29371374
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-29371374/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-29371374/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 0,
"text": "The recent changes to the company\u0027s algorithm have resulted in a decrease in the efficiency of the recommendation system, leading to a substantial increase in the time spent by users searching for relevant content. The new system\u0027s reliance on opaque black box models hinders transparency and trust among users, which is critical for the success of the platform. Furthermore, the absence of clear explanations for the recommendations undermines the value proposition of the platform, causing potential users to turn to competitors."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-29371374", "default")
```
Or simply as follows, since there is only one configuration, named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-29371374")
```
</details>
|
sdiazlor/my-distiset-0732d793 | sdiazlor | "2024-12-03T08:07:02Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-03T08:06:58Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: instruction
dtype: string
- name: generations
sequence: string
- name: ratings_overall-rating
sequence: int64
- name: rationale_for_ratings_overall-rating
sequence: string
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 2659
num_examples: 1
download_size: 15201
dataset_size: 2659
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-0732d793
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-0732d793/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-0732d793/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"generations": [
" Sure, I\u0027d be happy to help! Here are the RDF triplets for the input sentence:\n\n[AFC Ajax (amateurs), hasGround, Sportpark De Toekomst]\n[Ajax Youth Academy, playsAt, Sportpark De Toekomst]\n\nExplanation:\n\n* AFC Ajax (amateurs) is the subject of the first triplet, and hasGround is the predicate that describes the relationship between AFC Ajax (amateurs) and Sportpark De Toekomst.\n* Ajax Youth Academy is the subject of the second triplet, and playsAt is the predicate that describes the relationship between Ajax Youth Academy and Sportpark De Toekomst.\n\nNote that there may be other possible RDF triplets that could be derived from the input sentence, but the above triplets capture the main relationships present in the sentence.",
"[\n [\"AFC Ajax (amateurs)\", \"has ground\", \"Sportpark De Toekomst\"],\n [\"Ajax Youth Academy\", \"plays at\", \"Sportpark De Toekomst\"]\n]"
],
"instruction": "You will be given a definition of a task first, then some input of the task.\nThis task is about using the specified sentence and converting the sentence to Resource Description Framework (RDF) triplets of the form (subject, predicate object). The RDF triplets generated must be such that the triplets accurately capture the structure and semantics of the input sentence. The input is a sentence and the output is a list of triplets of the form [subject, predicate, object] that capture the relationships present in the sentence. When a sentence has more than 1 RDF triplet possible, the output must contain all of them.\n\nAFC Ajax (amateurs)\u0027s ground is Sportpark De Toekomst where Ajax Youth Academy also play.\nOutput:",
"model_name": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"ratings_overall-rating": [
4,
5
],
"rationale_for_ratings_overall-rating": [
"The output provides accurate RDF triplets that capture the relationships between AFC Ajax (amateurs), Ajax Youth Academy, and Sportpark De Toekomst. Both triplets are relevant and correctly formatted. However, it could be argued that there is an additional possible triplet for Ajax Youth Academy\u0027s relationship with Sportpark De Toekomst, which is \"Ajax Youth Academy, hasGround, Sportpark De Toekomst\" since it\u0027s also mentioned that the \"ground\" is Sportpark De Toekomst. Nevertheless, the output is mostly correct, and the explanation provided clarifies the reasoning behind the chosen triplets.",
"The output is accurate, and the RDF triplets are correctly formatted. The two triplets captured accurately represent the relationships between AFC Ajax (amateurs), Ajax Youth Academy, and Sportpark De Toekomst. This output is more concise and directly presents the triplets without extra explanation, which is in line with the instruction\u0027s expectation that the output is a list of triplets."
]
}
```
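One common way to consume a preference record like the one above is to keep the generation with the highest overall rating. The sketch below uses the ratings `[4, 5]` from the example record; the generation strings are hypothetical placeholders standing in for the two full answers shown above.

```python
# Ratings copied from the example record's "ratings_overall-rating" field.
ratings = [4, 5]
# Placeholder stand-ins for the two generations in the record above.
generations = ["answer with explanation", "concise triplet list"]

# Index of the highest-rated generation.
best_idx = max(range(len(ratings)), key=ratings.__getitem__)
chosen = generations[best_idx]
print(best_idx)  # -> 1
```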
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-0732d793", "default")
```
Or simply as follows, since there is only one configuration, named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-0732d793")
```
</details>
|
chiyuanhsiao/Magpie_rank3_chunk9_interleaf | chiyuanhsiao | "2024-12-03T08:26:04Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:07:50Z" | ---
dataset_info:
features:
- name: uuid
dtype: string
- name: model
dtype: string
- name: gen_input_config
struct:
- name: temperature
dtype: float64
- name: top_p
dtype: float64
- name: input
dtype: string
- name: output
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: task_category
dtype: string
- name: difficulty
dtype: string
- name: intent
dtype: string
- name: knowledge
dtype: string
- name: input_quality
dtype: string
- name: quality_explanation
dtype: string
- name: llama_guard_2
dtype: string
- name: reward_model
dtype: string
- name: instruct_reward
dtype: float64
- name: base_output
dtype: string
- name: base_reward
dtype: float64
- name: reward_difference
dtype: float64
- name: min_neighbor_distance
dtype: float64
- name: repeat_count
dtype: int64
- name: min_similar_uuid
dtype: string
- name: input_length
dtype: int64
- name: output_length
dtype: int64
- name: input_speech
dtype: audio
- name: output_speech
dtype: audio
- name: output_speech_cmu-arctic-xvectors_7306
dtype: audio
- name: input_unit
sequence: int64
- name: output_unit
sequence: int64
- name: output_unit_7306
sequence: int64
- name: output_7306_interleaf
dtype: string
- name: output_pseudo
dtype: string
- name: input_pseudo
dtype: string
splits:
- name: train
num_bytes: 11933331070.5
num_examples: 10020
download_size: 11658373162
dataset_size: 11933331070.5
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sdiazlor/my-distiset-0732d7 | sdiazlor | "2024-12-03T08:09:13Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-03T08:09:10Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: generation
dtype: string
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 87
num_examples: 1
download_size: 2417
dataset_size: 87
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-0732d7
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-0732d7/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-0732d7/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"generation": "{\"shortness\" : 3}",
"model_name": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"prompt": "What\u0027s A?",
"response": "A letter"
}
```
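Note that in this record the `generation` field is itself a JSON string (`"{\"shortness\" : 3}"`), so it needs a second decoding step. A minimal stdlib sketch, using the values from the example above:

```python
import json

# Record copied from the example above; "generation" holds a nested JSON string.
record = {"prompt": "What's A?", "response": "A letter", "generation": '{"shortness" : 3}'}

# Decode the nested JSON payload into a Python dict.
parsed = json.loads(record["generation"])
print(parsed["shortness"])  # -> 3
```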
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-0732d7", "default")
```
Or simply as follows, since there is only one configuration, named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-0732d7")
```
</details>
|
chiyuanhsiao/Magpie_rank1_chunk9_interleaf | chiyuanhsiao | "2024-12-03T08:29:17Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:09:28Z" | ---
dataset_info:
features:
- name: uuid
dtype: string
- name: model
dtype: string
- name: gen_input_config
struct:
- name: temperature
dtype: float64
- name: top_p
dtype: float64
- name: input
dtype: string
- name: output
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: task_category
dtype: string
- name: difficulty
dtype: string
- name: intent
dtype: string
- name: knowledge
dtype: string
- name: input_quality
dtype: string
- name: quality_explanation
dtype: string
- name: llama_guard_2
dtype: string
- name: reward_model
dtype: string
- name: instruct_reward
dtype: float64
- name: base_output
dtype: string
- name: base_reward
dtype: float64
- name: reward_difference
dtype: float64
- name: min_neighbor_distance
dtype: float64
- name: repeat_count
dtype: int64
- name: min_similar_uuid
dtype: string
- name: input_length
dtype: int64
- name: output_length
dtype: int64
- name: input_speech
dtype: audio
- name: output_speech
dtype: audio
- name: output_speech_cmu-arctic-xvectors_7306
dtype: audio
- name: input_unit
sequence: int64
- name: output_unit
sequence: int64
- name: output_unit_7306
sequence: int64
- name: output_7306_interleaf
dtype: string
- name: output_pseudo
dtype: string
- name: input_pseudo
dtype: string
splits:
- name: train
num_bytes: 11883350233.75
num_examples: 10022
download_size: 11609076135
dataset_size: 11883350233.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
procit002/test_10_dataset | procit002 | "2024-12-03T08:09:52Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:09:50Z" | ---
dataset_info:
features:
- name: speaker_id
dtype: string
- name: speaker_name
dtype: string
- name: age
dtype: string
- name: accent
dtype: string
- name: language
dtype: string
- name: text
dtype: string
- name: audiopath
dtype: string
- name: gender
dtype: string
- name: audio
dtype: audio
- name: normalized_text
dtype: string
splits:
- name: train
num_bytes: 3643072.0
num_examples: 10
download_size: 3334451
dataset_size: 3643072.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
procit002/test_30_dataset | procit002 | "2024-12-03T08:10:05Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:10:04Z" | ---
dataset_info:
features:
- name: speaker_id
dtype: string
- name: speaker_name
dtype: string
- name: age
dtype: string
- name: accent
dtype: string
- name: language
dtype: string
- name: text
dtype: string
- name: audiopath
dtype: string
- name: gender
dtype: string
- name: audio
dtype: audio
- name: normalized_text
dtype: string
splits:
- name: train
num_bytes: 11828153.0
num_examples: 30
download_size: 10673102
dataset_size: 11828153.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
procit002/test_50_dataset | procit002 | "2024-12-03T08:10:17Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:10:15Z" | ---
dataset_info:
features:
- name: speaker_id
dtype: string
- name: speaker_name
dtype: string
- name: age
dtype: string
- name: accent
dtype: string
- name: language
dtype: string
- name: text
dtype: string
- name: audiopath
dtype: string
- name: gender
dtype: string
- name: audio
dtype: audio
- name: normalized_text
dtype: string
splits:
- name: train
num_bytes: 18820859.0
num_examples: 50
download_size: 17381323
dataset_size: 18820859.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Susan774/STT_2.0.1_generatedByRikesh_EnhancedData | Susan774 | "2024-12-03T08:33:17Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:14:52Z" | ---
dataset_info:
features:
- name: gender
dtype: string
- name: accent
dtype: string
- name: speaker_id
dtype: int64
- name: speaker_name
dtype: string
- name: text
dtype: string
- name: normalized_text
dtype: string
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 22401105547.24
num_examples: 70252
- name: test
num_bytes: 2800032279.34
num_examples: 8782
- name: validation
num_bytes: 2777406854.47
num_examples: 8781
download_size: 26241693521
dataset_size: 27978544681.050003
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_64_64_0.01_64_BestF1_it | ferrazzipietro | "2024-12-03T08:15:44Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:15:41Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228625
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_64_64_0.05_64_BestF1_it | ferrazzipietro | "2024-12-03T08:16:09Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:16:06Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228374
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_16_32_0.05_64_BestF1_it | ferrazzipietro | "2024-12-03T08:16:30Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:16:27Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228390
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_16_32_0.01_64_BestF1_it | ferrazzipietro | "2024-12-03T08:16:52Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:16:50Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228560
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_16_16_0.01_64_BestF1_it | ferrazzipietro | "2024-12-03T08:17:14Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:17:12Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228590
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
sdiazlor/my-distiset-2d7d7002 | sdiazlor | "2024-12-03T08:17:34Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-03T08:17:19Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': neutral
'1': positive
'2': negative
splits:
- name: train
num_bytes: 223
num_examples: 1
download_size: 2783
dataset_size: 223
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-2d7d7002
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-2d7d7002/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-2d7d7002/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 0,
"text": "The speaker\u0027s narrative employs a peculiar blend of nostalgic sentiment and a somewhat jaded tone, which creates an interesting dynamic but also makes it challenging to pinpoint a definitive emotional undertone."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-2d7d7002", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-2d7d7002")
```
</details>
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_16_64_0.01_64_BestF1_it | ferrazzipietro | "2024-12-03T08:17:37Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:17:34Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228346
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_32_16_0.05_64_BestF1_it | ferrazzipietro | "2024-12-03T08:18:00Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:17:57Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228579
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_32_64_0.05_64_BestF1_it | ferrazzipietro | "2024-12-03T08:18:23Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:18:21Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228253
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_16_64_0.05_64_BestF1_it | ferrazzipietro | "2024-12-03T08:18:45Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:18:43Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228278
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_64_32_0.01_64_BestF1_it | ferrazzipietro | "2024-12-03T08:19:09Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:19:07Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228602
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_32_32_0.01_64_BestF1_it | ferrazzipietro | "2024-12-03T08:19:31Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:19:29Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228514
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_64_32_0.05_64_BestF1_it | ferrazzipietro | "2024-12-03T08:19:55Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:19:53Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228434
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
dgambettaphd/D_gen5_run2_llama2-7b_wiki_doc1000_real32_synt96 | dgambettaphd | "2024-12-03T08:20:06Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:20:03Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 523943
num_examples: 1000
download_size: 288773
dataset_size: 523943
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_32_16_0.01_64_BestF1_it | ferrazzipietro | "2024-12-03T08:20:18Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:20:16Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228468
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_32_64_0.01_64_BestF1_it | ferrazzipietro | "2024-12-03T08:20:41Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:20:39Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228486
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_64_16_0.01_64_BestF1_it | ferrazzipietro | "2024-12-03T08:21:06Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:21:03Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228489
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
all-oj-gen/ds_coder_reflct_rmsprop_iter2_sppo_hard_new_all_oj_iter2-bin | all-oj-gen | "2024-12-03T08:21:27Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:21:25Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: rejected_traceback
dtype: string
- name: chosen_probs
dtype: float64
- name: chosen_probs_win
dtype: float64
- name: chosen_probs_lose
dtype: float64
splits:
- name: train
num_bytes: 21045501
num_examples: 5305
download_size: 9250215
dataset_size: 21045501
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_coder_reflct_rmsprop_iter2_sppo_hard_new_all_oj_iter2-bin"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_32_32_0.05_64_BestF1_it | ferrazzipietro | "2024-12-03T08:21:28Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:21:26Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228566
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
all-oj-gen/ds_coder_reflct_rmsprop_iter2_sppo_hard_new_all_oj_iter2-full_resp_trace | all-oj-gen | "2024-12-03T08:21:29Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:21:27Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: test
dtype: string
- name: tag
dtype: string
- name: id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: text_prompt
dtype: string
- name: text_chosen
dtype: string
- name: text_rejected
dtype: string
- name: generate_0
dtype: string
- name: generate_0_score
dtype: int64
- name: traceback_0
dtype: string
- name: generate_1
dtype: string
- name: generate_1_score
dtype: int64
- name: traceback_1
dtype: string
- name: generate_2
dtype: string
- name: generate_2_score
dtype: int64
- name: traceback_2
dtype: string
- name: generate_3
dtype: string
- name: generate_3_score
dtype: int64
- name: traceback_3
dtype: string
- name: probability
sequence:
sequence: float64
- name: rm_scores
sequence: int64
splits:
- name: train
num_bytes: 54080058
num_examples: 5305
download_size: 21032273
dataset_size: 54080058
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_coder_reflct_rmsprop_iter2_sppo_hard_new_all_oj_iter2-full_resp_trace"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
all-oj-gen/ds_coder_reflct_rmsprop_iter2_sppo_hard_new_all_oj_iter2-bin_all_pairs | all-oj-gen | "2024-12-03T08:21:30Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:21:29Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: rejected_traceback
dtype: string
- name: test
dtype: string
splits:
- name: train
num_bytes: 40862878
num_examples: 10169
download_size: 12285543
dataset_size: 40862878
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_coder_reflct_rmsprop_iter2_sppo_hard_new_all_oj_iter2-bin_all_pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_16_16_0.05_64_BestF1_it | ferrazzipietro | "2024-12-03T08:21:50Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:21:47Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228594
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_64_16_0.05_64_BestF1_it | ferrazzipietro | "2024-12-03T08:22:14Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:22:11Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228574
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_16_16_0.01_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:22:48Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:22:45Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248654
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_16_16_0.05_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:23:11Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:23:08Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248498
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_64_16_0.01_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:23:33Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:23:30Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248486
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_64_16_0.05_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:23:57Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:23:54Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248386
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_16_64_0.01_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:24:20Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:24:17Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248228
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_16_64_0.05_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:24:42Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:24:39Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248064
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_16_32_0.05_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:25:04Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:25:02Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248406
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_16_32_0.01_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:25:26Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:25:23Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248492
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_32_64_0.01_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:25:47Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:25:45Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248292
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_32_16_0.05_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:26:10Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:26:07Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248321
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_64_64_0.01_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:26:32Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:26:30Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248623
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_64_64_0.05_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:26:55Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:26:52Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248308
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_64_32_0.05_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:27:17Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:27:14Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248375
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_32_64_0.05_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:27:39Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:27:36Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248096
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_32_32_0.05_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:28:01Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:27:59Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248458
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
r1v3r/bitflags-filterbyLLM-verified | r1v3r | "2024-12-03T09:17:45Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:28:15Z" | ---
dataset_info:
features:
- name: repo
dtype: string
- name: issue_numbers
sequence: string
- name: version
dtype: string
- name: problem_statement
dtype: string
- name: hints_text
dtype: string
- name: created_at
dtype: string
- name: base_commit
dtype: string
- name: pull_number
dtype: int64
- name: instance_id
dtype: string
- name: test_patch
dtype: string
- name: patch
dtype: string
- name: environment_setup_commit
dtype: string
- name: FAIL_TO_PASS
sequence: string
- name: PASS_TO_PASS
sequence: string
- name: FAIL_TO_FAIL
sequence: string
- name: PASS_TO_FAIL
sequence: 'null'
splits:
- name: train
num_bytes: 146844
num_examples: 8
download_size: 53602
dataset_size: 146844
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_32_32_0.01_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:28:23Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:28:21Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248385
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_64_32_0.01_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:28:46Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:28:44Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248339
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_32_16_0.01_64_BestF1_sk | ferrazzipietro | "2024-12-03T08:29:08Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:29:05Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248376
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_16_32_0.01_64_BestF1_sl | ferrazzipietro | "2024-12-03T08:29:44Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:29:40Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 149854
num_examples: 101
- name: test
num_bytes: 1063090
num_examples: 654
download_size: 242053
dataset_size: 1212944
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_64_16_0.05_64_BestF1_sl | ferrazzipietro | "2024-12-03T08:30:08Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:30:05Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 149854
num_examples: 101
- name: test
num_bytes: 1063090
num_examples: 654
download_size: 242025
dataset_size: 1212944
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_64_64_0.05_64_BestF1_sl | ferrazzipietro | "2024-12-03T08:30:31Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:30:28Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 149854
num_examples: 101
- name: test
num_bytes: 1063090
num_examples: 654
download_size: 241970
dataset_size: 1212944
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_64_64_0.01_64_BestF1_sl | ferrazzipietro | "2024-12-03T08:30:53Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:30:51Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 149854
num_examples: 101
- name: test
num_bytes: 1063090
num_examples: 654
download_size: 242327
dataset_size: 1212944
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_32_64_0.01_64_BestF1_sl | ferrazzipietro | "2024-12-03T08:31:15Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:31:13Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 149854
num_examples: 101
- name: test
num_bytes: 1063090
num_examples: 654
download_size: 242042
dataset_size: 1212944
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_32_16_0.05_64_BestF1_sl | ferrazzipietro | "2024-12-03T08:31:37Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:31:35Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 149854
num_examples: 101
- name: test
num_bytes: 1063090
num_examples: 654
download_size: 241988
dataset_size: 1212944
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_16_16_0.05_64_BestF1_sl | ferrazzipietro | "2024-12-03T08:31:59Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:31:57Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 149854
num_examples: 101
- name: test
num_bytes: 1063090
num_examples: 654
download_size: 242045
dataset_size: 1212944
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_16_64_0.05_64_BestF1_sl | ferrazzipietro | "2024-12-03T08:32:24Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:32:21Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 149854
num_examples: 101
- name: test
num_bytes: 1063090
num_examples: 654
download_size: 241586
dataset_size: 1212944
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_d30e7a76-647b-4e0a-9d8f-3327d2a3ccd9 | argilla-internal-testing | "2024-12-03T08:32:33Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:32:32Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_48503ed1-9c86-4d10-a41f-b40823ff650d | argilla-internal-testing | "2024-12-03T08:32:40Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:32:39Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_32_64_0.05_64_BestF1_sl | ferrazzipietro | "2024-12-03T08:32:45Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:32:43Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 149854
num_examples: 101
- name: test
num_bytes: 1063090
num_examples: 654
download_size: 241799
dataset_size: 1212944
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_bd080b98-5420-46fe-9e41-2a4cf3fb054e | argilla-internal-testing | "2024-12-03T08:33:00Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:32:58Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_0c958668-a039-42ce-bc3f-abc303522e12 | argilla-internal-testing | "2024-12-03T08:33:02Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:33:01Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_ae395be0-6c08-4531-8698-3c13ad1eed43 | argilla-internal-testing | "2024-12-03T08:33:01Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:33:01Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-all-revised_NoQuant_16_16_0.01_64_BestF1_sl | ferrazzipietro | "2024-12-03T08:33:08Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:33:05Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 149854
num_examples: 101
- name: test
num_bytes: 1063090
num_examples: 654
download_size: 242317
dataset_size: 1212944
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_fd26c500-4f5c-4d05-a20e-4dfd6147eff5 | argilla-internal-testing | "2024-12-03T08:33:07Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:33:06Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_9882ad92-3c06-4ebe-a6dd-664cd9a43922 | argilla-internal-testing | "2024-12-03T08:33:09Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:33:07Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_5c2cf429-f784-47aa-a90b-56ca191017e7 | argilla-internal-testing | "2024-12-03T08:33:08Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T08:33:08Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|