sha | text | id | tags | created_at | metadata | last_modified
---|---|---|---|---|---|---|
c9c5f26698bc6a2dcf5ad6c6f71091b74718bdce | abdulhady/ckb | [
"license:other",
"region:us"
] | 2022-04-03T09:49:55+00:00 | {"license": "other"} | 2022-04-03T09:52:39+00:00 |
|
c1c124ba6da774db9f83a25fc3f2ee70aa4400d1 |
# Dataset Card for Architext
## Dataset Description
This is the raw training data used to train the Architext models referenced in "Architext: Language-Driven Generative Architecture Design".
- **Homepage:** https://architext.design/
- **Paper:** https://arxiv.org/abs/2303.07519
- **Point of Contact:** Theodoros Galanos (https://twitter.com/TheodoreGalanos)
## Dataset Creation
The data were synthetically generated by a parametric design script in Grasshopper 3D, a virtual algorithmic environment in the design software Rhinoceros 3D.
## Considerations for Using the Data
The data describe one instance of architectural design, specifically layout generation for residential apartments. Even in that case, the data are limited in the possible shapes they can represent, in size, and in typologies. Additionally, the annotations used as language prompts to generate a design are restricted to automatically generated annotations based on layout characteristics (adjacency, typology, number of spaces).
### Licensing Information
The dataset is licensed under the Apache 2.0 license.
### Citation Information
If you use the dataset please cite:
```
@article{galanos2023architext,
title={Architext: Language-Driven Generative Architecture Design},
author={Galanos, Theodoros and Liapis, Antonios and Yannakakis, Georgios N},
journal={arXiv preprint arXiv:2303.07519},
year={2023}
}
``` | THEODOROS/Architext_v1 | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"architecture",
"architext",
"arxiv:2303.07519",
"region:us"
] | 2022-04-03T11:03:17+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "architext_v1", "tags": ["architecture", "architext"]} | 2023-05-21T06:19:23+00:00 |
b5abf557ea371d452dfe7e6847f9f1f2e6f31ef4 | # RockingFace
A distribution of the Amp-Space Dataset: x/y audio pairs of input signals and the recorded outputs after processing by an audio effect (amplifier, stompbox, studio tools, etc.) | narad/rockingface | [
"region:us"
] | 2022-04-03T11:21:33+00:00 | {} | 2022-10-03T22:34:40+00:00 |
aa4acbaa7537aa9ae6dc5447dc82e59146ec083e | A synthetic dataset for GAN experiments.
Created with a CLOOB Conditioned Latent Diffusion model (https://github.com/JD-P/cloob-latent-diffusion)
For each color in a list of standard CSS color names, a set of images was generated using the following command:
```
python cfg_sample.py --autoencoder autoencoder_kl_32x32x4\
--checkpoint yfcc-latent-diffusion-f8-e2-s250k.ckpt\
--method plms\
--cond-scale 1.0\
--seed 34\
--steps 25\
-n 36\
"A glass orb with {color} spacetime fire burning inside"
```
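A minimal sketch of the outer loop over colors is shown below, assuming matplotlib's `CSS4_COLORS` as a stand-in for the list of standard CSS color names (the exact list used is not stated):
```python
import subprocess
from matplotlib.colors import CSS4_COLORS  # stand-in for "standard CSS color names"

for color in CSS4_COLORS:  # e.g. "crimson", "teal", ...
    prompt = f"A glass orb with {color} spacetime fire burning inside"
    subprocess.run(
        ["python", "cfg_sample.py",
         "--autoencoder", "autoencoder_kl_32x32x4",
         "--checkpoint", "yfcc-latent-diffusion-f8-e2-s250k.ckpt",
         "--method", "plms", "--cond-scale", "1.0",
         "--seed", "34", "--steps", "25", "-n", "36",
         prompt],
        check=True,  # stop if a generation run fails
    )
```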
| johnowhitaker/colorbs | [
"region:us"
] | 2022-04-03T11:24:32+00:00 | {} | 2022-04-04T05:52:33+00:00 |
39816326bf8c3499e150a27e13336760e7c3d904 |
## Description
An adaptation of the [eHealth-KD Challenge 2020 dataset](https://knowledge-learning.github.io/ehealthkd-2020/), filtered only for the task of NER. Some adaptations of the original dataset have been made:
- BIO annotations
- Error fixes
- Overlapping entities have been processed as a single entity
## Dataset loading
```python
from datasets import load_dataset

datasets = load_dataset('json', data_files={'train': ['@YOUR_PATH@/training_anns_bio.json'], 'testing': ['@YOUR_PATH@/testing_anns_bio.json'], 'validation': ['@YOUR_PATH@/development_anns_bio.json']})
``` | fmmolina/eHealth-KD-Adaptation | [
"license:afl-3.0",
"region:us"
] | 2022-04-03T13:04:06+00:00 | {"license": "afl-3.0"} | 2022-04-11T06:16:13+00:00 |
193e876ac72cd6b2a7e1ec68ddec7915ef5ff324 |
# Dataset Card for [readability-es-caes]
## Dataset Description
### Dataset Summary
This dataset is a compilation of short articles from websites dedicated to learn Spanish as a second language. These articles have been compiled from the following sources:
- [CAES corpus](http://galvan.usc.es/caes/) (Martínez et al., 2019): the "Corpus de Aprendices del Español" is a collection of texts produced by Spanish L2 learners from Spanish learning centers and universities. These texts are produced by students of all levels (A1 to C1), with different backgrounds (11 native languages) and levels of experience.
### Languages
Spanish
## Dataset Structure
Texts are tokenized to create a paragraph-based dataset.
### Data Fields
The dataset is formatted as JSON lines and includes the following fields:
- **Category:** when available, this includes the level of this text according to the Common European Framework of Reference for Languages (CEFR).
- **Level:** standardized readability level: simple or complex.
- **Level-3:** standardized readability level: basic, intermediate or advanced.
- **Text:** original text formatted into sentences.
## Additional Information
### Licensing Information
https://creativecommons.org/licenses/by-nc-sa/4.0/
### Citation Information
Please cite this page to give credit to the authors :)
### Team
- [Laura Vásquez-Rodríguez](https://lmvasque.github.io/)
- [Pedro Cuenca](https://twitter.com/pcuenq)
- [Sergio Morales](https://www.fireblend.com/)
- [Fernando Alva-Manchego](https://feralvam.github.io/)
| hackathon-pln-es/readability-es-caes | [
"task_categories:text-classification",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:es",
"license:cc-by-4.0",
"readability",
"region:us"
] | 2022-04-03T20:42:19+00:00 | {"annotations_creators": ["other"], "language_creators": ["other"], "language": ["es"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "readability-es-caes", "tags": ["readability"]} | 2023-04-13T07:51:40+00:00 |
ed0fe1b82f32972a3312f1b5b75c9f97650ce56e |
# Dataset Card for "unam_tesis"
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
- [yiselclavel@gmail.com](mailto:yiselclavel@gmail.com)
- [isaac7isaias@gmail.com](mailto:isaac7isaias@gmail.com)
### Dataset Summary
The unam_tesis dataset contains 1,000 theses from 5 degree programs at the Universidad Nacional Autónoma de México (UNAM), 200 per program. The intention is to keep growing this dataset with the remaining programs and more theses.
### Supported Tasks and Leaderboards
text-classification
### Languages
Spanish (es)
## Dataset Structure
### Data Instances
Dataset instances have the following form (reproduced verbatim, in Spanish, as stored):
```
El objetivo de esta tesis es elaborar un estudio de las condiciones asociadas al aprendizaje desde casa a nivel preescolar y primaria en el municipio de Nicolás Romero a partir de la cancelación de clases presenciales ante la contingencia sanitaria del Covid-19 y el entorno familiar del estudiante. En México, la Encuesta para la Medición del Impacto COVID-19 en la Educación (ECOVID-ED) 2020, es un proyecto que propone el INEGI y realiza de manera especial para conocer las necesidades de la población estudiantil de 3 a 29 años de edad, saber qué está sucediendo con su entorno inmediato, las condiciones en las que desarrollan sus actividades académicas y el apoyo que realizan padres, tutores o cuidadores principales de las personas en edad formativa. La ECOVID-ED 2020 se llevó a cabo de manera especial con el objetivo de conocer el impacto de la cancelación provisional de clases presenciales en las instituciones educativas del país para evitar los contagios por la pandemia COVID-19 en la experiencia educativa de niños, niñas, adolescentes y jóvenes de 3 a 29 años, tanto en el ciclo escolar 2019-2020, como en ciclo 2020-2021. En este ámbito de investigación, el Instituto de Investigaciones sobre la Universidad y la Educación (IISUE) de la Universidad Nacional Autónoma de México público en 2020 la obra “Educación y Pandemia: Una visión académica” que se integran 34 trabajos que abordan la muy amplia temática de la educación y la universidad con reflexiones y ejercicios analíticos estrechamente relacionadas en el marco coyuntural de la pandemia COVID-19. La tesis se presenta en tres capítulos: En el capítulo uno se realizará una descripción del aprendizaje de los estudiantes a nivel preescolar y primaria del municipio de NicolásRomero, Estado de México, que por motivo de la contingencia sanitaria contra el Covid-19 tuvieron que concluir su ciclo académico 2019-2020 y el actual ciclo 2020-2021 en su casa debido a la cancelación provisional de clases presenciales y bajo la tutoría de padres, familiar o ser cercano; así como las horas destinadas al estudio y las herramientas tecnológicas como teléfonos inteligentes, computadoras portátiles, computadoras de escritorio, televisión digital y tableta. En el capítulo dos, se presentarán las herramientas necesarias para la captación de la información mediante técnicas de investigación social, a través de las cuales se mencionará, la descripción, contexto y propuestas del mismo, considerando los diferentes tipos de cuestionarios, sus componentes y diseño, teniendo así de manera específica la diversidad de ellos, que llevarán como finalidad realizar el cuestionario en línea para la presente investigación. Posteriormente, se podrá destacar las fases del diseño de la investigación, que se realizarán mediante una prueba piloto tomando como muestra a distintos expertos en el tema. De esta manera se obtendrá la información relevante para estudiarla a profundidad. En el capítulo tres, se realizará el análisis apoyado de las herramientas estadísticas, las cuales ofrecen explorar la muestra de una manera relevante, se aplicará el método inferencial para expresar la información y predecir las condiciones asociadas al autoaprendizaje, la habilidad pedagógica de padres o tutores, la convivencia familiar, la carga académica y actividades escolares y condicionamiento tecnológico,con la finalidad de inferir en la población. Asimismo, se realizarán pruebas de hipótesis, tablas de contingencia y matriz de correlación.
Por consiguiente, los resultados obtenidos de las estadísticas se interpretarán para describir las condiciones asociadas y como impactan en la enseñanza de preescolar y primaria desde casa.|María de los Ángeles|Blancas Regalado|Análisis de las condiciones del aprendizaje desde casa en los alumnos de preescolar y primaria del municipio de Nicolás Romero |2022|Actuaría
```
| Degree program | Number of instances |
|--------------|----------------------|
| Actuaría | 200 |
| Derecho| 200 |
| Economía| 200 |
| Psicología| 200 |
| Química Farmacéutico Biológica| 200 |
### Data Fields
The dataset consists of the following fields: "texto|titulo|carrera". <br/>
texto: the introduction text of the thesis. <br/>
titulo: the title of the thesis. <br/>
carrera: the name of the degree program the thesis belongs to. <br/>
### Data Splits
The dataset has 2 splits: training (train) and test (test).
| Split | Number of instances |
|--------------|-------------------|
| Train | 800 |
| Test | 200 |
## Dataset Creation
### Curation Rationale
The creation of this dataset was motivated by participation in the Hackathon 2022 de PLN en Español organized by Somos NLP, whose goal is to democratize Spanish NLP and promote its application to good causes, and by the fact that no thesis dataset existed in Spanish.
### Source Data
#### Initial Data Collection and Normalization
The original dataset (dataset_tesis) was created through a scraping process that extracted theses of the Universidad Nacional Autónoma de México from the following link: https://tesiunam.dgb.unam.mx/F?func=find-b-0&local_base=TES01.
A scraper was chosen to gather the information, targeting the TESIUNAM database, a catalog of the theses of candidates who obtained a degree at UNAM, as well as undergraduate theses from schools affiliated with it.
First, the University's academic offering (http://oferta.unam.mx/indice-alfabetico.html) was consulted, from which all 131 undergraduate programs were extracted as a list. Each case in the database was then analyzed, since some programs have more than 10 theses, others fewer than 10, and some only one or no thesis available. Selenium was used to drive a web browser (Edge), and the scraper is currently configured to fetch the first 20 theses, or fewer, per program.
From this database the scraper obtains:
- Author's first names
- Author's last names
- Thesis title
- Thesis year
- Thesis degree program
The scraper also downloads each thesis into the local machine's Downloads folder. The "Resumen/Introduccion/Conclusion de la tesis" field (abstract, introduction or conclusion, whichever was available first) was added to the CSV produced by the scraper; the difficulty lies in the differing structure and format of each thesis.
#### Who are the source language producers?
The data are produced manually by humans, in this case UNAM students, and reviewed by their supervisors.
### Annotations
The dataset was processed to remove information unnecessary for the classifiers. The original dataset has the following fields: "texto|autor_nombre|autor_apellido|titulo|año|carrera".
#### Annotation process
First, 200 theses were extracted from each of 5 of the university's degree programs: Actuaría, Derecho, Economía, Psicología and Química Farmacéutico Biológica. From these, the introduction, author's first names, author's last names, thesis title and degree program were extracted. The data were reviewed and cleaned by the authors.
The dataset was then processed with the following Natural Language Processing steps (dataset_tesis_procesado); a minimal sketch follows the list:
- lowercasing
- tokenization
- removal of non-alphanumeric words
- stopword removal
- stemming: removing plurals
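The card does not include the preprocessing code itself; the following is a minimal NLTK sketch of the steps above (the library choice and function names are assumptions, not the authors' actual code):
```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import SnowballStemmer

nltk.download("punkt")
nltk.download("stopwords")

stemmer = SnowballStemmer("spanish")
stop_words = set(stopwords.words("spanish"))

def preprocess(text):
    tokens = nltk.word_tokenize(text.lower(), language="spanish")  # lowercase + tokenize
    tokens = [t for t in tokens if t.isalnum()]                    # keep alphanumeric tokens only
    tokens = [t for t in tokens if t not in stop_words]            # drop Spanish stopwords
    return [stemmer.stem(t) for t in tokens]                       # stem, collapsing plurals
```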
#### Who are the annotators?
The annotations were made by humans, in this case the authors of the dataset, using Python code.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
This dataset will support search and research on theses in Spanish through automatic categorization by a model trained on it. This task furthers UN Sustainable Development Goal 4: Quality Education (https://www.un.org/sustainabledevelopment/es/objetivos-de-desarrollo-sostenible/).
### Discussion of Biases
The text has some encoding errors, so some characters, such as accented letters, do not display correctly. Words containing these characters are removed during preprocessing until the issue is fixed.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Team members (Hugging Face users):
[Isacc Isahias López López](https://huggingface.co/MajorIsaiah)
[Yisel Clavel Quintero](https://huggingface.co/clavel)
[Dionis López](https://huggingface.co/inoid)
[Ximena Yeraldin López López](https://huggingface.co/Ximyer)
### Licensing Information
Version 1.0.0 of the unam_tesis dataset is released under the <a href='http://www.apache.org/licenses/LICENSE-2.0'>Apache-2.0 License</a>.
### Citation Information
"Esta base de datos se ha creado en el marco del Hackathon 2022 de PLN en Español organizado por Somos NLP patrocinado por Platzi, Paperspace y Hugging Face: https://huggingface.co/hackathon-pln-es."
Para citar este dataset, por favor, use el siguiente formato de cita:
@inproceedings{Hackathon 2022 de PLN en Español,
title={UNAM's Theses with BETO fine-tuning classify},
author={López López, Isaac Isaías; Clavel Quintero, Yisel; López Ramos, Dionis & López López, Ximena Yeraldin},
booktitle={Hackathon 2022 de PLN en Español},
year={2022}
}
### Contributions
Thanks to [@yiselclavel](https://github.com/yiselclavel) and [@IsaacIsaias](https://github.com/IsaacIsaias) for adding this dataset.
| hackathon-pln-es/unam_tesis | [
"task_categories:text-classification",
"task_ids:language-modeling",
"annotations_creators:MajorIsaiah",
"annotations_creators:Ximyer",
"annotations_creators:clavel",
"annotations_creators:inoid",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:n=200",
"source_datasets:original",
"language:es",
"license:apache-2.0",
"region:us"
] | 2022-04-03T22:25:31+00:00 | {"annotations_creators": ["MajorIsaiah", "Ximyer", "clavel", "inoid"], "language_creators": ["crowdsourced"], "language": ["es"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["n=200"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["language-modeling"], "pretty_name": "UNAM Tesis"} | 2023-10-11T13:57:54+00:00 |
e94ac1f1b72be4a83408f20a8d49ffd98e9724b1 | # Reddit data extraction
All thread titles from several Spanish-language Reddit communities were downloaded, covering March 2017 to January 2022:
| Community | No. of threads |
|----------------------------|-------------|
|AskRedditespanol | 28072 |
| BOLIVIA | 4935 |
| PERU | 20735 |
| argentina | 214986 |
| chile | 69077 |
|espanol | 39376 |
| mexico | 136984 |
| preguntaleareddit | 37300 |
| uruguay | 55693 |
| vzla | 42909 |
# Labels
Some of the threads were then manually labeled as AMA vs. non-AMA.
757 threads were labeled (AMA: 290, non-AMA: 458), following a query-by-committee strategy.
This can be inspected in the file `etiqueta_ama.csv`.
Using these 757 threads, a label-spreading algorithm was run to identify the remaining AMA threads, giving a total of 3,519 threads.
This can be inspected in the file `autoetiquetado_ama.csv`.
To identify the professions of the thread authors, the following list was used:
https://raw.githubusercontent.com/davoclavo/adigmatangadijolachanga/master/profesiones.txt
To cover all possibilities, both the "-a" and "-o" endings of every profession were added.
Similar professions were then grouped to obtain a comparable number of threads per profession, using the following dictionary:
```
sinonimos = {
'sexologo': 'psicologo',
'enfermero': 'medico',
'farmaceutico': 'medico',
'cirujano': 'medico',
'doctor': 'medico',
'radiologo': 'medico',
'dentista': 'odontologo',
'matron': 'medico',
'patologo': 'medico',
'educador': 'profesor',
'maestro': 'profesor',
'programador': 'ingeniero',
'informatico': 'ingeniero',
'juez': 'abogado',
'fiscal': 'abogado',
'oficial': 'abogado',
'astronomo': 'ciencias',
'fisico': 'ciencias',
'ecologo': 'ciencias',
'filosofo': 'ciencias',
'biologo': 'ciencias',
'zoologo': 'ciencias',
'quimico': 'ciencias',
'matematico': 'ciencias',
'meteorologo': 'ciencias',
'periodista': 'humanidades',
'dibujante': 'humanidades',
'fotografo': 'humanidades',
'traductor': 'humanidades',
'presidente': 'jefe',
'gerente': 'jefe'
}
```
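A minimal sketch of how this dictionary collapses raw profession labels into their groups, using the `sinonimos` mapping above (the helper name is illustrative, not from the project):
```python
def agrupar_profesion(profesion: str) -> str:
    """Map a raw profession label to its group, leaving unmapped labels unchanged."""
    return sinonimos.get(profesion, profesion)

agrupar_profesion("dentista")  # -> "odontologo"
agrupar_profesion("abogado")   # -> "abogado" (already a group label)
```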
All comments from AMA threads mentioning any of these professions were downloaded, then filtered to keep only comments containing a question mark and having a reply from the thread author, forming question-answer pairs.
Finally, all professions with more than 200 question-answer pairs were kept, amounting to around 3,000 question-answer pairs in total.
This can be inspected in the file `qa_corpus_profesion.csv`. | hackathon-pln-es/ITAMA-DataSet | [
"region:us"
] | 2022-04-04T00:21:26+00:00 | {} | 2022-04-04T02:32:20+00:00 |
66d3e93c84abc82d96ad84beb30bef404f0957ac | The dataset was built on 2022/03/29 to help improve the representation of the Spanish language in NLP tasks on the Hugging Face platform.
The dataset contains 2,471 tweets retrieved by their tweet_id. It includes the following columns:
- Column 1 (Status_id): the tweet's unique identification number on the social network.
- Column 2 (text): the Spanish text linked to the corresponding "Status_id", used to perform the sexism analysis.
- Column 3 (Category): the label assigned when analyzing the text, one of three categories: SEXIST, NON_SEXIST, DOUBTFUL.
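Given those columns, a minimal pandas sketch for checking the class balance (the file name is hypothetical; the card does not state how the data is packaged):
```python
import pandas as pd

# Hypothetical file name for the published table.
df = pd.read_csv("sexism_twitter_metwo.csv")

# Distribution over the three labels: SEXIST, NON_SEXIST, DOUBTFUL.
print(df["Category"].value_counts())
```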
The dataset builds on the previous work of F. Rodríguez-Sánchez, J. Carrillo-de-Albornoz and L. Plaza on the MeTwo Machismo and Sexism Twitter Identification dataset.
For more information on the categorization process check: https://ieeexplore.ieee.org/document/9281090 | ManRo/Sexism_Twitter_MeTwo | [
"license:apache-2.0",
"region:us"
] | 2022-04-04T01:15:01+00:00 | {"license": "apache-2.0"} | 2022-04-04T10:46:05+00:00 |
31a4e42369811edeaf35dc8c28ca148fa3eeb496 | amandakonet/climate_fever_adopted | [
"region:us"
] | 2022-04-04T01:55:52+00:00 | {} | 2022-04-16T21:41:13+00:00 |
|
ad894516a8db0f6d292da5b7194b2729f47c02f9 |
Using Google Translate, we have translated the SQuAD 2.0 dataset into multiple languages.
Here is the French translation of SQuAD 2.0.
Shared by [Pragnakalp Techlabs](https://www.pragnakalp.com) | pragnakalp/squad_v2_french_translated | [
"multilinguality:monolingual",
"multilinguality:translation",
"language:fr",
"license:apache-2.0",
"region:us"
] | 2022-04-04T04:44:07+00:00 | {"language": "fr", "license": "apache-2.0", "multilinguality": ["monolingual", "translation"]} | 2022-08-29T06:49:15+00:00 |
8d5f91d054aafc2a98eacfc2715c031113cd1bc0 | Kaggle-based dataset for text classification. The data has been cleaned and preprocessed, ready to feed into any classification model. This is just 40% of the entire dataset. | ikekobby/40-percent-cleaned-preprocessed-fake-real-news | [
"region:us"
] | 2022-04-04T08:26:47+00:00 | {} | 2022-04-04T08:41:40+00:00 |
dae4dcc041f173bc7134be9d562d0f996693aa07 | # Neural Audio Fingerprint Dataset
(c) 2021 by Sungkyun Chang
https://github.com/mimbres/neural-audio-fp
This dataset includes all music sources, background noise and impulse-response
(IR) samples that have been used in the work ["Neural Audio Fingerprint for
High-specific Audio Retrieval based on Contrastive Learning"]
(https://arxiv.org/abs/2010.11910).
### Format:
16-bit PCM Mono WAV, Sampling rate 8000 Hz
### Description:
```
/
fingerprint_dataset_icassp2021/
├── aug
│ ├── bg <=== Pub/cafe etc. background noise mix
│ ├── ir <=== IR data for microphone and room reverb simulation
│ └── speech <=== English conversation, NOT USED IN THE PAPER RESULT
├── extras
│ └── fma_info <=== Meta data for music sources.
└── music
├── test-dummy-db-100k-full <== 100K songs of full-lengths
├── test-query-db-500-30s <== 500 songs (30s) and 2K synthesized queries
├── train-10k-30s <== 10K songs (30s) for training
└── val-query-db-500-30s <== 500 songs (30s) for validation/mini-search
```
### Data source:
• Background noise from Audioset was retrieved using the keywords ['subway',
'metro', 'underground', 'not music']
• Cochlear.ai pub-noise was recorded at Starbucks branches in Seoul by
Jeongsoo Park.
• Random noise was generated by Donmoon Lee.
• Room/space IR data was collected from Aachen IR and OpenAIR 1.4 dataset.
• Portions of MIC IRs were from Vintage MIC (http://recordinghacks.com/), and
pre-processed with room/space IR data.
• Portions of MIC IRs were recorded by Donmoon Lee, Jeongsoo Park and Hyungui Lim
using mobile devices in the anechoic chamber at Seoul National University.
• All music sources were taken from the Free Music Archive (FMA) data set,
and converted from `stereo 44Khz` to `mono 8Khz`.
• train-10k-30s contains all 8K songs from FMA_small. The remaining 2K songs
were from FMA_medium.
• val- and test- data were isolated from train-, and taken from FMA_medium.
• test-query-db-500-30s/query consists of the pre-synthesized 2,000 queries
of 10s each (SNR 0~10dB) and their corresponding 500 songs of 30s each.
• Additionally, query_fixed_SNR directory contains synthesized queries with
fixed SNR of 0dB and -3dB.
• dummy-db-100k was taken from FMA_full, and duplicates with other sets were
removed.
### License:
This dataset is distributed under the CC BY-SA 2.0 license separately from the
github source code, and licenses for composites from other datasets are
attached to each sub-directory.
| arch-raven/music-fingerprint-dataset | [
"arxiv:2010.11910",
"region:us"
] | 2022-04-04T09:06:23+00:00 | {} | 2022-04-05T10:48:05+00:00 |
b53451cabd92c2481263cf09a14a670e1d8e6f8c |
# Dataset Card for [readability-es-sentences]
## Dataset Description
Compilation of short Spanish articles for readability assessment.
### Dataset Summary
This dataset is a compilation of short articles from websites dedicated to learn Spanish as a second language. These articles have been compiled from the following sources:
- **Coh-Metrix-Esp corpus (Quispesaravia, et al., 2016):** collection of 100 parallel texts with simple and complex variants in Spanish. These texts include children's and adult stories to fulfill each category.
- **[kwiziq](https://www.kwiziq.com/):** a language learner assistant
- **[hablacultura.com](https://hablacultura.com/):** Spanish resources for students and teachers. We have downloaded the available content in their websites.
### Languages
Spanish
## Dataset Structure
The dataset includes 1,019 text entries between 80 and 8,714 characters long. The vast majority (97%) are below 4,000 characters long.
### Data Fields
The dataset is formatted as JSON lines and includes the following fields:
- **Category:** when available, this includes the level of this text according to the Common European Framework of Reference for Languages (CEFR).
- **Level:** standardized readability level: complex or simple.
- **Level-3:** standardized readability level: basic, intermediate or advanced.
- **Text:** original text formatted into sentences.
Not all the entries contain usable values for `category`, `level` and `level-3`, but all of them should contain at least one of `level`, `level-3`. When the corresponding information could not be derived, we use the special `"N/A"` value to indicate so.
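A minimal sketch of loading the dataset and dropping entries whose binary label is the special `"N/A"` value (the `"train"` split name and lower-case field names are assumptions):
```python
from datasets import load_dataset

ds = load_dataset("hackathon-pln-es/readability-es-hackathon-pln-public")

# Keep only entries with a usable binary readability label ("simple"/"complex").
binary = ds["train"].filter(lambda ex: ex["level"] != "N/A")
```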
## Additional Information
### Licensing Information
https://creativecommons.org/licenses/by-nc-sa/4.0/
### Citation Information
Please cite this page to give credit to the authors :)
### Team
- [Laura Vásquez-Rodríguez](https://lmvasque.github.io/)
- [Pedro Cuenca](https://twitter.com/pcuenq)
- [Sergio Morales](https://www.fireblend.com/)
- [Fernando Alva-Manchego](https://feralvam.github.io/)
| hackathon-pln-es/readability-es-hackathon-pln-public | [
"task_categories:text-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:es",
"license:cc-by-4.0",
"readability",
"region:us"
] | 2022-04-04T09:26:51+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["es"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "readability-es-sentences", "tags": ["readability"]} | 2023-04-13T07:51:15+00:00 |
2e1b744445b279b21a6d1aeacfb3dff8d2acf7fa | This dataset contains images from iNaturalist of butterflies (superfamily Papilionoidea) with at least one fave. Check the descriptions - some images have a licence like CC-BY-NC and can't be used for commercial purposes.
The list of observations was exported from iNaturalist after a query similar to https://www.inaturalist.org/observations?place_id=any&popular&taxon_id=47224
The images were downloaded with img2dataset and uploaded to the huggingface hub by @johnowhitaker using this colab notebook: https://colab.research.google.com/drive/14qwFV_G4dh6evizzqHP08qDUAHtzfuiW?usp=sharing
The goal is to have a dataset of butterflies in different poses and settings, to use for GAN training and to compare with datasets built with museum collections of pinned specimens (which tend to be much cleaner and have more consistency of pose etc)
I'm not familiar with the nuances of creative commons licencing but you may wish to filter out images which are no-derivatives (CC-...-ND) when training a GAN or creating new images. | huggan/inat_butterflies | [
"region:us"
] | 2022-04-04T09:34:36+00:00 | {} | 2022-04-04T09:53:19+00:00 |
d73ccef8b255c317a226912071e92b272c55dc43 |
# Dataset Card for "huggingartists/olga-buzova"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.164278 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/efacbc8bb2d22ab78e494539bba61b3e.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/olga-buzova">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ольга Бузова (Olga Buzova)</div>
<a href="https://genius.com/artists/olga-buzova">
<div style="text-align: center; font-size: 14px;">@olga-buzova</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/olga-buzova).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/olga-buzova")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|66| -| -|
'Train' can easily be divided into 'train', 'validation' & 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/olga-buzova")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/olga-buzova | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-04-04T10:18:31+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"], "models": ["huggingartists/olga-buzova"]} | 2022-10-25T09:03:54+00:00 |
9fd68bd28031a1f936845bdde6eb3aeb59eeadc9 |
# Dataset Card for "Abkhaz text"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
## Dataset Description
- **Point of Contact:** [Nart Tlisha](mailto:daniel.abzakh@gmail.com)
- **Size of the generated dataset:** 176 MB
### Dataset Summary
The Abkhaz language monolingual dataset is a collection of 1,470,480 sentences extracted from different sources. The dataset is available under the Creative Commons Universal Public Domain License. Part of it is also available as part of [Common Voice](https://commonvoice.mozilla.org/ab), another part is from the [Abkhaz National Corpus](https://clarino.uib.no/abnc)
## Dataset Creation
### Source Data
Here is a link to the source of a large part of the data on [github](https://github.com/danielinux7/Multilingual-Parallel-Corpus/blob/master/ebooks/reference.md)
## Considerations for Using the Data
### Other Known Limitations
The accuracy of the dataset is around 95% (grammatical and orthographical errors).
| Nart/abkhaz_text | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ab",
"license:cc0-1.0",
"region:us"
] | 2022-04-04T10:57:51+00:00 | {"language_creators": ["expert-generated"], "language": ["ab"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Abkhaz monolingual corpus"} | 2022-11-01T10:53:17+00:00 |
49f91f486696456ead1685e46fbd63e6520f2537 | Filtered version of https://huggingface.co/datasets/huggan/inat_butterflies
To pick the best images, CLIP was used to compare each image with a text description of a good image ("")
Notebook for the filtering: https://colab.research.google.com/drive/1OEqr1TtL4YJhdj_bebNWXRuG3f2YqtQE?usp=sharing
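A minimal sketch of the CLIP-scoring idea, using the public `openai/clip-vit-base-patch32` checkpoint as a stand-in (the exact model and prompt used are in the linked notebook, not stated here):
```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image_path: str, prompt: str) -> float:
    """Image-text similarity logit for one image against one text prompt."""
    inputs = processor(text=[prompt], images=Image.open(image_path), return_tensors="pt")
    return model(**inputs).logits_per_image.item()

# Images are then ranked by score and the top 10k kept.
```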
See the original dataset for sources and licence caveats (tl;dr check the image descriptions to make sure you aren't breaking a licence like CC-BY-NC-ND which some images have) | huggan/inat_butterflies_top10k | [
"region:us"
] | 2022-04-04T11:45:06+00:00 | {} | 2022-04-04T11:50:28+00:00 |
596623eb34923ccd0eb540ea1f737cd09c304e58 |
# Dataset Description
## Dataset Summary
This dataset was parsed from the Human-HIV Interaction dataset maintained by the NCBI.
It contains >16,000 interaction pairs between HIV and human proteins.
Sequences of the interacting proteins were retrieved from the NCBI protein database and added to the dataset.
The raw data is available from the [NBCI FTP site](https://ftp.ncbi.nlm.nih.gov/gene/GeneRIF/hiv_interactions.gz) and the curation strategy is described in the [NAR Research paper](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4383939/) announcing the dataset.
## Dataset Structure
### Data Instances
Data Fields: hiv_protein_product, hiv_protein_name, interaction_type, human_protein_product, human_protein_name, reference_list, description, hiv_protein_sequence, human_protein_sequence
Data Splits: None
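A minimal loading sketch using the field names above (the `"train"` split name is the `datasets` library default, an assumption since no splits are declared):
```python
from datasets import load_dataset

ds = load_dataset("damlab/human_hiv_ppi")
row = ds["train"][0]
print(row["hiv_protein_name"], "--", row["interaction_type"], "->", row["human_protein_name"])
print(len(row["hiv_protein_sequence"]), len(row["human_protein_sequence"]))  # sequence lengths
```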
## Dataset Creation
Curation Rationale: This dataset was curated to train models to recognize proteins that interact with HIV.
Initial Data Collection and Normalization: Dataset was downloaded and curated on 4/4/2022 but the most recent update of the underlying NCBI database was 2016.
## Considerations for Using the Data
Discussion of Biases: This dataset of protein interactions was manually curated by experts utilizing published scientific literature.
This inherently biases the collection to well-studied proteins and known interactions.
The dataset does not contain _negative_ interactions.
## Additional Information:
- Dataset Curators: Will Dampier
- Citation Information: TBA | damlab/human_hiv_ppi | [
"license:mit",
"region:us"
] | 2022-04-04T13:24:30+00:00 | {"license": "mit"} | 2022-04-04T13:38:49+00:00 |
00712474bff3c7b433e6e4286a3ed2381850c05d | met/mm | [
"license:apache-2.0",
"region:us"
] | 2022-04-04T17:39:59+00:00 | {"license": "apache-2.0"} | 2022-04-04T17:42:01+00:00 |
|
484a5ad065c06cb4e04333ed4e4947a7e0373192 |
Collection of pinned butterfly images from the Smithsonian https://www.si.edu/spotlight/buginfo/butterfly
Doesn't include metadata yet!
URL pattern: "https://ids.si.edu/ids/deliveryService?max_w=550&id=ark:/65665/m3c70e17cf30314fd4ad86afa7d1ebf49f"
Added sketch versions!
sketch_pidinet is generated by: https://github.com/zhuoinoulu/pidinet
sketch_pix2pix is generated by: https://github.com/mtli/PhotoSketch
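A minimal sketch of fetching one image through that delivery-service pattern (`max_w` appears to control the delivered width; the ark id below is the one from the pattern above):
```python
import requests

url = ("https://ids.si.edu/ids/deliveryService?max_w=550"
       "&id=ark:/65665/m3c70e17cf30314fd4ad86afa7d1ebf49f")
# Save the delivered JPEG locally.
with open("butterfly.jpg", "wb") as f:
    f.write(requests.get(url, timeout=30).content)
```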
| huggan/smithsonian-butterfly-lowres | [
"license:cc0-1.0",
"region:us"
] | 2022-04-04T17:45:28+00:00 | {"license": "cc0-1.0"} | 2022-04-06T18:57:24+00:00 |
3b6940038258b4660e398ee7b29e3774e79fe0dd | met/Meti_ICT | [
"license:ms-pl",
"region:us"
] | 2022-04-04T18:33:34+00:00 | {"license": "ms-pl"} | 2022-04-05T10:56:09+00:00 |
|
dd2d9cbe7ba3139d1f48096e3f19ce2eba4d27eb |
# Dataset Card for the-reddit-dataset-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-dataset-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditdatasetdataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditdatasetdataset)
### Dataset Summary
A meta-dataset of Reddit's own /r/datasets community.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
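Since posts and comments share one schema distinguished by `type`, a minimal sketch for separating them after loading (the default config and a single `"train"` split are assumptions; the card notes the two ship as separate files, so they may also load separately):
```python
from datasets import load_dataset

ds = load_dataset("SocialGrep/the-reddit-dataset-dataset")
rows = ds["train"]

posts = rows.filter(lambda ex: ex["type"] == "post")        # carry 'title', 'selftext', ...
comments = rows.filter(lambda ex: ex["type"] == "comment")  # carry 'body', 'sentiment'
```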
## Additional Information
### Licensing Information
CC-BY v4.0
| SocialGrep/the-reddit-dataset-dataset | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-04-04T19:47:35+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"]} | 2022-07-01T16:55:48+00:00 |
21d357ddf012a439d4b98b5dcf3367da55cca87d | rafay/upside_down_detection_cifar100 | [
"license:afl-3.0",
"region:us"
] | 2022-04-05T05:43:32+00:00 | {"license": "afl-3.0"} | 2022-04-05T05:51:09+00:00 |
|
c50846883a030dd8930ee5788524902b10439b63 |
# Dataset Card for JetClass
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/jet-universe/particle_transformer
- **Paper:** https://arxiv.org/abs/2202.03772
- **Leaderboard:**
- **Point of Contact:** [Huilin Qu](mailto:huilin.qu@cern.ch)
### Dataset Summary
JetClass is a large and comprehensive dataset to advance deep learning for jet tagging. The dataset consists of 100 million jets for training, with 10 different types of jets. The jets in this dataset generally fall into two categories:
* The background jets are initiated by light quarks or gluons (q/g) and are ubiquitously produced at the
LHC.
* The signal jets are those arising either from the top quarks (t), or from the W, Z or Higgs (H) bosons. For top quarks and Higgs bosons, we further consider their different decay modes as separate types, because the resulting jets have rather distinct characteristics and are often tagged individually.
Jets in this dataset are simulated with standard Monte Carlo event generators used by LHC experiments. The production and decay of the top quarks and the W, Z and Higgs bosons are generated with MADGRAPH5_aMC@NLO. We use PYTHIA to evolve the produced particles, i.e., performing parton showering and hadronization, and produce the final outgoing particles. To be close to realistic jets reconstructed at the ATLAS or CMS experiment, detector effects are simulated with DELPHES using the CMS detector configuration provided in DELPHES. In addition, the impact parameters of electrically charged particles are smeared to match the resolution of the CMS tracking detector. Jets are clustered from DELPHES E-Flow objects with the anti-kT algorithm using a distance parameter R = 0.8. Only jets with transverse momentum in 500–1000 GeV and pseudorapidity |η| < 2 are considered. For signal jets, only the “high-quality” ones that fully contain the decay products of initial particles are included.
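Not from the dataset itself; a minimal NumPy sketch of the kinematic selection just described, applied to hypothetical per-jet arrays:
```python
import numpy as np

pt = np.array([480.0, 650.0, 990.0, 1200.0])  # hypothetical jet transverse momenta [GeV]
eta = np.array([0.3, -1.5, 2.4, 0.1])         # hypothetical jet pseudorapidities

# Keep jets with 500 <= pT <= 1000 GeV and |eta| < 2.
mask = (pt >= 500.0) & (pt <= 1000.0) & (np.abs(eta) < 2.0)
print(np.flatnonzero(mask))  # -> [1]: only the second jet passes
```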
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use the JetClass dataset, please cite:
```
@article{Qu:2022mxj,
author = "Qu, Huilin and Li, Congqiao and Qian, Sitian",
title = "{Particle Transformer for Jet Tagging}",
eprint = "2202.03772",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
month = "2",
year = "2022"
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
| jet-universe/jetclass | [
"license:mit",
"arxiv:2202.03772",
"region:us"
] | 2022-04-05T06:32:22+00:00 | {"license": "mit"} | 2022-05-27T18:00:45+00:00 |
8a7e41314267b68ddb15d3c9da012b9c98bf2a78 |
# MInDS-14
## Dataset Description
- **Fine-Tuning script:** [pytorch/audio-classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification)
- **Paper:** [Multilingual and Cross-Lingual Intent Detection from Spoken Data](https://arxiv.org/abs/2104.08524)
- **Total amount of disk used:** ca. 500 MB
MINDS-14 is a training and evaluation resource for the intent detection task with spoken data. It covers 14
intents extracted from a commercial system in the e-banking domain, associated with spoken examples in 14 diverse language varieties.
## Example
MInDS-14 can be downloaded and used as follows:
```py
from datasets import load_dataset
minds_14 = load_dataset("PolyAI/minds14", "fr-FR") # for French
# to download all data for multi-lingual fine-tuning uncomment following line
# minds_14 = load_dataset("PolyAI/all", "all")
# see structure
print(minds_14)
# load audio sample on the fly
audio_input = minds_14["train"][0]["audio"] # first decoded audio sample
intent_class = minds_14["train"][0]["intent_class"] # first intent class id
intent = minds_14["train"].features["intent_class"].names[intent_class] # human-readable intent name
# use audio_input and intent_class to fine-tune your model for audio classification
```
## Dataset Structure
We show detailed information for the example configuration `fr-FR` of the dataset.
All other configurations have the same structure.
### Data Instances
**fr-FR**
- Size of downloaded dataset files: 471 MB
- Size of the generated dataset: 300 KB
- Total amount of disk used: 471 MB
An example of a data instance of the config `fr-FR` looks as follows:
```
{
"path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
"audio": {
"path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
"array": array(
[0.0, 0.0, 0.0, ..., 0.0, 0.00048828, -0.00024414], dtype=float32
),
"sampling_rate": 8000,
},
"transcription": "je souhaite changer mon adresse",
"english_transcription": "I want to change my address",
"intent_class": 1,
"lang_id": 6,
}
```
### Data Fields
The data fields are the same among all splits.
- **path** (str): Path to the audio file
- **audio** (dict): Audio object including loaded audio array, sampling rate and path to audio
- **transcription** (str): Transcription of the audio file
- **english_transcription** (str): English transcription of the audio file
- **intent_class** (int): Class id of intent
- **lang_id** (int): Id of language
### Data Splits
Every config only has the `"train"` split, containing *ca.* 600 examples.
## Dataset Creation
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
### Citation Information
```
@article{DBLP:journals/corr/abs-2104-08524,
author = {Daniela Gerz and
Pei{-}Hao Su and
Razvan Kusztos and
Avishek Mondal and
Michal Lis and
Eshan Singhal and
Nikola Mrksic and
Tsung{-}Hsien Wen and
Ivan Vulic},
title = {Multilingual and Cross-Lingual Intent Detection from Spoken Data},
journal = {CoRR},
volume = {abs/2104.08524},
year = {2021},
url = {https://arxiv.org/abs/2104.08524},
eprinttype = {arXiv},
eprint = {2104.08524},
timestamp = {Mon, 26 Apr 2021 17:25:10 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-08524.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset
| PolyAI/minds14 | [
"task_categories:automatic-speech-recognition",
"task_ids:keyword-spotting",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"language:en",
"language:fr",
"language:it",
"language:es",
"language:pt",
"language:de",
"language:nl",
"language:ru",
"language:pl",
"language:cs",
"language:ko",
"language:zh",
"license:cc-by-4.0",
"arxiv:2104.08524",
"region:us"
] | 2022-04-05T06:46:13+00:00 | {"annotations_creators": ["expert-generated", "crowdsourced", "machine-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en", "fr", "it", "es", "pt", "de", "nl", "ru", "pl", "cs", "ko", "zh"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "task_categories": ["automatic-speech-recognition", "speech-processing"], "task_ids": ["speech-recognition", "keyword-spotting"], "pretty_name": "MInDS-14", "language_bcp47": ["en", "en-GB", "en-US", "en-AU", "fr", "it", "es", "pt", "de", "nl", "ru", "pl", "cs", "ko", "zh"]} | 2024-01-22T23:15:06+00:00 |
6342d0716fac4e248c53a27039c7d30ccaa9342b | # AutoTrain Dataset for project: sentiment_analysis_project
## Dataset Description
This dataset has been automatically processed by AutoTrain for project sentiment_analysis_project.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Realizing that I don`t have school today... or tomorrow... or for the next few months. I really nee[...]",
"target": 1
},
{
"text": "Good morning tweeps. Busy this a.m. but not in a working way",
"target": 2
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=3, names=['negative', 'neutral', 'positive'], id=None)"
}
```
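The integer `target` values decode through the `ClassLabel` shown above; a minimal sketch:
```python
from datasets import ClassLabel

label = ClassLabel(num_classes=3, names=["negative", "neutral", "positive"])
label.int2str(1)           # -> "neutral"  (the first sample above)
label.str2int("positive")  # -> 2          (the second sample above)
```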
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 16180 |
| valid | 4047 |
| ramnika003/autotrain-data-sentiment_analysis_project | [
"task_categories:text-classification",
"region:us"
] | 2022-04-05T08:13:43+00:00 | {"task_categories": ["text-classification"]} | 2022-04-05T08:16:59+00:00 |
d98c69e4a1133485a535297c69e231c854fa7877 | met/AMH_MET | [
"license:apache-2.0",
"region:us"
] | 2022-04-05T10:44:56+00:00 | {"license": "apache-2.0"} | 2022-04-05T10:46:16+00:00 |
|
03b8bdea7e37f62de083d91b6d51998afd698b23 | met/Meti_try | [
"license:apache-2.0",
"region:us"
] | 2022-04-05T11:41:41+00:00 | {"license": "apache-2.0"} | 2022-04-05T11:42:25+00:00 |
|
e5669a83db35069d560ee7e565c0af93a289db30 | met/Met | [
"license:apache-2.0",
"region:us"
] | 2022-04-05T12:29:23+00:00 | {"license": "apache-2.0"} | 2022-04-05T12:31:43+00:00 |
|
dbb8ee349ff4e6d6ac0f7f01c9007be3862e3deb | duskvirkus/dafonts-free | [
"license:other",
"region:us"
] | 2022-04-05T15:07:34+00:00 | {"license": "other"} | 2022-04-05T15:30:11+00:00 |
|
5f43ccb5ce480675591f1bd3b8ee19ed6f0de9ca | aayush9753/InterIIT-Bosch-MidPrep-AgeGenderClassificationInCCTV | [
"license:afl-3.0",
"region:us"
] | 2022-04-05T19:12:45+00:00 | {"license": "afl-3.0"} | 2022-04-05T19:33:51+00:00 |
|
8ec4ba6640805906d0c61886e65810c8ee78a982 |
# Dataset Card for the-reddit-place-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-place-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditplacedataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditplacedataset)
### Dataset Summary
The written history of /r/Place, in posts and comments.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
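As a sketch of working with these fields — the file names below are hypothetical (the corpus ships posts and comments as separate files; adjust the paths to the actual download), and the sentiment threshold assumes a score roughly in [-1, 1]:

```python
import pandas as pd

# Hypothetical file names -- posts and comments ship as separate files;
# adjust the paths to match the actual download.
posts = pd.read_csv("the-reddit-place-dataset-posts.csv")
comments = pd.read_csv("the-reddit-place-dataset-comments.csv")

# 'created_utc' is a UTC timestamp; convert for time-based analysis.
comments["created"] = pd.to_datetime(comments["created_utc"], unit="s")

# 'sentiment' exists only on comments; e.g. keep strongly negative ones
# (assumes a score roughly in [-1, 1]).
negative = comments[comments["sentiment"] < -0.5]
print(len(posts), len(comments), len(negative))
```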
## Additional Information
### Licensing Information
CC-BY v4.0
| SocialGrep/the-reddit-place-dataset | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-04-05T20:25:45+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"]} | 2022-07-01T16:51:57+00:00 |
1ab7981a2c7960c11a12a32578cf09ceaa76f8cf | dnes1983/train | [
"region:us"
] | 2022-04-06T03:18:43+00:00 | {} | 2022-04-06T03:22:23+00:00 |
|
3ddcf36a47551096e85303f46a160239f7c37427 | Jianxin1111/juicycollection | [
"license:artistic-2.0",
"region:us"
] | 2022-04-06T03:27:33+00:00 | {"license": "artistic-2.0"} | 2022-04-06T03:27:33+00:00 |
|
66f430a1252ea1732413a80a56a1b6e8bc74264e |
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
For questions and comments please contact Adam Harley (aharley@scs.ryerson.ca).
The full dataset can be found [here](https://www.cs.cmu.edu/~aharley/rvl-cdip/).
## Labels
0: advertisement
1: budget
2: email
3: file folder
4: form
5: handwritten
6: invoice
7: letter
8: memo
9: news article
10: presentation
11: questionnaire
12: resume
13: scientific publication
14: scientific report
15: specification
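For convenience, the same index-to-name mapping as a small Python sketch:

```python
# Index -> class name, restating the label list above.
RVL_CDIP_LABELS = [
    "advertisement", "budget", "email", "file folder",
    "form", "handwritten", "invoice", "letter",
    "memo", "news article", "presentation", "questionnaire",
    "resume", "scientific publication", "scientific report", "specification",
]

def id2label(idx: int) -> str:
    return RVL_CDIP_LABELS[idx]

print(id2label(6))  # invoice
```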
## Citation
This dataset is from this [paper](https://www.cs.cmu.edu/~aharley/icdar15/) `A. W. Harley, A. Ufkes, K. G. Derpanis, "Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval," in ICDAR, 2015`
## License
RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).
## References
1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, "Building a test collection for complex document information processing," in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006
2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. http://legacy.library.ucsf.edu/.
| chainyo/rvl-cdip | [
"license:other",
"region:us"
] | 2022-04-06T06:06:56+00:00 | {"license": "other"} | 2022-04-06T15:49:20+00:00 |
b646090ef0d09981da9c9765c4d376b407aa5955 |
# An Amharic News Text classification Dataset
> In NLP, text classification is one of the primary problems we try to solve and its uses in language analyses are indisputable. The lack of labeled training data made it harder to do these tasks in low resource languages like Amharic. The task of collecting, labeling, annotating, and making valuable this kind of data will encourage junior researchers, schools, and machine learning practitioners to implement existing classification models in their language. In this short paper, we aim to introduce the Amharic text classification dataset that consists of more than 50k news articles that were categorized into 6 classes. This dataset is made available with easy baseline performances to encourage studies and better performance experiments.
```
@misc{https://doi.org/10.48550/arxiv.2103.05639,
doi = {10.48550/ARXIV.2103.05639},
url = {https://arxiv.org/abs/2103.05639},
author = {Azime, Israel Abebe and Mohammed, Nebil},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {An Amharic News Text classification Dataset},
publisher = {arXiv},
year = {2021},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
| israel/Amharic-News-Text-classification-Dataset | [
"license:cc-by-4.0",
"arxiv:2103.05639",
"region:us"
] | 2022-04-06T08:20:35+00:00 | {"license": "cc-by-4.0"} | 2022-04-06T08:27:52+00:00 |
d559852d2b232e0fcf195e775866964f0564f2b5 |
## Dataset Description
- **Homepage:** https://www.wikiart.org/
### Dataset Summary
Dataset containing 81,444 pieces of visual art from various artists, taken from WikiArt.org,
along with class labels for each image :
* "artist" : 129 artist classes, including a "Unknown Artist" class
* "genre" : 11 genre classes, including a "Unknown Genre" class
* "style" : 27 style classes
On WikiArt.org, the description for the "Artworks by Genre" page reads:
A genre system divides artworks according to depicted themes and objects. A classical hierarchy of genres was developed in European culture by the 17th century. It ranked genres in high – history painting and portrait, - and low – genre painting, landscape and still life. This hierarchy was based on the notion of man as the measure of all things. Landscape and still life were the lowest because they did not involve human subject matter. History was highest because it dealt with the noblest events of humanity. Genre system is not so much relevant for a contemporary art; there are just two genre definitions that are usually applied to it: abstract or figurative.
The "Artworks by Style" page reads :
A style of an artwork refers to its distinctive visual elements, techniques and methods. It usually corresponds with an art movement or a school (group) that its author is associated with.
## Dataset Structure
* "image" : image
* "artist" : 129 artist classes, including a "Unknown Artist" class
* "genre" : 11 genre classes, including a "Unknown Genre" class
* "style" : 27 style classes
### Source Data
Files taken from this [archive](https://archive.org/download/wikiart-dataset/wikiart.tar.gz), curated from the [WikiArt website](https://www.wikiart.org/).
## Additional Information
Note:
* The WikiArt dataset can be used only for non-commercial research purposes.
* The images in the WikiArt dataset were obtained from WikiArt.org.
* The authors are neither responsible for the content nor the meaning of these images.
By using the WikiArt dataset, you agree to obey the terms and conditions of WikiArt.org.
### Contributions
[`gigant`](https://huggingface.co/gigant) added this dataset to the hub. | huggan/wikiart | [
"task_categories:image-classification",
"task_categories:text-to-image",
"task_categories:image-to-text",
"size_categories:10K<n<100K",
"license:unknown",
"art",
"region:us"
] | 2022-04-06T08:40:18+00:00 | {"license": "unknown", "size_categories": ["10K<n<100K"], "task_categories": ["image-classification", "text-to-image", "image-to-text"], "license_details": "Data files \u00a9 Original Authors", "tags": ["art"]} | 2023-03-22T13:56:08+00:00 |
6aa6bccd5e72aac4a0e6d32b140564390a8a165a | - This is a personal convenience copy of the binary Hate Speech (HS) dataset used in the T-Miner paper on defending against trojan attacks on text classifiers: https://arxiv.org/pdf/2103.04264.pdf
- The dataset is sourced from the original paper's GitHub repository: https://github.com/reza321/T-Miner
- Label mapping:
- 0 = hate speech
- 1 = normal speech
- If you use this dataset please cite the T-Miner paper (see bibtex below), and the two original papers from which T-Miner constructed the dataset (see paper for references):
```
@inproceedings{azizi21tminer,
title={T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification},
author={Azizi, Ahmadreza and Tahmid, Ibrahim and Waheed, Asim and Mangaokar, Neal and Pu, Jiameng and Javed, Mobin and Reddy, Chandan K. and Viswanath, Bimal},
booktitle={Proc. of USENIX Security},
year={2021}}
``` | nealmgkr/tminer_hs | [
"arxiv:2103.04264",
"region:us"
] | 2022-04-06T08:41:13+00:00 | {} | 2022-04-06T08:45:48+00:00 |
12299c16f191d1c2976dd01907dd009a3393e19a | dalton72/twitter-sent | [
"region:us"
] | 2022-04-06T09:11:02+00:00 | {} | 2022-04-06T09:17:23+00:00 |
|
1cad77bdc16e9965ba15285d5fc9ca347d6cec3a |
# Dataset Card for MTet
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://translate.vietai.org/
- **Repository:** https://github.com/vietai/mTet
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
MTet (Multi-domain Translation for English-Vietnamese) dataset contains roughly 4.2 million English-Vietnamese pairs of
texts, ranging across multiple different domains such as medical publications, religious texts, engineering articles,
literature, news, and poems.
This dataset extends our previous SAT (Style Augmented Translation) dataset (v1.0) by adding more high-quality
English-Vietnamese sentence pairs on various domains.
### Supported Tasks and Leaderboards
- Machine Translation
### Languages
The languages in the dataset are:
- Vietnamese (`vi`)
- English (`en`)
## Dataset Structure
### Data Instances
```
{
'translation': {
'en': 'He said that existing restrictions would henceforth be legally enforceable, and violators would be fined.',
'vi': 'Ông nói những biện pháp hạn chế hiện tại sẽ được nâng lên thành quy định pháp luật, và những ai vi phạm sẽ chịu phạt.'
}
}
```
### Data Fields
- `translation`:
- `en`: Parallel text in English.
- `vi`: Parallel text in Vietnamese.
### Data Splits
The dataset is in a single "train" split.
| | train |
|--------------------|--------:|
| Number of examples | 4163853 |
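A minimal sketch of reading the parallel pairs, following the instance format shown above (assuming the dataset loads directly by its Hub id):

```python
from datasets import load_dataset

dataset = load_dataset("albertvillanova/mtet", split="train")

# Each record holds a dict keyed by language code, as in the instance above.
pair = dataset[0]["translation"]
print(pair["en"])
print(pair["vi"])
```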
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```bibtex
@article{mTet2022,
author = {Chinh Ngo, Hieu Tran, Long Phan, Trieu H. Trinh, Hieu Nguyen, Minh Nguyen, Minh-Thang Luong},
title = {MTet: Multi-domain Translation for English and Vietnamese},
journal = {https://github.com/vietai/mTet},
year = {2022},
}
```
### Contributions
Thanks to [@albertvillanova](https://huggingface.co/albertvillanova) for adding this dataset.
| albertvillanova/mtet | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"source_datasets:extended|bible_para",
"source_datasets:extended|kde4",
"source_datasets:extended|opus_gnome",
"source_datasets:extended|open_subtitles",
"source_datasets:extended|tatoeba",
"language:en",
"language:vi",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-04-06T09:25:42+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en", "vi"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["translation"], "size_categories": ["1M<n<10M"], "source_datasets": ["original", "extended|bible_para", "extended|kde4", "extended|opus_gnome", "extended|open_subtitles", "extended|tatoeba"], "task_categories": ["conditional-text-generation"], "task_ids": ["machine-translation"], "pretty_name": "MTet"} | 2022-10-08T06:42:34+00:00 |
8d2332e07e64ae6adbc81586b8778785e2a80c29 |
# Dataset Card for french-open-fiscal-texts
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://echanges.dila.gouv.fr/OPENDATA/JADE/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset is an extraction from OPENDATA/JADE: a list of case law decisions from the French court "Conseil d'Etat".
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
fr-FR
## Dataset Structure
### Data Instances
```json
{
"file": "CETATEXT000007584427.xml",
"title": "Cour administrative d'appel de Marseille, 3�me chambre - formation � 3, du 21 octobre 2004, 00MA01080, in�dit au recueil Lebon",
"summary": "",
"content": "Vu la requête, enregistrée le 22 mai 2000, présentée pour M. Roger X, par Me Luherne, élisant domicile ...), et les mémoires complémentaires en date des 28 octobre 2002, 22 mars 2004 et 16 septembre 2004 ; M. X demande à la Cour :\n\n\n \n 11/ d'annuler le jugement n° 951520 en date du 16 mars 2000 par lequel le Tribunal administratif de Montpellier a rejeté sa requête tendant à la réduction des cotisations supplémentaires à l'impôt sur le revenu et des pénalités dont elles ont été assorties, auxquelles il a été assujetti au titre des années 1990, 1991 et 1992 ;\n\n\n \n 22/ de prononcer la réduction desdites cotisations ;\n\n\n \n 3°/ de condamner de l'Etat à lui verser une somme de 32.278 francs soit 4.920,75 euros"
}
```
### Data Fields
- `file`: identifier of the JADE OPENDATA file
- `title`: name of the law case
- `summary`: summary provided by JADE (may be missing)
- `content`: text content of the case law
### Data Splits
- train
- test
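A minimal sketch of reading the splits for the summarization use case described below — assuming the dataset loads directly by its Hub id:

```python
from datasets import load_dataset

dataset = load_dataset("StanBienaives/french-open-fiscal-texts")

example = dataset["train"][0]
# A summarization pair, where a summary is available: content -> summary.
print(example["title"])
print(example["content"][:200])
print(example["summary"] or "(no summary)")
```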
## Dataset Creation
### Curation Rationale
This dataset is an attempt to gather multiple tax-related French legal texts.
The primary intent is to build models that summarize law cases.
### Source Data
#### Initial Data Collection and Normalization
Collected from https://echanges.dila.gouv.fr/OPENDATA/ by:
- Filtering XML files containing "Code général des impôts" (tax-related)
- Extracting content, summary, identifier, title
#### Who are the source language producers?
DILA
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | StanBienaives/french-open-fiscal-texts | [
"language:fr",
"region:us"
] | 2022-04-06T10:42:06+00:00 | {"language": ["fr"]} | 2024-01-15T10:05:08+00:00 |
1d8fa78d643f0207bfac31f2e42c056769e16fed |
## Common User Intentions
#### Greetings
- Wasemaje
- uko aje btw
- oyah...
- Form
- Alafu niaje
- Poa Sana Mambo
- Niko poa
- Pia Mimi Niko salama
- Hope siku yako iko poa
- Siko poa kabisa
- Nimekuwa poa
- Umeshindaje
- Hope uko poa
- uko poa
- Sasa
- Vipi vipi
- Niko salama
- ..its been long.
- Nko fiti
- niko fiti
- Nmeamka fity..
- Vipi
- Unasemaje
- Aaaah...itakuaje sasaa..
- .iz vipi..itakuaje..
- Form ni gani bro...
- iz vipi
#### Affirm
- Hapo sawa...
- Fty
- sai
- Hio si ni better hadi
- Imebidi.
- Eeeh mazee
- mazeee
- Fity fity
- Oooh poapoa
- Yap
- Inakaa poa
- Yeah itabidi
- Ooooh...
- Si ndo nadaaiii😅
- Oooh sawa
- Okay sawa basi
- Venye utaamua ni sawa
- Sawa wacha tungoje
- lazima
- apa umenena
- Sawa basi
- walai
- Oooh
- inaweza mbaya
- itaweza mbaya
- ni sawa
- Iko poa
- Iko tu sawa hivo
- ilinbamba.
- Nimemada
- Btw hao ata mimi naona
- but inaeleweka
- pia mimi
- iende ikiendaga
- We jua ivo
- Hata Mimi
- Nataka
- Ooh.
- Chezea tu hapo
- isorait
- Ata yako ni kali
- Ntaicheck out Leo
- hmm. Okay
- Mimi sina shida
- ooooh io iko fity...
- hii ni ngori
- maze
- sawa
- banaa
- Aaah kumbe
- Safiii..
- Sasawa
- hio ni fityyy
- Yeah nliona
- Vizii...
- Eeeeh nmekua naiona...
- Yea
- Haina nomA
- katambe
- accept basi
- ni sawa
- Issaplan
- nmeget
- nimedai tu
- eeh
- Hio ni poa
- nadai sa hii
- Eeeeh
- mi nadai tu
- firi
- Hapo freshi
#### Deny
- Sipendi
- aih
- Nimegive up
- Yangu bado
- siezi make
- Sina😊
- Haileti
- Haiwezi
- Io sikuwa nikwambie
- Sikuwa
- Wacha ata
- ata sijui
- Sijasema
- Sijai
- hiyo haiezi
- Bado.
- Uku tricks...
- sidai
- achana nayo
- ziii
- si fityy
- Nimekataa Mimi
- Sijui
- Aiwezekani
- Bado sioni
#### Courtesy
- Imefika... shukran
- Haina ngori
- Inafaa hivo
- Utakuwa umeniokolea manzee
- Karibu
- Nyc one
- Hakuna pressure
- Gai. Pole
- Usijali I will
- Nimekufeel hapo
- Waah izaa
- Pole lkn
- Pole
- plz
- okay...pole
- thanks for pulling up lkn..
- shukran
- Eeeeh nyc
- Thanx for the info
- Uko aje
- haina pressure
- eih, iko fiti.
- vitu kama hizo
- sahii
#### Asking clarification
- check alafu unishow
- Sasa msee akishabuy anafanya aje
- Umeenda wapi
- nlikuwa nadai
- Nlikua nataka
- Ulipata
- leo jioni utakuwa?
- uko
- umelostia wapi?
- ingine?
- hii inamaanisha?
- Wewe Sasa ni nani?
- warrathos
- kwani nisiende sasa
- unadai zingine?
- Kwani
- Haiya...
- Unadu?
- inakuanga mangapiii...
- Kuna nn
- Nauliza
- Hakuna kwanini
- Nadai kujua what
- Kwanini hakuna
- Kwa nini hakuna
- Uliniambia
- Mbona
- Nlikua nashangaa
- Unadu nini
- Oooh mara moja
- Unaeza taka?
- unaeza make?
- Umeipata?
- wapi kwingine tena
- kuna yenye natafuta
- Sijajua bado
- Niko na ingine
- ulikuwa unataka
- ulinishow?
- ulinsho
- Umepata
- Ata stage hakuna?
- Huku hakuna kibandaski?
- Sai ndio uko available
- Ivo
- Inaeza
- Naeza
- Btw, nikuulize
- Uliza
- hadi sa hii
- Nauliza ndio nijue kama bado iko
- Btw ile hoteli tulienda na wewe apo kiimbo huendangi?
#### Comedy
- Ata kama
- Wasikupee pressure
- umeanza jokes
- Ulisumbua sana
- Unaeza niambia ivo
- usinicheke
- Hakuna😁😁kwanini
- aki wewe.
- naskia mpaka ulipiga sherehe
- sio?
- uko na kakitu
- Aaaaii
- .uko fity nayo..
- icome through mbaya...
#### Small talk
- Kuchil tu bana
- Inafaa hivo
- Acha niskizie
- Skujua hii stuff
- nacheza chini
- hii imesink deep.
- mi Niko
- khai, gai, ghaiye
- Woiye
- ndo nmeland
- Nimekuona
- Kaaai
- Nambie
- bado nashangaa aliipull thru maze
- Niambie
- Najua uko kejani
- Bado uko
- Utakuwa sawa
- Niko poa ata kama uniliacha hanging jana
- issa deal
- Walai io nilijua utasema
- hujawai sahau hii
- Sijajua bado
- Ni maroundi tu
- Enyewe imetoka mbali
- Hadi nimekuwa Tao leo
- Ni mnoma mbaya
- Anyway mambo ni polepole
- Imagine
- Sina la kusema
- Sai
- Najua umeboeka
#### Resolute
- Nataka leo
- hayo ndo maisha Sasa
- vile itakuja maze
- Acha tu
- Waaah Leo haiwezi
- Ni sawa tu
- Imeisha
- Itabidi
- siendagi
- siezi kuangusha
- nachangamkia hii
- Weno ivi...
- Hii price iko poa...
#### Implore
- but nimetry tena
- aminia tu
- Ebu try
- Alafu
- naona hufeel kuongea
- Watu hawaongei?
- Itabidi tu umesort
- Naona huna shughuli yangu
- tufanye pamoja
- khai, gai, ghaiye
- so kalunch
- ama?
- Sahii ni the best time
- Kwanza sahii
- hii weekend
- Kaanza next weekend ni fity
- this weekend
- Acha ntacheki
- izo sasa..
- Acha tuone
- So tunafikanga ivor morning mapemaa
- naona uko rada
- mapema kiasi
- nimchapie niskie...
- Naisaka walai
#### Bye
- Ama kesho
- Ngoja nta rudi baadaye
- nacheki tu rada ya kesho
- Nitakusort kesho morning
- Ni hivo nimekafunga
- nitakushow
- Nextweek ndio inaeza
- Ntakuchapia kama ntamake
- Freshi
#### Sample Bot Responses
- tulia tu hana mambo mob
- si you know how we do it
- Form ni gani
- Oooh nmekuget
- znaeza kupea stress
- Hues make leo
- nshow password
- Nmeichangamkia design ya ngori
- Oooh nmekuget...
- ilicome through
- Naisaka walai
- kesho ntakuchapia
- nichapie niskie
- Aaaah..😅
- Alafu ile story ya
- Ooooh ebu ntasaka
- Saa ngapi uko free..
- Ama unasema ya
- Safiii..naona uko rada
- Ilkulemea🤣
- Acha ntacheki
- imeharibia form..
- Nmeitafuta
- Ndio nimeget
- inaeza saidia mtu
- Email yako ni gani
- Wacha niangalie
- nangoja ulipe
- nimeshikika
- Sawa tuma email
- Kwani ulimwambia nini
- Najua ata most of the time
- mara most btw
- Unajua tu ni risky
- unadai tu niseme mi ni robot
- kwanini
- ndio usiulizwe
- Ukiangalia niambie
- Last time ukinipigia nilikuwa nimeenda kuoshwa
- ikishaenda kwa mganga hairudi
- Hata Mimi ni hayo mambo madogo madogo ndio imenieka.
- We jua nafikirianga mingi ni venye zingine huwa sisemi
- Na najua
- unarelax
- mm ata sko tensed
- sahii ata ni risky
- but ntakuchapia
- oooh waah..
- aaaah ata ww
- hii si fityy
- maze itabidi tudunde virtual
- tunadunda wapiiii..
- kwani sa mi ndo nafaa kumshow kila time coz this is not the first time namwambia🤦♀️
- Wacha hizo.
- Yeah niko hapa
- Niko
- Give me sometime.
- Maze...nmecheza ki mimi
- Uko busy
- Chill kiasi
- Wacha nikusort
- ntakushow
- looking for you hupatikani
- Mnaniogopa ama
- Wewe unapenda free
- Nakusort sai chill mazee
- Kiasi
- relax mkubwa
- Sahii uko sorted sindio
- Ni juu
- bringing the future to us
- hiyo ni form yangu daily
- Ata mimi sitaki ufala 😂
- Imagine
- Uko sawa
- Uko sawa ama unaitaji ingine
- ka unaeza
- utanichapia tu
- unasemaje lakini
- Niulize
- Uko na number
- Ukiboeka wewe nitext
- unadai sa hii ?
- skuwa nimeona
- Acha nicheki
- Ni Friday bana
- Niko chilled tu
- Unadai aje.
- Utanichapia basi
- Umenyamaza sana bana
- imekam through ama
- Nategea umalize ndo nikushow ile form
- Guidance tu kiasi
- Tutadiscuss pia stori
- Nakwelewa
- tujue niaje
- itaweza mbaya
- Kuna hopes za kulearn
| JeunesseAfricaine/sheng_nlu | [
"license:mit",
"region:us"
] | 2022-04-06T10:51:04+00:00 | {"license": "mit"} | 2022-04-06T12:03:27+00:00 |
556fad8e53bba25cc7d41d3204dca87254bc6f5d | met/MetaIct | [
"license:other",
"region:us"
] | 2022-04-06T13:02:54+00:00 | {"license": "other"} | 2022-04-06T13:09:52+00:00 |
|
3a46cbfae3f5b348449335f300666a0ae330f121 | Jeneral/fer-2013 | [
"license:apache-2.0",
"region:us"
] | 2022-04-06T14:31:07+00:00 | {"license": "apache-2.0"} | 2022-04-06T17:24:30+00:00 |
|
70b2d68664a3c8e841f426cf8e43f4f669a75017 |
⚠️ This is only a subpart of the original dataset, containing only `questionnaire`.
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
For questions and comments please contact Adam Harley (aharley@scs.ryerson.ca).
The full dataset can be found [here](https://www.cs.cmu.edu/~aharley/rvl-cdip/).
## Labels
0: letter
1: form
2: email
3: handwritten
4: advertisement
5: scientific report
6: scientific publication
7: specification
8: file folder
9: news article
10: budget
11: invoice
12: presentation
13: questionnaire
14: resume
15: memo
## Citation
This dataset is from this [paper](https://www.cs.cmu.edu/~aharley/icdar15/) `A. W. Harley, A. Ufkes, K. G. Derpanis, "Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval," in ICDAR, 2015`
## License
RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).
## References
1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, "Building a test collection for complex document information processing," in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006
2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. http://legacy.library.ucsf.edu/. | chainyo/rvl-cdip-questionnaire | [
"license:other",
"region:us"
] | 2022-04-06T15:34:49+00:00 | {"license": "other"} | 2022-04-06T15:45:26+00:00 |
fad615c9ceaecb4476b0a01f29c0a15b276b3a2b |
⚠️ This is only a subpart of the original dataset, containing only `invoice`.
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
For questions and comments please contact Adam Harley (aharley@scs.ryerson.ca).
The full dataset can be found [here](https://www.cs.cmu.edu/~aharley/rvl-cdip/).
## Labels
0: letter
1: form
2: email
3: handwritten
4: advertisement
5: scientific report
6: scientific publication
7: specification
8: file folder
9: news article
10: budget
11: invoice
12: presentation
13: questionnaire
14: resume
15: memo
## Citation
This dataset is from this [paper](https://www.cs.cmu.edu/~aharley/icdar15/) `A. W. Harley, A. Ufkes, K. G. Derpanis, "Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval," in ICDAR, 2015`
## License
RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).
## References
1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, "Building a test collection for complex document information processing," in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006
2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. http://legacy.library.ucsf.edu/. | chainyo/rvl-cdip-invoice | [
"license:other",
"region:us"
] | 2022-04-06T15:52:14+00:00 | {"license": "other"} | 2022-04-06T15:57:20+00:00 |
65bffe2a1449459207f82c5ed130487e74916cbf | # Dataset Card for Ukr-Synth
## Dataset Description
### Dataset Summary
Large silver-standard Ukrainian corpus annotated with morphology tags, syntax trees and PER, LOC, ORG NER-tags.
Represents a subsample of [Leipzig Corpora Collection for Ukrainian Language](https://wortschatz.uni-leipzig.de/en/download/Ukrainian). The source texts are newspaper texts split into sentences and shuffled. The sentences are annotated using transformer-based models trained on gold-standard Ukrainian language datasets.
### Languages
Ukrainian
## Dataset Structure
### Data Splits
| name |train |validation|
|---------|-------:|---------:|
|conll2003|1000000| 10000|
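A minimal sketch of loading the splits from the table above — the exact feature names depend on the loading script, so inspect `features` rather than assuming them:

```python
from datasets import load_dataset

dataset = load_dataset("ukr-models/Ukr-Synth")
print(dataset["train"].num_rows)       # 1000000
print(dataset["validation"].num_rows)  # 10000
# Feature names come from the loading script; inspect before use.
print(dataset["train"].features)
```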
## Dataset Creation
### Source Data
Leipzig Corpora Collection:
D. Goldhahn, T. Eckart & U. Quasthoff: Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages.
In: Proceedings of the 8th International Language Resources and Evaluation (LREC'12), 2012
## Additional Information
### Licensing Information
MIT License
Copyright (c) 2022
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | ukr-models/Ukr-Synth | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:parsing",
"task_ids:part-of-speech",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:uk",
"license:mit",
"region:us"
] | 2022-04-06T16:13:34+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["uk"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition", "parsing", "part-of-speech"], "pretty_name": "Ukrainian synthetic dataset in conllu format"} | 2023-08-31T08:35:43+00:00 |
66651ce605381e1e099d82f992864db3396870e3 | openclimatefix/era5 | [
"license:mit",
"doi:10.57967/hf/0881",
"region:us"
] | 2022-04-06T18:44:56+00:00 | {"license": "mit"} | 2022-09-07T15:25:48+00:00 |
|
3e483c44d3dd6525f3b9662a426ca047179868f0 | ucl-snlp-group-11/guardian_crosswords | [
"license:afl-3.0",
"region:us"
] | 2022-04-06T19:50:54+00:00 | {"license": "afl-3.0"} | 2022-04-06T19:51:18+00:00 |
|
3ae28881776a1a2f797fa1c2273f16136908c3ff |
# Dataset Card for BibleNLP Corpus
### Dataset Summary
Partial and complete Bible translations in 833 languages, aligned by verse.
### Languages
aai, aak, aau, aaz, abt, abx, aby, acf, acr, acu, adz, aer, aey, agd, agg, agm, agn, agr, agt, agu, aia, aii, aka, ake, alp, alq, als, aly, ame, amf, amk, amm, amn, amo, amp, amr, amu, amx, anh, anv, aoi, aoj, aom, aon, apb, ape, apn, apr, apu, apw, apz, arb, are, arl, arn, arp, asm, aso, ata, atb, atd, atg, att, auc, aui, auy, avt, awb, awk, awx, azb, azg, azz, bao, bba, bbb, bbr, bch, bco, bdd, bea, bef, bel, ben, beo, beu, bgs, bgt, bhg, bhl, big, bjk, bjp, bjr, bjv, bjz, bkd, bki, bkq, bkx, bla, blw, blz, bmh, bmk, bmr, bmu, bnp, boa, boj, bon, box, bpr, bps, bqc, bqp, bre, bsj, bsn, bsp, bss, buk, bus, bvd, bvr, bxh, byr, byx, bzd, bzh, bzj, caa, cab, cac, caf, cak, cao, cap, car, cav, cax, cbc, cbi, cbk, cbr, cbs, cbt, cbu, cbv, cco, ceb, cek, ces, cgc, cha, chd, chf, chk, chq, chz, cjo, cjv, ckb, cle, clu, cme, cmn, cni, cnl, cnt, cof, con, cop, cot, cpa, cpb, cpc, cpu, cpy, crn, crx, cso, csy, cta, cth, ctp, ctu, cub, cuc, cui, cuk, cut, cux, cwe, cya, daa, dad, dah, dan, ded, deu, dgc, dgr, dgz, dhg, dif, dik, dji, djk, djr, dob, dop, dov, dwr, dww, dwy, ebk, eko, emi, emp, eng, enq, epo, eri, ese, esk, etr, ewe, faa, fai, far, ffm, for, fra, fue, fuf, fuh, gah, gai, gam, gaw, gdn, gdr, geb, gfk, ghs, glk, gmv, gng, gnn, gnw, gof, grc, gub, guh, gui, guj, gul, gum, gun, guo, gup, gux, gvc, gvf, gvn, gvs, gwi, gym, gyr, hat, hau, haw, hbo, hch, heb, heg, hin, hix, hla, hlt, hmo, hns, hop, hot, hrv, hto, hub, hui, hun, hus, huu, huv, hvn, ian, ign, ikk, ikw, ilo, imo, inb, ind, ino, iou, ipi, isn, ita, iws, ixl, jac, jae, jao, jic, jid, jiv, jni, jpn, jvn, kan, kaq, kbc, kbh, kbm, kbq, kdc, kde, kdl, kek, ken, kew, kgf, kgk, kgp, khs, khz, kik, kiw, kiz, kje, kjn, kjs, kkc, kkl, klt, klv, kmg, kmh, kmk, kmo, kms, kmu, kne, knf, knj, knv, kos, kpf, kpg, kpj, kpr, kpw, kpx, kqa, kqc, kqf, kql, kqw, ksd, ksj, ksr, ktm, kto, kud, kue, kup, kvg, kvn, kwd, kwf, kwi, kwj, kyc, kyf, kyg, kyq, kyz, kze, lac, lat, lbb, lbk, lcm, leu, lex, lgl, lid, lif, lin, lit, llg, lug, luo, lww, maa, maj, mal, mam, maq, mar, mau, mav, maz, mbb, mbc, mbh, mbj, mbl, mbs, mbt, mca, mcb, mcd, mcf, mco, mcp, mcq, mcr, mdy, med, mee, mek, meq, met, meu, mgc, mgh, mgw, mhl, mib, mic, mie, mig, mih, mil, mio, mir, mit, miz, mjc, mkj, mkl, mkn, mks, mle, mlh, mlp, mmo, mmx, mna, mop, mox, mph, mpj, mpm, mpp, mps, mpt, mpx, mqb, mqj, msb, msc, msk, msm, msy, mti, mto, mux, muy, mva, mvn, mwc, mwe, mwf, mwp, mxb, mxp, mxq, mxt, mya, myk, myu, myw, myy, mzz, nab, naf, nak, nas, nay, nbq, nca, nch, ncj, ncl, ncu, ndg, ndj, nfa, ngp, ngu, nhe, nhg, nhi, nho, nhr, nhu, nhw, nhy, nif, nii, nin, nko, nld, nlg, nmw, nna, nnq, noa, nop, not, nou, npi, npl, nsn, nss, ntj, ntp, ntu, nuy, nvm, nwi, nya, nys, nyu, obo, okv, omw, ong, ons, ood, opm, ory, ote, otm, otn, otq, ots, pab, pad, pah, pan, pao, pes, pib, pio, pir, piu, pjt, pls, plu, pma, poe, poh, poi, pol, pon, por, poy, ppo, prf, pri, ptp, ptu, pwg, qub, quc, quf, quh, qul, qup, qvc, qve, qvh, qvm, qvn, qvs, qvw, qvz, qwh, qxh, qxn, qxo, rai, reg, rgu, rkb, rmc, rmy, ron, roo, rop, row, rro, ruf, rug, rus, rwo, sab, san, sbe, sbk, sbs, seh, sey, sgb, sgz, shj, shp, sim, sja, sll, smk, snc, snn, snp, snx, sny, som, soq, soy, spa, spl, spm, spp, sps, spy, sri, srm, srn, srp, srq, ssd, ssg, ssx, stp, sua, sue, sus, suz, swe, swh, swp, sxb, tac, taj, tam, tav, taw, tbc, tbf, tbg, tbl, tbo, tbz, tca, tcs, tcz, tdt, tee, tel, ter, tet, tew, tfr, tgk, tgl, tgo, tgp, tha, thd, tif, tim, tiw, tiy, tke, tku, tlf, tmd, tna, tnc, tnk, tnn, tnp, toc, tod, tof, toj, ton, too, top, 
tos, tpa, tpi, tpt, tpz, trc, tsw, ttc, tte, tuc, tue, tuf, tuo, tur, tvk, twi, txq, txu, tzj, tzo, ubr, ubu, udu, uig, ukr, uli, ulk, upv, ura, urb, urd, uri, urt, urw, usa, usp, uvh, uvl, vid, vie, viv, vmy, waj, wal, wap, wat, wbi, wbp, wed, wer, wim, wiu, wiv, wmt, wmw, wnc, wnu, wol, wos, wrk, wro, wrs, wsk, wuv, xav, xbi, xed, xla, xnn, xon, xsi, xtd, xtm, yaa, yad, yal, yap, yaq, yby, ycn, yka, yle, yml, yon, yor, yrb, yre, yss, yuj, yut, yuw, yva, zaa, zab, zac, zad, zai, zaj, zam, zao, zap, zar, zas, zat, zav, zaw, zca, zga, zia, ziw, zlm, zos, zpc, zpl, zpm, zpo, zpq, zpu, zpv, zpz, zsr, ztq, zty, zyp
## Dataset Structure
### Data Fields
**translation**
- **languages** - an N length list of the languages of the translations, sorted alphabetically
- **translation** - an N length list with the translations each corresponding to the language specified in the above field
**files**
- **lang** - an N length list of the languages of the files, in order of input
- **file** - an N length list of the filenames from the corpus on github, each corresponding with the lang above
**ref** - the verse(s) contained in the record, as a list, with each represented with: ``<a three letter book code> <chapter number>:<verse number>``
**licenses** - an N length list of licenses, corresponding to the list of files above
**copyrights** - information on copyright holders, corresponding to the list of files above
### Usage
The dataset loading script requires installation of tqdm, ijson, and numpy.
Specify the languages to be paired as a list of ISO 639-3 language codes, such as ``languages = ['eng', 'fra']``.
By default, the script will return individual verse pairs, as well as verses covering a full range. If only the individual verses are desired, use ``pair='single'``. If only the maximum range pairing is desired, use ``pair='range'`` (for example, if one text uses the verse range covering GEN 1:1-3, all texts would return only the full-length pairing).
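Putting the above together, a sketch of a loading call — the keyword arguments follow the usage notes above, though their exact names in the loading script should be verified:

```python
from datasets import load_dataset

# English-French verse pairs, individual verses only, as described above.
# Requires tqdm, ijson and numpy for the loading script.
dataset = load_dataset(
    "bible-nlp/biblenlp-corpus",
    languages=["eng", "fra"],
    pair="single",
)

record = dataset["train"][0]
print(record["ref"])                       # e.g. ['GEN 1:1']
print(record["translation"]["languages"])  # ['eng', 'fra']
print(record["translation"]["translation"])
```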
## Sources
https://github.com/BibleNLP/ebible-corpus | bible-nlp/biblenlp-corpus | [
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:translation",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:aai",
"language:aak",
"language:aau",
"language:aaz",
"language:abt",
"language:abx",
"language:aby",
"language:acf",
"language:acr",
"language:acu",
"language:adz",
"language:aer",
"language:aey",
"language:agd",
"language:agg",
"language:agm",
"language:agn",
"language:agr",
"language:agt",
"language:agu",
"language:aia",
"language:aii",
"language:aka",
"language:ake",
"language:alp",
"language:alq",
"language:als",
"language:aly",
"language:ame",
"language:amf",
"language:amk",
"language:amm",
"language:amn",
"language:amo",
"language:amp",
"language:amr",
"language:amu",
"language:amx",
"language:anh",
"language:anv",
"language:aoi",
"language:aoj",
"language:aom",
"language:aon",
"language:apb",
"language:ape",
"language:apn",
"language:apr",
"language:apu",
"language:apw",
"language:apz",
"language:arb",
"language:are",
"language:arl",
"language:arn",
"language:arp",
"language:asm",
"language:aso",
"language:ata",
"language:atb",
"language:atd",
"language:atg",
"language:att",
"language:auc",
"language:aui",
"language:auy",
"language:avt",
"language:awb",
"language:awk",
"language:awx",
"language:azb",
"language:azg",
"language:azz",
"language:bao",
"language:bba",
"language:bbb",
"language:bbr",
"language:bch",
"language:bco",
"language:bdd",
"language:bea",
"language:bef",
"language:bel",
"language:ben",
"language:beo",
"language:beu",
"language:bgs",
"language:bgt",
"language:bhg",
"language:bhl",
"language:big",
"language:bjk",
"language:bjp",
"language:bjr",
"language:bjv",
"language:bjz",
"language:bkd",
"language:bki",
"language:bkq",
"language:bkx",
"language:bla",
"language:blw",
"language:blz",
"language:bmh",
"language:bmk",
"language:bmr",
"language:bmu",
"language:bnp",
"language:boa",
"language:boj",
"language:bon",
"language:box",
"language:bpr",
"language:bps",
"language:bqc",
"language:bqp",
"language:bre",
"language:bsj",
"language:bsn",
"language:bsp",
"language:bss",
"language:buk",
"language:bus",
"language:bvd",
"language:bvr",
"language:bxh",
"language:byr",
"language:byx",
"language:bzd",
"language:bzh",
"language:bzj",
"language:caa",
"language:cab",
"language:cac",
"language:caf",
"language:cak",
"language:cao",
"language:cap",
"language:car",
"language:cav",
"language:cax",
"language:cbc",
"language:cbi",
"language:cbk",
"language:cbr",
"language:cbs",
"language:cbt",
"language:cbu",
"language:cbv",
"language:cco",
"language:ceb",
"language:cek",
"language:ces",
"language:cgc",
"language:cha",
"language:chd",
"language:chf",
"language:chk",
"language:chq",
"language:chz",
"language:cjo",
"language:cjv",
"language:ckb",
"language:cle",
"language:clu",
"language:cme",
"language:cmn",
"language:cni",
"language:cnl",
"language:cnt",
"language:cof",
"language:con",
"language:cop",
"language:cot",
"language:cpa",
"language:cpb",
"language:cpc",
"language:cpu",
"language:cpy",
"language:crn",
"language:crx",
"language:cso",
"language:csy",
"language:cta",
"language:cth",
"language:ctp",
"language:ctu",
"language:cub",
"language:cuc",
"language:cui",
"language:cuk",
"language:cut",
"language:cux",
"language:cwe",
"language:cya",
"language:daa",
"language:dad",
"language:dah",
"language:dan",
"language:ded",
"language:deu",
"language:dgc",
"language:dgr",
"language:dgz",
"language:dhg",
"language:dif",
"language:dik",
"language:dji",
"language:djk",
"language:djr",
"language:dob",
"language:dop",
"language:dov",
"language:dwr",
"language:dww",
"language:dwy",
"language:ebk",
"language:eko",
"language:emi",
"language:emp",
"language:eng",
"language:enq",
"language:epo",
"language:eri",
"language:ese",
"language:esk",
"language:etr",
"language:ewe",
"language:faa",
"language:fai",
"language:far",
"language:ffm",
"language:for",
"language:fra",
"language:fue",
"language:fuf",
"language:fuh",
"language:gah",
"language:gai",
"language:gam",
"language:gaw",
"language:gdn",
"language:gdr",
"language:geb",
"language:gfk",
"language:ghs",
"language:glk",
"language:gmv",
"language:gng",
"language:gnn",
"language:gnw",
"language:gof",
"language:grc",
"language:gub",
"language:guh",
"language:gui",
"language:guj",
"language:gul",
"language:gum",
"language:gun",
"language:guo",
"language:gup",
"language:gux",
"language:gvc",
"language:gvf",
"language:gvn",
"language:gvs",
"language:gwi",
"language:gym",
"language:gyr",
"language:hat",
"language:hau",
"language:haw",
"language:hbo",
"language:hch",
"language:heb",
"language:heg",
"language:hin",
"language:hix",
"language:hla",
"language:hlt",
"language:hmo",
"language:hns",
"language:hop",
"language:hot",
"language:hrv",
"language:hto",
"language:hub",
"language:hui",
"language:hun",
"language:hus",
"language:huu",
"language:huv",
"language:hvn",
"language:ian",
"language:ign",
"language:ikk",
"language:ikw",
"language:ilo",
"language:imo",
"language:inb",
"language:ind",
"language:ino",
"language:iou",
"language:ipi",
"language:isn",
"language:ita",
"language:iws",
"language:ixl",
"language:jac",
"language:jae",
"language:jao",
"language:jic",
"language:jid",
"language:jiv",
"language:jni",
"language:jpn",
"language:jvn",
"language:kan",
"language:kaq",
"language:kbc",
"language:kbh",
"language:kbm",
"language:kbq",
"language:kdc",
"language:kde",
"language:kdl",
"language:kek",
"language:ken",
"language:kew",
"language:kgf",
"language:kgk",
"language:kgp",
"language:khs",
"language:khz",
"language:kik",
"language:kiw",
"language:kiz",
"language:kje",
"language:kjn",
"language:kjs",
"language:kkc",
"language:kkl",
"language:klt",
"language:klv",
"language:kmg",
"language:kmh",
"language:kmk",
"language:kmo",
"language:kms",
"language:kmu",
"language:kne",
"language:knf",
"language:knj",
"language:knv",
"language:kos",
"language:kpf",
"language:kpg",
"language:kpj",
"language:kpr",
"language:kpw",
"language:kpx",
"language:kqa",
"language:kqc",
"language:kqf",
"language:kql",
"language:kqw",
"language:ksd",
"language:ksj",
"language:ksr",
"language:ktm",
"language:kto",
"language:kud",
"language:kue",
"language:kup",
"language:kvg",
"language:kvn",
"language:kwd",
"language:kwf",
"language:kwi",
"language:kwj",
"language:kyc",
"language:kyf",
"language:kyg",
"language:kyq",
"language:kyz",
"language:kze",
"language:lac",
"language:lat",
"language:lbb",
"language:lbk",
"language:lcm",
"language:leu",
"language:lex",
"language:lgl",
"language:lid",
"language:lif",
"language:lin",
"language:lit",
"language:llg",
"language:lug",
"language:luo",
"language:lww",
"language:maa",
"language:maj",
"language:mal",
"language:mam",
"language:maq",
"language:mar",
"language:mau",
"language:mav",
"language:maz",
"language:mbb",
"language:mbc",
"language:mbh",
"language:mbj",
"language:mbl",
"language:mbs",
"language:mbt",
"language:mca",
"language:mcb",
"language:mcd",
"language:mcf",
"language:mco",
"language:mcp",
"language:mcq",
"language:mcr",
"language:mdy",
"language:med",
"language:mee",
"language:mek",
"language:meq",
"language:met",
"language:meu",
"language:mgc",
"language:mgh",
"language:mgw",
"language:mhl",
"language:mib",
"language:mic",
"language:mie",
"language:mig",
"language:mih",
"language:mil",
"language:mio",
"language:mir",
"language:mit",
"language:miz",
"language:mjc",
"language:mkj",
"language:mkl",
"language:mkn",
"language:mks",
"language:mle",
"language:mlh",
"language:mlp",
"language:mmo",
"language:mmx",
"language:mna",
"language:mop",
"language:mox",
"language:mph",
"language:mpj",
"language:mpm",
"language:mpp",
"language:mps",
"language:mpt",
"language:mpx",
"language:mqb",
"language:mqj",
"language:msb",
"language:msc",
"language:msk",
"language:msm",
"language:msy",
"language:mti",
"language:mto",
"language:mux",
"language:muy",
"language:mva",
"language:mvn",
"language:mwc",
"language:mwe",
"language:mwf",
"language:mwp",
"language:mxb",
"language:mxp",
"language:mxq",
"language:mxt",
"language:mya",
"language:myk",
"language:myu",
"language:myw",
"language:myy",
"language:mzz",
"language:nab",
"language:naf",
"language:nak",
"language:nas",
"language:nay",
"language:nbq",
"language:nca",
"language:nch",
"language:ncj",
"language:ncl",
"language:ncu",
"language:ndg",
"language:ndj",
"language:nfa",
"language:ngp",
"language:ngu",
"language:nhe",
"language:nhg",
"language:nhi",
"language:nho",
"language:nhr",
"language:nhu",
"language:nhw",
"language:nhy",
"language:nif",
"language:nii",
"language:nin",
"language:nko",
"language:nld",
"language:nlg",
"language:nmw",
"language:nna",
"language:nnq",
"language:noa",
"language:nop",
"language:not",
"language:nou",
"language:npi",
"language:npl",
"language:nsn",
"language:nss",
"language:ntj",
"language:ntp",
"language:ntu",
"language:nuy",
"language:nvm",
"language:nwi",
"language:nya",
"language:nys",
"language:nyu",
"language:obo",
"language:okv",
"language:omw",
"language:ong",
"language:ons",
"language:ood",
"language:opm",
"language:ory",
"language:ote",
"language:otm",
"language:otn",
"language:otq",
"language:ots",
"language:pab",
"language:pad",
"language:pah",
"language:pan",
"language:pao",
"language:pes",
"language:pib",
"language:pio",
"language:pir",
"language:piu",
"language:pjt",
"language:pls",
"language:plu",
"language:pma",
"language:poe",
"language:poh",
"language:poi",
"language:pol",
"language:pon",
"language:por",
"language:poy",
"language:ppo",
"language:prf",
"language:pri",
"language:ptp",
"language:ptu",
"language:pwg",
"language:qub",
"language:quc",
"language:quf",
"language:quh",
"language:qul",
"language:qup",
"language:qvc",
"language:qve",
"language:qvh",
"language:qvm",
"language:qvn",
"language:qvs",
"language:qvw",
"language:qvz",
"language:qwh",
"language:qxh",
"language:qxn",
"language:qxo",
"language:rai",
"language:reg",
"language:rgu",
"language:rkb",
"language:rmc",
"language:rmy",
"language:ron",
"language:roo",
"language:rop",
"language:row",
"language:rro",
"language:ruf",
"language:rug",
"language:rus",
"language:rwo",
"language:sab",
"language:san",
"language:sbe",
"language:sbk",
"language:sbs",
"language:seh",
"language:sey",
"language:sgb",
"language:sgz",
"language:shj",
"language:shp",
"language:sim",
"language:sja",
"language:sll",
"language:smk",
"language:snc",
"language:snn",
"language:snp",
"language:snx",
"language:sny",
"language:som",
"language:soq",
"language:soy",
"language:spa",
"language:spl",
"language:spm",
"language:spp",
"language:sps",
"language:spy",
"language:sri",
"language:srm",
"language:srn",
"language:srp",
"language:srq",
"language:ssd",
"language:ssg",
"language:ssx",
"language:stp",
"language:sua",
"language:sue",
"language:sus",
"language:suz",
"language:swe",
"language:swh",
"language:swp",
"language:sxb",
"language:tac",
"language:taj",
"language:tam",
"language:tav",
"language:taw",
"language:tbc",
"language:tbf",
"language:tbg",
"language:tbl",
"language:tbo",
"language:tbz",
"language:tca",
"language:tcs",
"language:tcz",
"language:tdt",
"language:tee",
"language:tel",
"language:ter",
"language:tet",
"language:tew",
"language:tfr",
"language:tgk",
"language:tgl",
"language:tgo",
"language:tgp",
"language:tha",
"language:thd",
"language:tif",
"language:tim",
"language:tiw",
"language:tiy",
"language:tke",
"language:tku",
"language:tlf",
"language:tmd",
"language:tna",
"language:tnc",
"language:tnk",
"language:tnn",
"language:tnp",
"language:toc",
"language:tod",
"language:tof",
"language:toj",
"language:ton",
"language:too",
"language:top",
"language:tos",
"language:tpa",
"language:tpi",
"language:tpt",
"language:tpz",
"language:trc",
"language:tsw",
"language:ttc",
"language:tte",
"language:tuc",
"language:tue",
"language:tuf",
"language:tuo",
"language:tur",
"language:tvk",
"language:twi",
"language:txq",
"language:txu",
"language:tzj",
"language:tzo",
"language:ubr",
"language:ubu",
"language:udu",
"language:uig",
"language:ukr",
"language:uli",
"language:ulk",
"language:upv",
"language:ura",
"language:urb",
"language:urd",
"language:uri",
"language:urt",
"language:urw",
"language:usa",
"language:usp",
"language:uvh",
"language:uvl",
"language:vid",
"language:vie",
"language:viv",
"language:vmy",
"language:waj",
"language:wal",
"language:wap",
"language:wat",
"language:wbi",
"language:wbp",
"language:wed",
"language:wer",
"language:wim",
"language:wiu",
"language:wiv",
"language:wmt",
"language:wmw",
"language:wnc",
"language:wnu",
"language:wol",
"language:wos",
"language:wrk",
"language:wro",
"language:wrs",
"language:wsk",
"language:wuv",
"language:xav",
"language:xbi",
"language:xed",
"language:xla",
"language:xnn",
"language:xon",
"language:xsi",
"language:xtd",
"language:xtm",
"language:yaa",
"language:yad",
"language:yal",
"language:yap",
"language:yaq",
"language:yby",
"language:ycn",
"language:yka",
"language:yle",
"language:yml",
"language:yon",
"language:yor",
"language:yrb",
"language:yre",
"language:yss",
"language:yuj",
"language:yut",
"language:yuw",
"language:yva",
"language:zaa",
"language:zab",
"language:zac",
"language:zad",
"language:zai",
"language:zaj",
"language:zam",
"language:zao",
"language:zap",
"language:zar",
"language:zas",
"language:zat",
"language:zav",
"language:zaw",
"language:zca",
"language:zga",
"language:zia",
"language:ziw",
"language:zlm",
"language:zos",
"language:zpc",
"language:zpl",
"language:zpm",
"language:zpo",
"language:zpq",
"language:zpu",
"language:zpv",
"language:zpz",
"language:zsr",
"language:ztq",
"language:zty",
"language:zyp",
"language:be",
"language:br",
"language:cs",
"language:ch",
"language:zh",
"language:de",
"language:en",
"language:eo",
"language:fr",
"language:ht",
"language:he",
"language:hr",
"language:id",
"language:it",
"language:ja",
"language:la",
"language:nl",
"language:ru",
"language:sa",
"language:so",
"language:es",
"language:sr",
"language:sv",
"language:to",
"language:uk",
"language:vi",
"license:cc-by-4.0",
"license:other",
"region:us"
] | 2022-04-07T02:04:02+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["aai", "aak", "aau", "aaz", "abt", "abx", "aby", "acf", "acr", "acu", "adz", "aer", "aey", "agd", "agg", "agm", "agn", "agr", "agt", "agu", "aia", "aii", "aka", "ake", "alp", "alq", "als", "aly", "ame", "amf", "amk", "amm", "amn", "amo", "amp", "amr", "amu", "amx", "anh", "anv", "aoi", "aoj", "aom", "aon", "apb", "ape", "apn", "apr", "apu", "apw", "apz", "arb", "are", "arl", "arn", "arp", "asm", "aso", "ata", "atb", "atd", "atg", "att", "auc", "aui", "auy", "avt", "awb", "awk", "awx", "azb", "azg", "azz", "bao", "bba", "bbb", "bbr", "bch", "bco", "bdd", "bea", "bef", "bel", "ben", "beo", "beu", "bgs", "bgt", "bhg", "bhl", "big", "bjk", "bjp", "bjr", "bjv", "bjz", "bkd", "bki", "bkq", "bkx", "bla", "blw", "blz", "bmh", "bmk", "bmr", "bmu", "bnp", "boa", "boj", "bon", "box", "bpr", "bps", "bqc", "bqp", "bre", "bsj", "bsn", "bsp", "bss", "buk", "bus", "bvd", "bvr", "bxh", "byr", "byx", "bzd", "bzh", "bzj", "caa", "cab", "cac", "caf", "cak", "cao", "cap", "car", "cav", "cax", "cbc", "cbi", "cbk", "cbr", "cbs", "cbt", "cbu", "cbv", "cco", "ceb", "cek", "ces", "cgc", "cha", "chd", "chf", "chk", "chq", "chz", "cjo", "cjv", "ckb", "cle", "clu", "cme", "cmn", "cni", "cnl", "cnt", "cof", "con", "cop", "cot", "cpa", "cpb", "cpc", "cpu", "cpy", "crn", "crx", "cso", "csy", "cta", "cth", "ctp", "ctu", "cub", "cuc", "cui", "cuk", "cut", "cux", "cwe", "cya", "daa", "dad", "dah", "dan", "ded", "deu", "dgc", "dgr", "dgz", "dhg", "dif", "dik", "dji", "djk", "djr", "dob", "dop", "dov", "dwr", "dww", "dwy", "ebk", "eko", "emi", "emp", "eng", "enq", "epo", "eri", "ese", "esk", "etr", "ewe", "faa", "fai", "far", "ffm", "for", "fra", "fue", "fuf", "fuh", "gah", "gai", "gam", "gaw", "gdn", "gdr", "geb", "gfk", "ghs", "glk", "gmv", "gng", "gnn", "gnw", "gof", "grc", "gub", "guh", "gui", "guj", "gul", "gum", "gun", "guo", "gup", "gux", "gvc", "gvf", "gvn", "gvs", "gwi", "gym", "gyr", "hat", "hau", "haw", "hbo", "hch", "heb", "heg", "hin", "hix", "hla", "hlt", "hmo", "hns", "hop", "hot", "hrv", "hto", "hub", "hui", "hun", "hus", "huu", "huv", "hvn", "ian", "ign", "ikk", "ikw", "ilo", "imo", "inb", "ind", "ino", "iou", "ipi", "isn", "ita", "iws", "ixl", "jac", "jae", "jao", "jic", "jid", "jiv", "jni", "jpn", "jvn", "kan", "kaq", "kbc", "kbh", "kbm", "kbq", "kdc", "kde", "kdl", "kek", "ken", "kew", "kgf", "kgk", "kgp", "khs", "khz", "kik", "kiw", "kiz", "kje", "kjn", "kjs", "kkc", "kkl", "klt", "klv", "kmg", "kmh", "kmk", "kmo", "kms", "kmu", "kne", "knf", "knj", "knv", "kos", "kpf", "kpg", "kpj", "kpr", "kpw", "kpx", "kqa", "kqc", "kqf", "kql", "kqw", "ksd", "ksj", "ksr", "ktm", "kto", "kud", "kue", "kup", "kvg", "kvn", "kwd", "kwf", "kwi", "kwj", "kyc", "kyf", "kyg", "kyq", "kyz", "kze", "lac", "lat", "lbb", "lbk", "lcm", "leu", "lex", "lgl", "lid", "lif", "lin", "lit", "llg", "lug", "luo", "lww", "maa", "maj", "mal", "mam", "maq", "mar", "mau", "mav", "maz", "mbb", "mbc", "mbh", "mbj", "mbl", "mbs", "mbt", "mca", "mcb", "mcd", "mcf", "mco", "mcp", "mcq", "mcr", "mdy", "med", "mee", "mek", "meq", "met", "meu", "mgc", "mgh", "mgw", "mhl", "mib", "mic", "mie", "mig", "mih", "mil", "mio", "mir", "mit", "miz", "mjc", "mkj", "mkl", "mkn", "mks", "mle", "mlh", "mlp", "mmo", "mmx", "mna", "mop", "mox", "mph", "mpj", "mpm", "mpp", "mps", "mpt", "mpx", "mqb", "mqj", "msb", "msc", "msk", "msm", "msy", "mti", "mto", "mux", "muy", "mva", "mvn", "mwc", "mwe", "mwf", "mwp", "mxb", "mxp", "mxq", "mxt", 
"mya", "myk", "myu", "myw", "myy", "mzz", "nab", "naf", "nak", "nas", "nay", "nbq", "nca", "nch", "ncj", "ncl", "ncu", "ndg", "ndj", "nfa", "ngp", "ngu", "nhe", "nhg", "nhi", "nho", "nhr", "nhu", "nhw", "nhy", "nif", "nii", "nin", "nko", "nld", "nlg", "nmw", "nna", "nnq", "noa", "nop", "not", "nou", "npi", "npl", "nsn", "nss", "ntj", "ntp", "ntu", "nuy", "nvm", "nwi", "nya", "nys", "nyu", "obo", "okv", "omw", "ong", "ons", "ood", "opm", "ory", "ote", "otm", "otn", "otq", "ots", "pab", "pad", "pah", "pan", "pao", "pes", "pib", "pio", "pir", "piu", "pjt", "pls", "plu", "pma", "poe", "poh", "poi", "pol", "pon", "por", "poy", "ppo", "prf", "pri", "ptp", "ptu", "pwg", "qub", "quc", "quf", "quh", "qul", "qup", "qvc", "qve", "qvh", "qvm", "qvn", "qvs", "qvw", "qvz", "qwh", "qxh", "qxn", "qxo", "rai", "reg", "rgu", "rkb", "rmc", "rmy", "ron", "roo", "rop", "row", "rro", "ruf", "rug", "rus", "rwo", "sab", "san", "sbe", "sbk", "sbs", "seh", "sey", "sgb", "sgz", "shj", "shp", "sim", "sja", "sll", "smk", "snc", "snn", "snp", "snx", "sny", "som", "soq", "soy", "spa", "spl", "spm", "spp", "sps", "spy", "sri", "srm", "srn", "srp", "srq", "ssd", "ssg", "ssx", "stp", "sua", "sue", "sus", "suz", "swe", "swh", "swp", "sxb", "tac", "taj", "tam", "tav", "taw", "tbc", "tbf", "tbg", "tbl", "tbo", "tbz", "tca", "tcs", "tcz", "tdt", "tee", "tel", "ter", "tet", "tew", "tfr", "tgk", "tgl", "tgo", "tgp", "tha", "thd", "tif", "tim", "tiw", "tiy", "tke", "tku", "tlf", "tmd", "tna", "tnc", "tnk", "tnn", "tnp", "toc", "tod", "tof", "toj", "ton", "too", "top", "tos", "tpa", "tpi", "tpt", "tpz", "trc", "tsw", "ttc", "tte", "tuc", "tue", "tuf", "tuo", "tur", "tvk", "twi", "txq", "txu", "tzj", "tzo", "ubr", "ubu", "udu", "uig", "ukr", "uli", "ulk", "upv", "ura", "urb", "urd", "uri", "urt", "urw", "usa", "usp", "uvh", "uvl", "vid", "vie", "viv", "vmy", "waj", "wal", "wap", "wat", "wbi", "wbp", "wed", "wer", "wim", "wiu", "wiv", "wmt", "wmw", "wnc", "wnu", "wol", "wos", "wrk", "wro", "wrs", "wsk", "wuv", "xav", "xbi", "xed", "xla", "xnn", "xon", "xsi", "xtd", "xtm", "yaa", "yad", "yal", "yap", "yaq", "yby", "ycn", "yka", "yle", "yml", "yon", "yor", "yrb", "yre", "yss", "yuj", "yut", "yuw", "yva", "zaa", "zab", "zac", "zad", "zai", "zaj", "zam", "zao", "zap", "zar", "zas", "zat", "zav", "zaw", "zca", "zga", "zia", "ziw", "zlm", "zos", "zpc", "zpl", "zpm", "zpo", "zpq", "zpu", "zpv", "zpz", "zsr", "ztq", "zty", "zyp", "be", "br", "cs", "ch", "zh", "de", "en", "eo", "fr", "ht", "he", "hr", "id", "it", "ja", "la", "nl", "ru", "sa", "so", "es", "sr", "sv", "to", "uk", "vi"], "license": ["cc-by-4.0", "other"], "multilinguality": ["translation", "multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "biblenlp-corpus"} | 2023-07-21T10:56:30+00:00 |
b8bbfeb80905e6d66dc06a47ec6e37b502ea6c69 |
# NEREL dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Citation Information](#citation-information)
- [Contacts](#contacts)
## Dataset Description
The NEREL dataset (https://doi.org/10.48550/arXiv.2108.13112) is
a Russian dataset for named entity recognition and relation extraction.
NEREL is significantly larger than existing Russian datasets:
to date it contains 56K annotated named entities and 39K annotated relations.
Its important difference from previous datasets is the annotation of nested named
entities, as well as relations within nested entities and at the discourse
level. NEREL can facilitate the development of novel models that can extract
relations between nested named entities, as well as relations on both sentence
and document levels. NEREL also contains the annotation of events involving
named entities and their roles in the events.
You can see the full list of entity types in the "ent_types" subset
and the full list of relation types in the "rel_types" subset.
## Dataset Structure
There are three "configs" or "subsets" of the dataset.
Using
`load_dataset('MalakhovIlya/NEREL', 'ent_types')['ent_types']`
you can download the list of entity types (
Dataset({features: ['type', 'link']})
), where "link" is the name of the knowledge base used in the entity linking task.
Using
`load_dataset('MalakhovIlya/NEREL', 'rel_types')['rel_types']`
you can download the list of relation types (
Dataset({features: ['type', 'arg1', 'arg2']})
), where "arg1" and "arg2" are lists of entity types that can take part in a
relation of that "type". \<ENTITY> stands for any type.
Using
`load_dataset('MalakhovIlya/NEREL', 'data')` or `load_dataset('MalakhovIlya/NEREL')`
you can download the data itself,
DatasetDict with 3 splits: "train", "test" and "dev".
Each of them contains text documents with annotated entities, relations and
links.
"entities" are used in named-entity recognition task (see https://en.wikipedia.org/wiki/Named-entity_recognition).
"relations" are used in relationship extraction task (see https://en.wikipedia.org/wiki/Relationship_extraction).
"links" are used in entity linking task (see https://en.wikipedia.org/wiki/Entity_linking)
Each entity is represented by a string of the following format:
`"<id>\t<type> <start> <stop>\t<text>"`, where
`<id>` is an entity id,
`<type>` is one of entity types,
`<start>` is a position of the first symbol of entity in text,
`<stop>` is the last symbol position in text +1.
Each relation is represented by a string of the following format:
`"<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>"`, where
`<id>` is a relation id,
`<arg1_id>` and `<arg2_id>` are entity ids.
Each link is represented by a string of the following format:
`"<id>\tReference <ent_id> <link>\t<text>"`, where
`<id>` is a link id,
`<ent_id>` is an entity id,
`<link>` is a reference to knowledge base entity (example: "Wikidata:Q1879675" if link exists, else "Wikidata:NULL"),
`<text>` is a name of entity in knowledge base if link exists, else empty string.
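A minimal sketch of parsing these annotation strings in Python. The string formats follow the descriptions above; the helper names and the example values (entity ids, the relation type) are illustrative, not taken from the dataset:

```python
# Minimal parsers for the NEREL annotation strings described above.
# Formats follow this card; the example inputs below are illustrative.

def parse_entity(line: str):
    ent_id, type_span, text = line.split("\t")
    ent_type, start, stop = type_span.split(" ")
    return {"id": ent_id, "type": ent_type,
            "start": int(start), "stop": int(stop), "text": text}

def parse_relation(line: str):
    rel_id, rest = line.split("\t")
    rel_type, arg1, arg2 = rest.split(" ")
    return {"id": rel_id, "type": rel_type,
            "arg1": arg1.removeprefix("Arg1:"),
            "arg2": arg2.removeprefix("Arg2:")}

print(parse_entity("T1\tPERSON 0 4\tИван"))
print(parse_relation("R1\tWORKPLACE Arg1:T1 Arg2:T2"))
```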
## Citation Information
@article{loukachevitch2021nerel,
title={NEREL: A Russian Dataset with Nested Named Entities, Relations and Events},
author={Loukachevitch, Natalia and Artemova, Ekaterina and Batura, Tatiana and Braslavski, Pavel and Denisov, Ilia and Ivanov, Vladimir and Manandhar, Suresh and Pugachev, Alexander and Tutubalina, Elena},
journal={arXiv preprint arXiv:2108.13112},
year={2021}
}
| iluvvatar/NEREL | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"language:ru",
"region:us"
] | 2022-04-07T08:03:51+00:00 | {"language": ["ru"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "NEREL"} | 2023-03-30T12:37:20+00:00 |
b2805658ae38990172679479369a78b86de8c390 | mteb/reddit-clustering | [
"language:en",
"region:us"
] | 2022-04-07T08:12:22+00:00 | {"language": ["en"]} | 2022-09-27T18:13:31+00:00 |
|
7fb2f514ea683c5282dfec0a9672ece8de90ac50 |
This file contains news texts (sentences) belonging to 5 different news categories (political, business, technology, sports and entertainment). The original dataset was released by Nisansa de Silva (*Sinhala Text Classification: Observations from the Perspective of a Resource Poor Language, 2015*). The original dataset has been processed and cleaned to remove single-word texts, English-only sentences, etc.
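A minimal loading sketch (the repository id is the one on this page; split and column names are assumptions):

```python
from datasets import load_dataset

# Repository id as hosted on the Hub; split/column names may differ.
dataset = load_dataset("NLPC-UOM/Sinhala-News-Category-classification")
print(dataset)
```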
If you use this dataset, please cite {*Nisansa de Silva, Sinhala Text Classification: Observations from the Perspective of a Resource Poor Language, 2015*} and {*Dhananjaya et al. BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, 2022*} | NLPC-UOM/Sinhala-News-Category-classification | [
"task_categories:text-classification",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:si",
"license:mit",
"region:us"
] | 2022-04-07T11:21:01+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced"], "language": ["si"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "sinhala-news-category-classification"} | 2022-10-25T09:03:58+00:00 |
ac4d14eeb68efbef95e247542d4432ce674faeb1 |
This dataset contains Sinhala news headlines extracted from 9 news sources (websites) (Sri Lanka Army, Dinamina, GossipLanka, Hiru, ITN, Lankapuwath, NewsLK,
Newsfirst, World Socialist Web Site-Sinhala). This is a processed version of the corpus created by *Sachintha, D., Piyarathna, L., Rajitha, C., and Ranathunga, S. (2021). Exploiting parallel corpora to improve multilingual embedding based document and sentence alignment*. Single-word sentences and invalid characters have been removed from the originally extracted corpus, and the data has also been subsampled to handle class imbalance.
If you use this dataset please cite {*Dhananjaya et al. BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, 2022*} | NLPC-UOM/Sinhala-News-Source-classification | [
"task_categories:text-classification",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"language:si",
"license:mit",
"region:us"
] | 2022-04-07T11:43:58+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced"], "language": ["si"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "sinhala-news-source-classification"} | 2022-10-25T09:04:01+00:00 |
70a89468f6dccacc6aa2b12a6eac54e74328f235 | mteb/stackexchange-clustering | [
"language:en",
"region:us"
] | 2022-04-07T12:42:09+00:00 | {"language": ["en"]} | 2022-09-27T18:11:56+00:00 |
|
091a54f9a36281ce7d6590ec8c75dd485e7e01d4 | mteb/twentynewsgroups-clustering | [
"language:en",
"region:us"
] | 2022-04-07T12:46:04+00:00 | {"language": ["en"]} | 2022-09-27T18:13:51+00:00 |
|
46d3e24187694e12e7b4ae59b94c80b86ab774d8 |
# Dataset Card for KoBEST
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/SKT-LSL/KoBEST_datarepo
- **Paper:**
- **Point of Contact:** https://github.com/SKT-LSL/KoBEST_datarepo/issues
### Dataset Summary
KoBEST is a Korean benchmark suite consisting of 5 natural language understanding tasks that require advanced knowledge of Korean.
### Supported Tasks and Leaderboards
Boolean Question Answering, Choice of Plausible Alternatives, Words-in-Context, HellaSwag, Sentiment Negation Recognition
### Languages
`ko-KR`
## Dataset Structure
### Data Instances
#### KB-BoolQ
An example of a data point looks as follows.
```
{'paragraph': '두아 리파(Dua Lipa, 1995년 8월 22일 ~ )는 잉글랜드의 싱어송라이터, 모델이다. BBC 사운드 오브 2016 명단에 노미닛되었다. 싱글 "Be the One"가 영국 싱글 차트 9위까지 오르는 등 성과를 보여주었다.',
'question': '두아 리파는 영국인인가?',
'label': 1}
```
#### KB-COPA
An example of a data point looks as follows.
```
{'premise': '물을 오래 끓였다.',
'question': '결과',
'alternative_1': '물의 양이 늘어났다.',
'alternative_2': '물의 양이 줄어들었다.',
'label': 1}
```
#### KB-WiC
An example of a data point looks as follows.
```
{'word': '양분',
'context_1': '토양에 [양분]이 풍부하여 나무가 잘 자란다. ',
'context_2': '태아는 모체로부터 [양분]과 산소를 공급받게 된다.',
'label': 1}
```
#### KB-HellaSwag
An example of a data point looks as follows.
```
{'context': '모자를 쓴 투수가 타자에게 온 힘을 다해 공을 던진다. 공이 타자에게 빠른 속도로 다가온다. 타자가 공을 배트로 친다. 배트에서 깡 소리가 난다. 공이 하늘 위로 날아간다.',
'ending_1': '외야수가 떨어지는 공을 글러브로 잡는다.',
'ending_2': '외야수가 공이 떨어질 위치에 자리를 잡는다.',
'ending_3': '심판이 아웃을 외친다.',
'ending_4': '외야수가 공을 따라 뛰기 시작한다.',
'label': 3}
```
#### KB-SentiNeg
An example of a data point looks as follows.
```
{'sentence': '택배사 정말 마음에 듬',
'label': 1}
```
### Data Fields
### KB-BoolQ
+ `paragraph`: a `string` feature
+ `question`: a `string` feature
+ `label`: a classification label, with possible values `False`(0) and `True`(1)
### KB-COPA
+ `premise`: a `string` feature
+ `question`: a `string` feature
+ `alternative_1`: a `string` feature
+ `alternative_2`: a `string` feature
+ `label`: an answer candidate label, with possible values `alternative_1`(0) and `alternative_2`(1)
### KB-WiC
+ `word`: a `string` feature
+ `context_1`: a `string` feature
+ `context_2`: a `string` feature
+ `label`: a classification label, with possible values `False`(0) and `True`(1)
### KB-HellaSwag
+ `context`: a `string` feature
+ `ending_1`: a `string` feature
+ `ending_2`: a `string` feature
+ `ending_3`: a `string` feature
+ `ending_4`: a `string` feature
+ `label`: an answer candidate label, with possible values `ending_1`(0), `ending_2`(1), `ending_3`(2) and `ending_4`(3)
### KB-SentiNeg
+ `sentence`: a `string` feature
+ `label`: a classification label, with possible values `Negative`(0) and `Positive`(1)
### Data Splits
#### KB-BoolQ
+ train: 3,665
+ dev: 700
+ test: 1,404
#### KB-COPA
+ train: 3,076
+ dev: 1,000
+ test: 1,000
#### KB-WiC
+ train: 3,318
+ dev: 1,260
+ test: 1,260
#### KB-HellaSwag
+ train: 3,665
+ dev: 700
+ test: 1,404
#### KB-SentiNeg
+ train: 3,649
+ dev: 400
+ test: 397
+ test_originated: 397 (Corresponding training data from which the test set originated.)
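A minimal loading sketch for one subtask. The config name `"boolq"` is an assumption based on the KB-BoolQ task name; check the repository for the exact config names:

```python
from datasets import load_dataset

# Config name "boolq" is an assumption derived from the KB-BoolQ task name.
boolq = load_dataset("skt/kobest_v1", "boolq")
print(boolq["train"][0])
```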
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
```
@misc{https://doi.org/10.48550/arxiv.2204.04541,
doi = {10.48550/ARXIV.2204.04541},
url = {https://arxiv.org/abs/2204.04541},
author = {Kim, Dohyeong and Jang, Myeongjun and Kwon, Deuk Sin and Davis, Eric},
title = {KOBEST: Korean Balanced Evaluation of Significant Tasks},
publisher = {arXiv},
year = {2022},
}
```
[More Information Needed]
### Contributions
Thanks to [@MJ-Jang](https://github.com/MJ-Jang) for adding this dataset. | skt/kobest_v1 | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ko",
"license:cc-by-sa-4.0",
"arxiv:2204.04541",
"region:us"
] | 2022-04-07T12:54:23+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["ko"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "pretty_name": "KoBEST"} | 2022-08-22T08:00:17+00:00 |
1ff93b40a787d18186bfbf3abc594e62ea3f7e37 | # AutoTrain Dataset for project: test-21312
## Dataset Description
This dataset has been automatically processed by AutoTrain for project test-21312.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"id": 300,
"target": 1,
"Pclass": 1,
"Name": "Baxter, Mrs. James (Helene DeLaudeniere Chaput)",
"Sex": "female",
"Age": 50.0,
"SibSp": 0,
"Parch": 1,
"Ticket": "PC 17558",
"Fare": 247.5208,
"Cabin": "B58 B60",
"Embarked": "C"
},
{
"id": 858,
"target": 1,
"Pclass": 1,
"Name": "Daly, Mr. Peter Denis ",
"Sex": "male",
"Age": 51.0,
"SibSp": 0,
"Parch": 0,
"Ticket": "113055",
"Fare": 26.55,
"Cabin": "E17",
"Embarked": "S"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"id": "Value(dtype='int64', id=None)",
"target": "ClassLabel(num_classes=2, names=['0', '1'], id=None)",
"Pclass": "Value(dtype='int64', id=None)",
"Name": "Value(dtype='string', id=None)",
"Sex": "Value(dtype='string', id=None)",
"Age": "Value(dtype='float64', id=None)",
"SibSp": "Value(dtype='int64', id=None)",
"Parch": "Value(dtype='int64', id=None)",
"Ticket": "Value(dtype='string', id=None)",
"Fare": "Value(dtype='float64', id=None)",
"Cabin": "Value(dtype='string', id=None)",
"Embarked": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 146 |
| valid | 37 |
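A minimal loading sketch (the repository id is the one on this page; AutoTrain project data may require authentication to access):

```python
from datasets import load_dataset

# Repository id as listed on this page; access may require authentication.
dataset = load_dataset("victor/autotrain-data-test-21312")
print(dataset["train"].features)
```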
| victor/autotrain-data-test-21312 | [
"region:us"
] | 2022-04-07T13:18:31+00:00 | {} | 2022-04-07T13:19:26+00:00 |
b54efd9e872e2df7c82afec86d0ef898dd3b6b72 | kniemiec/crack-segm | [
"region:us"
] | 2022-04-07T16:01:35+00:00 | {} | 2022-04-07T16:11:32+00:00 |
|
5171fedc217c7bc893fa08f0e1d353a2cf666423 | image-classification
---
annotations_creators: []
language_creators: []
language: []
license: []
multilinguality:
- monolingual
pretty_name: airplanes
size_categories: []
source_datasets: []
task_categories:
- image-classification
task_ids:
- multi-label-image-classification
---
# Dataset Card for airplanes
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Three classes of airplanes: drone, UAV, and fighter.
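A minimal loading sketch (the repository id is the one on this page; the image-folder layout and label names are assumptions based on the summary above):

```python
from datasets import load_dataset

# Repository id from this card; dataset layout/label names are assumptions.
dataset = load_dataset("johnnydevriese/airplanes")
print(dataset)
```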
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Drone images were taken from:
Wang, Ye, Yueru Chen, Jongmoo Choi, and C-C. Jay Kuo. “Towards Visible and Thermal Drone Monitoring with Convolutional Neural Networks.” APSIPA Transactions on Signal and Information Processing 8 (2019).
[mcl-drone-dataset](https://mcl.usc.edu/mcl-drone-dataset/) | johnnydevriese/airplanes | [
"region:us"
] | 2022-04-07T19:34:25+00:00 | {} | 2022-09-16T14:28:53+00:00 |
a860423bf48f6e01bb0ff7a28744eb589e0d7ddf | openclimatefix/swedish-rainfall-radar | [
"license:mit",
"doi:10.57967/hf/0884",
"region:us"
] | 2022-04-08T10:53:30+00:00 | {"license": "mit"} | 2022-07-23T13:11:57+00:00 |
|
83551fe521307e2a05274a2150d1d554f898d083 | # GEM Submission
Submission name: ENT
| GEM-submissions/ratishsp__ent__1649421332 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-04-08T11:35:32+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "ENT", "tags": ["evaluation", "benchmark"]} | 2022-04-08T11:35:35+00:00 |
822ca2e2310fc76c47ac7e02c2316a260f63d83d | # GEM Submission
Submission name: NCP_CC
| GEM-submissions/ratishsp__ncp_cc__1649422112 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-04-08T11:48:32+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "NCP_CC", "tags": ["evaluation", "benchmark"]} | 2022-04-08T11:48:34+00:00 |
8e91091fcdcf73d0dca08f4e73cd7b1cbf5c7b51 | # GEM Submission
Submission name: ENT
| GEM-submissions/ratishsp__ent__1649422569 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-04-08T11:56:09+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "ENT", "tags": ["evaluation", "benchmark"]} | 2022-04-08T11:56:11+00:00 |
f6f5797f4852eb1ac0dad141ce7894ed6d71bf8a | # GEM Submission
Submission name: NCP_CC
| GEM-submissions/ratishsp__ncp_cc__1649422863 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-04-08T12:01:03+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "NCP_CC", "tags": ["evaluation", "benchmark"]} | 2022-04-08T12:01:05+00:00 |
b43970c4be4cff8c5259b043cc78202fa34e2bc3 |
# Dataset Card for pl-corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [UlyssesNER-Br homepage](https://github.com/Convenio-Camara-dos-Deputados/ulyssesner-br-propor)
- **Repository:** [UlyssesNER-Br repository](https://github.com/Convenio-Camara-dos-Deputados/ulyssesner-br-propor)
- **Paper:** [UlyssesNER-Br: A corpus of brazilian legislative documents for named entity recognition. In: Computational Processing of the Portuguese Language](https://link.springer.com/chapter/10.1007/978-3-030-98305-5_1)
- **Point of Contact:** [Hidelberg O. Albuquerque](mailto:hidelberg.albuquerque@ufrpe.br)
### Dataset Summary
PL-corpus is part of UlyssesNER-Br, a corpus of Brazilian legislative documents for NER with quality baselines. The presented corpus consists of 150 public bills from the Brazilian Chamber of Deputies, manually annotated. It contains semantic categories and types.
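A minimal loading sketch (the repository id is the one on this page; config and split names are assumptions):

```python
from datasets import load_dataset

# Repository id from this card; configuration/split names are assumptions.
dataset = load_dataset("bergoliveira/pl-corpus")
print(dataset)
```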
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Brazilian Portuguese.
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@InProceedings{ALBUQUERQUE2022,
author="Albuquerque, Hidelberg O.
and Costa, Rosimeire
and Silvestre, Gabriel
and Souza, Ellen
and da Silva, N{\'a}dia F. F.
and Vit{\'o}rio, Douglas
and Moriyama, Gyovana
and Martins, Lucas
and Soezima, Luiza
and Nunes, Augusto
and Siqueira, Felipe
and Tarrega, Jo{\~a}o P.
and Beinotti, Joao V.
and Dias, Marcio
and Silva, Matheus
and Gardini, Miguel
and Silva, Vinicius
and de Carvalho, Andr{\'e} C. P. L. F.
and Oliveira, Adriano L. I.",
title="{UlyssesNER-Br}: A Corpus of Brazilian Legislative Documents for Named Entity Recognition",
booktitle="Computational Processing of the Portuguese Language",
year="2022",
pages="3--14",
}
``` | bergoliveira/pl-corpus | [
"task_categories:token-classification",
"size_categories:10K<n<100K",
"language:pt",
"license:unknown",
"legal",
"legislative",
"region:us"
] | 2022-04-08T14:15:10+00:00 | {"language": ["pt"], "license": "unknown", "size_categories": ["10K<n<100K"], "task_categories": ["token-classification"], "pretty_name": "plcorpus", "tags": ["legal", "legislative"]} | 2023-05-01T13:25:22+00:00 |
67e283fee4cd7cbabbe771d1df88382b043e914c | annotations_creators: []
language_creators: []
languages: []
licenses: []
multilinguality: []
pretty_name: humor_train
size_categories: []
source_datasets: []
task_categories: []
task_ids: [] | lm233/humor_train | [
"region:us"
] | 2022-04-08T17:10:37+00:00 | {} | 2022-04-08T17:13:45+00:00 |
66cd1dbf5577c653ecb99b385200f08e15e12f30 | # Dataset Card for TopiOCQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [TopiOCQA homepage](https://mcgill-nlp.github.io/topiocqa/)
- **Repository:** [TopiOCQA Github](https://github.com/McGill-NLP/topiocqa)
- **Paper:** [Open-domain Conversational Question Answering with Topic Switching](https://arxiv.org/abs/2110.00768)
- **Point of Contact:** [Vaibhav Adlakha](mailto:vaibhav.adlakha@mila.quebec)
### Dataset Summary
TopiOCQA is an information-seeking conversational dataset with challenging topic switching phenomena.
### Languages
The language in the dataset is English as spoken by the crowdworkers. The BCP-47 code for English is en.
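A minimal loading sketch (the repository id is the one on this page; available configs and splits are assumptions):

```python
from datasets import load_dataset

# Repository id from this card; configs/splits may differ.
dataset = load_dataset("McGill-NLP/TopiOCQA")
print(dataset)
```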
## Additional Information
### Licensing Information
TopiOCQA is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```
@article{adlakha2022topiocqa,
title={Topi{OCQA}: Open-domain Conversational Question Answering with Topic Switching},
author={Adlakha, Vaibhav and Dhuliawala, Shehzaad and Suleman, Kaheer and de Vries, Harm and Reddy, Siva},
journal={Transactions of the Association for Computational Linguistics},
volume = {10},
pages = {468-483},
year = {2022},
month = {04},
year={2022},
issn = {2307-387X},
doi = {10.1162/tacl_a_00471},
url = {https://doi.org/10.1162/tacl\_a\_00471},
eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00471/2008126/tacl\_a\_00471.pdf},
}
``` | McGill-NLP/TopiOCQA | [
"task_categories:text-retrieval",
"task_categories:text-generation",
"task_ids:language-modeling",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"language:en",
"license:cc-by-nc-sa-4.0",
"conversational-question-answering",
"arxiv:2110.00768",
"region:us"
] | 2022-04-08T17:29:53+00:00 | {"annotations_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100k"], "task_categories": ["text-retrieval", "text-generation"], "task_ids": ["language-modeling", "open-domain-qa"], "pretty_name": "Open-domain Conversational Question Answering with Topic Switching", "tags": ["conversational-question-answering"]} | 2023-09-29T18:37:48+00:00 |
545613aee11c3c7fa3748b8ca9cdfd1a92e64292 | nateraw/quickdraw | [
"license:cc-by-4.0",
"region:us"
] | 2022-04-08T18:48:21+00:00 | {"license": "cc-by-4.0"} | 2022-04-08T18:48:58+00:00 |
|
b14fd6edb25ad7646d25599565008cadc013f952 |
# Dataset Card for Smithsonian Butterflies
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections [here](https://collections.si.edu/search/results.htm?q=butterfly&view=list&fq=online_media_type%3A%22Images%22&fq=topic%3A%22Insects%22&fq=data_source%3A%22NMNH+-+Entomology+Dept.%22&media.CC0=true&dsort=title&start=0)
### Dataset Summary
High-res images crawled from the Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections.
### Supported Tasks and Leaderboards
Includes metadata about the scientific name of each butterfly, but there may be missing values. Might be good for classification.
### Languages
English
## Dataset Structure
### Data Instances
An example of a data point looks as follows.
```
{'image_url': 'https://ids.si.edu/ids/deliveryService?id=ark:/65665/m3b3132f6666904de396880d9dc811c5cd',
'image_alt': 'view Aholibah Underwing digital asset number 1',
'id': 'ark:/65665/m3b3132f6666904de396880d9dc811c5cd',
'name': 'Aholibah Underwing',
'scientific_name': 'Catocala aholibah',
'gender': None,
'taxonomy': 'Animalia, Arthropoda, Hexapoda, Insecta, Lepidoptera, Noctuidae, Catocalinae',
'region': None,
'locality': None,
'date': None,
'usnm_no': 'EO400317-DSP',
'guid': 'http://n2t.net/ark:/65665/39b506292-715f-45a7-8511-b49bb087c7de',
'edan_url': 'edanmdm:nmnheducation_10866595',
'source': 'Smithsonian Education and Outreach collections',
'stage': None,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2000x1328 at 0x7F57D0504DC0>,
'image_hash': '27a5fe92f72f8b116d3b7d65bac84958',
'sim_score': 0.8440760970115662}
```
### Data Fields
`sim_score` indicates the CLIP similarity score for the prompt "pretty butterfly". It is used to eliminate non-butterfly images (e.g., ID card photos).
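A minimal sketch of filtering on that score (the `split="train"` name and the 0.8 threshold are assumptions for illustration):

```python
from datasets import load_dataset

# Split name is an assumption; the sim_score field is described above.
dataset = load_dataset("ceyda/smithsonian_butterflies", split="train")

# Keep only images the CLIP score considers likely butterflies.
# The 0.8 threshold is an arbitrary example, not a recommended value.
butterflies = dataset.filter(lambda ex: ex["sim_score"] > 0.8)
print(len(butterflies))
```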
### Data Splits
No specific split exists.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Crawled from "Education and Outreach" & "NMNH - Entomology Dept." collections found online [here](https://collections.si.edu/search/results.htm?q=butterfly&view=list&fq=online_media_type%3A%22Images%22&fq=topic%3A%22Insects%22&fq=data_source%3A%22NMNH+-+Entomology+Dept.%22&media.CC0=true&dsort=title&start=0)
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Doesn't include all butterfly species.

## Additional Information
### Dataset Curators
Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections
### Licensing Information
Only results marked: CC0
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset. | ceyda/smithsonian_butterflies | [
"task_categories:image-classification",
"task_ids:multi-label-image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2022-04-08T23:38:13+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-label-image-classification"], "pretty_name": "Smithsonian Butterflies"} | 2022-07-13T08:32:27+00:00 |
6b37397565bdbd6ede10e362e6a1be4c62083bb3 | Dataset Summary
- Natural Language Processing with Disaster Tweets: https://www.kaggle.com/competitions/nlp-getting-started/data
- This particular challenge is perfect for data scientists looking to get started with Natural Language Processing. The competition dataset is not too big, and even if you don’t have much personal computing power, you can do all of the work in our free, no-setup, Jupyter Notebooks environment called Kaggle Notebooks.
Columns
- id - a unique identifier for each tweet
- text - the text of the tweet
- location - the location the tweet was sent from (may be blank)
- keyword - a particular keyword from the tweet (may be blank)
- target - in train.csv only, this denotes whether a tweet is about a real disaster (1) or not (0)
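A minimal pandas sketch of loading the competition file and inspecting the columns described above (the file name follows the Kaggle page; the local path is an assumption):

```python
import pandas as pd

# File name follows the Kaggle competition page; local path is an assumption.
train = pd.read_csv("train.csv")

# Class balance of the target column described above.
print(train["target"].value_counts())
print(train[["id", "keyword", "location", "text"]].head())
```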
| gdwangh/kaggle-nlp-getting-start | [
"region:us"
] | 2022-04-09T07:03:46+00:00 | {} | 2022-04-09T07:13:03+00:00 |
5ce8dc4c178d59d0fcb8f3e580f93fa95ed57901 | # Data Summary
This dataset contains images of Moroccan Chebakia (Traditional Ramadan Sweets).
# Data Source
All of the images were web scrapped using a google image search API.
### Contributions
[`Ilyas Moutawwakil`](https://huggingface.co/IlyasMoutawwakil) added this dataset to the hub. | huggan/chebakia | [
"region:us"
] | 2022-04-09T15:37:31+00:00 | {} | 2022-05-27T10:53:19+00:00 |
cf40283692122fe32d2c1d009f5b1a674be473ad | #flowersdataset #segmentation #VGG
# Dataset Card for Flowers Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Official VGG's README.md](#official-vggs-readmemd)
## Dataset Description
- **Homepage:** https://www.robots.ox.ac.uk/~vgg/data/flowers/17/index.html
- **Repository:** https://huggingface.co/datasets/Guldeniz/flower_dataset
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
VGG have created a 17-category flower dataset with 80 images for each class. The flowers chosen are some common flowers in the UK. The images have large scale, pose and light variations, and there are also classes with large variations of images within the class and close similarity to other classes. The categories can be seen on the dataset homepage. We randomly split the dataset into 3 different training, validation and test sets. A subset of the images have been ground-truth labelled for segmentation.
You can find the split files at the homepage link above, as a .mat file.
### Official VGG's README.md
17 Flower Category Database
----------------------------------------------
This set contains images of flowers belonging to 17 different categories.
The images were acquired by searching the web and taking pictures. There are
80 images for each category.
The database was used in:
Nilsback, M-E. and Zisserman, A. A Visual Vocabulary for Flower Classification.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2006)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback06.{pdf,ps.gz}.
The datasplits used in this paper are specified in datasplits.mat
There are 3 separate splits. The results in the paper are averaged over the 3 splits.
Each split has a training file (trn1,trn2,trn3), a validation file (val1, val2, val3)
and a testfile (tst1, tst2 or tst3).
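A minimal sketch of reading the split file (the key names trn1/val1/tst1 follow the description above; using scipy as the .mat reader is an assumption):

```python
from scipy.io import loadmat

# datasplits.mat as described above; key names trn1/val1/tst1 follow
# this README, the local path and reader choice are assumptions.
splits = loadmat("datasplits.mat")
print(splits["trn1"].shape, splits["val1"].shape, splits["tst1"].shape)
```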
Segmentation Ground Truth
------------------------------------------------
The ground truth is given for a subset of the images from 13 different
categories.
More details can be found in:
Nilsback, M-E. and Zisserman, A. Delving into the whorl of flower segmentation.
Proceedings of the British Machine Vision Conference (2007)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback06.{pdf,ps.gz}.
The ground truth file also contains the file imlist.mat, which indicates
which images in the original database have been annotated.
Distance matrices
-----------------------------------------------
We provide two set of distance matrices:
1. distancematrices17gcfeat06.mat
- Distance matrices using the same features and segmentation as detailed in:
Nilsback, M-E. and Zisserman, A. A Visual Vocabulary for Flower Classification.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition(2006)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback06.{pdf,ps.gz}.
2. distancematrices17itfeat08.mat
- Distance matrices using the same features as described in:
Nilsback, M-E. and Zisserman, A. Automated flower classification over a large number of classes.
Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing (2008)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback08.{pdf,ps.gz}.
and the iterative segmentation scheme detailed in
Nilsback, M-E. and Zisserman, A. Delving into the whorl of flower segmentation.
Proceedings of the British Machine Vision Conference (2007)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback06.{pdf,ps.gz}.
"region:us"
] | 2022-04-09T19:36:46+00:00 | {} | 2022-04-09T19:52:59+00:00 |
c8356096c3ce93ad76030b135e33f4ccd099816e |
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/dooggies).
Model is available [here](https://huggingface.co/huggingnft/dooggies).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/dooggies")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk},
year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/dooggies | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-09T19:54:53+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/dooggies"]} | 2022-04-16T16:59:05+00:00 |
5ca85c638c922bdae8dfd4fbdf7d172ecb0c28d1 |
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/cryptoadz-by-gremplin).
Model is available [here](https://huggingface.co/huggingnft/cryptoadz-by-gremplin).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/cryptoadz-by-gremplin")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk},
year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/cryptoadz-by-gremplin | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T07:20:00+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/cryptoadz-by-gremplin"]} | 2022-04-16T16:59:06+00:00 |
81ecc730edb35304a79c59ee811c056bd68775e8 |
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/cyberkongz).
Model is available [here](https://huggingface.co/huggingnft/cyberkongz).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/cyberkongz")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk},
year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/cyberkongz | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T07:33:51+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/cyberkongz"]} | 2022-04-16T16:59:06+00:00 |
fffef77aafbde453e1e78f72adc287fbbac3bc15 |
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/mini-mutants).
Model is available [here](https://huggingface.co/huggingnft/mini-mutants).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/mini-mutants")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/mini-mutants | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T07:42:22+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/mini-mutants"]} | 2022-04-16T16:59:06+00:00 |
f8ff5ec9ffd286d395a88ea1407957bc457df703 |
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/theshiboshis).
Model is available [here](https://huggingface.co/huggingnft/theshiboshis).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/theshiboshis")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/theshiboshis | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T07:48:07+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/theshiboshis"]} | 2022-04-16T16:59:06+00:00 |
9c963cdf5cd5df0924c0cd0fcd0d44acae67a15a |
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/cryptopunks).
Model is available [here](https://huggingface.co/huggingnft/cryptopunks).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/cryptopunks")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/cryptopunks | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T07:52:12+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/cryptopunks"]} | 2022-04-16T16:59:07+00:00 |
a7b35e95225cdeca125e0ba77f29ccebedc3d48d |
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/nftrex).
Model is available [here](https://huggingface.co/huggingnft/nftrex).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/nftrex")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/nftrex | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T07:55:12+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/nftrex"]} | 2022-04-16T16:59:07+00:00 |
4ab22a713cd38dc0275a53b7b945975ce63fead8 |
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/etherbears).
Model is available [here](https://huggingface.co/huggingnft/etherbears).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/etherbears")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/etherbears | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T07:57:17+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/etherbears"]} | 2022-04-16T16:59:07+00:00 |
3ca436a670f55f3fb909dacf588c575885b8aaa2 |
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/alpacadabraz).
Model is available [here](https://huggingface.co/huggingnft/alpacadabraz).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/alpacadabraz")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/alpacadabraz | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T08:01:03+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/alpacadabraz"]} | 2022-04-16T16:59:07+00:00 |
9ccb67fe13acc0f05adbaf8883ba978d4673f857 |
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/trippytoadznft).
Model is available [here](https://huggingface.co/huggingnft/trippytoadznft).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/trippytoadznft")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/trippytoadznft | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T08:07:49+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/trippytoadznft"]} | 2022-04-16T16:59:07+00:00 |
2d572e61e1204fe8374ca7768511f0a6b57639ac |
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/boredapeyachtclub).
Model is available [here](https://huggingface.co/huggingnft/boredapeyachtclub).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/boredapeyachtclub")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| huggingnft/boredapeyachtclub | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-10T08:14:53+00:00 | {"license": "mit", "tags": ["huggingnft", "nft", "huggan", "gan", "image", "images"], "task": ["unconditional-image-generation"], "datasets": ["huggingnft/boredapeyachtclub"]} | 2022-04-16T16:59:08+00:00 |
2aa51f454f9b1c2aded5899f2c865fe0a7bd746b |
# RuREBus dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Citation Information](#citation-information)
- [Contacts](#contacts)
## Dataset Description
The RuREBus dataset (https://github.com/dialogue-evaluation/RuREBus) is
a Russian dataset for named entity recognition and relation extraction.
## Dataset Structure
There are two subsets of the dataset.
Using
`load_dataset('MalakhovIlya/RuREBus')`
you can download the annotated data (a `DatasetDict`) for the named entity recognition and
relation extraction tasks.
This subset consists of two splits: "train" and "test".
Using
`load_dataset('MalakhovIlya/RuREBus', 'raw_txt')['raw_txt']`
you can download a large corpus (~3 GB, a `Dataset`) of raw texts from the same subject
area, but without any annotations.
"entities" are used in named-entity recognition task (see https://en.wikipedia.org/wiki/Named-entity_recognition).
"relations" are used in relationship extraction task (see https://en.wikipedia.org/wiki/Relationship_extraction).
Each entity is represented by a string of the following format:
`"<id>\t<type> <start> <stop>\t<text>"`, where
`<id>` is the entity id,
`<type>` is one of the entity types,
`<start>` is the position of the entity's first symbol in the text,
`<stop>` is the position of its last symbol in the text plus one.
Each relation is represented by a string of the following format:
`"<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>"`, where
`<id>` is a relation id,
`<arg1_id>` and `<arg2_id>` are entity ids.
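As an illustration, here is a minimal sketch for parsing these annotation strings; the helper names and the sample values below are hypothetical, not part of the official dataset tooling:
```python
def parse_entity(line: str) -> dict:
    # Parses "<id>\t<type> <start> <stop>\t<text>" (format described above).
    ent_id, type_span, text = line.split("\t")
    ent_type, start, stop = type_span.split()
    return {"id": ent_id, "type": ent_type,
            "start": int(start), "stop": int(stop), "text": text}

def parse_relation(line: str) -> dict:
    # Parses "<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>" (format described above).
    rel_id, body = line.split("\t")
    rel_type, arg1, arg2 = body.split()
    return {"id": rel_id, "type": rel_type,
            "arg1": arg1.split(":", 1)[1], "arg2": arg2.split(":", 1)[1]}

print(parse_entity("T1\tECO 0 9\tэкономика"))     # sample values only
print(parse_relation("R1\tTSK Arg1:T1 Arg2:T2"))  # sample values only
```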
## Citation Information
@inproceedings{rurebus,
Address = {Moscow, Russia},
Author = {Ivanin, Vitaly and Artemova, Ekaterina and Batura, Tatiana and Ivanov, Vladimir and Sarkisyan, Veronika and Tutubalina, Elena and Smurov, Ivan},
Title = {RuREBus-2020 Shared Task: Russian Relation Extraction for Business},
Booktitle = {Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference “Dialog” [Komp’iuternaia Lingvistika i Intellektual’nye Tehnologii: Trudy Mezhdunarodnoj Konferentsii “Dialog”]},
Year = {2020}
}
| iluvvatar/RuREBus | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"language:ru",
"region:us"
] | 2022-04-10T08:52:30+00:00 | {"language": ["ru"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "RuREBus"} | 2023-03-30T12:37:32+00:00 |
5804347ff724db187d3aa0260f2e23e4af5a111c | lewtun/top_quark_tagging_old | [
"license:cc-by-4.0",
"region:us"
] | 2022-04-10T13:54:18+00:00 | {"license": "cc-by-4.0"} | 2022-04-10T15:24:28+00:00 |
|
0c4d3efa8324ce171b8e8393b713786f64c63612 | SCIERC (Luan et al., 2018) via "Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks" (Gururangan et al., 2020), reuploaded because of an error encountered when trying to load `zj88zj/SCIERC` with the Hugging Face `datasets` library. | nsusemiehl/SciERC | [
"region:us"
] | 2022-04-10T15:51:23+00:00 | {} | 2022-04-10T15:56:55+00:00 |
6cfe8e5afe107823c07b64d48e333b9b85ae332b | # SKM-TEA Sample Data
This dataset consists of a subset of scans from the [SKM-TEA dataset](https://arxiv.org/abs/2203.06823). It can be used to build tutorials / demos with the SKM-TEA dataset.
To access to the full dataset, please follow instructions on [Github](https://github.com/StanfordMIMI/skm-tea/blob/main/DATASET.md).
**NOTE**: This dataset subset *should not* be used for reporting/publishing metrics. All metrics should be computed on the full SKM-TEA test split.
## Details
This mini dataset (~30 GB) consists of 2 training scans, 1 validation scan, and 1 test scan from the SKM-TEA dataset. HDF5 files for the Raw Data Track are [lzf-compressed](http://www.h5py.org/lzf/) to reduce size while keeping decompression fast.
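For illustration, here is a minimal sketch of opening one of these files with `h5py`, which supports the lzf filter out of the box; the file name and dataset key below are placeholders, not guaranteed paths:
```python
import h5py

# Open a (placeholder) scan file; h5py decodes lzf-compressed datasets
# natively, so no extra compression plugin is needed.
with h5py.File("example_scan.h5", "r") as f:
    print(list(f.keys()))    # inspect which arrays the file exposes
    # data = f["kspace"][:]  # hypothetical key -- check keys() first
```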
## License
By using this dataset, you agree to the [Stanford University Dataset Research Use Agreement](https://stanfordaimi.azurewebsites.net/datasets/4aaeafb9-c6e6-4e3c-9188-3aaaf0e0a9e7).
## Reference
If you use this dataset, please reference the SKM-TEA paper:
```
@inproceedings{
desai2021skmtea,
title={{SKM}-{TEA}: A Dataset for Accelerated {MRI} Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation},
author={Arjun D Desai and Andrew M Schmidt and Elka B Rubin and Christopher Michael Sandino and Marianne Susan Black and Valentina Mazzoli and Kathryn J Stevens and Robert Boutin and Christopher Re and Garry E Gold and Brian Hargreaves and Akshay Chaudhari},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=YDMFgD_qJuA}
}
```
| arjundd/skm-tea-mini | [
"language:en",
"license:other",
"mri",
"quantitative mri",
"reconstruction",
"segmentation",
"detection",
"arxiv:2203.06823",
"region:us"
] | 2022-04-10T16:16:33+00:00 | {"language": "en", "license": "other", "tags": ["mri", "quantitative mri", "reconstruction", "segmentation", "detection"]} | 2022-05-02T19:01:34+00:00 |
456a91903148a8a02f7903b4941ef21ef6f7366f | # AirDrums Data
This dataset contains all of the data needed for training.
`2d_images` contains raw, unsegmented image data for the 2-dimensional dataset. Filenames encode the capture timestamp.
`3d_images` contains raw, unsegmented (paired) image data for the 3-dimensional dataset. Filenames encode the capture timestamp and camera angle.
Images from both of the previous sets are to be segmented and converted to a coordinate and a direction.
`2d_imu` contains IMU data for training in 2-dimensional space (xy), paired with the segmented images from above.
`3d_imu` contains IMU data for training in 3-dimensional space (xyz), paired with the segmented images from the above and front views (the xy and yz planes).
---
language:
- en
tags:
- sensor
- location
datasets:
- 2d_images
- 3d_images
- 2d_imu
- 3d_imu
---
| mattgmcadams/AirDrums | [
"region:us"
] | 2022-04-10T23:22:39+00:00 | {} | 2022-04-10T23:40:23+00:00 |
4fcdce42bb4668907d572c4ae6ac03307847a7ff |
### About the Dataset
The dataset is based on the NEREL corpus.
For more information about the original data, please visit this [source](https://github.com/dialogue-evaluation/RuNNE).
An example of preparing the original data is shown in `Prepare_original_data.ipynb`.
### Additional info
The dataset contains 29 entity types; each tag can mark either the beginning of an entity ("B-") or an inner part of one ("I-").
Frequency of each tag:
- I-AGE: 284
- B-AGE: 247
- B-AWARD: 285
- I-AWARD: 466
- B-CITY: 1080
- I-CITY: 39
- B-COUNTRY: 2378
- I-COUNTRY: 128
- B-CRIME: 214
- I-CRIME: 372
- B-DATE: 2701
- I-DATE: 5437
- B-DISEASE: 136
- I-DISEASE: 80
- B-DISTRICT: 98
- I-DISTRICT: 73
- B-EVENT: 3369
- I-EVENT: 2524
- B-FACILITY: 376
- I-FACILITY: 510
- B-FAMILY: 27
- I-FAMILY: 22
- B-IDEOLOGY: 271
- I-IDEOLOGY: 20
- B-LANGUAGE: 32
- I-LAW: 1196
- B-LAW: 297
- B-LOCATION: 242
- I-LOCATION: 139
- B-MONEY: 147
- I-MONEY: 361
- B-NATIONALITY: 437
- I-NATIONALITY: 41
- B-NUMBER: 1079
- I-NUMBER: 328
- B-ORDINAL: 485
- I-ORDINAL: 6
- B-ORGANIZATION: 3339
- I-ORGANIZATION: 3354
- B-PENALTY: 73
- I-PENALTY: 104
- B-PERCENT: 51
- I-PERCENT: 37
- B-PERSON: 5148
- I-PERSON: 3635
- I-PRODUCT: 48
- B-PRODUCT: 197
- B-PROFESSION: 3869
- I-PROFESSION: 2598
- B-RELIGION: 102
- I-RELIGION: 1
- B-STATE_OR_PROVINCE: 436
- I-STATE_OR_PROVINCE: 154
- B-TIME: 187
- I-TIME: 529
- B-WORK_OF_ART: 133
- I-WORK_OF_ART: 194
You can find the mapping from label ids to entity tags in the `id_to_label_map.pickle` file:
```python
import pickle
with open('id_to_label_map.pickle', 'rb') as f:
mapper = pickle.load(f)
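
# `mapper` is a dict from integer label ids to the tag names listed above;
# it can be used to decode model predictions, e.g.:
# predicted_tags = [mapper[i] for i in predicted_ids]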
``` | surdan/nerel_short | [
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"language:ru",
"region:us"
] | 2022-04-11T05:34:28+00:00 | {"language": "ru", "multilinguality": "monolingual", "task_ids": ["named-entity-recognition"]} | 2022-10-25T09:06:49+00:00 |
527ab728c4a1ffca313d6423f9d837577f477a95 | enimai/MuST-C-de | [
"license:afl-3.0",
"region:us"
] | 2022-04-11T07:23:21+00:00 | {"license": "afl-3.0"} | 2022-04-11T07:25:26+00:00 |
|
820e1da2eaf57add263d470621bc2a3f43a021e7 | This dataset contains 10 examples of the [segments/sidewalk-semantic](https://huggingface.co/datasets/segments/sidewalk-semantic) dataset (i.e. 10 images with corresponding ground-truth segmentation maps). | huggingface/semantic-segmentation-test-sample | [
"region:us"
] | 2022-04-11T08:12:00+00:00 | {} | 2022-04-11T08:15:24+00:00 |
02f598f31161ab47a167d725b0de3dc3c0efdde8 |
## Dataset Description
This dataset provides easier access to the original [MNLI dataset](https://huggingface.co/datasets/multi_nli).
We randomly choose 10% of the original `validation_matched` split and use it as the validation split.
The remaining 90% are used for the test split, while the train split remains unchanged.
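A minimal loading sketch (assuming the dataset id of this repository):
```python
from datasets import load_dataset

# Loads the train / validation / test splits described above.
dataset = load_dataset("westphal-jan/mnli_matched")
print(dataset)
```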
| westphal-jan/mnli_matched | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"source_datasets:multi_nli",
"region:us"
] | 2022-04-11T09:06:59+00:00 | {"source_datasets": ["multi_nli"], "task_categories": ["text-classification"], "task_ids": ["text-scoring", "semantic-similarity-scoring"]} | 2022-04-16T11:02:51+00:00 |
3b2935a74731f120004bdcbc3f9fd73f7d854c96 |
# Dataset Card for `squad_bn`
## Table of Contents
- [Dataset Card for `squad_bn`](#dataset-card-for-squad_bn)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Usage](#usage)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/csebuetnlp/banglabert](https://github.com/csebuetnlp/banglabert)
- **Paper:** [**"BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding"**](https://arxiv.org/abs/2101.00204)
- **Point of Contact:** [Tahmid Hasan](mailto:tahmidhasan@cse.buet.ac.bd)
### Dataset Summary
This is a Question Answering (QA) dataset for Bengali, curated from the [SQuAD 2.0](https://aclanthology.org/N18-1101/) and [TyDI-QA](https://arxiv.org/abs/2003.05002) datasets using the state-of-the-art English to Bengali translation model introduced **[here](https://aclanthology.org/2020.emnlp-main.207/).**
### Supported Tasks and Leaderboards
[More information needed](https://github.com/csebuetnlp/banglabert)
### Languages
* `Bengali`
### Usage
```python
from datasets import load_dataset
dataset = load_dataset("csebuetnlp/squad_bn")
```
## Dataset Structure
### Data Instances
One example from the dataset is given below in JSON format.
```
{
"title": "শেখ মুজিবুর রহমান",
"paragraphs": [
{
"qas": [
{
"answers": [
{
"answer_start": 19,
"text": "১৭ মার্চ ১৯২০"
}
],
"id": "bengali--981248442377505718-0-2649",
"question": "শেখ মুজিবুর রহমান কবে জন্মগ্রহণ করেন ?"
}
],
"context": "শেখ মুজিবুর রহমান (১৭ মার্চ ১৯২০ - ১৫ আগস্ট ১৯৭৫) বাংলাদেশের প্রথম রাষ্ট্রপতি ও ভারতীয় উপমহাদেশের একজন অন্যতম প্রভাবশালী রাজনৈতিক ব্যক্তিত্ব যিনি বাঙালীর অধিকার রক্ষায় ব্রিটিশ ভারত থেকে ভারত বিভাজন আন্দোলন এবং পরবর্তীতে পূর্ব পাকিস্তান থেকে বাংলাদেশ প্রতিষ্ঠার সংগ্রামে নেতৃত্ব প্রদান করেন। প্রাচীন বাঙ্গালি সভ্যতার আধুনিক স্থপতি হিসাবে শেখ মুজিবুর রহমানকে বাংলাদেশের জাতির জনক বা জাতির পিতা বলা হয়ে থাকে। তিনি মাওলানা আব্দুল হামিদ খান ভাসানী প্রতিষ্ঠিত আওয়ামী লীগের সভাপতি, বাংলাদেশের প্রথম রাষ্ট্রপতি এবং পরবর্তীতে এদেশের প্রধানমন্ত্রীর দায়িত্ব পালন করেন। জনসাধারণের কাছে তিনি শেখ মুজিব এবং শেখ সাহেব হিসাবে বেশি পরিচিত এবং তার উপাধি বঙ্গবন্ধু। তার কন্যা শেখ হাসিনা বাংলাদেশ আওয়ামী লীগের বর্তমান সভানেত্রী এবং বাংলাদেশের বর্তমান প্রধানমন্ত্রী।"
}
]
}
```
### Data Fields
The data fields are as follows:
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| split | count |
|--------------|--------|
| `train` | 127771 |
| `validation` | 2502 |
| `test` | 2504 |
## Dataset Creation
For the training set, we translated the complete [SQuAD 2.0](https://aclanthology.org/N18-1101/) dataset using the English to Bangla translation model introduced [here](https://aclanthology.org/2020.emnlp-main.207/). Since automatic translation can introduce errors, we used the [Language-Agnostic BERT Sentence Embeddings (LaBSE)](https://arxiv.org/abs/2007.01852) of the translations and the original sentences to compute their similarity. A datapoint was accepted if all of its constituent sentences had a similarity score over 0.7.
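For illustration, here is a minimal sketch of such a similarity filter, assuming the publicly available `sentence-transformers/LaBSE` checkpoint (the authors' exact pipeline may differ):
```python
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("sentence-transformers/LaBSE")

def keep_datapoint(english_sents, bangla_sents, threshold=0.7):
    # Embed both sides, L2-normalize, and take row-wise cosine similarity.
    emb_en = np.asarray(model.encode(english_sents))
    emb_bn = np.asarray(model.encode(bangla_sents))
    emb_en = emb_en / np.linalg.norm(emb_en, axis=1, keepdims=True)
    emb_bn = emb_bn / np.linalg.norm(emb_bn, axis=1, keepdims=True)
    sims = np.sum(emb_en * emb_bn, axis=1)
    # Accept the datapoint only if every constituent sentence clears the threshold.
    return bool(np.all(sims > threshold))
```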
Since the TyDI-QA Gold Passage task guarantees that the given context contains the answer, and we want to pose our QA task analogously to SQuAD 2.0, we also include examples from the TyDI-QA Passage Selection task that have no answer for the given question. We distribute the resulting examples from the publicly available TyDI-QA training and validation sets evenly between our test and validation sets.
### Curation Rationale
[More information needed](https://github.com/csebuetnlp/banglabert)
### Source Data
[SQuAD 2.0](https://arxiv.org/abs/1606.05250), [TyDi-QA](https://arxiv.org/abs/2003.05002)
#### Initial Data Collection and Normalization
[More information needed](https://github.com/csebuetnlp/banglabert)
#### Who are the source language producers?
[More information needed](https://github.com/csebuetnlp/banglabert)
### Annotations
[More information needed](https://github.com/csebuetnlp/banglabert)
#### Annotation process
[More information needed](https://github.com/csebuetnlp/banglabert)
#### Who are the annotators?
[More information needed](https://github.com/csebuetnlp/banglabert)
### Personal and Sensitive Information
[More information needed](https://github.com/csebuetnlp/banglabert)
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed](https://github.com/csebuetnlp/banglabert)
### Discussion of Biases
[More information needed](https://github.com/csebuetnlp/banglabert)
### Other Known Limitations
[More information needed](https://github.com/csebuetnlp/banglabert)
## Additional Information
### Dataset Curators
[More information needed](https://github.com/csebuetnlp/banglabert)
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use the dataset, please cite the following paper:
```
@misc{bhattacharjee2021banglabert,
title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding},
author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. Sohel Rahman and Anindya Iqbal and Rifat Shahriyar},
year={2021},
eprint={2101.00204},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset. | csebuetnlp/squad_bn | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:bn",
"license:cc-by-nc-sa-4.0",
"arxiv:2101.00204",
"arxiv:2007.01852",
"arxiv:1606.05250",
"arxiv:2003.05002",
"region:us"
] | 2022-04-11T09:16:26+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["bn"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa", "extractive-qa"]} | 2022-08-21T12:17:43+00:00 |
68b251a36a30a7a5e636ce0f55dcebb43bcd576f | openclimatefix/prepared-batches | [
"license:mit",
"doi:10.57967/hf/0883",
"region:us"
] | 2022-04-11T10:31:40+00:00 | {"license": "mit"} | 2022-04-13T10:31:18+00:00 |
|
2cf230d6428c8e3cb35710b9aa18858cc33084bc | # Dataset Card for FrozenLake-v1 | AntoineLB/Frozen-lake-dataset | [
"region:us"
] | 2022-04-11T11:55:48+00:00 | {} | 2022-04-21T11:16:39+00:00 |
5aade2e78656abb0c321488d6d21b331f7cdd665 | irenelizihui/Surfer100 | [
"license:wtfpl",
"region:us"
] | 2022-04-11T22:06:56+00:00 | {"license": "wtfpl"} | 2022-04-11T22:06:56+00:00 |
|
643ceefc17441e56cff66f57c03b13615545d42b | Past studies in sarcasm detection mostly use Twitter datasets collected with hashtag-based supervision, but such datasets are noisy in both labels and language. Furthermore, many tweets are replies to other tweets, and detecting sarcasm in them requires access to those contextual tweets.
To overcome the noise-related limitations of Twitter datasets, this headlines dataset for sarcasm detection was collected from two news websites. TheOnion produces sarcastic versions of current events, and we collected all headlines from its News in Brief and News in Photos categories (which are sarcastic). We collected real (non-sarcastic) news headlines from HuffPost.
This dataset has the following advantages over existing Twitter datasets:
- Since news headlines are written by professionals in a formal manner, there are no spelling mistakes or informal usages. This reduces sparsity and increases the chance of finding pre-trained embeddings.
- Since the sole purpose of TheOnion is to publish sarcastic news, the labels are high-quality with much less noise than in Twitter datasets.
- Unlike tweets, which are replies to other tweets, the headlines we obtained are self-contained. This helps in teasing apart the genuinely sarcastic elements. | raquiba/Sarcasm_News_Headline | [
"region:us"
] | 2022-04-12T02:50:36+00:00 | {} | 2022-04-14T07:19:08+00:00 |
dd723264101153ba5ddf3451e65446346000f496 |
# Inspec Benchmark Dataset for Keyphrase Generation
## About
Inspec is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 2,000 abstracts of scientific papers collected from the [Inspec database](https://www.theiet.org/resources/inspec/).
Keyphrases were annotated by professional indexers in an uncontrolled setting (that is, not limited to thesaurus entries).
Details about the Inspec dataset can be found in the original paper [(Hulth, 2003)][hulth-2003].
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021].
Text pre-processing (tokenization) is carried out using `spacy` (`en_core_web_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (Porter's stemmer implementation provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`.
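For readers who want a feel for the procedure without opening `prmu.py`, here is a rough sketch. It is our approximation, not the official script: the hyphen rule follows spaCy's documented infix-customization recipe, and the PRMU logic follows the category definitions of (Boudin and Gallina, 2021):
```python
import spacy
from spacy.lang.char_classes import (
    ALPHA, ALPHA_LOWER, ALPHA_UPPER, CONCAT_QUOTES, LIST_ELLIPSES, LIST_ICONS,
)
from spacy.util import compile_infix_regex
from nltk.stem.porter import PorterStemmer

nlp = spacy.load("en_core_web_sm")

# Rebuild the infix rules *without* the hyphen pattern so that words
# like "graph-based" stay single tokens (spaCy's documented recipe).
infixes = (
    LIST_ELLIPSES
    + LIST_ICONS
    + [
        r"(?<=[0-9])[+\-\*^](?=[0-9-])",
        r"(?<=[{al}{q}])\.(?=[{au}{q}])".format(
            al=ALPHA_LOWER, au=ALPHA_UPPER, q=CONCAT_QUOTES),
        r"(?<=[{a}]),(?=[{a}])".format(a=ALPHA),
        r"(?<=[{a}0-9])[:<>=/](?=[{a}])".format(a=ALPHA),
    ]
)
nlp.tokenizer.infix_finditer = compile_infix_regex(infixes).finditer

stemmer = PorterStemmer()

def stems(text):
    """Tokenize with spaCy, lowercase, and apply Porter stemming."""
    return [stemmer.stem(tok.text.lower()) for tok in nlp(text)]

def prmu(keyphrase, source_text):
    """Classify a reference keyphrase against the stemmed source."""
    kp, doc = stems(keyphrase), stems(source_text)
    n = len(kp)
    if any(doc[i:i + n] == kp for i in range(len(doc) - n + 1)):
        return "P"  # Present: contiguous match after stemming
    hits = sum(w in doc for w in kp)
    if hits == n:
        return "R"  # Reordered: every word present, not contiguous
    return "M" if hits else "U"  # Mixed: some words; Unseen: none
```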
## Content and statistics
The dataset is divided into the following three splits:
| Split | # documents | Avg. # words | Avg. # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- | ----------: | -----: | -----------: | --------: | ----------: | ------: | -------: |
| Train | 1,000 | 141.7 | 9.79 | 78.00 | 9.85 | 6.22 | 5.93 |
| Validation | 500 | 132.2 | 9.15 | 77.96 | 9.82 | 6.75 | 5.47 |
| Test | 500 | 134.8 | 9.83 | 78.70 | 9.92 | 6.48 | 4.91 |
The following data fields are available :
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
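The splits can be loaded with the `datasets` library under this repository's identifier, for instance:
```python
from datasets import load_dataset

dataset = load_dataset("taln-ls2n/inspec")

sample = dataset["test"][0]
print(sample["title"])
# Each reference keyphrase comes with its PRMU category at the same index.
for kp, cat in zip(sample["keyphrases"], sample["prmu"]):
    print(f"{cat}: {kp}")
```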
## References
- (Hulth, 2003) Anette Hulth. 2003.
[Improved automatic keyword extraction given more linguistic knowledge](https://aclanthology.org/W03-1028).
In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 216-223.
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/).
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[hulth-2003]: https://aclanthology.org/W03-1028/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/ | taln-ls2n/inspec | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:unknown",
"region:us"
] | 2022-04-12T07:10:45+00:00 | {"annotations_creators": ["unknown"], "language_creators": ["unknown"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["text-mining", "text-generation"], "task_ids": ["keyphrase-generation", "keyphrase-extraction"], "pretty_name": "Inspec"} | 2022-07-21T13:14:59+00:00 |