text-classification bool 2
classes | text stringlengths 0 664k |
|---|---|
false |
# MIRACL (yo) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-yo-queries-22-12](https://huggingface.... |
false |
# MIRACL (yo) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-yo-queries-22-12](https://huggingface.... |
false |
# MIRACL (de) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-de-queries-22-12](https://huggingface.... |
false |
# MIRACL (de) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-de-queries-22-12](https://huggingface.... |
false | |
true | # Mawqif: A Multi-label Arabic Dataset for Target-specific Stance Detection
- *Mawqif* is the first Arabic dataset that can be used for target-specific stance detection.
- This is a multi-label dataset where each data point is annotated for stance, sentiment, and sarcasm.
- We benchmark *Mawqif* dataset on the stan... |
false | # Dataset Card for "nowiki_second_scrape_merged"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structu... |
false |
# Dataset Card for *BioLeaflets* Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dat... |
false | # AutoTrain Dataset for project: square-count-classifier
## Dataset Description
This dataset has been automatically processed by AutoTrain for project square-count-classifier.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks... |
false | # Dataset for project: quick-summarization
## Dataset Description
This dataset has been trained for project quick-summarization.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Ever no... |
false | # DreamBank - Dreams
The dataset is a collection of ~30k textual reports of dreams, originally scraped from the [DreamBank](https://www.dreambank.net/) databased by
[`mattbierner`](https://github.com/mattbierner/DreamScrape). The DreamBank reports are divided into `series`,
which are collections of individuals or re... |
false | persianConversation |
true | # Dataset Card for "yolochess_lichess-elite_2211"
Source: https://database.nikonoel.fr/ - filtered from https://database.lichess.org for November 2022
Features:
- fen = Chess board position in [FEN](https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation) format
- move = Move played by a strong human player ... |
false |
## ita2medieval
The **ita2medieval** dataset contains sentences from medieval italian along with paraphrases in contemporary italian (approximately 6.5k pairs in total). The medieval italian sentences are extracted from texts by Dante, Petrarca, Guinizelli and Cavalcanti.
It is intended to perform text-style-transfer... |
true |
# ml4pubmed/pubmed-text-classification-cased
A parsed/cleaned version of the source data retaining case. |
false |
## STR-2022: Dataset Description
The dataset consists of 5500 English sentence pairs that are scored and ranked on a relatedness scale ranging from 0 (least related) to 1 (most related).
## Loading the Dataset
- The sentence pairs, and associated scores, are in the file sem_text_rel_ranked.csv in the root directory.... |
false | ---
task_categories:
- image-segmentation
tags:
- Earth Observation |
false |
# Dataset Card for Unsilencing Colonial Archives via Automated Entity Recognition
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Language... |
false |
# Danbooru 2021 SQLite
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is the metadata of danbooru 2021 dataset in SQLite format.
https://gwern.net/danbooru2021
### Supported Tasks and Leaderboards
[More Information Ne... |
false | # Toloker Graph: Interaction of Crowd Annotators
## Dataset Description
- **Repository:** https://github.com/Toloka/TolokerGraph
- **Point of Contact:** research@toloka.ai
### Dataset Summary
This repository contains a graph representing interactions between crowd annotators on a project labeled on the [Toloka](htt... |
false |
# Dataset Card for SAMSum Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instanc... |
false | # Funniest responses dataset
This crowdsourced dataset is a dataset of the funniest answers we've collected over time. The collection started in feb. 8 of 2023.
## Usage
Here's how the data looks.
```
;о
Hello, how are you doing?
Better than you
;а
I have 100 trillion parameters in my brain, that's a lot more than yo... |
true |
# Dataset Card for DDisco
## Dataset Description
The DDisco dataset is a dataset which can be used to train models to classify levels of coherence in _danish_ discourse. Each entry in the dataset is annotated with a discourse coherence label (rating from 1 to 3):
1: low coherence (difficult to understand, unorganiz... |
false | # Dataset Card for Genshin Voice
## Dataset Description
### Dataset Summary
The Genshin Voice dataset is a text-to-voice dataset of different Genshin Impact characters unpacked from the game.
### Languages
The text in the dataset is in Mandarin.
## Dataset Creation
### Source Data
#### Initial Data Collection a... |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Repository:** https://www.kaggle.com/datasets/muhammadalbrham/rgb-arabic-alphabets-sign-language-dataset
- **Paper:** https://arxiv.org/abs/2301.11932
- **Point of Contact:** muhammadal-brham@ieee.org
### Dataset Summary
RGB Arabic Alphabet Sign Language (A... |
true | # squad_v2_factuality_v1
This dataset is derived from "squad_v2" training "context" with the following steps.
1. NER is run to extract entities.
2. Lexicon of person's name, date, organisation name and location are collected.
3. 20% of the time, one of the text attribute (person's name, date, organisation name and loca... |
false |
# VRoid Image Dataset Lite
This is a dataset to train text-to-image or other models without any copyright issue.
All materials used in this dataset are CC0 or properly licensed.
This dataset is also used to train [Mitsua Diffusion One](https://huggingface.co/Mitsua/mitsua-diffusion-one), which is a latent text-to-im... |
false |
# Dataset Card for SentiCoref
### Dataset Summary
SentiCoref is a Slovenian coreference resolution dataset containing **391962** tokens inside **756** documents*.
Also contains automatically (?) annotated named entities and manually verified lemmas and morphosyntactic tags (MSD).
\* This is the latest version of... |
true | # AutoTrain Dataset for project: dataset-mentions
## Dataset Description
This dataset has been automatically processed by AutoTrain for project dataset-mentions.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
`... |
false | |
false | # AutoTrain Dataset for project: flan-xl-conversation
## Dataset Description
This dataset has been automatically processed by AutoTrain for project flan-xl-conversation.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as fo... |
false | # AutoTrain Dataset for project: flan-large-conv
## Dataset Description
This dataset has been automatically processed by AutoTrain for project flan-large-conv.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
``... |
false | # AutoTrain Dataset for project: exacts
## Dataset Description
This dataset has been automatically processed by AutoTrain for project exacts.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"t... |
false |
# HC3-textgen-qa
- the `Hello-SimpleAI/HC3` reformatted for textgen
- special tokens for question/answer, see dataset preview |
false | # Dataset Card for "curiosamente"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false | |
false | |
false |
# Dataset card for personSeg
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset description](#dataset-description)
- [Dataset categories](#dataset-categories)
## Dataset description
- **Homepage:** https://segments.ai/shahardekel/personSeg
This dataset was created using [Segments.ai](https... |
false |
RecipePairs dataset, originally from the 2022 EMNLP paper: ["SHARE: a System for Hierarchical Assistive Recipe Editing"](https://aclanthology.org/2022.emnlp-main.761/) by Shuyang Li, Yufei Li, Jianmo Ni, and Julian McAuley.
This version (1.5.0) has been updated with 6.9M pairs of `base -> target` recipes, alongside t... |
false | |
false |
# `voc_superpixels_edge_wt_only_coord_10`
### Dataset Summary
| Dataset | Domain | Task | Node Feat. (dim) | Edge Feat. (dim) | Perf. Metric |
|---|---|---|---|---|---|
| PascalVOC-SP| Computer Vision | Node Prediction | Pixel + Coord (14) | Edge Weight (1 or 2) | macro F1 |
| Dataset | # Graphs | # Nodes |... |
false |
# `voc_superpixels_edge_wt_only_coord_30`
### Dataset Summary
| Dataset | Domain | Task | Node Feat. (dim) | Edge Feat. (dim) | Perf. Metric |
|---|---|---|---|---|---|
| PascalVOC-SP| Computer Vision | Node Prediction | Pixel + Coord (14) | Edge Weight (1 or 2) | macro F1 |
| Dataset | # Graphs | # Nodes |... |
false |
## Dataset Description
A subset of [the-stack](https://huggingface.co/datasets/bigcode/the-stack) dataset, from 87 programming languages, and 295 extensions.
Each language is in a separate folder under `data/` and contains folders of its extensions. We select samples from 20,000 random files of the original dataset,... |
false |
# `voc_superpixels_edge_wt_coord_feat_10`
### Dataset Summary
| Dataset | Domain | Task | Node Feat. (dim) | Edge Feat. (dim) | Perf. Metric |
|---|---|---|---|---|---|
| PascalVOC-SP| Computer Vision | Node Prediction | Pixel + Coord (14) | Edge Weight (1 or 2) | macro F1 |
| Dataset | # Graphs | # Nodes |... |
false |
# `voc_superpixels_edge_wt_only_coord_30`
### Dataset Summary
| Dataset | Domain | Task | Node Feat. (dim) | Edge Feat. (dim) | Perf. Metric |
|---|---|---|---|---|---|
| PascalVOC-SP| Computer Vision | Node Prediction | Pixel + Coord (14) | Edge Weight (1 or 2) | macro F1 |
| Dataset | # Graphs | # Nodes |... |
false |
# `voc_superpixels_edge_wt_region_boundary_10`
### Dataset Summary
| Dataset | Domain | Task | Node Feat. (dim) | Edge Feat. (dim) | Perf. Metric |
|---|---|---|---|---|---|
| PascalVOC-SP| Computer Vision | Node Prediction | Pixel + Coord (14) | Edge Weight (1 or 2) | macro F1 |
| Dataset | # Graphs | # No... |
false |
# `voc_superpixels_edge_wt_region_boundary_30`
### Dataset Summary
| Dataset | Domain | Task | Node Feat. (dim) | Edge Feat. (dim) | Perf. Metric |
|---|---|---|---|---|---|
| PascalVOC-SP| Computer Vision | Node Prediction | Pixel + Coord (14) | Edge Weight (1 or 2) | macro F1 |
| Dataset | # Graphs | # No... |
false |
# Dataset Card for EusCrawl
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structur... |
false |
The full dataset information can be found in the JSON file named "augmented_cacapo_for_e2e-02_13_2023_22_17_09", which was created with the interactive dataset creator provided by Huggingface. |
false |
Dataset information can be found in the JSON file named "elongated_training_cacapo_updated-02_22_2023_23_23_20.json", which was created with the interactive dataset creator provided by Huggingface. |
false | |
false |
# Dataset card for personSegSmall
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset description](#dataset-description)
- [Dataset categories](#dataset-categories)
## Dataset description
- **Homepage:** https://segments.ai/shahardekel/personSegSmall
This dataset was created using [Segments... |
false |
# Dataset card for personSegSmall
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset description](#dataset-description)
- [Dataset categories](#dataset-categories)
## Dataset description
- **Homepage:** https://segments.ai/shahardekel/personSegSmall
This dataset was created using [Segments... |
false |
# Dataset Card for MC4_Legal: A Corpus Covering the Legal Part of MC4 for European Languages
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboard... |
false | # AutoTrain Dataset for project: code-mixed-language-identification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project code-mixed-language-identification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample f... |
false |
# Dataset Card for XMediaSum
### Dataset Summary
We present XMediaSum, a cross-lingual dialogue summarization dataset with 40K English(dialogues)->Chinese(summaries) and 40K English (dialogues)->German(summaries) samples. XMediaSum is created by manually translating the English summaries of MediaSum (a English mono... |
false |
# Dataset Card for XMediaSum
### Dataset Summary
We present XMediaSum, a cross-lingual dialogue summarization dataset with 40K English(dialogues)->Chinese(summaries) and 40K English (dialogues)->German(summaries) samples. XMediaSum is created by manually translating the English summaries of MediaSum (a Englis... |
false |
# Dataset Card for SRSD-Feynman (Easy set with Dummy Variables)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [... |
false |
# Dataset Card for SRSD-Feynman (Medium set with Dummy Variables)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
-... |
false |
# Dataset Card for SRSD-Feynman (Hard set with Dummy Variables)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [... |
true |
## Dataset Description
- **Homepage:** https://github.com/gijswijnholds/sick_nl
- **Repository:** https://github.com/gijswijnholds/sick_nl
- **Paper:** https://aclanthology.org/2021.eacl-main.126/
- **Point of Contact:** [Gijs Wijnholds](mailto:gijswijnholds@gmail.com)
### Dataset Summary
An automatically translate... |
true | # AutoTrain Dataset for project: bbc-news-classifier
## Dataset Description
This dataset has been automatically processed by AutoTrain for project bbc-news-classifier.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as foll... |
true | # AutoTrain Dataset for project: new_1000_respostas
## Dataset Description
This dataset has been automatically processed by AutoTrain for project new_1000_respostas.
### Languages
The BCP-47 code for the dataset's language is pt.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows... |
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hug... |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg... |
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hug... |
false |
# `peptides-functional`
### Dataset Summary
| Dataset | Domain | Task | Node Feat. (dim) | Edge Feat. (dim) | Perf. Metric |
|---|---|---|---|---|---|
| Peptides-func | Chemistry | Graph Classification | Atom Encoder (9) | Bond Encoder (3) | AP
| Dataset | # Graphs | # Nodes | μ Nodes | μ Deg. | # Edges | ... |
false |
# `peptides-functional`
### Dataset Summary
| Dataset | Domain | Task | Node Feat. (dim) | Edge Feat. (dim) | Perf. Metric |
|---|---|---|---|---|---|
| Peptides-struct | Chemistry | Graph Regression | Atom Encoder (9) | Bond Encoder (3) | MAE |
| Dataset | # Graphs | # Nodes | μ Nodes | μ Deg. | # Edges |... |
false |
# `peptides-functional`
### Dataset Summary
| Dataset | Domain | Task | Node Feat. (dim) | Edge Feat. (dim) | Perf. Metric |
|---|---|---|---|---|---|
| PCQM-Contact | Quantum Chemistry | Link Prediction | Atom Encoder (9) | Bond Encoder (3) | Hits@K, MRR
| Dataset | # Graphs | # Nodes | μ Nodes | μ Deg. |... |
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hug... |
false |
# Диалоги из анекдотов и шуток
Датасет содержит результат парсинга анекдотов, наскрапленных с разных сайтов.
## Формат
Каждый сэмпл содержит четыре поля:
"context" - контекст диалога, включая все недиалоговые вставки. Обратите внимание, что контекст содержит как предшествующие реплики, так и прочий сопутствующий ... |
true |
## Anthropic red-teaming data augmentation
The aim is to make use of data from Human-generated red teaming data from [Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned](https://www.anthropic.com/red_teaming.pdf) to train a safety classifier. The dataset which is already used ... |
true |
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instanc... |
false | # Alexa Answers from [alexaanswers.amazon.com](https://alexaanswers.amazon.com/)
The Alexa Answers community helps to improve Alexa’s knowledge and answer questions asked by Alexa users. Which contains some very quirky and hard question like
Q: what percent of the population has blackhair
A: The most common hair colo... |
true |
# Dataset Card for Skolmat
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More ... |
false |
**Official website**: https://github.com/lfoppiano/SuperMat
### Reference
The paper discussing this datset can be found [here](https://doi.org/10.1080/27660400.2021.1918396).
For citing:
```
@article{doi:10.1080/27660400.2021.1918396,
author = {Luca Foppiano and Sae Dieb and Akira Suzuki and Pedro Baptista de Cast... |
true |
# Dataset Card for "RO-News-Offense"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-in... |
true | |
false | |
false | This is the imdb dataset, https://huggingface.co/datasets/imdb
We've used a reward / sentiment model, https://huggingface.co/lvwerra/distilbert-imdb to compute the rewards of the offline data.
This is so that we can use offline RL on the data. |
false | # AutoTrain Dataset for project: chessbig
## Dataset Description
This dataset has been automatically processed by AutoTrain for project chessbig.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
... |
false | # wikisource
- Source:
- Num examples: 24,339
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/wikisource_vi")
``` |
false | # COVID-19 News
- Source: https://huggingface.co/datasets/bigscience-data/roots_vi_data_on_covid_19_news_coverage_in_vietnam
- Num examples: 14,925
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/covid_19_news_vi")
``` |
false | # Ted Talks
- Source: https://huggingface.co/datasets/ted_talks_iwslt
- Num examples: 1,566
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/ted_talks_iwslt_vi")
``` |
false | # Ted Talks
- Source: https://huggingface.co/datasets/ted_talks_iwslt
- Num examples: 2,293
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/ted_talks_iwslt_en")
``` |
false | # wiktionary
- Source: https://huggingface.co/datasets/bigscience-data/roots_en_wiktionary
- Num examples: 33,976
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/wiktionary_vi") |
false | # wiktionary
- Source: https://huggingface.co/datasets/bigscience-data/roots_en_wiktionary
- Num examples: 194,570
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/wiktionary_en")
``` |
false |
# m0_fine_tuning_ref_cmbert_io
## Introduction
This dataset was used to fine-tuned [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for **flat NER task** using Flat NER approach [M0].
It contains 19th-century Paris trade directories' entries.
## Dataset parameters
* Approach : M0
... |
false |
# m0_fine_tuning_ref_ptrn_cmbert_io
## Introduction
This dataset was used to fine-tuned [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for **flat NER task** using Flat NER approach [M0].
It contains 19th-century Paris trade directories' entries.
## Dataset... |
false |
# m0_fine_tuning_ocr_cmbert_io
## Introduction
This dataset was used to fine-tuned [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for **flat NER task** using Flat NER approach [M0].
It contains 19th-century Paris trade directories' entries.
## Dataset parameters
* Approach : M0
... |
false |
# m0_fine_tuning_ocr_ptrn_cmbert_io
## Introduction
This dataset was used to fine-tuned [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for **flat NER task** using Flat NER approach [M0].
It contains 19th-century Paris trade directories' entries.
## Dataset... |
false |
# m1_fine_tuning_ref_cmbert_io
## Introduction
This dataset was used to fine-tuned [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for **nested NER task** using Independant NER layers approach [M1].
It contains Paris trade directories entries from the 19th century.
## Dataset para... |
false |
# m1_fine_tuning_ref_ptrn_cmbert_io
## Introduction
This dataset was used to fine-tuned [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for **nested NER task** using Independant NER layers approach [M1].
It contains Paris trade directories entries from the 1... |
false |
# m1_fine_tuning_ref_cmbert_iob2
## Introduction
This dataset was used to fine-tuned [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for **nested NER task** using Independant NER layers approach [M1].
It contains Paris trade directories entries from the 19th century.
## Dataset pa... |
false |
# m1_fine_tuning_ref_ptrn_cmbert_iob2
## Introduction
This dataset was used to fine-tuned [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for **nested NER task** using Independant NER layers approach [M1].
It contains Paris trade directories entries from the... |
false |
# m1_fine_tuning_ocr_cmbert_io
## Introduction
This dataset was used to fine-tuned [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for **nested NER task** using Independant NER layers approach [M1].
It contains Paris trade directories entries from the 19th century.
## Dataset para... |
false |
# m1_fine_tuning_ocr_cmbert_iob2
## Introduction
This dataset was used to fine-tuned [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for **nested NER task** using Independant NER layers approach [M1].
It contains Paris trade directories entries from the 19th century.
## Dataset pa... |
false |
# m1_fine_tuning_ocr_ptrn_cmbert_iob2
## Introduction
This dataset was used to fine-tuned [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for **nested NER task** using Independant NER layers approach [M1].
It contains Paris trade directories entries from the... |
false |
# m2m3_fine_tuning_ref_cmbert_io
## Introduction
This dataset was used to fine-tuned [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for **nested NER task** using Independant NER layers approach [M1].
It contains Paris trade directories entries from the 19th century.
## Dataset pa... |
false |
# m2m3_fine_tuning_ref_ptrn_cmbert_io
## Introduction
This dataset was used to fine-tuned [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for **nested NER task** using Independant NER layers approach [M1].
It contains Paris trade directories entries from the... |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.