license: openrail
language:
  - fr
tags:
  - french
  - philosophy
  - quebec
size_categories:
  - 100K<n<1M

Dataset Card for French Philosophy from erudit.org

Dataset Description

Dataset Summary

This dataset contains all the French-language philosophy published on erudit.org. It was generated with a BeautifulSoup (bs4) web parser that you can find in this repo: https://github.com/MFGiguere/french-philosophy-generator.
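For example, the dataset can be loaded with the Hugging Face `datasets` library. This is a minimal sketch, assuming the dataset is published on the Hub; the repository id below is a placeholder, not the actual path.

```python
from datasets import load_dataset

# Placeholder repository id: replace with this dataset's actual Hub path.
dataset = load_dataset("username/french-philosophy-erudit", split="train")

# Each example is a single sentence together with its metadata (see Data Fields below).
print(dataset[0])
```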

Supported Tasks and Leaderboards

This dataset could be useful for a (non-exhaustive) set of tasks such as: detecting whether a text is philosophical, generating philosophical sentences, generating an abstract from an article, and more.

Languages

The database includes all journals whose main language is French, but it may contain non-French sentences from quotations or special issues.

Dataset Structure

Data Instances

Each row of the database is a single sentence, and each column is a piece of the text's metadata.

Data Fields

The data is structured as follows, which makes it possible to combine sentences into paragraphs, sections, or whole texts.

features = {
        "Journal": str,        # Name of the journal in which the text was published
        "Author": str,         # Author name; needed to generate texts by author
        "Year": str,           # Publication year; helps track trends over time
        "Title": str,          # Article title; useful for smaller subsets, can be inferred with enough files
        "section_rank": int,   # 0 for the abstract, sections numbered from 1
        "par_rank": int,       # 0 for the abstract, paragraphs numbered from 1
        "sent_rank": int,      # Position of the sentence within its paragraph
        "text": str,           # A single sentence
        }
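As an illustration, here is a minimal sketch of how the rank fields could be used to reassemble single-sentence rows into whole texts. It assumes the data has been loaded into a pandas DataFrame (for example via `dataset.to_pandas()`); the helper name is hypothetical, not part of this dataset.

```python
import pandas as pd

def rebuild_texts(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical helper: reassemble sentences into whole texts.

    Sorts by section, paragraph, and sentence rank, then joins the
    sentences of each (Journal, Author, Year, Title) group.
    """
    ordered = df.sort_values(["section_rank", "par_rank", "sent_rank"])
    return (
        ordered.groupby(["Journal", "Author", "Year", "Title"], sort=False)["text"]
        .apply(" ".join)
        .reset_index(name="full_text")
    )

# Example: keep only the abstracts (section_rank == 0) of each article.
# abstracts = rebuild_texts(df[df["section_rank"] == 0])
```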

Additional Information

Known limitations

Parsing was done in two phases: the first pass was run on a farm with poor Wi-Fi, so some texts may have been partially or entirely skipped. A second pass was therefore run to append the missing texts to the dataset.

There were also inconsistencies in the source pages that I tried to handle in the parser, but some inconsistencies remain, and no manual validation of the data was performed afterward.

Contributions

This dataset exists thanks to the Deepmay 2023 bootcamp instructors, who gave us a solid introduction to language models, and to a friend at the bootcamp who suggested that I host this dataset publicly here!