
This dataset addresses a research gap by presenting a carefully curated collection of misogynistic memes in Hindi-English code-mixed language. It introduces two sub-tasks: the first is a binary classification to determine whether a meme is misogynistic; the second categorizes misogynistic memes into multiple labels, namely Objectification, Prejudice, and Humiliation.

For more information and citation: Singh, A., Sharma, D., & Singh, V. K. (2024). MIMIC: Misogyny Identification in Multimodal Internet Content in Hindi-English Code-Mixed Language. ACM Transactions on Asian and Low-Resource Language Information Processing. (https://doi.org/10.1145/3656169)

The ZIP archive contains a CSV file named "MIMIC2024" and a directory named "Files". The memes are stored in JPEG format inside the "Files" directory, while the CSV file contains the annotation details for each meme. The CSV file consists of six columns, described as follows:

- FileName: Name of the meme image inside the Files folder
- ExtractedText: Text extracted from the meme using EasyOCR
- Misogyny: Label for the binary classification task (1 = Misogynistic, 0 = Non-Misogynistic)
- Objectification, Prejudice, Humiliation: Labels for the multi-label classification task, indicating the category or categories into which a misogynistic meme falls. (Note: A meme may have multiple labels.)
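The following is a minimal sketch of how the annotations might be loaded and paired with the meme images. It assumes the ZIP has been extracted so that the CSV (assumed here to be named MIMIC2024.csv) and the Files directory sit in the working directory; the column names follow the description above.

```python
import os
import pandas as pd

# Load the annotation file (file name assumed from the dataset description)
df = pd.read_csv("MIMIC2024.csv")

# Binary sub-task: 1 = Misogynistic, 0 = Non-Misogynistic
print(df["Misogyny"].value_counts())

# Multi-label sub-task: a misogynistic meme may carry any combination of these labels
label_cols = ["Objectification", "Prejudice", "Humiliation"]
print(df.loc[df["Misogyny"] == 1, label_cols].sum())

# Resolve the path of each meme image inside the Files directory
df["ImagePath"] = df["FileName"].apply(lambda name: os.path.join("Files", name))
```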
