---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- hi
tags:
- Social Media
- News Media
- Sentiment
- Stance
- Emotion
pretty_name: >-
  LlamaLens: Specialized Multilingual LLM for Analyzing News and Social Media
  Content -- Hindi
size_categories:
- 10K<n<100K
dataset_info:
- config_name: Sentiment Analysis
  splits:
  - name: train
    num_examples: 10039
  - name: dev
    num_examples: 1258
  - name: test
    num_examples: 1259
- config_name: MC_Hinglish1
  splits:
  - name: train
    num_examples: 5177
  - name: dev
    num_examples: 2219
  - name: test
    num_examples: 1000
- config_name: Offensive Speech Detection
  splits:
  - name: train
    num_examples: 2172
  - name: dev
    num_examples: 318
  - name: test
    num_examples: 636
- config_name: xlsum
  splits:
  - name: train
    num_examples: 70754
  - name: dev
    num_examples: 8847
  - name: test
    num_examples: 8847
- config_name: Hindi-Hostility-Detection-CONSTRAINT-2021
  splits:
  - name: train
    num_examples: 5718
  - name: dev
    num_examples: 811
  - name: test
    num_examples: 1651
- config_name: hate-speech-detection
  splits:
  - name: train
    num_examples: 3327
  - name: dev
    num_examples: 476
  - name: test
    num_examples: 951
- config_name: fake-news
  splits:
  - name: train
    num_examples: 8393
  - name: dev
    num_examples: 1417
  - name: test
    num_examples: 2743
- config_name: Natural Language Inference
  splits:
  - name: train
    num_examples: 1251
  - name: dev
    num_examples: 537
  - name: test
    num_examples: 447
configs:
- config_name: Sentiment Analysis
  data_files:
  - split: test
    path: Sentiment Analysis/test.json
  - split: dev
    path: Sentiment Analysis/dev.json
  - split: train
    path: Sentiment Analysis/train.json
- config_name: MC_Hinglish1
  data_files:
  - split: test
    path: MC_Hinglish1/test.json
  - split: dev
    path: MC_Hinglish1/dev.json
  - split: train
    path: MC_Hinglish1/train.json
- config_name: Offensive Speech Detection
  data_files:
  - split: test
    path: Offensive Speech Detection/test.json
  - split: dev
    path: Offensive Speech Detection/dev.json
  - split: train
    path: Offensive Speech Detection/train.json
- config_name: xlsum
  data_files:
  - split: test
    path: xlsum/test.json
  - split: dev
    path: xlsum/dev.json
  - split: train
    path: xlsum/train.json
- config_name: Hindi-Hostility-Detection-CONSTRAINT-2021
  data_files:
  - split: test
    path: Hindi-Hostility-Detection-CONSTRAINT-2021/test.json
  - split: dev
    path: Hindi-Hostility-Detection-CONSTRAINT-2021/dev.json
  - split: train
    path: Hindi-Hostility-Detection-CONSTRAINT-2021/train.json
- config_name: hate-speech-detection
  data_files:
  - split: test
    path: hate-speech-detection/test.json
  - split: dev
    path: hate-speech-detection/dev.json
  - split: train
    path: hate-speech-detection/train.json
- config_name: fake-news
  data_files:
  - split: test
    path: fake-news/test.json
  - split: dev
    path: fake-news/dev.json
  - split: train
    path: fake-news/train.json
- config_name: Natural Language Inference
  data_files:
  - split: test
    path: Natural Language Inference/test.json
  - split: dev
    path: Natural Language Inference/dev.json
  - split: train
    path: Natural Language Inference/train.json
---
# LlamaLens: Specialized Multilingual LLM Dataset

## Overview

LlamaLens is a specialized multilingual LLM designed for analyzing news and social media content. It focuses on 19 NLP tasks, leveraging 52 datasets across Arabic, English, and Hindi.
This repo includes the scripts needed to run our full pipeline: data preprocessing and sampling, instruction dataset creation, model fine-tuning, inference, and evaluation.
## Features

- Multilingual support (Arabic, English, Hindi)
- 19 NLP tasks with 52 datasets
- Optimized for news and social media content analysis
## 📂 Dataset Overview

### Hindi Datasets

| Task | Dataset | # Labels | # Train | # Test | # Dev |
|---|---|---|---|---|---|
| Cyberbullying | MC-Hinglish1.0 | 7 | 7,400 | 1,000 | 2,119 |
| Factuality | fake-news | 2 | 8,393 | 2,743 | 1,417 |
| Hate Speech | hate-speech-detection | 2 | 3,327 | 951 | 476 |
| Hate Speech | Hindi-Hostility-Detection-CONSTRAINT-2021 | 15 | 5,718 | 1,651 | 811 |
| Natural Language Inference | Natural Language Inference | 2 | 1,251 | 447 | 537 |
| Summarization | xlsum | -- | 70,754 | 8,847 | 8,847 |
| Offensive Speech | Offensive Speech Detection | 3 | 2,172 | 636 | 318 |
| Sentiment | Sentiment Analysis | 3 | 10,039 | 1,259 | 1,258 |
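
Each row above corresponds to one config of this repository, so a single task and split can be pulled with the 🤗 `datasets` library. The sketch below is only illustrative: the repository ID is an assumption and should be replaced with this dataset's actual path on the Hugging Face Hub.

```python
from datasets import load_dataset

# Assumption: replace with this dataset's actual repository ID on the Hub.
REPO_ID = "QCRI/LlamaLens-Hindi"

# Load one config (here the sentiment task); splits are "train", "dev", and "test".
ds = load_dataset(REPO_ID, "Sentiment Analysis")
for split in ("train", "dev", "test"):
    print(split, len(ds[split]))
```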
## File Format

Each JSONL file in the dataset follows a structured format with the following fields:

- `id`: Unique identifier for each data entry.
- `original_id`: Identifier from the original dataset, if available.
- `input`: The original text that needs to be analyzed.
- `output`: The label assigned to the text after analysis.
- `dataset`: Name of the dataset the entry belongs to.
- `task`: The specific task type.
- `lang`: The language of the input text.
- `instruction`: A brief set of instructions describing how the text should be labeled.
- `text`: A formatted structure combining the instructions and the response for the task in a conversation format between the system, user, and assistant, showing the decision process.
Example entry in a JSONL file:

```json
{
  "id": "2b1878df-5a4f-4f74-bcd8-e38e1c3c7cf6",
  "original_id": null,
  "input": "sub गंदा है पर धंधा है ये . .",
  "output": "neutral",
  "dataset": "Sentiment Analysis",
  "task": "Sentiment",
  "lang": "hi",
  "instruction": "Identify the sentiment in the text and label it as positive, negative, or neutral. Return only the label without any explanation, justification or additional text."
}
```
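
If the files are downloaded directly, entries can also be read with the standard library. The sketch below assumes the directory layout from the config paths above and that each line holds one JSON object, as described in the field list; the chosen file path is only an example.

```python
import json

# Example path following the config layout above ("<Config Name>/<split>.json").
path = "Sentiment Analysis/test.json"

with open(path, encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)  # one JSON object per line (JSONL)
        print(entry["id"], entry["lang"], entry["output"])
        print(entry["instruction"])
        break  # inspect only the first entry
```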
## 📢 Citation

If you use this dataset, please cite our paper:
```bibtex
@article{kmainasi2024llamalensspecializedmultilingualllm,
  title={LlamaLens: Specialized Multilingual LLM for Analyzing News and Social Media Content},
  author={Mohamed Bayan Kmainasi and Ali Ezzat Shahroor and Maram Hasanain and Sahinur Rahman Laskar and Naeemul Hassan and Firoj Alam},
  year={2024},
  journal={arXiv preprint arXiv:2410.15308},
  url={https://arxiv.org/abs/2410.15308},
  eprint={2410.15308},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```