---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- en
tags:
- Social Media
- News Media
- Sentiment
- Stance
- Emotion
pretty_name: >-
  LlamaLens: Specialized Multilingual LLM for Analyzing News and Social Media
  Content -- English
size_categories:
- 10K
---

## LlamaLens

This repo includes the scripts needed to run our full pipeline, including data preprocessing and sampling, instruction dataset creation, model fine-tuning, inference, and evaluation.

### Features
- Multilingual support (Arabic, English, Hindi)
- 18 NLP tasks with 52 datasets
- Optimized for news and social media content analysis

## 📂 Dataset Overview

### English Datasets

| **Task** | **Dataset** | **# Labels** | **# Train** | **# Test** | **# Dev** |
|---------------------------|------------------------------|--------------|-------------|------------|-----------|
| Checkworthiness | CT24_T1 | 2 | 22,403 | 1,031 | 318 |
| Claim | claim-detection | 2 | 23,224 | 7,267 | 5,815 |
| Cyberbullying | Cyberbullying | 6 | 32,551 | 9,473 | 4,751 |
| Emotion | emotion | 6 | 280,551 | 82,454 | 41,429 |
| Factuality | News_dataset | 2 | 28,147 | 8,616 | 4,376 |
| Factuality | Politifact | 6 | 14,799 | 4,230 | 2,116 |
| News Genre Categorization | CNN_News_Articles_2011-2022 | 6 | 32,193 | 5,682 | 9,663 |
| News Genre Categorization | News_Category_Dataset | 42 | 145,748 | 41,740 | 20,899 |
| News Genre Categorization | SemEval23T3-subtask1 | 3 | 302 | 83 | 130 |
| Summarization | xlsum | -- | 306,493 | 11,535 | 11,535 |
| Offensive Language | Offensive_Hateful_Dataset_New | 2 | 42,000 | 5,252 | 5,254 |
| Offensive Language | offensive_language_dataset | 2 | 29,216 | 3,653 | 3,653 |
| Offensive/Hate-Speech | hate-offensive-speech | 3 | 48,944 | 2,799 | 2,802 |
| Propaganda | QProp | 2 | 35,986 | 10,159 | 5,125 |
| Sarcasm | News-Headlines-Dataset-For-Sarcasm-Detection | 2 | 19,965 | 5,719 | 2,858 |
| Sentiment | NewsMTSC-dataset | 3 | 7,739 | 747 | 320 |
| Subjectivity | clef2024-checkthat-lab | 2 | 825 | 484 | 219 |

## Results

Below, we present the performance of **L-Lens: LlamaLens**, where *"Eng"* refers to the English-instructed model and *"Native"* refers to the model trained with native-language instructions. The results are compared against the SOTA (where available) and the Base: the **Llama 3.1 Instruct baseline**. The **Δ** (Delta) column indicates the difference between LlamaLens and the SOTA performance, calculated as (LlamaLens – SOTA).
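For example, on CT24_checkworthy the English-instructed model scores 0.942 against a SOTA of 0.753, giving Δ = 0.942 – 0.753 = 0.189; negative values indicate LlamaLens falls short of the SOTA.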
| **Task** | **Dataset** | **Metric** | **SOTA** | **Base** | **L-Lens-Eng** | **L-Lens-Native** | **Δ (L-Lens (Eng) - SOTA)** |
|:----------------------------------:|:--------------------------------------------:|:----------:|:--------:|:---------------------:|:---------------------:|:--------------------:|:------------------------:|
| Checkworthiness Detection | CT24_checkworthy | f1_pos | 0.753 | 0.404 | 0.942 | 0.942 | 0.189 |
| Claim Detection | claim-detection | Mi-F1 | -- | 0.545 | 0.864 | 0.889 | -- |
| Cyberbullying Detection | Cyberbullying | Acc | 0.907 | 0.175 | 0.836 | 0.855 | -0.071 |
| Emotion Detection | emotion | Ma-F1 | 0.790 | 0.353 | 0.803 | 0.808 | 0.013 |
| Factuality | News_dataset | Acc | 0.920 | 0.654 | 1.000 | 1.000 | 0.080 |
| Factuality | Politifact | W-F1 | 0.490 | 0.121 | 0.287 | 0.311 | -0.203 |
| News Categorization | CNN_News_Articles_2011-2022 | Acc | 0.940 | 0.644 | 0.970 | 0.970 | 0.030 |
| News Categorization | News_Category_Dataset | Ma-F1 | 0.769 | 0.970 | 0.824 | 0.520 | 0.055 |
| News Genre Categorization | SemEval23T3-subtask1 | Mi-F1 | 0.815 | 0.687 | 0.241 | 0.253 | -0.574 |
| News Summarization | xlsum | R-2 | 0.152 | 0.074 | 0.182 | 0.181 | 0.030 |
| Offensive Language Detection | Offensive_Hateful_Dataset_New | Mi-F1 | -- | 0.692 | 0.814 | 0.813 | -- |
| Offensive Language Detection | offensive_language_dataset | Mi-F1 | 0.994 | 0.646 | 0.899 | 0.893 | -0.095 |
| Offensive Language and Hate Speech | hate-offensive-speech | Acc | 0.945 | 0.602 | 0.931 | 0.935 | -0.014 |
| Propaganda Detection | QProp | Ma-F1 | 0.667 | 0.759 | 0.963 | 0.973 | 0.296 |
| Sarcasm Detection | News-Headlines-Dataset-For-Sarcasm-Detection | Acc | 0.897 | 0.668 | 0.936 | 0.947 | 0.039 |
| Sentiment Classification | NewsMTSC-dataset | Ma-F1 | 0.817 | 0.628 | 0.751 | 0.748 | -0.066 |
| Subjectivity Detection | clef2024-checkthat-lab | Ma-F1 | 0.744 | 0.535 | 0.642 | 0.628 | -0.102 |

---

## File Format

Each JSONL file in the dataset follows a structured format with the following fields:

- `id`: Unique identifier for each data entry.
- `original_id`: Identifier from the original dataset, if available.
- `input`: The original text that needs to be analyzed.
- `output`: The label assigned to the text after analysis.
- `dataset`: Name of the dataset the entry belongs to.
- `task`: The specific task type.
- `lang`: The language of the input text.
- `instructions`: A brief set of instructions describing how the text should be labeled.

**Example entry in JSONL file:**

```json
{
  "id": "fb6dd1bb-2ab4-4402-adaa-9be9eea6ca18",
  "original_id": null,
  "input": "I feel that worldviews that lack the divine tend toward the solipsistic.",
  "output": "joy",
  "dataset": "Emotion",
  "task": "Emotion",
  "lang": "en",
  "instructions": "Identify if the given text expresses an emotion and specify whether it is joy, love, fear, anger, sadness, or surprise. Return only the label without any explanation, justification, or additional text."
}
```
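To make the field layout concrete, here is a minimal sketch of reading one of the JSONL files with Python's standard library and counting the labels in the `output` field; the file name is a placeholder, so substitute any JSONL file from this repository.

```python
import json
from collections import Counter

# Placeholder path: replace with an actual JSONL file from this repository.
path = "english_emotion.jsonl"

label_counts = Counter()
with open(path, encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)             # one JSON object per line
        label_counts[entry["output"]] += 1   # e.g. "joy", "anger", ...

print(label_counts.most_common())
```

The same loop works for every task file, since all entries share the fields listed above; only the label space in `output` and the text in `instructions` change per dataset.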
## Model

[**LlamaLens on Hugging Face**](https://huggingface.co/QCRI/LlamaLens)

## Replication Scripts

[**LlamaLens GitHub Repository**](https://github.com/firojalam/LlamaLens)

## 📢 Citation

If you use this dataset, please cite our [paper](https://arxiv.org/pdf/2410.15308):

```bibtex
@article{kmainasi2024llamalensspecializedmultilingualllm,
  title={LlamaLens: Specialized Multilingual LLM for Analyzing News and Social Media Content},
  author={Mohamed Bayan Kmainasi and Ali Ezzat Shahroor and Maram Hasanain and Sahinur Rahman Laskar and Naeemul Hassan and Firoj Alam},
  year={2024},
  journal={arXiv preprint arXiv:2410.15308},
  url={https://arxiv.org/abs/2410.15308},
  eprint={2410.15308},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```