---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype: int64
  - name: predicted_sentiment_facebook/bart-large-mnli
    dtype: string
  - name: predicted_sentiment_distilbert-base-uncased
    dtype: string
  - name: predicted_sentiment_roberta-base
    dtype: string
  splits:
  - name: train
    num_bytes: 1361555
    num_examples: 1000
  download_size: 862047
  dataset_size: 1361555
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: odbl
task_categories:
- text-classification
language:
- en
size_categories:
- n<1K
---

### Dataset Description

In this task, we conducted zero-shot sentiment analysis on a subset of the IMDb movie reviews dataset using multiple language models. The goal was to predict the sentiment (positive or negative) of movie reviews without fine-tuning the models on the specific task.

We used three pre-trained language models for zero-shot classification: BART-large (`facebook/bart-large-mnli`), DistilBERT-base (`distilbert-base-uncased`), and RoBERTa-base (`roberta-base`). For each model, we generated predicted sentiment labels for a subset of 100 movie reviews from the IMDb dataset. The reviews were randomly sampled to ensure a diverse representation of sentiments.

After processing the reviews through each model, we saved the predicted sentiment labels alongside the original reviews in a CSV file named `imdb_reviews_with_labels.csv`. This file contains the reviews and the predicted sentiment label from each model. We also uploaded both the dataset and the CSV file to the Hugging Face Hub for easy access and sharing.

The dataset is available at: https://huggingface.co/datasets/Mouwiya/imdb_reviews_with_labels

This task demonstrates the effectiveness of zero-shot classification with pre-trained language models for sentiment analysis and provides a resource for further analysis and experimentation.

- **Curated by:** Mouwiya S. A. AlQaisieh
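The labeling workflow described above can be sketched as follows. This is a minimal sketch, not the exact script used to build the dataset: it assumes the Hugging Face `transformers` zero-shot classification pipeline and the candidate labels `"positive"` / `"negative"`; the column names mirror the dataset's `predicted_sentiment_*` features. The helper functions accept any callable with the pipeline's result shape, so the model-loading step is isolated in `build_classifier`.

```python
import csv


def build_classifier(model_name):
    """Create a zero-shot classification pipeline for one model.

    Imported lazily so the labeling helpers below can be used
    without `transformers` installed (e.g. with a stub classifier).
    """
    from transformers import pipeline
    return pipeline("zero-shot-classification", model=model_name)


def predict_sentiment(classifier, review, labels=("positive", "negative")):
    """Return the highest-scoring candidate label for one review."""
    result = classifier(review, candidate_labels=list(labels))
    # The pipeline sorts labels by descending score, so the first is the winner.
    return result["labels"][0]


def label_reviews(reviews, classifiers):
    """Attach one predicted-sentiment column per model to each review.

    `classifiers` maps a model name to a zero-shot pipeline (or any
    callable returning a dict with a score-sorted "labels" list).
    """
    rows = []
    for review in reviews:
        row = {"text": review}
        for name, clf in classifiers.items():
            row[f"predicted_sentiment_{name}"] = predict_sentiment(clf, review)
        rows.append(row)
    return rows


def save_csv(rows, path="imdb_reviews_with_labels.csv"):
    """Write the labeled rows to the CSV file named in the card."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
```

A full run would build one classifier per model id (`facebook/bart-large-mnli`, `distilbert-base-uncased`, `roberta-base`), pass the 100 sampled IMDb reviews to `label_reviews`, and then call `save_csv` on the result.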