ckandemir committed on
Commit a54bdb3
1 Parent(s): 5c35113

Update README.md

Files changed (1)
  1. README.md +51 -1
README.md CHANGED
@@ -34,6 +34,56 @@ dataset_info:
      num_examples: 2666
  download_size: 6391314
  dataset_size: 17418436
+ license: apache-2.0
+ task_categories:
+ - image-classification
+ - image-to-text
+ language:
+ - en
+ size_categories:
+ - 10K<n<100K
  ---

- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ## Dataset Creation and Processing Overview

+ This dataset was loaded, cleaned, processed, and prepared using a range of data manipulation and NLP techniques to optimize its utility for machine learning models, particularly in natural language processing.
+
+ ### Data Loading and Initial Cleaning
+ - **Source**: Loaded from the Hugging Face dataset repository (`bprateek/amazon_product_description`).
+ - **Conversion to Pandas DataFrame**: For ease of data manipulation.
+ - **Null Value Removal**: Rows with null values in the 'About Product' column were discarded (see the sketch below).
+
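The preprocessing script itself is not included in this card; the following is a minimal sketch of the loading and initial cleaning steps, assuming the source dataset exposes a `train` split and an 'About Product' column:

```python
from datasets import load_dataset

# Load the source dataset and convert it to a Pandas DataFrame.
source = load_dataset("bprateek/amazon_product_description", split="train")
df = source.to_pandas()

# Discard rows with no product description.
df = df.dropna(subset=["About Product"])
```
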
+ ### Data Cleaning and NLP Processing
+ - **Sentence Extraction**: 'About Product' descriptions were split into sentences to identify commonly repeated phrases.
+ - **Emoji and Special Character Removal**: A regex-based function removed emojis and special characters from the product descriptions.
+ - **Common Phrase Elimination**: The common phrases identified above were stripped from each product description.
+ - **Improving Writing Standards**: Capitalization and punctuation were adjusted and '&' was replaced with 'and' for better readability and a more formal tone (see the sketch below).
+
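A rough sketch of this cleaning pass, with an illustrative regular expression and a hypothetical `COMMON_PHRASES` list (the actual patterns and phrase list are not documented in this card):

```python
import re

# Hypothetical example; the real list of common phrases is not documented here.
COMMON_PHRASES = ["Make sure this fits by entering your model number."]

def clean_description(text: str) -> str:
    # Drop emojis and special characters, keeping word characters and basic punctuation.
    text = re.sub(r"[^\w\s.,;:!?()'&/-]", "", text)
    # Strip common boilerplate phrases.
    for phrase in COMMON_PHRASES:
        text = text.replace(phrase, " ")
    # Replace '&' with 'and' and normalise whitespace.
    text = text.replace("&", "and")
    text = re.sub(r"\s+", " ", text).strip()
    # Capitalise the first letter of each sentence.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return " ".join(s[0].upper() + s[1:] for s in sentences if s)

df["About Product"] = df["About Product"].apply(clean_description)
```
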
+ ### Sentence Similarity Analysis
+ - **Model Application**: The pre-trained Sentence Transformers model 'all-MiniLM-L6-v2' was used.
+ - **Sentence Comparison**: For each product, the sentence in the cleaned description most similar to the product name was identified.
+ - **Integration of Results**: The most similar sentences were added as a new column, 'Most_Similar_Sentence' (see the sketch below).
+
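A sketch of this step with the `sentence-transformers` library, assuming 'Product Name' and 'About Product' as the relevant columns and a simple regex sentence splitter:

```python
import re
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def most_similar_sentence(product_name: str, description: str) -> str:
    # Return the description sentence whose embedding is closest to the product name.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", description) if s.strip()]
    if not sentences:
        return ""
    name_emb = model.encode(product_name, convert_to_tensor=True)
    sent_embs = model.encode(sentences, convert_to_tensor=True)
    scores = util.cos_sim(name_emb, sent_embs)[0]
    return sentences[int(scores.argmax())]

df["Most_Similar_Sentence"] = [
    most_similar_sentence(name, desc)
    for name, desc in zip(df["Product Name"], df["About Product"])
]
```
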
+ ### Dataset Refinement
+ - **Column Selection**: Only the columns relevant to the final dataset were retained.
+ - **Image URL Processing**: Fields containing multiple image URLs were split into individual URLs, and specific unwanted URLs were removed.
+ - **Column Renaming**: 'Most_Similar_Sentence' was renamed to 'Description' (see the sketch below).
+
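One plausible reading of this step in Pandas; the '|' separator, the per-URL row expansion, and the exact unwanted URLs are assumptions:

```python
# Keep only the columns used in the final card and rename the similarity column.
columns = ["Product Name", "Category", "Most_Similar_Sentence",
           "Selling Price", "Product Specification", "Image"]
df = df[columns].rename(columns={"Most_Similar_Sentence": "Description"})

# Split multi-URL image fields into one URL per row; the specific unwanted URLs
# that were dropped are not documented here.
df["Image"] = df["Image"].str.split("|")
df = df.explode("Image")
df["Image"] = df["Image"].str.strip()
```
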
+ ### Image Validation
+ - **Image URL Validation**: A function was implemented to verify that each image URL is valid.
+ - **Filtering Valid Images**: Only rows with valid image URLs were retained (see the sketch below).
+
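A minimal sketch of one way to perform the check, assuming "valid" means the URL answers an HTTP request with an image content type:

```python
import requests

def is_valid_image_url(url: str) -> bool:
    # Treat a URL as valid if it responds successfully with an image content type.
    try:
        resp = requests.head(url, timeout=5, allow_redirects=True)
        return resp.ok and resp.headers.get("Content-Type", "").startswith("image/")
    except requests.RequestException:
        return False

df = df[df["Image"].apply(is_valid_image_url)]
```
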
+ ### Dataset Splitting for Machine Learning
+ - **Creation of Train, Test, and Eval Sets**: Used scikit-learn's `train_test_split` for dataset division (see the sketch below).
+
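A sketch of the split, assuming an 80/10/10 division and a fixed random seed (neither is stated in this card):

```python
from sklearn.model_selection import train_test_split

# 80% train, then split the remaining 20% evenly into test and eval sets.
train_df, holdout_df = train_test_split(df, test_size=0.2, random_state=42)
test_df, eval_df = train_test_split(holdout_df, test_size=0.5, random_state=42)
```
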
+ ### Hugging Face Dataset Preparation and Publishing
+ - **Conversion to Dataset Objects**: Converted each Pandas DataFrame (train, test, eval) into a Hugging Face `Dataset` object.
+ - **Dataset Dictionary Assembly**: Aggregated all splits into a `DatasetDict`.
+ - **Publishing to the Hugging Face Hub**: The dataset was named "amazon-products" and pushed to the Hub for community access (see the sketch below).
+
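A sketch of the conversion and publishing calls; pushing requires authentication with write access to the Hub:

```python
from datasets import Dataset, DatasetDict

dataset = DatasetDict({
    "train": Dataset.from_pandas(train_df, preserve_index=False),
    "test": Dataset.from_pandas(test_df, preserve_index=False),
    "eval": Dataset.from_pandas(eval_df, preserve_index=False),
})

# Requires a prior `huggingface-cli login` (or an HF_TOKEN) with write access.
dataset.push_to_hub("amazon-products")
```
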
+ ## Dataset Card Information
+ - **Configs**: The dataset is split into train, test, and eval configurations, with data file paths specified for each.
+ - **Features**: Includes fields for Product Name, Category, Description, Selling Price, Product Specification, and Image.
+ - **Splits**: The number of bytes and number of examples are listed for each split.
+ - **Sizes**: The download size and total dataset size are given in the metadata above (see the loading example below).
+
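To load the published splits, something like the following should work; the full repository id (under the uploader's namespace) is an assumption:

```python
from datasets import load_dataset

# Assumed repository id; adjust if the dataset lives under a different namespace.
dataset = load_dataset("ckandemir/amazon-products")
print(dataset["train"][0]["Description"])
```
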
+ For further details or to contribute to enhancing the dataset card, please refer to the [Hugging Face Dataset Card Contribution Guide](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards).