---
license: apache-2.0
base_model: google/mobilebert-uncased
tags:
- dataset tools
- books
- book
- genre
metrics:
- f1
widget:
- text: The Quantum Chip
  example_title: Science Fiction & Fantasy
- text: One Dollar's Journey
  example_title: Business & Finance
- text: Timmy The Talking Tree
  example_title: Children & Young Adult
- text: The Cursed Canvas
  example_title: Arts & Design
- text: Hoops and Hegel
  example_title: Philosophy & Religion
- text: Overview of Streams in North Dakota
  example_title: Nature & Environment
- text: Advanced Topology
  example_title: Non-Fiction
- text: Cooking Up Love
  example_title: Food & Cooking
- text: Dr. Doolittle's Extraplanetary Commute
  example_title: Science & Technology
language:
- en
---

# mobilebert-uncased-title2genre

This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) for multi-label classification (18 labels).

## Model description

This model predicts one or more **genre** labels for a given book **title** in a **multi-label** setting. The standard way to interpret the predictions is to keep, for each example, **only the labels with a predicted probability greater than 50%** (see the usage sketch at the end of this card).

## Details

### Labels

There are 18 labels; they are already integrated into the model's `config.json` and are returned in the model's output:

```json
"id2label": {
    "0": "History & Politics",
    "1": "Health & Medicine",
    "2": "Mystery & Thriller",
    "3": "Arts & Design",
    "4": "Self-Help & Wellness",
    "5": "Sports & Recreation",
    "6": "Non-Fiction",
    "7": "Science Fiction & Fantasy",
    "8": "Countries & Geography",
    "9": "Other",
    "10": "Nature & Environment",
    "11": "Business & Finance",
    "12": "Romance",
    "13": "Philosophy & Religion",
    "14": "Literature & Fiction",
    "15": "Science & Technology",
    "16": "Children & Young Adult",
    "17": "Food & Cooking"
}
```

### Eval results (validation)

It achieves the following results on the evaluation set:

- Loss: 0.2658
- F1: 0.5395

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-10
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10.0

### Framework versions

- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cpu
- Datasets 2.14.5
- Tokenizers 0.14.0
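
### Training configuration sketch

For orientation, the hyperparameters above correspond roughly to the following `transformers.TrainingArguments`. This is a hypothetical reconstruction, not the original training script: the total train batch size of 64 comes from gradient accumulation (4 × 16), and `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Hypothetical sketch of the configuration listed above; `output_dir`
# is a placeholder, not a path from the original training run.
training_args = TrainingArguments(
    output_dir="mobilebert-uncased-title2genre",
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=16,  # effective train batch size: 4 * 16 = 64
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-10,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    num_train_epochs=10.0,
)
```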
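
## How to use

A minimal inference sketch that applies the 50% threshold described in the model description. The `model_id` below is an assumption; replace it with this model's full Hub repo id.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "mobilebert-uncased-title2genre"  # assumption: replace with the full Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("The Quantum Chip", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, 18)

# Multi-label setting: apply a per-label sigmoid (not a softmax over labels)
# and keep every label whose probability exceeds 50%
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5]
print(predicted)  # e.g. ['Science Fiction & Fantasy']
```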