---
license: cc-by-nc-nd-4.0
datasets:
- MichalMlodawski/closed-open-eyes
language:
- en
tags:
- eye
- eyes
model-index:
- name: mobilenet_v2 Eye State Classifier
  results:
  - task:
      type: image-classification
    dataset:
      name: MichalMlodawski/closed-open-eyes
      type: custom
    metrics:
    - name: Accuracy
      type: self-reported
      value: 99%
    - name: Precision
      type: self-reported
      value: 99%
    - name: Recall
      type: self-reported
      value: 99%
---

# πŸ‘οΈ Open-Closed Eye Classification mobilenet_v2 πŸ‘οΈ

## Model Overview πŸ”

This model is a fine-tuned version of MobileNetV2, designed to classify images of eyes as either open or closed. With a self-reported accuracy of 99%, the classifier distinguishes between open and closed eyes across a variety of contexts.

## Model Details πŸ“Š

- **Model Name**: open-closed-eye-classification-mobilev2
- **Base Model**: google/mobilenet_v2_1.4_224
- **Fine-tuned By**: MichaΕ‚ MΕ‚odawski
- **Categories**:
  - 0: Closed Eyes 😴
  - 1: Open Eyes πŸ‘€
- **Accuracy**: 99% 🎯

## Use Cases πŸ’‘

This model is particularly useful for applications involving:

- Driver Drowsiness Detection πŸš—
- Attentiveness Monitoring in Educational Settings 🏫
- Medical Diagnostics related to Eye Conditions πŸ₯
- Facial Analysis in Photography and Videography πŸ“Έ
- Human-Computer Interaction Systems πŸ’»

## How It Works πŸ› οΈ

The model takes an input image and classifies it into one of two categories:

- **Closed Eyes** (0): Images where the subject's eyes are fully or mostly closed.
- **Open Eyes** (1): Images where the subject's eyes are open.

The classification leverages the image processing capabilities of the MobileNetV2 architecture, fine-tuned on a carefully curated dataset of eye images.
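The decision step described above (two logits, softmax, argmax) can be sketched in plain Python. The logit values here are hypothetical and for illustration only; they are not outputs of the actual model:

```python
import math

# Hypothetical logits from the classifier for one image
# (index 0 = Closed Eyes, index 1 = Open Eyes)
logits = [-1.2, 3.4]

# Softmax converts logits into probabilities that sum to 1
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# The predicted class is the index with the highest probability
predicted = max(range(len(probs)), key=probs.__getitem__)
label = {0: "Closed Eyes", 1: "Open Eyes"}[predicted]
confidence = probs[predicted] * 100
print(f"{label} ({confidence:.2f}%)")
```

The full inference script in the next section performs the same steps with `torch.nn.functional.softmax` and `torch.max`.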
## Getting Started πŸš€

To start using open-closed-eye-classification-mobilev2, integrate it into your projects with the following steps:

### Installation

```bash
pip install transformers==4.37.2
pip install torch==2.3.1
pip install Pillow
```

### Usage

```python
import os

import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileNetV2ForImageClassification

# Path to the folder with images
image_folder = ""

# Path to the model
model_path = "MichalMlodawski/open-closed-eye-classification-mobilev2"

# List of jpg files in the folder
jpg_files = [file for file in os.listdir(image_folder) if file.lower().endswith(".jpg")]

# Check if there are jpg files in the folder
if not jpg_files:
    print("🚫 No jpg files found in folder:", image_folder)
    exit()

# Load the model and image processor
image_processor = AutoImageProcessor.from_pretrained(model_path)
model = MobileNetV2ForImageClassification.from_pretrained(model_path)
model.eval()

# Processing and prediction for each image
results = []
for jpg_file in jpg_files:
    image_path = os.path.join(image_folder, jpg_file)
    image = Image.open(image_path).convert("RGB")

    # The image processor handles resizing and normalization for the model
    inputs = image_processor(images=image, return_tensors="pt")

    # Prediction using the model
    with torch.no_grad():
        outputs = model(**inputs)
        probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)
        confidence, predicted = torch.max(probabilities, 1)

    results.append((jpg_file, predicted.item(), confidence.item() * 100))

# Display results
print("πŸ–ΌοΈ Image Classification Results πŸ–ΌοΈ")
print("=" * 40)
for jpg_file, prediction, confidence in results:
    emoji = "πŸ‘οΈ" if prediction == 1 else "❌"
    confidence_bar = "🟩" * int(confidence // 10) + "⬜" * (10 - int(confidence // 10))
    print(f"πŸ“„ File name: {jpg_file}")
    print(f"{emoji} Prediction: {'Open' if prediction == 1 else 'Closed'}")
    print(f"🎯 Confidence: {confidence:.2f}% {confidence_bar}")
    print("=" * 40)
print("🏁 Classification completed! πŸŽ‰")
```

## Disclaimer ⚠️

This model is provided for research and development purposes only. The creators and distributors of this model do not assume any legal responsibility for its use or misuse. Users are solely responsible for ensuring that their use of this model complies with applicable laws, regulations, and ethical standards. The model's performance may vary depending on the quality and nature of input images. Always validate results in critical applications.

🚫 Do not use this model for any illegal, unethical, or potentially harmful purposes.

πŸ“ Please note that while the model demonstrates high accuracy, it should not be used as a sole decision-making tool in safety-critical systems without proper validation and human oversight.
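For frame-sequence use cases such as the driver drowsiness detection listed under Use Cases, the per-frame predictions produced by the usage script can be aggregated into a closed-eye ratio over a sliding window (a simplified PERCLOS-style measure). This is a hypothetical sketch: the sample predictions, window length, and alert threshold are illustrative assumptions, not validated values:

```python
# Hypothetical per-frame predictions collected from the classifier
# (1 = Open Eyes, 0 = Closed Eyes)
predictions = [1, 1, 0, 0, 0, 1, 0, 1, 1, 0]

# Fraction of frames in the window where the eyes were closed
closed_ratio = predictions.count(0) / len(predictions)

# Illustrative alert threshold -- tune and validate before any real deployment
DROWSY_THRESHOLD = 0.4
drowsy = closed_ratio > DROWSY_THRESHOLD
print(f"Closed-eye ratio: {closed_ratio:.0%}, drowsiness alert: {drowsy}")
```

As the disclaimer above notes, such an alert should never be the sole decision-making signal in a safety-critical system.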