---
title: Resume Profile Extractor
emoji: πŸ“š
colorFrom: yellow
colorTo: pink
sdk: docker
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# πŸš€ Resume Profile Extractor

An AI-powered application that automatically extracts professional profiles from resumes in PDF format. It uses large language models (via Groq) to parse resume content and generate structured profile data that can be used for portfolio generation, professional websites, and more.

## ✨ Features

- **PDF Resume Parsing**: Extract text from PDF resumes automatically
- **AI-Powered Information Extraction**: Uses large language models to extract structured information
- **Interactive Web UI**: Clean Streamlit interface for uploading and editing profiles
- **RESTful API**: Access extracted profiles via a FastAPI backend
- **Grammar Correction**: Clean up extracted text with AI grammar correction
- **Data Storage**: Persistent SQLite storage for extracted profiles
- **Profile Image Support**: Upload and store profile images
- **Docker Ready**: Easy deployment with the included Dockerfile

πŸ› οΈ Architecture

The application consists of two main components:

1. **Streamlit Web UI**: A user-friendly interface for uploading resumes, editing extracted information, and managing profiles
2. **FastAPI Backend**: A RESTful API service for accessing profiles programmatically

Both components run simultaneously in a single container when deployed.
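As a rough illustration of how a single entry point could run both services side by side, here is a hypothetical sketch of a launcher like `run_combined.py` (the module names `api:app` and `app.py`, and the exact flags, are assumptions; the real script may differ):

```python
# Hypothetical sketch of a combined launcher; flags and module names are assumed.
import subprocess
import sys

# Commands for the two services (ports match the Docker instructions below).
API_CMD = [sys.executable, "-m", "uvicorn", "api:app",
           "--host", "0.0.0.0", "--port", "8000"]
UI_CMD = [sys.executable, "-m", "streamlit", "run", "app.py",
          "--server.port", "7860", "--server.address", "0.0.0.0"]

def launch_services():
    """Start the FastAPI backend and the Streamlit UI as sibling processes."""
    api = subprocess.Popen(API_CMD)
    ui = subprocess.Popen(UI_CMD)
    return api, ui
```

Running both processes from one parent keeps the container to a single entry point, which suits platforms like Hugging Face Spaces that expect one command per container.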

## πŸ“‹ Technical Stack

- Python 3.9+
- **Streamlit**: Web interface framework
- **FastAPI**: API framework
- **LangChain + Groq**: AI language models for text extraction and processing
- **SQLite**: Lightweight database for profile storage
- **PyPDF2**: PDF parsing
- **Pydantic**: Data validation and settings management
- **Uvicorn**: ASGI server
- **Docker**: Containerization

πŸƒβ€β™€οΈ Quick Start

### Local Development

1. Clone the repository
2. Install dependencies:
   ```bash
   pip install -r requirements.txt
   ```
3. Create a `.env` file from the sample:
   ```bash
   cp .env.sample .env
   ```
4. Add your Groq API key to the `.env` file
5. Run the application:
   ```bash
   python run_combined.py
   ```
6. Open http://localhost:7860 in your browser
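Step 4 matters because the app cannot call Groq without a key. The kind of fail-fast startup check involved can be sketched like this (the helper name `require_env` is illustrative, not from the codebase):

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, failing fast with a clear message."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return value

# Example: require_env("GROQ_API_KEY") raises unless the key is configured.
```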

### Using Docker

```bash
# Build the Docker image
docker build -t profile-extractor .

# Run the container
docker run -p 7860:7860 -p 8000:8000 -e GROQ_API_KEY=your_key_here profile-extractor
```

## πŸš€ Deployment on Hugging Face Spaces

This application is designed to be easily deployed on Hugging Face Spaces:

1. Create a new Space on Hugging Face
2. Select **Docker** as the Space SDK
3. Link your GitHub repository or upload the files directly
4. Add your `GROQ_API_KEY` in the **Settings > Variables** section
5. (Optional) Set `EXTERNAL_API_URL` to your Space's URL (e.g., https://your-username-your-space-name.hf.space)
6. Deploy the Space!

### Required Environment Variables

| Variable | Description | Required |
|----------|-------------|----------|
| `GROQ_API_KEY` | Your Groq API key for LLM access | Yes |
| `EXTERNAL_API_URL` | Public URL of your API (for production) | No |
| `DEBUG` | Enable debug logging (`true`/`false`) | No |
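These variables might be gathered into a single settings object along the following lines (a sketch only; the project's actual `config.py` may use Pydantic rather than a dataclass, and the default URL is an assumption):

```python
import os
from dataclasses import dataclass, field

@dataclass
class Settings:
    """Illustrative mirror of the environment variables in the table above."""
    groq_api_key: str = field(
        default_factory=lambda: os.getenv("GROQ_API_KEY", ""))
    external_api_url: str = field(
        default_factory=lambda: os.getenv("EXTERNAL_API_URL", "http://localhost:8000"))
    debug: bool = field(
        default_factory=lambda: os.getenv("DEBUG", "false").lower() == "true")
```

Reading each value through a `default_factory` means the environment is consulted when `Settings()` is constructed, so tests and deployments can override values without editing code.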

## πŸ”„ API Endpoints

The API listens on port 8000 when running locally, or is reachable through the Hugging Face Space URL.

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/health` | GET | Health check endpoint |
| `/api/profile/{id}` | GET | Get a complete profile by ID |
| `/api/profile/{id}/image` | GET | Get just the profile image |
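A quick way to consume the profile endpoint from Python, using only the standard library (the base URL and profile ID are placeholders you substitute with your own):

```python
import json
import urllib.request

def profile_url(base_url: str, profile_id: str) -> str:
    """Build the profile endpoint URL from the table above."""
    return f"{base_url.rstrip('/')}/api/profile/{profile_id}"

def get_profile(base_url: str, profile_id: str) -> dict:
    """Fetch a saved profile as a JSON dict from the running API."""
    with urllib.request.urlopen(profile_url(base_url, profile_id)) as resp:
        return json.load(resp)

# e.g. get_profile("http://localhost:8000", "your-profile-id")
```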

## πŸ“š Usage Guide

1. **Upload Resume**: Start by uploading a PDF resume
2. **Review & Edit**: The system extracts the information and lets you review and edit it
3. **Save Profile**: Save your profile to get a unique profile ID
4. **Access API**: Use the API endpoints to access your profile data
5. **Build Portfolio**: Use the structured data to build dynamic portfolios and websites

## 🧩 Project Structure

```text
agentAi/
β”œβ”€β”€ agents/            # AI agents for extraction and processing
β”œβ”€β”€ services/          # Backend services (storage, etc.)
β”œβ”€β”€ utils/             # Utility functions
β”œβ”€β”€ app.py             # Streamlit web application
β”œβ”€β”€ api.py             # FastAPI endpoints
β”œβ”€β”€ models.py          # Pydantic data models
β”œβ”€β”€ config.py          # Application configuration
β”œβ”€β”€ run_combined.py    # Script to run both services
β”œβ”€β”€ requirements.txt   # Python dependencies
β”œβ”€β”€ Dockerfile         # For containerized deployment
└── README.md          # Documentation
```

πŸ“ License

MIT License

πŸ™ Acknowledgements