---
title: linkedin_profile_chatbot
app_file: linkedin_profile_chatbot/app.py
sdk: gradio
sdk_version: 5.34.2
python_version: 3.12
---
# LinkedIn Profile Chatbot
A Gradio-based chatbot that can answer questions about your LinkedIn profile using AI. Upload your LinkedIn profile PDF and summary, then deploy to Hugging Face Spaces for easy sharing and embedding.
## Setup Instructions

### 1. Prepare Your Profile Data

#### Download Your LinkedIn Profile as PDF
- Go to your LinkedIn profile page
- Click the "More" button (three dots) in your profile section
- Select "Save to PDF" from the dropdown menu
- Save the PDF file as `linkedin.pdf`

#### Create a Profile Summary
- Write a brief summary of your professional background, skills, and experience
- Save this as a plain text file named `summary.txt`
- Keep it concise but informative (2-3 paragraphs recommended)

#### Place Files in the /assets Folder

- Copy your `linkedin.pdf` file to the `/assets` directory
- Copy your `summary.txt` file to the `/assets` directory
Your file structure should look like:
```
/assets
├── linkedin.pdf
└── summary.txt
```
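If you want to see how these two files can be read into memory, the snippet below is a minimal sketch using `pypdf` (the library this project uses for PDF extraction). The `load_profile` helper and its exact paths are illustrative assumptions, not the project's actual API.

```python
from pathlib import Path

from pypdf import PdfReader  # pypdf handles the PDF text extraction


def load_profile(assets_dir: str = "assets") -> tuple[str, str]:
    """Read the LinkedIn PDF and the plain-text summary from the assets folder.

    `load_profile` is a hypothetical helper for illustration; the real project
    may structure this differently.
    """
    assets = Path(assets_dir)

    # Extract text from every page of the LinkedIn profile PDF
    reader = PdfReader(assets / "linkedin.pdf")
    linkedin_text = "\n".join(page.extract_text() or "" for page in reader.pages)

    # Read the free-form summary as-is
    summary_text = (assets / "summary.txt").read_text(encoding="utf-8")

    return linkedin_text, summary_text
```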
### 2. Local Development

#### Install Dependencies

```bash
# Install using uv (recommended)
uv sync
```
#### Run the Gradio App Locally

```bash
# Using uv
uv run linkedin_profile_chatbot/app.py
```
The Gradio interface will start on http://localhost:7860 by default.
### 3. Deployment to Hugging Face Spaces

#### Prerequisites
- A Hugging Face account
- Your repository pushed to GitHub or Hugging Face Hub
#### Deploy to Hugging Face Spaces
- Go to Hugging Face Spaces
- Click "Create new Space"
- Choose a name for your space
- Select "Docker" as the SDK
- Connect your GitHub repository or upload your files
- The space will automatically build using the provided Dockerfile
#### Configuration
The project is pre-configured for Hugging Face Spaces with:
- Dockerfile: Uses Python 3.12 with uv for fast dependency management
- README.md header: Configured for Docker deployment
- Port 7860: Standard Gradio port for Hugging Face Spaces
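For reference, a Gradio launch call compatible with that port convention might look like the sketch below. Binding to `0.0.0.0` is a common requirement for containerized deployments and is an assumption here, not necessarily what `app.py` does verbatim; the placeholder `respond` handler stands in for the real profile-aware chat logic.

```python
import gradio as gr


def respond(message, history):
    # Placeholder chat handler; the real app answers from the profile documents.
    return "This is where the profile-aware answer would go."


demo = gr.ChatInterface(fn=respond, title="LinkedIn Profile Chatbot")

if __name__ == "__main__":
    # 0.0.0.0 makes the server reachable from outside the container;
    # 7860 matches the port Hugging Face Spaces expects.
    demo.launch(server_name="0.0.0.0", server_port=7860)
```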
### 4. Environment Variables

Create a `.env` file in your project root with your OpenAI API key:

```
OPENAI_API_KEY=your_openai_api_key_here
```
For Hugging Face Spaces, add this as a secret in your Space settings.
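A minimal way to pick up that key locally is shown below, assuming the project uses `python-dotenv` alongside the official `openai` client; treat it as an illustration rather than a description of the actual code.

```python
import os

from dotenv import load_dotenv  # assumes python-dotenv is a dependency
from openai import OpenAI

# Load OPENAI_API_KEY from a local .env file if present; on Hugging Face
# Spaces the value comes from the Space's secrets instead.
load_dotenv()

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```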
## Usage

### Local Usage

- Run the application locally using `uv run run_gradio.py`
- Open your browser to http://localhost:7860
- Start chatting with your LinkedIn profile chatbot
### Deployed Usage
- Once deployed to Hugging Face Spaces, your chatbot will be available at your Space URL
- Share the URL with others or embed it in your website
- The chatbot will answer questions about your professional background using your LinkedIn PDF and summary
## Features
- Gradio Interface: Clean, user-friendly chat interface
- AI-Powered: Uses OpenAI's GPT models for intelligent responses
- Document Processing: Extracts information from your LinkedIn PDF
- Docker Deployment: Containerized for reliable deployment
- Fast Dependencies: Uses uv for quick installation and updates
- Responsive Design: Works on desktop and mobile devices
## Technical Details

### Architecture
- Frontend: Gradio web interface
- Backend: Python with OpenAI API integration
- Document Processing: PyPDF for PDF text extraction
- Deployment: Docker container with Python 3.12
- Dependency Management: uv for fast, reliable package management
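To make the data flow concrete, here is a minimal sketch of how the backend pieces might fit together: extract the profile text, fold it into a system prompt, and answer each chat turn with the OpenAI API. Names such as `build_system_prompt` and `chat`, the model choice, and the history format are illustrative assumptions; the real logic lives in `linkedin_profile_chatbot/chat.py` and `core.py`.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def build_system_prompt(linkedin_text: str, summary_text: str) -> str:
    # Ground the model in the profile documents so answers stay on topic.
    return (
        "You are answering questions about a person's professional background.\n"
        f"Summary:\n{summary_text}\n\nLinkedIn profile:\n{linkedin_text}"
    )


def chat(message: str, history: list, system_prompt: str) -> str:
    # Assumes the Gradio ChatInterface passes history as a list of
    # {"role": ..., "content": ...} messages (type="messages" mode).
    messages = [{"role": "system", "content": system_prompt}]
    messages += history
    messages.append({"role": "user", "content": message})

    # Model name is illustrative; the project only states it uses OpenAI GPT models.
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content
```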
### Key Files

- `run_gradio.py`: Main entry point for the Gradio application
- `linkedin_profile_chatbot/app.py`: Gradio interface configuration
- `linkedin_profile_chatbot/chat.py`: Chat logic and AI integration
- `linkedin_profile_chatbot/core.py`: Core functionality and document processing
- `Dockerfile`: Container configuration for deployment
- `pyproject.toml`: Project configuration and dependencies
## Troubleshooting

### Common Issues

**Build fails on Hugging Face:**

- Ensure your `.env` file is not committed to the repository
- Add your OpenAI API key as a secret in Hugging Face Spaces settings
- Check that all required files are present in the `/assets` folder
**Chatbot gives generic responses:**

- Verify your `linkedin.pdf` and `summary.txt` files are in the `/assets` folder
- Check that the PDF contains readable text, not just images (see the quick check after this list)
- Ensure your OpenAI API key is valid and has sufficient credits
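If you are unsure whether the PDF export actually contains extractable text, a quick check with `pypdf` looks roughly like this:

```python
from pypdf import PdfReader

reader = PdfReader("assets/linkedin.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# If this prints close to zero characters, the PDF is likely image-only
# and the chatbot will have nothing to ground its answers on.
print(f"Extracted {len(text)} characters from {len(reader.pages)} pages")
```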
**Local development issues:**

- Install uv: `curl -LsSf https://astral.sh/uv/install.sh | sh`
- Run `uv sync` to install dependencies
- Check that Python 3.12+ is available
### Docker Build Issues
If you encounter Docker build issues locally:
```bash
# Build the Docker image
docker build -t linkedin-chatbot .

# Run the container
docker run -p 7860:7860 linkedin-chatbot
```
## Support
For issues or questions:
- Check the Hugging Face Spaces logs for deployment issues
- Verify your OpenAI API key and credits
- Ensure your profile documents are properly formatted and placed in `/assets`