# 🚀 Deploy LinkScout Backend on Hugging Face Spaces (FREE)
## Why Hugging Face Spaces?

- ✅ 16 GB RAM free (vs. Render's 512 MB)
- ✅ Built for ML models
- ✅ Free GPU option available
- ✅ Persistent storage for models
- ✅ No credit card required
- ✅ Always on (no sleeping like Render)
## 📦 Step-by-Step Deployment
### Step 1: Create a Hugging Face Account

1. Go to https://huggingface.co/join
2. Sign up (free, no credit card)
3. Verify your email
### Step 2: Create a New Space

1. Go to https://huggingface.co/new-space
2. Fill in the details:
   - Owner: your username
   - Space name: `linkscout-backend`
   - License: MIT
   - SDK: Docker (we supply our own Dockerfile)
   - Space hardware: CPU basic (free) - 16 GB RAM!
   - Visibility: Public
3. Click "Create Space"
### Step 3: Prepare Files for Hugging Face

We need to create a few Hugging Face-specific files.
#### 3.1 Create app.py (entry point for Hugging Face)

Create `D:\LinkScout\app.py`:

```python
# Wrapper for Hugging Face Spaces: importing combined_server
# runs the Flask server defined there.
if __name__ == '__main__':
    import combined_server  # server starts automatically on import
```
#### 3.2 Create Dockerfile (for Hugging Face Spaces)

Create `D:\LinkScout\Dockerfile`:

```dockerfile
FROM python:3.11-slim

# Set working directory
WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    curl \
    git \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements first so dependency layers are cached
COPY requirements.txt .

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Create cache directory for models
RUN mkdir -p ./models_cache

# Hugging Face Spaces serve apps on port 7860 by default
EXPOSE 7860

# Set environment variables
ENV PORT=7860
ENV PYTHONUNBUFFERED=1

# Run the application
CMD ["python", "combined_server.py"]
```
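To keep the image small, it's worth excluding local caches and version-control data from the build context. A minimal `.dockerignore` sketch - the entries are assumptions about a typical working tree, so adjust them to match yours:

```text
# Hypothetical .dockerignore - adjust to your repo layout
__pycache__/
*.pyc
.git/
models_cache/
```

Excluding `models_cache/` here is safe because the Dockerfile recreates the directory and the models are downloaded on first start.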
#### 3.3 Create README.md for the Hugging Face Space

Create `D:\LinkScout\README_SPACE.md` (Spaces read this YAML front matter from the Space's `README.md`, so rename the file when uploading):

```markdown
---
title: LinkScout Backend
emoji: 🔍
colorFrom: orange
colorTo: yellow
sdk: docker
pinned: false
---

# LinkScout AI-Powered Misinformation Detection Backend

This is the backend API for LinkScout, featuring:

- 🤖 8 pre-trained ML models
- 🔬 8-phase revolutionary detection
- 🧠 Groq AI integration
- 🌐 Real-time fact checking

## API Endpoints

- `POST /analyze` - Analyze text for misinformation
- `GET /health` - Health check
- `POST /feedback` - Submit RL feedback

## Environment Variables Required

Set these in Space Settings → Variables:

- `GROQ_API_KEY` - Your Groq API key
- `GOOGLE_API_KEY` - (Optional) Google Search API key
- `GOOGLE_CSE_ID` - (Optional) Google Custom Search Engine ID
```
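The server can read these variables at startup. A minimal sketch of how that might look - the `load_config` helper and its fail-fast behavior are my own illustration, not code from `combined_server.py`:

```python
import os

def load_config() -> dict:
    """Read the Space's secrets from the environment.

    Fails fast if the required Groq key is missing; the Google
    keys are optional and default to None.
    """
    groq_key = os.environ.get("GROQ_API_KEY")
    if not groq_key:
        raise RuntimeError(
            "GROQ_API_KEY is not set - add it under Repository secrets"
        )
    return {
        "groq_api_key": groq_key,
        "google_api_key": os.environ.get("GOOGLE_API_KEY"),  # optional
        "google_cse_id": os.environ.get("GOOGLE_CSE_ID"),    # optional
    }
```

Failing fast on a missing required key surfaces misconfiguration in the build logs instead of as a confusing runtime error later.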
### Step 4: Update the Port for Hugging Face

Hugging Face Spaces serve apps on port 7860 by default. In `combined_server.py`, find the port configuration section and update it:

```python
if __name__ == '__main__':
    import os
    # Hugging Face Spaces uses port 7860; Render sets the PORT env var
    port = int(os.environ.get('PORT', 7860))
    print(f" 🚀 Port: {port}")
    app.run(host='0.0.0.0', port=port, debug=False, threaded=True, use_reloader=False)
```
### Step 5: Push to Hugging Face

#### Option A: Using Git (Recommended)

```bash
cd D:\LinkScout

# Add Hugging Face as a remote
git remote add hf https://huggingface.co/spaces/YOUR_USERNAME/linkscout-backend

# Push to Hugging Face
git push hf main
```

Replace `YOUR_USERNAME` with your Hugging Face username!
#### Option B: Upload Files Manually

1. Go to your Space: https://huggingface.co/spaces/YOUR_USERNAME/linkscout-backend
2. Click the "Files" tab
3. Click "Add file" → "Upload files"
4. Upload all your files (drag & drop the entire `LinkScout` folder)
### Step 6: Set Environment Variables

1. Go to your Space
2. Click the "Settings" tab
3. Scroll to "Repository secrets"
4. Add the secrets:
   - Name: `GROQ_API_KEY`, Value: your Groq API key
   - Name: `GOOGLE_API_KEY`, Value: your Google API key (optional)
   - Name: `GOOGLE_CSE_ID`, Value: your Custom Search Engine ID (optional)

Never paste real key values into documentation - keep them only in Repository secrets.
### Step 7: Wait for the Build

The Space builds automatically when you push files:

- Open the "Logs" tab to watch build progress
- The first build takes 10-15 minutes (it downloads the models)
- Look for output like:

```
✅ RoBERTa loaded
✅ Emotion model loaded
...
🚀 Starting LinkScout server on port 7860...
Running on http://0.0.0.0:7860
```
### Step 8: Get Your API URL

Once deployed, your backend will be at:

```
https://YOUR_USERNAME-linkscout-backend.hf.space
```

Example endpoints:

- Health check: `https://YOUR_USERNAME-linkscout-backend.hf.space/health`
- Analyze: `https://YOUR_USERNAME-linkscout-backend.hf.space/analyze`
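A small client sketch for exercising these endpoints. The `space_url` and `analyze` helpers are hypothetical, and the request shape assumes `/analyze` accepts a JSON body with a `text` field - check your server's actual contract:

```python
import json
import urllib.request

def space_url(username: str, space: str = "linkscout-backend") -> str:
    # Hugging Face serves Spaces at https://<owner>-<space>.hf.space
    return f"https://{username}-{space}.hf.space"

def analyze(username: str, text: str) -> dict:
    """POST text to the /analyze endpoint and return the parsed JSON reply."""
    req = urllib.request.Request(
        space_url(username) + "/analyze",
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Model inference can be slow on CPU, so allow a generous timeout
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)
```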
## 🎯 Advantages of Hugging Face Spaces

| Feature | Render Free | Hugging Face Free |
|---|---|---|
| RAM | 512 MB ❌ | 16 GB ✅ |
| ML Models | Can't load ❌ | Perfect ✅ |
| Sleep | After 15 min ⚠️ | Always on ✅ |
| Build Time | Fast | Fast |
| Custom Domain | Yes ✅ | Yes ✅ |
| For Your App | Won't work ❌ | Perfect! ✅ |
## 🔧 Troubleshooting

### Build Fails

- Check the logs for errors
- Verify all files uploaded
- Check the Dockerfile syntax

### Models Won't Load

- Check that "CPU basic" hardware (16 GB RAM) is selected
- Verify requirements.txt lists all dependencies
- Check the logs for download errors

### API Not Responding

- Check the Space shows "Running" (not "Building")
- Verify port 7860 is used
- Test the health endpoint first
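While debugging, it helps to poll `/health` until the Space answers. A sketch with an injectable fetcher so the retry logic can be tested without network access - the helper name and retry policy are my own, not part of LinkScout:

```python
import time
from typing import Callable

def wait_for_health(fetch: Callable[[], int],
                    attempts: int = 10,
                    delay: float = 3.0) -> bool:
    """Call fetch() (which should return an HTTP status code) until it
    reports 200 or the attempts run out. Returns True when healthy."""
    for i in range(attempts):
        try:
            if fetch() == 200:
                return True
        except OSError:
            pass  # Space may still be building or restarting
        if i < attempts - 1:
            time.sleep(delay)
    return False
```

In practice `fetch` would wrap an HTTP GET of the health URL; a long first build means generous `attempts` and `delay` values are sensible.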
### Environment Variables Not Working

- Make sure they're set as "Repository secrets"
- Restart the Space after adding secrets
## 📝 Next Steps After Deployment
1. Test your backend:

   ```bash
   curl https://YOUR_USERNAME-linkscout-backend.hf.space/health
   ```

2. Update your frontend to use the new URL:
   - Replace `http://localhost:5000` with your HF Space URL
   - Update it in `app/search/page.tsx`

3. Update the extension to use the new URL:
   - Update `API_URL` in `extension/popup.js`

4. Deploy the frontend on Vercel (separate from the backend)
## 💡 Cost Comparison

- Render Free: can't run your app (512 MB limit)
- Railway Free: $5/month credit, will run out
- Hugging Face Free: unlimited, perfect for ML
- Paid options: $7-20/month for more RAM

Hugging Face Spaces is the only truly free option that will work for your ML-heavy backend!
## 📚 Resources

- Hugging Face Spaces docs: https://huggingface.co/docs/hub/spaces
- Docker SDK guide: https://huggingface.co/docs/hub/spaces-sdks-docker
- Hugging Face Git: https://huggingface.co/docs/hub/repositories-getting-started

Good luck! 🚀