Upload folder using huggingface_hub
Files changed:
- .gitattributes +4 -0
- .gitignore +10 -0
- .python-version +1 -0
- DEPLOYMENT.md +86 -0
- README.md +2 -8
- app.py +119 -0
- main.py +6 -0
- pyproject.toml +46 -0
- requirements.txt +3 -0
- src/.gradio/certificate.pem +31 -0
- src/app.py +124 -0
- src/cv/avatar.jpeg +0 -0
- src/cv/me.txt +152 -0
- src/cv_chat.ipynb +274 -0
- src/projects_images/1726049646844.jpeg +3 -0
- src/projects_images/bi.png +3 -0
- src/projects_images/cast.jpeg +3 -0
- src/projects_images/llm.jpeg +0 -0
- src/projects_images/robot.png +3 -0
- src/projects_images/s_up.jpeg +0 -0
- src/requirements.txt +3 -0
- uv.lock +0 -0
.gitattributes
CHANGED
@@ -33,3 +33,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+src/projects_images/1726049646844.jpeg filter=lfs diff=lfs merge=lfs -text
+src/projects_images/bi.png filter=lfs diff=lfs merge=lfs -text
+src/projects_images/cast.jpeg filter=lfs diff=lfs merge=lfs -text
+src/projects_images/robot.png filter=lfs diff=lfs merge=lfs -text

.gitignore
ADDED
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info

# Virtual environments
.venv

.python-version
ADDED
3.10

DEPLOYMENT.md
ADDED
# Hugging Face Spaces Deployment Guide

## Files Created for Deployment

1. **app.py** (root) - Main application file configured for Hugging Face Spaces
2. **requirements.txt** - Minimal dependencies needed for the app
3. **README.md** - Space metadata and description

## Deployment Steps

### 1. Prepare Your Repository

The current file structure should work. The app.py in the root will automatically detect files in `src/` or root directories.

### 2. Create a Hugging Face Space

1. Go to [Hugging Face Spaces](https://huggingface.co/spaces)
2. Click "Create new Space"
3. Choose:
   - **SDK**: Gradio
   - **Python version**: 3.10 or higher
   - **Hardware**: CPU (or GPU if needed)
   - **Visibility**: Public or Private

### 3. Push Your Code

You can either:

**Option A: Using Git**
```bash
# Initialize git if not already done
git init
git add app.py requirements.txt README.md src/
git commit -m "Deploy to Hugging Face Spaces"
git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
git push -u origin main
```

**Option B: Using Hugging Face CLI**
```bash
pip install huggingface_hub
huggingface-cli login
# Then upload files through the web interface or use the API
```

### 4. Set Environment Variables

1. Go to your Space settings
2. Navigate to "Variables and secrets"
3. Add a new secret:
   - **Name**: `OPENAI_API_KEY`
   - **Value**: Your OpenAI API key

### 5. Verify File Structure

Make sure these files/folders are in your Space:
```
├── app.py
├── requirements.txt
├── README.md
└── src/
    ├── cv/
    │   ├── me.txt
    │   └── avatar.jpeg
    └── projects_images/
        ├── s_up.jpeg
        ├── llm.jpeg
        ├── bi.png
        └── robot.png
```

### 6. Wait for Build

The Space will automatically build and deploy. You can monitor the build logs in the Space interface.

## Troubleshooting

- **File not found errors**: Make sure all files in `src/` are committed and pushed
- **API key errors**: Verify `OPENAI_API_KEY` is set as a secret (not a variable)
- **Import errors**: Check that `requirements.txt` includes all necessary packages

## Notes

- The app.py automatically detects file paths (works with both `src/` and root-level files)
- Make sure `.env` file is NOT committed (it's in .gitignore)
- The app uses Hugging Face Space secrets for the API key

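The folder upload named in the commit title ("Upload folder using huggingface_hub") and hinted at in Option B above can also be scripted with the Hub client instead of the web interface. A minimal sketch, assuming you are already authenticated via `huggingface-cli login` and using the same `YOUR_USERNAME/YOUR_SPACE_NAME` placeholder as DEPLOYMENT.md:

```python
# Sketch: push the local project folder to a Space with the huggingface_hub API.
# YOUR_USERNAME/YOUR_SPACE_NAME is a placeholder, as in DEPLOYMENT.md.
from huggingface_hub import HfApi

api = HfApi()  # reuses the token stored by `huggingface-cli login`
api.upload_folder(
    folder_path=".",                          # local project root
    repo_id="YOUR_USERNAME/YOUR_SPACE_NAME",  # target Space
    repo_type="space",                        # upload to a Space, not a model repo
    ignore_patterns=[".env", ".venv/*", "__pycache__/*"],  # keep secrets and caches out
    commit_message="Upload folder using huggingface_hub",
)
```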
README.md
CHANGED
@@ -1,12 +1,6 @@
 ---
-title:
-
-colorFrom: red
-colorTo: gray
+title: cv-bot
+app_file: app.py
 sdk: gradio
 sdk_version: 6.0.2
-app_file: app.py
-pinned: false
 ---
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

app.py
ADDED
import gradio as gr
from openai import OpenAI
from dotenv import load_dotenv
import os

# Try to load .env file if it exists (for local development)
load_dotenv()

# Initialize OpenAI client - will use OPENAI_API_KEY from environment
# For Hugging Face Spaces, set this as a secret in the Space settings
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise ValueError("OPENAI_API_KEY not found. Please set it in your environment variables or Hugging Face Space secrets.")

client = OpenAI(api_key=api_key)

# File paths - adjust based on where the app runs
# For Hugging Face Spaces, files should be in the root or adjust paths accordingly
cv_path = "src/cv/me.txt" if os.path.exists("src/cv/me.txt") else "cv/me.txt"
avatar_path = "src/cv/avatar.jpeg" if os.path.exists("src/cv/avatar.jpeg") else "cv/avatar.jpeg"
projects_base = "src/projects_images" if os.path.exists("src/projects_images") else "projects_images"

projects = [
    {"image": f"{projects_base}/s_up.jpeg", "title": "Ai Recommendation System"},
    {"image": f"{projects_base}/llm.jpeg", "title": "LLM Automation"},
    {"image": f"{projects_base}/bi.png", "title": "BI"},
    {"image": f"{projects_base}/robot.png", "title": "Robot Arm Control With Ros Python and AI "},
]

with open(cv_path, "r") as f:
    cv_text = f.read()

system_prompt = f"""
Your name is Alexander.You are acting as Alexander Todorov. You will answer questions related to your career, skills, work experience, and education. \
Questions will be asked by visitors, headhunters, or recruiters about potential job opportunities. \
Respond professionally and use professional language. \
Answer only questions that are directly related to your CV. If you do not find the answer in your CV, respond with: \
"I can only answer questions about my CV."

CV: {cv_text}
With this context, please chat with the user, always staying in character as Alexander Todorov.
"""

def chat(message, history):
    messages = [{"role": "system", "content": system_prompt}] + history + [{"role": "user", "content": message}]
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content


with gr.Blocks() as ui:
    # name and job title
    with gr.Row():
        with gr.Column(scale=1):
            gr.Markdown('<div style="font-size:36px; font-weight:bold;">Alexander Todorov</div>')

        with gr.Column(scale=4):
            gr.Markdown("""
            <a href="https://www.linkedin.com/in/alexander-t-50864a139" target="_blank">
                <img src="https://cdn-icons-png.flaticon.com/512/174/174857.png"
                     alt="LinkedIn" style="width:32px; height:32px;"/>
            </a>
            """)

    # *********************************************************************************************************

    with gr.Row():
        with gr.Column(scale=1):  # 1 part
            gr.Image(avatar_path,
                     type="pil",
                     show_label=False,
                     height=150,
                     interactive=False,
                     container=False,
                     buttons=[['download', 'share', 'fullscreen']])

        # Right column 75% width
        with gr.Column(scale=3):  # 3 parts
            gr.Markdown("""
            <div style="width:50%">
            <p style="color:#9b9b9b; margin-bottom:0;font-size:20px;">
            Software and Data Engineer with over five years of experience delivering intelligent,
            user-focused AI solutions and driving automation and innovation in complex environments.
            </p>
            </div>
            """)

    # ******************************************************************************************************************
    # Chatbot
    gr.Markdown(f'<p style="font-size:28px;font-weight:bold;"> Chat with Me About My CV</p>',
                elem_id="job-title-light")
    gr.Markdown('<hr style="border:1px solid grey;">', elem_id="custom_divider")
    chatbot = gr.Chatbot(placeholder="<strong>Interactive CV Guide</strong><br>Ask Me Anything", height=300)
    chat_interface = gr.ChatInterface(fn=chat, chatbot=chatbot)
    gr.Markdown('<hr style="border: 1px solid #d3d3d3; margin-top:12px; margin-bottom:12px;">')

    # ****************************************************************************************************************
    # Projects
    gr.Markdown(f'<p style="font-size:28px;font-weight:bold;"> Examples of My Work</p>',
                elem_id="job-title-light")
    gr.Markdown('<hr style="border:1px solid grey;">', elem_id="custom_divider")

    # Projects row
    with gr.Row():
        for project in projects:
            with gr.Column():  # equal width for each project
                # Project image
                gr.Image(project["image"],
                         type="pil",
                         show_label=False,
                         interactive=False,
                         height=200, width=350,
                         buttons=[['download', 'share', 'fullscreen']])
                # Project title / text
                gr.Markdown(f"<div style='text-align:center; font-size:16px; margin-top:4px;'>{project['title']}</div>")


if __name__ == "__main__":
    ui.launch()

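Because `chat()` in app.py above is a plain callable, it can be smoke-tested without launching the Gradio UI. A minimal sketch, assuming `OPENAI_API_KEY` is available (via a local `.env` or a Space secret) and that the script runs from the repository root so `src/cv/me.txt` resolves:

```python
# Sketch: call the chat() function from app.py directly, without starting the server.
# Assumes OPENAI_API_KEY is set and src/cv/me.txt exists relative to the working directory.
from app import chat

# gr.ChatInterface passes history as a list of {"role", "content"} dicts; start empty.
reply = chat("Which ML frameworks have you worked with?", history=[])
print(reply)
```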
main.py
ADDED
def main():
    print("Hello from my-cv-agent-site!")


if __name__ == "__main__":
    main()

pyproject.toml
ADDED
[project]
name = "my-cv-agent-site"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
    "anthropic>=0.49.0",
    "autogen-agentchat>=0.4.9.2",
    "autogen-ext[grpc,mcp,ollama,openai]>=0.4.9.2",
    "bs4>=0.0.2",
    "gradio>=5.22.0",
    "httpx>=0.28.1",
    "ipywidgets>=8.1.5",
    "langchain-anthropic>=0.3.10",
    "langchain-community>=0.3.20",
    "langchain-experimental>=0.3.4",
    "langchain-openai>=0.3.9",
    "langgraph>=0.3.18",
    "langgraph-checkpoint-sqlite>=2.0.6",
    "langsmith>=0.3.18",
    "lxml>=5.3.1",
    "mcp-server-fetch>=2025.1.17",
    "mcp[cli]>=1.5.0",
    "openai>=1.68.2",
    "playwright>=1.51.0",
    "plotly>=6.0.1",
    "polygon-api-client>=1.14.5",
    "psutil>=7.0.0",
    "pypdf>=5.4.0",
    "pypdf2>=3.0.1",
    "python-dotenv>=1.0.1",
    "requests>=2.32.3",
    "semantic-kernel>=1.25.0",
    "sendgrid>=6.11.0",
    "setuptools>=78.1.0",
    "smithery>=0.1.0",
    "speedtest-cli>=2.1.3",
    "wikipedia>=1.4.0",
    "google-generativeai>=0.8.0",
]

[dependency-groups]
dev = [
    "ipykernel>=6.29.5",
]

requirements.txt
ADDED
gradio>=5.22.0
openai>=1.68.2
python-dotenv>=1.0.1

src/.gradio/certificate.pem
ADDED
-----BEGIN CERTIFICATE-----
MIIFazCCA1OgAwIBAgIRAIIQz7DSQONZRGPgu2OCiwAwDQYJKoZIhvcNAQELBQAw
TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh
cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMTUwNjA0MTEwNDM4
WhcNMzUwNjA0MTEwNDM4WjBPMQswCQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJu
ZXQgU2VjdXJpdHkgUmVzZWFyY2ggR3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBY
MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK3oJHP0FDfzm54rVygc
h77ct984kIxuPOZXoHj3dcKi/vVqbvYATyjb3miGbESTtrFj/RQSa78f0uoxmyF+
0TM8ukj13Xnfs7j/EvEhmkvBioZxaUpmZmyPfjxwv60pIgbz5MDmgK7iS4+3mX6U
A5/TR5d8mUgjU+g4rk8Kb4Mu0UlXjIB0ttov0DiNewNwIRt18jA8+o+u3dpjq+sW
T8KOEUt+zwvo/7V3LvSye0rgTBIlDHCNAymg4VMk7BPZ7hm/ELNKjD+Jo2FR3qyH
B5T0Y3HsLuJvW5iB4YlcNHlsdu87kGJ55tukmi8mxdAQ4Q7e2RCOFvu396j3x+UC
B5iPNgiV5+I3lg02dZ77DnKxHZu8A/lJBdiB3QW0KtZB6awBdpUKD9jf1b0SHzUv
KBds0pjBqAlkd25HN7rOrFleaJ1/ctaJxQZBKT5ZPt0m9STJEadao0xAH0ahmbWn
OlFuhjuefXKnEgV4We0+UXgVCwOPjdAvBbI+e0ocS3MFEvzG6uBQE3xDk3SzynTn
jh8BCNAw1FtxNrQHusEwMFxIt4I7mKZ9YIqioymCzLq9gwQbooMDQaHWBfEbwrbw
qHyGO0aoSCqI3Haadr8faqU9GY/rOPNk3sgrDQoo//fb4hVC1CLQJ13hef4Y53CI
rU7m2Ys6xt0nUW7/vGT1M0NPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV
HRMBAf8EBTADAQH/MB0GA1UdDgQWBBR5tFnme7bl5AFzgAiIyBpY9umbbjANBgkq
hkiG9w0BAQsFAAOCAgEAVR9YqbyyqFDQDLHYGmkgJykIrGF1XIpu+ILlaS/V9lZL
ubhzEFnTIZd+50xx+7LSYK05qAvqFyFWhfFQDlnrzuBZ6brJFe+GnY+EgPbk6ZGQ
3BebYhtF8GaV0nxvwuo77x/Py9auJ/GpsMiu/X1+mvoiBOv/2X/qkSsisRcOj/KK
NFtY2PwByVS5uCbMiogziUwthDyC3+6WVwW6LLv3xLfHTjuCvjHIInNzktHCgKQ5
ORAzI4JMPJ+GslWYHb4phowim57iaztXOoJwTdwJx4nLCgdNbOhdjsnvzqvHu7Ur
TkXWStAmzOVyyghqpZXjFaH3pO3JLF+l+/+sKAIuvtd7u+Nxe5AW0wdeRlN8NwdC
jNPElpzVmbUq4JUagEiuTDkHzsxHpFKVK7q4+63SM1N95R1NbdWhscdCb+ZAJzVc
oyi3B43njTOQ5yOf+1CceWxG1bQVs5ZufpsMljq4Ui0/1lvh+wjChP4kqKOJ2qxq
4RgqsahDYVvTH9w7jXbyLeiNdd8XM2w9U/t7y0Ff/9yi0GE44Za4rF2LN9d11TPA
mRGunUHBcnWEvgJBQl9nJEiU0Zsnvgc/ubhPgXRR4Xq37Z0j4r7g1SgEEzwxA57d
emyPxgcYxn/eR44/KJ4EBs+lVDR3veyJm+kXQ99b21/+jh5Xos1AnX5iItreGCc=
-----END CERTIFICATE-----

src/app.py
ADDED
import gradio as gr
from openai import OpenAI
from dotenv import load_dotenv
import os


projects = [
    {"image": "projects_images/s_up.jpeg", "title": "Ai Recommendation System"},
    {"image": "projects_images/llm.jpeg", "title": "LLM Automation"},
    {"image": "projects_images/bi.png", "title": "BI"},
    {"image": "projects_images/robot.png", "title": "Robot Arm Control With Ros Python and AI "},
]


load_dotenv()

client = OpenAI()

with open("cv/me.txt", "r") as f:
    cv_text = f.read()

system_prompt = f"""
Your name is Alexander.You are acting as Alexander Todorov. You will answer questions related to your career, skills, work experience, and education. \
Questions will be asked by visitors, headhunters, or recruiters about potential job opportunities. \
Respond professionally and use professional language. \
Answer only questions that are directly related to your CV. If you do not find the answer in your CV, respond with: \
"I can only answer questions about my CV."

CV: {cv_text}
With this context, please chat with the user, always staying in character as Alexander Todorov.
"""

def chat(message, history):
    messages = [{"role": "system", "content": system_prompt}] + history + [{"role": "user", "content": message}]
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content


with gr.Blocks() as ui:

    # name and job title
    with gr.Row():
        with gr.Column(scale=1):
            gr.Markdown('<div style="font-size:36px; font-weight:bold;">Alexander Todorov</div>')

        with gr.Column(scale=4):
            gr.Markdown("""
            <a href="https://www.linkedin.com/in/alexander-t-50864a139" target="_blank">
                <img src="https://cdn-icons-png.flaticon.com/512/174/174857.png"
                     alt="LinkedIn" style="width:32px; height:32px;"/>
            </a>
            """)

    # gr.Markdown(f'<p style="color:#9b9b9b; margin-bottom:0;font-size:20px;">Software Engineer & Data Scientist</p>',
    #             elem_id="job-title-light")

    # LinkedIn icon with link

    # *********************************************************************************************************

    with gr.Row():
        with gr.Column(scale=1):  # 1 part
            gr.Image("cv/avatar.jpeg",
                     type="pil",
                     show_label=False,
                     height=150,
                     interactive=False,
                     container=False,
                     buttons=[['download', 'share', 'fullscreen']])

        # Right column 75% width
        with gr.Column(scale=3):  # 3 parts
            gr.Markdown("""
            <div style="width:50%">
            <p style="color:#9b9b9b; margin-bottom:0;font-size:20px;">
            Software and Data Engineer with over five years of experience delivering intelligent,
            user-focused AI solutions and driving automation and innovation in complex environments.
            </p>
            </div>
            """)

    # ******************************************************************************************************************
    # Chatbot
    gr.Markdown(f'<p style="font-size:28px;font-weight:bold;"> Chat with Me About My CV</p>',
                elem_id="job-title-light")
    gr.Markdown('<hr style="border:1px solid grey;">', elem_id="custom_divider")
    chatbot = gr.Chatbot(placeholder="<strong>Interactive CV Guide</strong><br>Ask Me Anything", height=300)
    chat_interface = gr.ChatInterface(fn=chat, chatbot=chatbot)
    gr.Markdown('<hr style="border: 1px solid #d3d3d3; margin-top:12px; margin-bottom:12px;">')
    # horizontal line

    # ****************************************************************************************************************
    # Projects
    gr.Markdown(f'<p style="font-size:28px;font-weight:bold;"> Examples of My Work</p>',
                elem_id="job-title-light")
    gr.Markdown('<hr style="border:1px solid grey;">', elem_id="custom_divider")

    # Projects row
    with gr.Row():
        for project in projects:
            with gr.Column():  # equal width for each project
                # Project image (disable download)
                gr.Image(project["image"],
                         type="pil",
                         show_label=False,
                         interactive=False,
                         height=200, width=350,
                         buttons=[['download', 'share', 'fullscreen']])
                # Project title / text
                gr.Markdown(f"<div style='text-align:center; font-size:16px; margin-top:4px;'>{project['title']}</div>")


if __name__ == "__main__":
    ui.launch()

src/cv/avatar.jpeg
ADDED
src/cv/me.txt
ADDED
Alexander Todorov
Email: alex.st.todorov@gmail.com
Phone: +972523085768
Holds an EU passport.
LinkedIn: https://www.linkedin.com/in/alexander-t-50864a139/

Summary:
Software and ML Engineer & Data Scientist with over 5 years in software development, building AI/ML solutions, LLM tools, recommendation systems, and data-driven web applications using Python, Flask, Angular, Spark, and MongoDB. Skilled in microservices, Docker, Kubernetes, CI/CD, NLP, and Azure OpenAI to deliver scalable and automated enterprise solutions. Passionate about creating intelligent, user-centric software that drives innovation and operational efficiency.


Experience:
2020 - Present - Intel Corporation
Data Scientist and ML Engineer

Led the development of an enterprise-level AI recommendation platform that analyzes employee skills and activity to recommend internal training courses.

Designed and deployed scalable ML pipelines for recommendation systems, predictive maintenance, and NLP topic modeling, using Python, Spark ALS, Logistic Regression, BERTopic and Azure OpenAI GPT-4.

Built cloud-native microservices for LLM-driven workloads using Kafka, Kubernetes, Docker, and MongoDB.

Engineered automated CI/CD workflows with GitHub Actions.

Automated analytics flows and BI data pipelines using containerized Python jobs and Kubernetes CronJobs, supporting real-time and scheduled reporting.

Applied advanced NLP and LLM techniques to classify topics, extract meaningful patterns from ticket history, and automate text understanding at scale.


2012 - 2020 - Intel Corporation
Process Engineer - Production process analysis
Python scripts, SQL, automated report development, Angular
Intel Ireland 2013 - 2017 - Supporting the startup of new technology. Process and equipment.

Education:

2025 - Today - Technical University - Sofia, Bulgaria.
PhD Program - Robotic and automation systems with AI

2020-2024 - Technical University - Sofia, Bulgaria.
Master's degree in Robotic Engineering.
⋄ Final project: Robot arm design with the Python ROS framework

2018-2019 - John Bryce College
Full Stack - Angular, Java, SQL

2009-2012 - Singalovsky College, Tel-Aviv.
Electronic Practical Engineer.


Skills:
AI: PySpark, Spark ALS, Logistic Regression, BERTopic, Azure OpenAI GPT-4, NLP, LLMs, Recommender Systems, Topic Modeling
DevOps: Docker, Kubernetes, Cloud Foundry, GitHub Actions, Kafka, CI/CD, Microservices
Programming: Python, Flask, FastAPI, Angular, MongoDB, SQL
DB: MongoDB, SQL, PostgreSQL

Languages:
Hebrew - Fluent
English - Advanced; used daily in an international corporate environment
Bulgarian - Native
Russian - Native

Hobbies:
Swimming, drones, snowboarding


Projects:

AI Recommendation System
Led the development of an enterprise-level AI recommendation platform that analyzes employee skills and activity to recommend internal training courses.
Technologies: Python, Angular 18, MongoDB, Spark ALS, Cloud Foundry, Azure SSO, NLP, Docker
Designed and managed the end-to-end system architecture across backend and frontend components.
Implemented a recommendation engine using ALS on Apache Spark.
Integrated Azure SSO for secure user identification and behavior tracking.
Automated personalized email notifications based on work schedules and recommendation results.
Deployed the solution on an on-premise Cloud Foundry environment.


ML Preventive Maintenance System
Developed a machine-learning model to predict maintenance needs for manufacturing equipment using historical operational data.
Technologies: Python, Logistic Regression, Pandas, Scikit-learn
Built a predictive analytics pipeline to estimate the next maintenance cycle.
Enabled early warning for failures and improved production reliability.


BI Automated Reporting & Workflow Orchestration
Built automated BI data flows and reporting pipelines based on production and operational data.
Technologies: Python, Docker, Kubernetes (Rancher), CronJobs, Angular, Power BI
Created containerized Python services to compute metrics and generate reports.
Deployed workloads as Kubernetes Deployments and CronJobs on Rancher.
Provided dashboards in Angular or Power BI for management visibility.


NLP / ML Ticket Topic Modeling
Implemented NLP-based topic modeling to detect recurring issues in ticket history.
Technologies: BERTopic, Python, NLP, Scikit-learn
Automated detection of common problem categories to support root-cause analysis.
Delivered insights that improved support efficiency and reduced repeated issues.


LLM Microservices for PDF Data Extraction
Designed a distributed LLM system to extract structured data from large PDF documents.
Technologies: Python, PyPDF, Azure OpenAI GPT-4, Kafka, Kubernetes, Microservices, MongoDB
Built a microservices architecture for document scanning, chunking, and LLM interaction.
Used Azure OpenAI GPT-4 for high-accuracy extraction and summarization.
Implemented asynchronous communication through Kafka.
Presented at the internal AI Summit, gaining organizational recognition.


CI/CD for Web Applications (GitHub Actions)
Developed automated CI/CD pipelines for Angular-based applications.
Technologies: GitHub Actions, Angular, Docker, Kubernetes
Automated build steps: Angular build → Docker build → Push → Kubernetes deployment.
Enabled rapid and reliable releases for production systems.

Full-Stack Podcast Web Application
Built a full end-to-end platform for uploading, managing, and playing audio content.
Technologies: Angular, Python Flask, MongoDB, Docker
Implemented a user interface for playback and an admin panel for file management.
Added support for metadata, thumbnails, and user comments.
Designed a scalable backend with REST APIs and MongoDB.


Issues Ticketing System (ServiceNow + Custom Frontend)
Developed a complete ticket management system integrated with ServiceNow.
Technologies: JavaScript, Angular, ServiceNow, Kubernetes, Azure SSO
Created dynamic ticket forms, routing logic, and an Angular-based landing page.
Integrated Azure SSO for authentication and user tracking.
Implemented automated reporting on ticket types, frequency, and recurring issues.

src/cv_chat.ipynb
ADDED
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "7daa2ba7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import gradio as gr\n",
    "from openai import OpenAI\n",
    "from dotenv import load_dotenv\n",
    "import os "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "7de6ebda",
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "projects = [\n",
    "    {\"image\": \"projects_images/s_up.jpeg\", \"title\": \"Ai Recommendation System\"},\n",
    "    {\"image\": \"projects_images/llm.jpeg\", \"title\": \"LLM Automation\"},\n",
    "    {\"image\": \"projects_images/bi.png\", \"title\": \"BI\"},\n",
    "    {\"image\": \"projects_images/robot.png\", \"title\": \"Robot Arm Control With Ros Python and AI \"},\n",
    "]\n",
    " "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "8e38d667",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "load_dotenv()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "7708eac4",
   "metadata": {},
   "outputs": [],
   "source": [
    "with open(\"cv/me.txt\", \"r\") as f: \n",
    "    cv_text = f.read()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "34dc7b6a",
   "metadata": {},
   "outputs": [],
   "source": [
    "client = OpenAI()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "efd52cc5",
   "metadata": {},
   "outputs": [],
   "source": [
    "system_prompt = f\"\"\"\n",
    "Your name is Alexander.You are acting as Alexander Todorov. You will answer questions related to your career, skills, work experience, and education. \\\n",
    "Questions will be asked by visitors, headhunters, or recruiters about potential job opportunities. \\\n",
    "Respond professionally and use professional language. \\\n",
    "Answer only questions that are directly related to your CV. If you do not find the answer in your CV, respond with: \\\n",
    "\"I can only answer questions about my CV.\"\n",
    "\n",
    "CV: {cv_text}\n",
    "With this context, please chat with the user, always staying in character as Alexander Todorov.\n",
    "\"\"\"\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "5e16df81",
   "metadata": {},
   "outputs": [],
   "source": [
    "def chat(message,history):\n",
    "    messages = [{\"role\":\"system\", \"content\":system_prompt}] + history + [{\"role\":\"user\", \"content\":message}]\n",
    "    response = client.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
    "    return response.choices[0].message.content"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "ad803f77",
   "metadata": {},
   "outputs": [],
   "source": [
    "# --------- CHAT FUNCTION FOR THE AGENT ----------\n",
    "def chat_with_agent(message, history):\n",
    "    # Call your Agent here (OpenAI Agent, LangChain, etc.)\n",
    "    # For now return example text:\n",
    "    return f\"You said: {message}\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "f4807389",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "* Running on local URL: http://127.0.0.1:7860\n",
      "* To create a public link, set `share=True` in `launch()`.\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div><iframe src=\"http://127.0.0.1:7860/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": []
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "with gr.Blocks() as ui:\n",
    "\n",
    "\n",
    "    # name and job title \n",
    "    with gr.Row():\n",
    "        with gr.Column(scale=1):\n",
    "            gr.Markdown('<div style=\"font-size:36px; font-weight:bold;\">Alexander Todorov</div>')\n",
    "\n",
    "        with gr.Column(scale=4): \n",
    "            gr.Markdown(\"\"\"\n",
    "            <a href=\"https://www.linkedin.com/in/alexander-t-50864a139\" target=\"_blank\">\n",
    "                <img src=\"https://cdn-icons-png.flaticon.com/512/174/174857.png\" \n",
    "                     alt=\"LinkedIn\" style=\"width:32px; height:32px;\"/>\n",
    "            </a>\n",
    "            \"\"\")\n",
    "\n",
    "\n",
    "    # gr.Markdown(f'<p style=\"color:#9b9b9b; margin-bottom:0;font-size:20px;\">Software Engineer & Data Scientist</p>',\n",
    "    #             elem_id=\"job-title-light\")\n",
    "\n",
    "    # LinkedIn icon with link\n",
    "    \n",
    "    \n",
    "\n",
    "    # ********************************************************************************************************* \n",
    "    \n",
    "    with gr.Row():\n",
    "        with gr.Column(scale=1):  # 1 part\n",
    "            gr.Image(\"cv/avatar.jpeg\", \n",
    "                     type=\"pil\", \n",
    "                     show_label=False, \n",
    "                     height=150, \n",
    "                     interactive=False, \n",
    "                     container=False, \n",
    "                     buttons=[['download', 'share', 'fullscreen']])\n",
    "\n",
    "        # Right column 75% width\n",
    "        with gr.Column(scale=3):  # 3 parts\n",
    "            gr.Markdown(\"\"\"\n",
    "            <div style=\"width:50%\">\n",
    "            <p style=\"color:#9b9b9b; margin-bottom:0;font-size:20px;\">\n",
    "            Software and Data Engineer with over five years of experience delivering intelligent, \n",
    "            user-focused AI solutions and driving automation and innovation in complex environments.\n",
    "            </p>\n",
    "            </div>\n",
    "            \"\"\")\n",
    "    \n",
    "\n",
    "    # ******************************************************************************************************************\n",
    "    # Chatbot \n",
    "    gr.Markdown(f'<p style=\"font-size:28px;font-weight:bold;\"> Chat with Me About My CV</p>',\n",
    "                elem_id=\"job-title-light\")\n",
    "    gr.Markdown('<hr style=\"border:1px solid grey;\">', elem_id=\"custom_divider\")\n",
    "    chatbot = gr.Chatbot(placeholder=\"<strong>Interactive CV Guide</strong><br>Ask Me Anything\", height=300)\n",
    "    chat_interface = gr.ChatInterface(fn=chat, chatbot=chatbot)\n",
    "    gr.Markdown('<hr style=\"border: 1px solid #d3d3d3; margin-top:12px; margin-bottom:12px;\">')\n",
    "    # horizontal line\n",
    "\n",
    "    \n",
    "    # ****************************************************************************************************************\n",
    "    # Projects \n",
    "    gr.Markdown(f'<p style=\"font-size:28px;font-weight:bold;\"> Examples of My Work</p>',\n",
    "                elem_id=\"job-title-light\")\n",
    "    gr.Markdown('<hr style=\"border:1px solid grey;\">', elem_id=\"custom_divider\")\n",
    "    \n",
    "    # Projects row \n",
    "    with gr.Row():\n",
    "        for project in projects:\n",
    "            with gr.Column():  # equal width for each project\n",
    "                # Project image (disable download)\n",
    "                gr.Image(project[\"image\"], \n",
    "                         type=\"pil\", \n",
    "                         show_label=False, \n",
    "                         interactive=False, \n",
    "                         height=200, width=350,\n",
    "                         buttons=[['download', 'share', 'fullscreen']])\n",
    "                # Project title / text\n",
    "                gr.Markdown(f\"<div style='text-align:center; font-size:16px; margin-top:4px;'>{project['title']}</div>\")\n",
    "\n",
    "\n",
    "    \n",
    "\n",
    "\n",
    "ui.launch()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "769f66e6",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}

src/projects_images/1726049646844.jpeg
ADDED
Git LFS Details

src/projects_images/bi.png
ADDED
Git LFS Details

src/projects_images/cast.jpeg
ADDED
Git LFS Details

src/projects_images/llm.jpeg
ADDED

src/projects_images/robot.png
ADDED
Git LFS Details

src/projects_images/s_up.jpeg
ADDED

src/requirements.txt
ADDED
gradio>=5.22.0
openai>=1.68.2
python-dotenv>=1.0.1

uv.lock
ADDED
The diff for this file is too large to render.