bipinbudhathoki committed
Commit 9366332 · verified · Parent(s): 5804b04

comit new

Files changed (4):
  1. Dockerfile +22 -0
  2. README.md +50 -6
  3. requirements.txt +5 -0
  4. role_bank.json +0 -0
Dockerfile ADDED
@@ -0,0 +1,22 @@
+ FROM python:3.11-slim
+
+ ENV PYTHONDONTWRITEBYTECODE=1
+ ENV PYTHONUNBUFFERED=1
+ ENV PIP_NO_CACHE_DIR=1
+
+ WORKDIR /code
+
+ RUN apt-get update && apt-get install -y --no-install-recommends \
+     ffmpeg \
+     curl \
+     && rm -rf /var/lib/apt/lists/*
+
+ COPY requirements.txt /code/requirements.txt
+ RUN pip install --upgrade pip && pip install -r /code/requirements.txt
+
+ COPY role_bank.json /code/role_bank.json
+ COPY app /code/app
+
+ EXPOSE 7860
+
+ CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "7860"]
README.md CHANGED
@@ -1,11 +1,55 @@
  ---
  title: Japanese AI Interview API
- emoji: 🏢
- colorFrom: indigo
- colorTo: gray
+ emoji: 🎤
+ colorFrom: blue
+ colorTo: indigo
  sdk: docker
- pinned: false
- short_description: Role-based, voice-first Japanese interview practice for work
+ app_port: 7860
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Japanese AI Interview API
+
+ Role-based, voice-first Japanese interview practice for working visa jobs.
+
+ ## This build uses
+ - Hugging Face Inference ASR for speech-to-text
+ - Hugging Face router chat completions for dynamic next-question logic
+ - A built-in 654-question bank across 6 job roles
+ - N4-level interviewer style
+ - Early fail after repeated no-sound or weak answers
+ - Up to 20 questions when the interview is going well
+
+ ## Endpoints
+ - `GET /`
+ - `GET /health`
+ - `GET /roles`
+ - `POST /start`
+ - `POST /answer`
+
+ ## Space secrets / variables
+
+ ### Secret
+ - `HF_TOKEN` = Hugging Face token with Inference Providers permission
+
+ ### Variables
+ - `ASR_MODEL` = `openai/whisper-large-v3`
+ - `CHAT_MODEL` = `Qwen/Qwen2.5-7B-Instruct-1M`
+ - `MAX_QUESTION_LIMIT` = `20`
+
+ ## Start payload example
+
+ ```json
+ {
+   "session_uuid": "session-123",
+   "job_role": "construction",
+   "question_count": 12
+ }
+ ```
+
+ ## Available job_role values
+ - `construction`
+ - `restaurant_konbini`
+ - `nursing_care`
+ - `hotel_accommodation`
+ - `agriculture`
+ - `manufacturing`
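As a client-side smoke test of the start payload the README shows, the request can be built with `requests` (which this build pins). `BASE_URL` is an assumption for a local `docker run -p 7860:7860`; point it at your Space URL instead:

```python
# Client-side sketch for POST /start. BASE_URL is an assumption for a
# local container run, not a real deployment address.
import requests

BASE_URL = "http://localhost:7860"

payload = {
    "session_uuid": "session-123",
    "job_role": "construction",
    "question_count": 12,
}

# Prepare the request without sending it, so the wire format (URL, headers,
# JSON body) is visible even with no server running.
prepared = requests.Request("POST", f"{BASE_URL}/start", json=payload).prepare()
print(prepared.method, prepared.url)
print(prepared.headers["Content-Type"])

# Against a live server:
# resp = requests.Session().send(prepared, timeout=30)
# resp.raise_for_status()
# print(resp.json())
```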
requirements.txt ADDED
@@ -0,0 +1,5 @@
+ fastapi==0.115.0
+ uvicorn[standard]==0.30.6
+ python-multipart==0.0.9
+ requests==2.32.3
+ pydantic==2.9.2
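The secrets and variables the README lists reach the container as environment variables. A hedged sketch of how the app might read them, using the documented values as defaults (the real app's config code is not in this diff):

```python
# Hypothetical config read for the Space secrets/variables in the README.
# Variable names match the README; defaults are the documented values.
import os

ASR_MODEL = os.environ.get("ASR_MODEL", "openai/whisper-large-v3")
CHAT_MODEL = os.environ.get("CHAT_MODEL", "Qwen/Qwen2.5-7B-Instruct-1M")
MAX_QUESTION_LIMIT = int(os.environ.get("MAX_QUESTION_LIMIT", "20"))

# HF_TOKEN is a secret with no safe default; warn early if it is missing.
HF_TOKEN = os.environ.get("HF_TOKEN")
if HF_TOKEN is None:
    print("warning: HF_TOKEN is not set; Inference API calls will fail")
```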
role_bank.json ADDED
The diff for this file is too large to render. See raw diff