IliaLarchenko committed on
Commit
bd6074b
1 Parent(s): f035eac

Updated readme and instructions

Files changed (2):
  1. README.md +165 -17
  2. docs/instruction.py +66 -105
README.md CHANGED
@@ -12,30 +12,178 @@ license: apache-2.0
 
 Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
- # Coding Interview Practice AI Assistant
 
- This Hugging Face Space is a coding interview practice application built with Gradio and OpenAI's GPT-3.5 model. It simulates a coding interview environment where candidates can practice solving problems under the guidance of an AI interviewer. The AI provides problems, analyzes solutions, and offers feedback without giving direct hints unless explicitly requested.
 
- ## Features
 
- - **Customizable Practice Sessions:** Candidates can select the difficulty level, topic, and programming language for their practice session.
- - **Real-time Feedback:** As candidates submit their solutions, the AI provides feedback on the code's complexity, potential improvements, and errors.
- - **Detailed Review:** At the end of the session, the AI gives comprehensive feedback, including detailed reviews of all attempts and suggestions for improvement.
 
- ## Using This Space
 
- Access this Gradio application at [Interviewer Space](https://huggingface.co/spaces/ilarchenko/interviewer). Here’s how to use it:
 
- 1. **Select Your Preferences:** Use the settings panel to specify the problem requirements, choose the programming language, difficulty level, and topic.
- 2. **Start the Interview:** Click "Start" to receive a problem tailored to your selections.
- 3. **Code Your Solution:** Input your code in the provided code editor.
- 4. **Interact for Feedback:** Submit your code for initial feedback. Use the chat to interact with the AI for further insights and hints.
- 5. **Complete the Session:** Click "Finish the interview" to receive final detailed feedback on your performance.
 
- ## Contributing
 
- We welcome contributions from the community! If you have suggestions for improvements or encounter any issues, please feel free to contribute directly on this Space or by reporting issues through the Hugging Face Space repository.
 
- ## License
 
- This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) for more details.
+ # Welcome to the AI Mock Interviewer!
+
+ This tool is designed to help you practice coding interviews by simulating the real interview experience. Here you can brush up your interview skills in a realistic setting, although it’s not intended to replace thorough preparation like studying algorithms or practicing coding problems.
+
+ ## Key Features
+
+ - **Speech-First Interface**: Talk to the AI just like you would with a real interviewer. This makes your practice sessions feel more realistic.
+ - **Various AI Models**: The tool uses three types of AI models:
+   - **LLM (Large Language Model)**: Acts as the interviewer.
+   - **Speech-to-Text and Text-to-Speech Models**: These help mimic real conversations by converting spoken words to text and vice versa.
+ - **Model Flexibility**: The tool works with many different models, including those from OpenAI and open-source models from Hugging Face.
+
+ ## Planned Updates
+
+ This is just the first beta version, and I'm working on enhancing this tool. Planned updates include:
+ 1. **More Interview Types**: Adding simulations like Systems Design, Machine Learning System Design, Math and Logic, Behavioral Interviews, and Theory Tests.
+ 2. **Streaming Mode for Models**: Updating the models to provide faster responses during interviews.
+ 3. **Testing More Models**: Exploring additional open-source models to enhance the tool’s performance and flexibility.
+ 4. **Improving the User Interface**: Making it easier to navigate and use, ensuring a better experience for all users.
+
+ # Running the AI Tech Interviewer Simulator
+
+ To get the full experience, you should run the service locally and use your own API key or a local model.
+
+ ## Initial Setup
+
+ ### Clone the Repository
+
+ First, clone the project repository to your local machine with the following commands:
+
+ ```bash
+ git clone https://huggingface.co/spaces/IliaLarchenko/interviewer
+ cd interviewer
+ ```
+
+ ### Configure the Environment
+
+ Create a `.env` file from the provided OpenAI example and edit it to include your OpenAI API key (learn how to get one here: https://platform.openai.com/api-keys):
+
+ ```bash
+ cp .env.openai.example .env
+ nano .env # You can use any text editor
+ ```
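For reference, a fully OpenAI-backed `.env` might look like the following sketch (the variable names follow the scheme described in the Models Configuration section; the `whisper-1` and `tts-1` entries are assumptions based on the models discussed there):

```plaintext
OPENAI_API_KEY=sk-YOUR_OPENAI_API_KEY
LLM_URL=https://api.openai.com/v1
LLM_TYPE=OPENAI_API
LLM_NAME=gpt-3.5-turbo
STT_URL=https://api.openai.com/v1
STT_TYPE=OPENAI_API
STT_NAME=whisper-1
TTS_URL=https://api.openai.com/v1
TTS_TYPE=OPENAI_API
TTS_NAME=tts-1
```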
+
+ If you want to use any other model, follow the instructions in the Models Configuration section.
+
+ ### Build and Run the Docker Container
+
+ To build and start the Docker container:
+
+ ```bash
+ docker-compose build
+ docker-compose up
+ ```
+
+ The application will be accessible at `http://localhost:7860`.
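If you prefer to keep the service running in the background, the standard Docker Compose workflow applies:

```bash
docker-compose up -d     # start the container in the background
docker-compose logs -f   # follow the application logs
docker-compose down      # stop and remove the container
```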
+
+ ### Running Locally (alternative)
+
+ Set up a Python environment and install dependencies to run the application locally:
+
+ ```bash
+ python -m venv venv
+ source venv/bin/activate
+ pip install -r requirements.txt
+ python app.py
+ ```
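Note that `source venv/bin/activate` is the macOS/Linux form; on Windows the equivalent activation command is:

```bash
venv\Scripts\activate
```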
+
+ The application should now be accessible at `http://localhost:7860`.
+
+ # Interview Interface Overview
+
+ This tool will support different types of interviews, but it currently focuses on coding interviews only. Here's how to navigate the interface:
+
+ ### Setting
+ Configure the interview settings such as difficulty, topic, and any specific requirements. Start the interview by clicking the **"Generate a problem"** button.
+
+ ### Problem Statement
+ The AI will present a coding problem after you initiate the session.
+
+ ### Solution
+ This section is where the interaction happens:
+ - **Code Area**: On the left side, you will find a space to write your solution. You can use any programming language, although syntax highlighting is currently available only for Python.
+ - **Communication Area**: On the right, this area includes:
+   - **Chat History**: Displays the entire dialogue history, showing messages from both you and the AI interviewer.
+   - **Audio Record Button**: Use this button to record your responses. Press to start recording, speak your thoughts, and press stop to send your audio. Your message will be transcribed and added to the chat, along with a snapshot of your code. For code-only messages, type your code and record a brief message like "Check out my code."
+
+ Engage with the AI as you would with a real interviewer. Provide concise responses and frequent updates rather than long monologues. Your interactions, including any commentary on your code, will be recorded, and the AI's responses will be read aloud and displayed in the chat. Follow the AI's instructions and respond to any follow-up questions as they arise.
+
+ Once the interview is completed, or if you decide to end it early, click the **"Finish the interview"** button.
+
+ ### Feedback
+ Detailed feedback will be provided in this section, helping you understand your performance and areas for improvement.
+
+ # Models Configuration
+
+ This tool utilizes three types of AI models: a Large Language Model (LLM) for simulating interviews, a Speech-to-Text (STT) model for audio processing, and a Text-to-Speech (TTS) model for auditory feedback. You can configure each model separately to tailor the experience based on your preferences and available resources.
+
+ ## Flexible Model Integration
+
+ You can connect various models from different sources to the tool. Whether you are using models from OpenAI, Hugging Face, or even locally hosted models, the tool is designed to be compatible with a range of APIs. Here’s how you can configure each type:
+
+ ### Large Language Model (LLM)
+
+ - **OpenAI Models**: You can use models like GPT-3.5-turbo or GPT-4 provided by OpenAI. Setup is straightforward with your OpenAI API key.
+ - **Hugging Face Models**: Models like Meta-Llama from Hugging Face can also be integrated. Make sure your API key has the appropriate permissions.
+ - **Local Models**: If you have the capability, you can run models locally. Ensure they are compatible with the Hugging Face API for seamless integration (a minimal request sketch follows this list).
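In practice, all three options behave like an OpenAI-style chat-completions endpoint reachable over HTTP. A minimal request sketch (this is not the application's internal client code; the environment variable names follow the configuration examples below, and the prompts are placeholders):

```python
import os

import requests

# Illustrative only: any OpenAI-compatible endpoint (api.openai.com, a hosted
# Hugging Face endpoint, or a local server) is addressed the same way.
LLM_URL = os.getenv("LLM_URL", "https://api.openai.com/v1")
LLM_NAME = os.getenv("LLM_NAME", "gpt-3.5-turbo")
API_KEY = os.getenv("OPENAI_API_KEY", "")

response = requests.post(
    f"{LLM_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": LLM_NAME,
        "messages": [
            {"role": "system", "content": "You are a coding interviewer."},
            {"role": "user", "content": "Give me an easy array problem."},
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```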
+
+ ### Speech-to-Text (STT)
+
+ - **OpenAI Whisper**: Available via OpenAI, this model supports multiple languages and dialects. It is also available in an open-source version on Hugging Face, giving you the flexibility to use it either through the OpenAI API or as a locally hosted version.
+ - **Other open-source models**: These can be used too, but they may require a specific wrapper to align with the API requirements (see the wrapper sketch below).
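For example, a local Whisper model could be exposed behind a small HTTP wrapper. A hedged sketch, assuming the application POSTs an audio file to the endpoint configured in `STT_URL` (as in the "Local STT" example below); the route, field name, and response shape are illustrative assumptions, not a documented contract:

```python
from flask import Flask, jsonify, request
from transformers import pipeline

app = Flask(__name__)
# Load an open-source Whisper checkpoint once at startup.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")

@app.route("/transcribe", methods=["POST"])
def transcribe():
    audio_bytes = request.files["audio"].read()  # hypothetical field name
    result = asr(audio_bytes)  # the pipeline decodes raw audio bytes via ffmpeg
    return jsonify({"text": result["text"]})

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
```

A local TTS wrapper would follow the same pattern, exposing an endpoint that accepts text and returns synthesized audio.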
+
+ ### Text-to-Speech (TTS)
+
+ - **OpenAI Models**: The "tts-1" model from OpenAI is fast and produces human-like results, making it quite convenient for this use case.
+ - **Other open-source models**: These can be used too, but they may require a specific wrapper to align with the API requirements. In my experience, open-source models sound more robotic than OpenAI's.
+
+ ## Configuration via .env File
+
+ The tool uses a `.env` file for environment configuration. Here’s a breakdown of how this works:
+
+ - **API Keys**: Whether you are using OpenAI, Hugging Face, or other services, your API key must be specified in the `.env` file. This key should have the necessary permissions to access the models you intend to use.
+ - **Model URLs and Types**: Specify the API endpoint URLs for each model and their type (e.g., `OPENAI_API` for OpenAI models, `HF_API` for Hugging Face or local APIs).
+ - **Model Names**: Set the specific model name, such as `gpt-3.5-turbo` or `whisper-1`, to tell the application which model to interact with (a sketch of how these variables can be consumed follows this list).
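For illustration, here is one way such variables could be read at startup (a sketch using `python-dotenv`; the `model_config` helper is hypothetical, not the application's actual code):

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # read key=value pairs from .env into the environment

def model_config(prefix: str) -> dict:
    """Collect the URL/TYPE/NAME triple for one model, e.g. prefix="LLM"."""
    return {
        "url": os.getenv(f"{prefix}_URL"),
        "type": os.getenv(f"{prefix}_TYPE"),  # e.g. OPENAI_API or HF_API
        "name": os.getenv(f"{prefix}_NAME"),
    }

llm, stt, tts = (model_config(p) for p in ("LLM", "STT", "TTS"))
```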
+
+ ### Example Configuration
+
+ OpenAI LLM:
+ ```plaintext
+ OPENAI_API_KEY=sk-YOUR_OPENAI_API_KEY
+ LLM_URL=https://api.openai.com/v1
+ LLM_TYPE=OPENAI_API
+ LLM_NAME=gpt-3.5-turbo
+ ```
+
+ Hugging Face TTS:
+ ```plaintext
+ HF_API_KEY=hf_YOUR_HUGGINGFACE_API_KEY
+ TTS_URL=https://api-inference.huggingface.co/models/facebook/mms-tts-eng
+ TTS_TYPE=HF_API
+ TTS_NAME=Facebook-mms-tts-eng
+ ```
+
+ Local STT:
+ ```plaintext
+ HF_API_KEY=None
+ STT_URL=http://127.0.0.1:5000/transcribe
+ STT_TYPE=HF_API
+ STT_NAME=whisper-base.en
+ ```
+
+ You can configure each model separately. Find more examples in the `.env.example` files provided.
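Because each model is configured independently, the three snippets above can be mixed in a single `.env`, for example (assuming the local STT server ignores the Hugging Face key):

```plaintext
OPENAI_API_KEY=sk-YOUR_OPENAI_API_KEY
HF_API_KEY=hf_YOUR_HUGGINGFACE_API_KEY
LLM_URL=https://api.openai.com/v1
LLM_TYPE=OPENAI_API
LLM_NAME=gpt-3.5-turbo
STT_URL=http://127.0.0.1:5000/transcribe
STT_TYPE=HF_API
STT_NAME=whisper-base.en
TTS_URL=https://api-inference.huggingface.co/models/facebook/mms-tts-eng
TTS_TYPE=HF_API
TTS_NAME=Facebook-mms-tts-eng
```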
+
+ # Acknowledgements
+
+ The service is powered by Gradio, and the demo version is hosted on HuggingFace Spaces.
+
+ Even though the service can be used with a great variety of models, I want to specifically acknowledge a few of them:
+ - **OpenAI**: For models like GPT-3.5, GPT-4, Whisper, and TTS-1. More details on their models and usage policies can be found at [OpenAI's website](https://www.openai.com).
+ - **Meta**: For the Llama models, particularly Meta-Llama-3-70B-Instruct, as well as the Facebook-mms-tts-eng model. Visit [Meta AI](https://ai.facebook.com) for more information.
+ - **HuggingFace**: For a wide range of models and APIs that greatly enhance the flexibility of this tool. For specific details on usage, refer to [Hugging Face's documentation](https://huggingface.co).
+
+ Please ensure to review the specific documentation and follow the terms of service for each model and API you use, as this is crucial for responsible and compliant use of these technologies.
docs/instruction.py CHANGED
@@ -3,45 +3,40 @@
 instruction = {
     "demo": """
 <span style="color: red;">
- This is a demo version utilizing free API access with strict request limits. As a result, the experience may be slow, occasionally buggy, and not of the highest quality (e.g. robotic voice and very short problem and feedback). If a model is unavailable, please wait for a minute before retrying. Persistent unavailability may indicate that the request limit has been reached, making the demo temporarily inaccessible.
- For a significantly better experience, please run the service locally and use your own OpenAI key or HuggingFace models.
 </span>
-
- """,
 "introduction": """
- # Welcome to the AI Tech Interviewer Simulator!
 
- Welcome to the AI Tech Interviewer Training tool! This tool is designed to help you practice for coding interviews by simulating the real interview experience. It's perfect for brushing up on your skills in a realistic setting, although it's not meant to replace actual interview preparations like studying algorithms or practicing coding problems.
 
 ## Key Features
 
- - **Speech-First Interface**: You can talk to the tool just like you'd talk to a real interviewer. This makes practicing for your interviews more realistic.
- - **Various AI Models**: This tool uses 3 types of AI models:
 - **LLM (Large Language Model)**: Acts as the interviewer.
- - **Speech-to-Text and Text-to-Speech Models**: These help mimic a real conversation by converting spoken words to text and vice versa.
- - **Model Flexibility**: The tool works with many different models, including ones from OpenAI and open-source models from Hugging Face.
- - **Personal Project**: I created this tool as a fun way to experiment with AI models and to provide a helpful resource for interview practice.
 
 ## Planned Updates
 
- This is the first beta version of the service, and I have several updates planned to make this tool better:
-
- 1. **More Interview Types**: I will add new interview simulations, including Systems Design, Machine Learning System Design, Math and Logic, Behavioral Interviews, and Theory Tests for various tech areas.
- 2. **Streaming Mode for Models**: To make conversations smoother and more like real interviews, I'll switch to streaming mode for models. This will help with faster responses during the interviews.
- 3. **Testing More Models**: I'll test more open-source models to see what other capabilities can be added to enhance the tool's performance and flexibility.
- 4. **Improving the User Interface**: I plan to tweak the design and user interface to make it easier to navigate and use, ensuring a better experience for everyone.
- 5. **Adaptive Difficulty Settings**: Depending on how users perform, I'll adjust the difficulty of the problems automatically to match their skill level better.
- """,
 "quick_start": """
 # Running the AI Tech Interviewer Simulator
 
- This guide provides detailed instructions for setting up and running the AI Tech Interviewer Simulator either using Docker (recommended for simplicity) or running it locally.
 
 ## Initial Setup
 
 ### Clone the Repository
 
- First, clone the project repository to your local machine using the following command in your terminal:
 
 ```bash
 git clone https://huggingface.co/spaces/IliaLarchenko/interviewer
@@ -50,84 +45,52 @@ cd interviewer
 
 ### Configure the Environment
 
- Create a `.env` file from the provided example and edit it to include your OpenAI API key:
 
 ```bash
 cp .env.openai.example .env
- nano .env # You can use any other text editor
 ```
 
- Replace `OPENAI_API_KEY` in the `.env` file with your actual OpenAI API key.
-
- ## Option 1: Running with Docker
-
- ### Prerequisites
-
- - Ensure **Docker** and **Docker Compose** are installed on your system. Download and install them from Docker's [official site](https://www.docker.com/get-started).
 
 ### Build and Run the Docker Container
 
- Build and start the Docker container using the following commands:
 
 ```bash
 docker-compose build
 docker-compose up
 ```
 
- ### Access the Application
-
- The application will be accessible at `http://localhost:7860`. Open this URL in your browser to start using the AI Tech Interviewer Simulator.
-
- ## Option 2: Running Locally
-
- ### Prerequisites
 
- - Ensure you have **Python** installed on your system. Download and install it from [python.org](https://www.python.org).
 
- ### Set Up the Python Environment
-
- Create a virtual environment to isolate the package dependencies:
 
 ```bash
 python -m venv venv
 source venv/bin/activate
- ```
-
- ### Install Dependencies
-
- Install the required Python packages within the virtual environment:
-
- ```bash
 pip install -r requirements.txt
- ```
-
- ### Running the Application
-
- Start the server by executing:
-
- ```bash
 python app.py
 ```
 
- The application should now be accessible locally, typically at `http://localhost:7860`. Check your terminal output to confirm the URL.
- """,
 "interface": """
 # Interview Interface Overview
 
- The AI Tech Interviewer Training tool currently supports different types of interviews, with only the coding interview available at this time. To begin, select the corresponding tab at the top of the interface.
-
- ## Interface Components
-
- The interface is divided into four main sections, which you will navigate through sequentially:
 
 ### Setting
- In this section, you can configure the interview parameters such as difficulty, topic, and any specific requirements in a free text form. Once you've set your preferences, click the **"Generate a problem"** button to start the interview. The AI will then prepare a coding problem for you.
 
 ### Problem Statement
- After clicking **"Generate a problem"**, wait for less than 10 seconds, and the AI will present a coding problem in this section. Review the problem statement carefully to understand what is expected for your solution.
 
 ### Solution
- This is where the main interaction occurs:
 - **Code Area**: On the left side, you will find a space to write your solution. You can use any programming language, although syntax highlighting is only available for Python currently.
 - **Communication Area**: On the right, this area includes:
 - **Chat History**: Displays the entire dialogue history, showing messages from both you and the AI interviewer.
@@ -135,13 +98,15 @@ This is where the main interaction occurs:
 
 Engage with the AI as you would with a real interviewer. Provide concise responses and frequent updates rather than long monologues. Your interactions, including any commentary on your code, will be recorded and the AI's responses will be read aloud and displayed in the chat. Follow the AI's instructions and respond to any follow-up questions as they arise.
 
 ### Feedback
- Once the interview is completed, or if you decide to end it early, click the **"Finish the interview"** button. Detailed feedback will be provided in this section, helping you understand your performance and areas for improvement.
- """,
 "models": """
 # Models Configuration
 
- The AI Tech Interviewer Training tool utilizes three types of models: a Large Language Model (LLM) for simulating interviews, a Speech-to-Text (STT) model for audio processing, and a Text-to-Speech (TTS) model for auditory feedback. You can configure each model separately to tailor the experience based on your preferences and available resources.
 
 ## Flexible Model Integration
 
@@ -173,70 +138,66 @@ The tool uses a `.env` file for environment configuration. Here’s a breakdown
 
 ### Example Configuration
 
- For OpenAI models:
 ```plaintext
 OPENAI_API_KEY=sk-YOUR_OPENAI_API_KEY
 LLM_URL=https://api.openai.com/v1
 LLM_TYPE=OPENAI_API
 LLM_NAME=gpt-3.5-turbo
- STT_URL=https://api.openai.com/v1
- STT_TYPE=OPENAI_API
- STT_NAME=whisper-1
- TTS_URL=https://api.openai.com/v1
- TTS_TYPE=OPENAI_API
- TTS_NAME=tts-1
 ```
 
- For a Hugging Face model integration:
 ```plaintext
 HF_API_KEY=hf_YOUR_HUGGINGFACE_API_KEY
- LLM_URL=https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct/v1
- LLM_TYPE=HF_API
- LLM_NAME=Meta-Llama-3-70B-Instruct
- STT_URL=https://api-inference.huggingface.co/models/openai/whisper-tiny.en
- STT_TYPE=HF_API
- STT_NAME=whisper-tiny.en
 TTS_URL=https://api-inference.huggingface.co/models/facebook/mms-tts-eng
 TTS_TYPE=HF_API
 TTS_NAME=Facebook-mms-tts-eng
 ```
 
- For local models:
 ```plaintext
 HF_API_KEY=None
- LLM_URL=http://192.168.1.1:8080/v1
- LLM_TYPE=HF_API
- LLM_NAME=Meta-Llama-3-8B-Instruct
 STT_URL=http://127.0.0.1:5000/transcribe
 STT_TYPE=HF_API
 STT_NAME=whisper-base.en
- TTS_URL=http://127.0.0.1:5001/read
- TTS_TYPE=HF_API
- TTS_NAME=my-tts-model
 ```
 
- This section provides a comprehensive guide on how to configure and integrate different AI models into the tool, including handling the `.env` configuration file and adapting it to various sources.
- """,
 "acknowledgements": """
 # Acknowledgements
 
- This tool is powered by Gradio, enabling me to create an easy-to-use interface for AI-based interview practice. I thank Gradio for their fantastic platform.
-
- ## Thanks to the Model Providers
-
- While this tool can integrate various AI models, I primarily utilize and sincerely appreciate technologies provided by the following organizations:
 
 - **OpenAI**: For models like GPT-3.5, GPT-4, Whisper, and TTS-1. More details on their models and usage policies can be found at [OpenAI's website](https://www.openai.com).
- - **Meta**: For the Llama models, particularly the Meta-Llama-3-70B-Instruct and Meta-Llama-3-8B-Instruct, crucial for advanced language processing. Visit [Meta AI](https://ai.facebook.com) for more information.
 - **HuggingFace**: For a wide range of models and APIs that greatly enhance the flexibility of this tool. For specific details on usage, refer to [Hugging Face's documentation](https://huggingface.co).
 
 Please ensure to review the specific documentation and follow the terms of service for each model and API you use, as this is crucial for responsible and compliant use of these technologies.
-
- ## Other Models
-
- This tool is designed to be adaptable, allowing the integration of other models that comply with the APIs of the major providers listed. This enables the tool to be continually enhanced and tailored to specific needs.
-
- I hope this tool assists you effectively in preparing for your interviews by leveraging these advanced technologies.
-
 """,
 }
 instruction = {
     "demo": """
 <span style="color: red;">
+ This demo uses a free-tier server and free API access with strict request limits and limited capabilities for some models. For a significantly better experience, run the service locally. The demo's performance is worse than that of a locally running service (slow, buggy, robotic voice, too-short messages, etc.). If a model is unavailable, please wait a minute before retrying. Persistent unavailability might mean that the request limit has been reached and the demo is unavailable for a while.
 </span>
+ """,
 "introduction": """
+ # Welcome to the AI Mock Interviewer!
 
+ This tool is designed to help you practice coding interviews by simulating the real interview experience. Here you can brush up your interview skills in a realistic setting, although it's not intended to replace thorough preparation like studying algorithms or practicing coding problems.
 
 ## Key Features
 
+ - **Speech-First Interface**: Talk to the AI just like you would with a real interviewer. This makes your practice sessions feel more realistic.
+ - **Various AI Models**: The tool uses three types of AI models:
 - **LLM (Large Language Model)**: Acts as the interviewer.
+ - **Speech-to-Text and Text-to-Speech Models**: These help mimic real conversations by converting spoken words to text and vice versa.
+ - **Model Flexibility**: The tool works with many different models, including those from OpenAI and open-source models from Hugging Face.
 
 ## Planned Updates
 
+ This is just the first beta version, and I'm working on enhancing this tool. Planned updates include:
+ 1. **More Interview Types**: Adding simulations like Systems Design, Machine Learning System Design, Math and Logic, Behavioral Interviews, and Theory Tests.
+ 2. **Streaming Mode for Models**: Updating the models to provide faster responses during interviews.
+ 3. **Testing More Models**: Exploring additional open-source models to enhance the tool’s performance and flexibility.
+ 4. **Improving the User Interface**: Making it easier to navigate and use, ensuring a better experience for all users.
+ """,
 "quick_start": """
 # Running the AI Tech Interviewer Simulator
 
+ To get the full experience, you should run the service locally and use your own API key or a local model.
 
 ## Initial Setup
 
 ### Clone the Repository
 
+ First, clone the project repository to your local machine with the following commands:
 
 ```bash
 git clone https://huggingface.co/spaces/IliaLarchenko/interviewer
 cd interviewer
 ```
 
 ### Configure the Environment
 
+ Create a `.env` file from the provided OpenAI example and edit it to include your OpenAI API key (learn how to get one here: https://platform.openai.com/api-keys):
 
 ```bash
 cp .env.openai.example .env
+ nano .env # You can use any text editor
 ```
 
+ If you want to use any other model, follow the instructions in the Models Configuration section.
 
 ### Build and Run the Docker Container
 
+ To build and start the Docker container:
 
 ```bash
 docker-compose build
 docker-compose up
 ```
 
+ The application will be accessible at `http://localhost:7860`.
 
+ ### Running Locally (alternative)
 
+ Set up a Python environment and install dependencies to run the application locally:
 
 ```bash
 python -m venv venv
 source venv/bin/activate
 pip install -r requirements.txt
 python app.py
 ```
 
+ The application should now be accessible at `http://localhost:7860`.
+ """,
 "interface": """
 # Interview Interface Overview
 
+ This tool will support different types of interviews, but it currently focuses on coding interviews only. Here's how to navigate the interface:
 
 ### Setting
+ Configure the interview settings such as difficulty, topic, and any specific requirements. Start the interview by clicking the **"Generate a problem"** button.
 
 ### Problem Statement
+ The AI will present a coding problem after you initiate the session.
 
 ### Solution
+ This section is where the interaction happens:
 - **Code Area**: On the left side, you will find a space to write your solution. You can use any programming language, although syntax highlighting is only available for Python currently.
 - **Communication Area**: On the right, this area includes:
 - **Chat History**: Displays the entire dialogue history, showing messages from both you and the AI interviewer.
 
 Engage with the AI as you would with a real interviewer. Provide concise responses and frequent updates rather than long monologues. Your interactions, including any commentary on your code, will be recorded and the AI's responses will be read aloud and displayed in the chat. Follow the AI's instructions and respond to any follow-up questions as they arise.
 
+ Once the interview is completed, or if you decide to end it early, click the **"Finish the interview"** button.
+
 ### Feedback
+ Detailed feedback will be provided in this section, helping you understand your performance and areas for improvement.
+ """,
 "models": """
 # Models Configuration
 
+ This tool utilizes three types of AI models: a Large Language Model (LLM) for simulating interviews, a Speech-to-Text (STT) model for audio processing, and a Text-to-Speech (TTS) model for auditory feedback. You can configure each model separately to tailor the experience based on your preferences and available resources.
 
 ## Flexible Model Integration
 
 ### Example Configuration
 
+ OpenAI LLM:
 ```plaintext
 OPENAI_API_KEY=sk-YOUR_OPENAI_API_KEY
 LLM_URL=https://api.openai.com/v1
 LLM_TYPE=OPENAI_API
 LLM_NAME=gpt-3.5-turbo
 ```
 
+ Hugging Face TTS:
 ```plaintext
 HF_API_KEY=hf_YOUR_HUGGINGFACE_API_KEY
 TTS_URL=https://api-inference.huggingface.co/models/facebook/mms-tts-eng
 TTS_TYPE=HF_API
 TTS_NAME=Facebook-mms-tts-eng
 ```
 
+ Local STT:
 ```plaintext
 HF_API_KEY=None
 STT_URL=http://127.0.0.1:5000/transcribe
 STT_TYPE=HF_API
 STT_NAME=whisper-base.en
 ```
 
+ You can configure each model separately. Find more examples in the `.env.example` files provided.
+ """,
 "acknowledgements": """
 # Acknowledgements
 
+ The service is powered by Gradio, and the demo version is hosted on HuggingFace Spaces.
 
+ Even though the service can be used with a great variety of models, I want to specifically acknowledge a few of them:
 - **OpenAI**: For models like GPT-3.5, GPT-4, Whisper, and TTS-1. More details on their models and usage policies can be found at [OpenAI's website](https://www.openai.com).
+ - **Meta**: For the Llama models, particularly Meta-Llama-3-70B-Instruct, as well as the Facebook-mms-tts-eng model. Visit [Meta AI](https://ai.facebook.com) for more information.
 - **HuggingFace**: For a wide range of models and APIs that greatly enhance the flexibility of this tool. For specific details on usage, refer to [Hugging Face's documentation](https://huggingface.co).
 
 Please ensure to review the specific documentation and follow the terms of service for each model and API you use, as this is crucial for responsible and compliant use of these technologies.
 """,
 }
+
+ if __name__ == "__main__":
+     spaces_config = """---
+ title: Interviewer
+ emoji: 📚
+ colorFrom: pink
+ colorTo: yellow
+ sdk: gradio
+ sdk_version: 4.27.0
+ app_file: app.py
+ pinned: true
+ license: apache-2.0
+ ---
+
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+
+ """
+     with open("README.md", "w") as f:
+         f.write(spaces_config)
+
+         for key in ("introduction", "quick_start", "interface", "models", "acknowledgements"):
+             f.write(instruction[key])
+             f.write("\n\n")
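Given the `__main__` block above, the README can presumably be regenerated from these instruction strings by running the module directly from the repository root:

```bash
python docs/instruction.py
```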