Spaces: Sleeping
alwinvargheset@outlook.com committed
Commit • c5aee4e
Parent(s): 87cc8bc
added chat-bot
Browse files
- Behavioural.json +27 -0
- Culture Fit.json +27 -0
- Homepage.py +89 -0
- STAR.json +27 -0
- Technical.json +33 -0
- dataset.json +21 -0
- pages/1_Introduction Round.py +158 -0
- pages/2_Warm Up Round.py +200 -0
- pages/3_Interview Round.py +195 -0
- requirements.txt +0 -0
- resumeicon.png +0 -0
- rex.png +0 -0
- secret_key.py +3 -0
- templates.py +136 -0
- user.png +0 -0
- utils.py +12 -0
Behavioural.json
ADDED
@@ -0,0 +1,27 @@
+[
+  {
+    "question": "How do you stay updated on the latest trends in data analytics and machine learning?",
+    "type": "Behavioral",
+    "answer/intent": "The intent behind this question is to understand the candidate's approach to continuous learning in a rapidly evolving field. Look for keywords such as stay updated, research papers, reputable blogs, and online communities."
+  },
+  {
+    "question": "How do you approach explaining complex data findings to non-technical stakeholders?",
+    "type": "Behavioral",
+    "answer/intent": "The intent behind this question is to evaluate the candidate's communication skills and ability to convey complex information to non-technical stakeholders. Look for keywords such as explain complex data, visualizations, storytelling, and business impact."
+  },
+  {
+    "question": "How do you handle missing or incomplete data in a dataset, and what impact can it have on your analysis?",
+    "type": "Behavioral",
+    "answer/intent": "This question assesses the candidate's data preprocessing skills and their understanding of the implications of missing data. Look for responses that mention techniques like imputation, analysis of missing data patterns, and an awareness of potential biases introduced by missing data."
+  },
+  {
+    "question": "Describe a real-world scenario where you had to deal with a large dataset. How did you approach handling and analyzing it efficiently?",
+    "type": "Behavioral",
+    "answer/intent": "This question assesses the candidate's practical experience with large datasets and their ability to manage and analyze data efficiently. Look for responses that mention tools or frameworks used for handling large datasets, strategies for optimizing performance, and any challenges faced during the process."
+  },
+  {
+    "question": "Can you provide an example of a data visualization you created to uncover meaningful insights, and how did it contribute to the overall understanding of the data?",
+    "type": "Behavioral",
+    "answer/intent": "This question assesses the candidate's data visualization skills and their ability to communicate insights effectively. Look for details on the choice of visualization tools, the story conveyed through the visualization, and the impact it had on decision-making or understanding the dataset."
+  }
+]
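Each question file added in this commit shares the same three-key entry schema (`question`, `type`, `answer/intent`). A minimal stdlib sketch of a schema check for such files — the helper name is illustrative, not part of the commit:

```python
import json

REQUIRED_KEYS = {"question", "type", "answer/intent"}

def validate_questions(raw: str) -> list[dict]:
    """Parse a question-bank JSON string and verify every entry carries the expected keys."""
    entries = json.loads(raw)
    for i, entry in enumerate(entries):
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {i} is missing keys: {sorted(missing)}")
    return entries

sample = '[{"question": "Q?", "type": "Behavioral", "answer/intent": "Look for ..."}]'
print(len(validate_questions(sample)))  # 1
```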
Culture Fit.json
ADDED
@@ -0,0 +1,27 @@
+[
+  {
+    "question": "How do you contribute to creating a positive and collaborative team environment?",
+    "type": "Cultural Fit",
+    "answer/intent": "The intent behind this question is to assess the candidate's teamwork and collaboration skills. Look for keywords such as positive team environment, collaboration, and open communication."
+  },
+  {
+    "question": "Can you share an example of a time when you provided support or assistance to a team member facing challenges?",
+    "type": "Cultural Fit",
+    "answer/intent": "This question gauges the candidate's willingness to support team members in challenging situations. Look for responses that demonstrate empathy, teamwork, and a proactive attitude in helping others overcome challenges."
+  },
+  {
+    "question": "How do you handle disagreements or conflicts within a team, and what steps do you take to resolve them?",
+    "type": "Cultural Fit",
+    "answer/intent": "The intent behind this question is to evaluate the candidate's conflict resolution skills and approach to maintaining a positive team dynamic. Look for responses that emphasize open communication, active listening, and a constructive resolution process."
+  },
+  {
+    "question": "In what ways do you promote diversity and inclusion within a team or workplace?",
+    "type": "Cultural Fit",
+    "answer/intent": "This question assesses the candidate's commitment to fostering a diverse and inclusive work environment. Look for responses that highlight initiatives or actions taken to promote diversity, equity, and inclusion, both in and outside of work projects."
+  },
+  {
+    "question": "How do you celebrate team achievements and milestones, and why do you think it's important?",
+    "type": "Cultural Fit",
+    "answer/intent": "This question explores the candidate's perspective on acknowledging and celebrating team successes. Look for responses that emphasize the importance of recognizing achievements, fostering a positive team culture, and motivating team members."
+  }
+]
Homepage.py
ADDED
@@ -0,0 +1,89 @@
+import streamlit as st
+from secret_key import openapi_key, groq_api_key
+import os
+from groq import Groq
+from langchain_groq import ChatGroq
+
+from PyPDF2 import PdfReader
+from langchain.text_splitter import CharacterTextSplitter
+from langchain.embeddings import HuggingFaceEmbeddings
+from langchain.vectorstores import FAISS
+from langchain.chains.question_answering import load_qa_chain
+
+
+st.set_page_config(page_icon='rex.png', layout='wide', page_title='Interview Preparation : Getting Started')
+
+st.sidebar.markdown("Navigate using the options above")
+#key = st.sidebar.text_input("Groq API Key ", type="password")
+
+if "groq_key" not in st.session_state:
+    st.session_state.groq_key = groq_api_key
+
+#if not key and not st.session_state.groq_key:
+#    st.sidebar.info("Please add your API key to continue")
+#    st.stop()
+
+#if key:
+#    st.session_state.groq_key = key
+
+os.environ['GROQ_API_KEY'] = groq_api_key
+llm = ChatGroq(
+    groq_api_key=groq_api_key,
+    model_name="mixtral-8x7b-32768"
+)
+
+st.title("Interview AI Tool : Getting Started")
+st.header("Recommended Steps : ")
+
+st.markdown("""\n1. Please upload your **resume** in the sidebar on your **left**.
+\n\n2. If you are applying for a specific job, please add the **job description** in the text box **below**.
+\n\n3. For starters we recommend navigating to the **Introduction Round**; here your AI assistant will debrief you
+on the interview and answer your queries related to the interview.
+\n\n4. Next, we recommend having a go with a low-stakes **Warmup Round** to get you in the right flow for the
+actual interview round.
+\n\n5. Navigate to the **Interview Round** to get started with your practice interviews.\n\n""")
+
+st.sidebar.header("Resume")
+resume = st.sidebar.file_uploader(label="**Upload your Resume/CV PDF file**", type='pdf')
+
+if resume:
+    pdf = PdfReader(resume)
+
+    text = ""
+    for page in pdf.pages:
+        text += page.extract_text()
+
+    text_splitter = CharacterTextSplitter(
+        separator="\n",
+        chunk_size=1000,
+        chunk_overlap=200,
+        length_function=len
+    )
+    chunks = text_splitter.split_text(text)
+    embeddings = HuggingFaceEmbeddings()
+    doc = FAISS.from_texts(chunks, embeddings)
+
+    chain = load_qa_chain(llm, chain_type="stuff")
+
+    name = chain.run(input_documents=doc.similarity_search("What is the person's name?"), question="What is the person's name")
+    #exp = chain.run(input_documents=doc.similarity_search("What is the professional experience?"), question="What is the professional experience?")
+    skills = chain.run(input_documents=doc.similarity_search("What are the person's skills?"), question="What are the person's skills?")
+    #certs = chain.run(input_documents=doc.similarity_search("What is the person's courses/certifications?"), question="What is the person's courses/certifications?")
+    #projects = chain.run(input_documents=doc.similarity_search("What are the person's projects"), question="What are the person's projects")
+
+    resume_info = {"Name": name,
+                   #"Experience": exp,
+                   "Skills": skills,
+                   #"Certifications": certs,
+                   #"Projects": projects
+                   }
+
+
+    st.session_state["Resume Info"] = resume_info
+    st.sidebar.info("PDF Read Successfully!")
+
+
+st.header("Job Details")
+
+st.session_state["Job Description"] = st.text_area(label="**Write your job description here**", height=300)
+
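The `CharacterTextSplitter` above cuts the resume into roughly 1000-character chunks with a 200-character overlap, so text falling near a chunk boundary still appears intact in the neighbouring chunk. A stdlib-only sketch of that sliding-window idea (a simplification: the real splitter also respects the `\n` separator):

```python
def sliding_chunks(text: str, chunk_size: int = 1000, chunk_overlap: int = 200) -> list[str]:
    """Cut text into fixed-size windows; each window starts chunk_size - chunk_overlap
    characters after the previous one, so consecutive windows share chunk_overlap characters."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - chunk_overlap, 1), step)]

chunks = sliding_chunks("x" * 2500)
print([len(c) for c in chunks])  # [1000, 1000, 900]
```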
STAR.json
ADDED
@@ -0,0 +1,27 @@
+[
+  {
+    "question": "Using the STAR format, describe a specific situation where you successfully implemented a machine learning model to solve a business problem. Outline the key tasks you undertook, actions you implemented, and the measurable results achieved.",
+    "type": "STAR",
+    "answer/intent": "This STAR question assesses the candidate's hands-on experience with machine learning in a business context. Look for responses that clearly present the Situation, Task, Action, and Result, highlighting their role, specific tasks performed, actions taken, and the measurable impact of the machine learning solution."
+  },
+  {
+    "question": "In a previous role, share a situation where you had to present complex data findings to a non-technical audience. Utilizing the STAR format, detail the specific tasks involved, actions you took, and the overall impact on stakeholder understanding and decision-making.",
+    "type": "STAR",
+    "answer/intent": "The intent behind this STAR question is to evaluate the candidate's communication skills and ability to convey complex information. Look for responses that structure their answer using the Situation, Task, Action, and Result framework, emphasizing the challenges faced, specific tasks performed, actions taken to simplify complex data, and the impact on stakeholder decision-making."
+  },
+  {
+    "question": "Describe a challenging data preprocessing task you encountered in a data analytics project, applying the STAR format. Outline the steps you took to address the challenge, your role in the task, and the impact on the overall data analysis.",
+    "type": "STAR",
+    "answer/intent": "This STAR question aims to assess the candidate's data preprocessing skills and problem-solving approach. Look for responses that clearly delineate the Situation, Task, Action, and Result, providing insights into the specific challenges faced, the candidate's role, actions taken to address the challenge, and the impact on the overall data analysis."
+  },
+  {
+    "question": "Using the STAR format, describe a scenario where you had to lead a data analytics project. Outline the key tasks you undertook, actions you implemented to ensure project success, and the measurable results achieved.",
+    "type": "STAR",
+    "answer/intent": "This STAR question evaluates the candidate's project leadership skills in a data analytics context. Look for responses that follow the Situation, Task, Action, and Result structure, highlighting the candidate's leadership role, specific tasks performed, actions taken to ensure project success, and the measurable results achieved."
+  },
+  {
+    "question": "Share an instance where you identified a critical error in a dataset during a data analysis project. Applying the STAR format, detail the Situation, the specific Tasks you performed to rectify the error, Actions you took, and the overall impact on the project outcomes.",
+    "type": "STAR",
+    "answer/intent": "This STAR question assesses the candidate's attention to detail and problem-solving skills when dealing with data errors. Look for responses that present a clear Situation, Task, Action, and Result, outlining the specific error identified, tasks performed to rectify it, actions taken, and the impact on the overall project outcomes."
+  }
+]
Technical.json
ADDED
@@ -0,0 +1,33 @@
+[
+  {
+    "question": "Explain the concept of 'p-value' in statistics and its significance in hypothesis testing.",
+    "type": "Technical",
+    "answer/intent": "The p-value is the probability of obtaining results as extreme or more extreme than the observed results under the assumption that the null hypothesis is true. A lower p-value suggests stronger evidence against the null hypothesis."
+  },
+  {
+    "question": "Describe a situation where you applied clustering techniques in a data analysis project.",
+    "type": "Technical",
+    "answer/intent": "I applied clustering to group similar data points together based on certain features. For example, in customer segmentation, I used k-means clustering to identify distinct customer groups. Keywords: clustering techniques, grouping, k-means clustering."
+  },
+  {
+    "question": "How do you handle imbalanced datasets in machine learning, and why is it important?",
+    "type": "Technical",
+    "answer/intent": "I handle imbalanced datasets by using techniques like oversampling, undersampling, or using algorithms that handle imbalanced classes. It's important because imbalanced datasets can lead to biased models, and these techniques help in achieving better model performance. Keywords: imbalanced datasets, oversampling, undersampling, biased models."
+  },
+  {
+    "question": "Explain the importance of cross-validation in machine learning and how it works.",
+    "type": "Technical",
+    "answer/intent": "Cross-validation is important for assessing a model's performance on multiple subsets of the data. It works by splitting the dataset into training and testing sets multiple times, allowing for a more robust evaluation of the model. Keywords: cross-validation, model performance, training set, testing set."
+  },
+  {
+    "question": "Describe a project where you had to extract, transform, and load (ETL) data from various sources.",
+    "type": "Technical",
+    "answer/intent": "I worked on a project where I extracted data from multiple databases, transformed it to a common format, and loaded it into a centralized data warehouse. This ensured a unified and accessible data source for analysis. Keywords: ETL, extract, transform, load, data warehouse."
+  },
+  {
+    "question": "In what situations would you use a box plot, and what information does it provide?",
+    "type": "Technical",
+    "answer/intent": "I would use a box plot to visualize the distribution of a dataset and identify outliers. It provides information about the median, quartiles, and potential skewness or outliers in the data. Keywords: box plot, distribution, outliers, median, quartiles."
+  }
+]
+
dataset.json
ADDED
@@ -0,0 +1,21 @@
+[
+
+  {
+    "question": "How do you stay updated on the latest trends in data analytics and machine learning?",
+    "type": "Behavioral",
+    "answer/intent": "The intent behind this question is to understand the candidate's approach to continuous learning in a rapidly evolving field. Look for keywords such as stay updated, research papers, reputable blogs, and online communities."
+  },
+
+  {
+    "question": "How do you approach explaining complex data findings to non-technical stakeholders?",
+    "type": "Behavioral",
+    "answer/intent": "The intent behind this question is to evaluate the candidate's communication skills and ability to convey complex information to non-technical stakeholders. Look for keywords such as explain complex data, visualizations, storytelling, and business impact."
+  },
+
+  {
+    "question": "How do you contribute to creating a positive and collaborative team environment?",
+    "type": "Cultural Fit",
+    "answer/intent": "The intent behind this question is to assess the candidate's teamwork and collaboration skills. Look for keywords such as positive team environment, collaboration, and open communication."
+  }
+]
+
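The Warm Up page in this commit imports `select_questions` from `utils.py` (12 lines, body not shown in this view). A purely hypothetical sketch of what such a helper could look like, assuming each category maps to its matching `<category>.json` file in the repo root:

```python
import json
from pathlib import Path

def select_questions(category: str) -> list[dict]:
    """Hypothetical sketch: load the question bank whose filename matches the category,
    e.g. 'Technical' -> Technical.json, 'STAR' -> STAR.json."""
    with Path(f"{category}.json").open(encoding="utf-8") as f:
        return json.load(f)
```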
pages/1_Introduction Round.py
ADDED
@@ -0,0 +1,158 @@
+import streamlit as st
+from groq import Groq
+from langchain.prompts import PromptTemplate
+from langchain.chains import LLMChain
+import os
+from langchain_groq import ChatGroq
+
+from secret_key import groq_api_key
+import pandas as pd
+from langchain.schema import (AIMessage, HumanMessage, SystemMessage)
+from langchain.prompts.chat import (
+    ChatPromptTemplate,
+    SystemMessagePromptTemplate,
+    HumanMessagePromptTemplate
+)
+from langchain.memory import ConversationBufferMemory
+from langchain.memory import ConversationBufferWindowMemory
+import time, random
+
+st.set_page_config(page_icon='rex.png', layout='wide')
+
+st.title("Introduction Round : Getting Familiar")
+st.info("""
+Hey there! In the Introduction Round, we aim to get to know you better and create a comfortable environment for a productive
+interview experience. We'll begin by explaining the interview structure, providing you with a clear roadmap of what to
+expect. Following this, we'll kick things off with an icebreaker question to break the ice and ease you into the
+conversation. Moving forward, we'll explore your professional background, educational journey, and delve into your
+skills and strengths. You'll have the opportunity to share your career goals and aspirations, allowing us to understand
+the unique qualities you bring to the table. If there are any specific achievements or points you'd like to highlight,
+this is the moment to shine. As we approach the conclusion of the round, we'll wrap up with a closing discussion and
+seamlessly transition to the next stage. This round is designed to be informative, engaging, and to help you showcase
+your best self. Let's embark on this journey together!""", icon="🤖")
+
+if not st.session_state.groq_key:
+    st.info("Please add your API key to continue")
+    st.stop()
+
+
+
+if "Resume Info" not in st.session_state or not st.session_state["Resume Info"]:
+    st.info("Please upload your Resume")
+    st.stop()
+
+os.environ['GROQ_API_KEY'] = st.session_state.groq_key
+
+# Initialize Groq client
+client = ChatGroq(
+    groq_api_key=groq_api_key,
+    model_name="mixtral-8x7b-32768"
+)
+memory = ConversationBufferMemory(
+    memory_key="history",
+    return_messages=True
+)
+
+
+system_template_q = """You are to take the user through a guided introduction session before an interview. This session is divided into the following rounds/stages:
+
+You are to choose just ONE round based on the conversation from the previous round : {previous}
+
+1. Welcome Message
+2. Explain the Interview Structure
+3. Professional Background
+4. Educational Background
+5. Skills and Strengths
+6. Goals and Aspirations
+7. Any Specific Points to Highlight
+8. Closing and Transition
+
+
+
+Use the previous round info to choose the next question. For example, if the previous round asked about skills and strengths,
+the next question should be about goals and aspirations. Do not give all of the information above at the same time. ONLY ask/give info with respect to the round.
+
+
+Relevant information related to the interview :
+
+The interview process will contain three types of questions :
+1. Technical questions, testing hard skills.
+2. Behavioral questions to assess the candidate's personality, work style, and soft skills.
+3. Cultural Fit questions to assess the candidate's fit with the company culture.
+
+Instruct the user that they can do a practice round if they navigate to the Warm Up round section
+of the app, and then they can do actual interviews by navigating to the Interview round section,
+where they will be provided live feedback and a score for their responses. The user can also repeat
+the questions if they want to improve the response.
+
+Answer to the best of your abilities, but do not make any information up.
+
+Use this information about the user to address them and use relevant details : {user_info}
+
+Before giving your output, make sure it is only related to that specific round; do not print out all of the rounds and ask everything at once.
+
+Use this logic for your output :
+
+The previous round was which round? Which round should I choose, and what question should I ask for that round?
+
+Use the past messages : {messages} , to make sure no question is repeated, where the assistant messages are your previous messages.
+Do not ask about one specific topic too much; ask questions, let the user respond, and move on to the next. Embolden any key words in your response by
+enclosing the word in **.
+
+
+"""
+
+
+system_message_prompt_q = SystemMessagePromptTemplate.from_template(system_template_q)
+
+human_template_q = "{text}"
+human_message_prompt_q = HumanMessagePromptTemplate.from_template(human_template_q)
+
+chat_prompt_q = ChatPromptTemplate.from_messages([system_message_prompt_q, human_message_prompt_q])
+
+intro_chain = LLMChain(llm=client, prompt=chat_prompt_q)
+
+
+if "round" not in st.session_state:
+    st.session_state["round"] = 1
+
+if "intro_messages" not in st.session_state:
+    st.session_state["intro_messages"] = []
+    st.session_state['intro_messages'].append({'role': 'assistant', 'content': "Hello! Welcome to the interview. I'm here to help you through the process. In this guided introduction "
+                                               "session, we'll explore different aspects of your background. By the end, you'll have a "
+                                               "chance to practice and improve your interview skills. Let's begin! How are you doing today?"})
+
+for intro_message in st.session_state["intro_messages"]:
+    if intro_message['role'] == "assistant":
+        avatar = "rex.png"
+    else:
+        avatar = "user.png"
+    with st.chat_message(intro_message['role'], avatar=avatar):
+        st.markdown(intro_message['content'])
+
+
+if query := st.chat_input("Type here to talk to AI assistant"):
+    with st.chat_message("user", avatar="user.png"):
+        st.markdown(query)
+
+    st.session_state['intro_messages'].append({'role': 'user', 'content': query})
+
+    if query is not None:
+        reply = intro_chain.run(text=query, user_info=st.session_state["Resume Info"], previous=st.session_state["intro_messages"][-2],
+                                messages=st.session_state["intro_messages"])
+        with st.chat_message("assistant", avatar="rex.png"):
+            message_placeholder = st.empty()
+            full_response = ""
+            for chunk in reply.split():
+                full_response += chunk + " "
+                time.sleep(0.05)
+                # Add a blinking cursor to simulate typing
+                message_placeholder.markdown(full_response + "▌")
+            message_placeholder.markdown(full_response)
+            #st.markdown(reply)
+
+        st.session_state['intro_messages'].append({'role': 'assistant', 'content': reply})
+
+
+        if "round" in st.session_state and st.session_state["round"] < 9:
+            st.session_state["round"] = st.session_state["round"] + 1
pages/2_Warm Up Round.py
ADDED
@@ -0,0 +1,200 @@
+import streamlit as st
+from groq import Groq
+from langchain.prompts import PromptTemplate
+from langchain.chains import LLMChain
+import os
+from langchain_groq import ChatGroq
+from secret_key import groq_api_key
+import pandas as pd
+from langchain.schema import (AIMessage, HumanMessage, SystemMessage)
+from langchain.prompts.chat import (
+    ChatPromptTemplate,
+    SystemMessagePromptTemplate,
+    HumanMessagePromptTemplate
+)
+from langchain.memory import ConversationBufferMemory
+from langchain.memory import ConversationBufferWindowMemory
+import json, time, random
+from templates import choose_template, extract_template, warmup_feedback_template, warm_up_question_template
+from utils import select_questions
+
+st.set_page_config(page_icon='rex.png', layout='wide')
+
+st.title("Warm Up Round : Getting Comfortable with the Interview")
+
+category = st.selectbox("Which type of questions do you want to practice?",
+                        ['Technical', 'Behavioural', 'Culture Fit', 'STAR'], index=None)
+
+while category is None:
+    st.info('Please select question category')
+    st.stop()
+
+if category:
+    data = select_questions(category=category)
+
+
+if not st.session_state.groq_key:
+    st.info("Please add your API key to continue")
+    st.stop()
+
+if not st.session_state["Resume Info"]:
+    st.info("Please upload your Resume")
+    st.stop()
+
+if not st.session_state["Job Description"]:
+    st.info("Please add your job description")
+    st.stop()
+
+os.environ['GROQ_API_KEY'] = st.session_state.groq_key
+
+client = ChatGroq(
+    groq_api_key=groq_api_key,
+    model_name="mixtral-8x7b-32768"
+)
+### Extract previously asked Questions from the history
+memory = ConversationBufferMemory(
+    memory_key="history",
+    return_messages=True
+)
+
+
+system_template_e = extract_template
+
+system_message_prompt_e = SystemMessagePromptTemplate.from_template(system_template_e)
+
+human_template_e = "{text}"
+human_message_prompt_e = HumanMessagePromptTemplate.from_template(human_template_e)
+
+chat_prompt_e = ChatPromptTemplate.from_messages([system_message_prompt_e, human_message_prompt_e])
+
+extract_chain = LLMChain(llm=client, prompt=chat_prompt_e)
+
+### Choose question based on action
+
+system_template_c = choose_template
+
+system_message_prompt_c = SystemMessagePromptTemplate.from_template(system_template_c)
+
+human_template_c = "{text}"
+human_message_prompt_c = HumanMessagePromptTemplate.from_template(human_template_c)
+
+chat_prompt_c = ChatPromptTemplate.from_messages([system_message_prompt_c, human_message_prompt_c])
+
+choose_chain = LLMChain(llm=client, prompt=chat_prompt_c)
+
+### Asking the questions
+system_template_q = warm_up_question_template
+
+system_message_prompt_q = SystemMessagePromptTemplate.from_template(system_template_q)
+
+human_template_q = "{text}"
+human_message_prompt_q = HumanMessagePromptTemplate.from_template(human_template_q)
+
+chat_prompt_q = ChatPromptTemplate.from_messages([system_message_prompt_q, human_message_prompt_q])
+
+question_chain = LLMChain(llm=client, prompt=chat_prompt_q)
+
+### Provide Feedback
+
+system_template_f = warmup_feedback_template
+
+
+system_message_prompt_f = SystemMessagePromptTemplate.from_template(system_template_f)
+
+human_template_f = "{text}"
+human_message_prompt_f = HumanMessagePromptTemplate.from_template(human_template_f)
+
+chat_prompt_f = ChatPromptTemplate.from_messages([system_message_prompt_f, human_message_prompt_f])
+
+feedback_chain = LLMChain(llm=client, prompt=chat_prompt_f)
+
+if "warmup_message" not in st.session_state:
+    st.session_state.warmup_message = []
+
+if "action" not in st.session_state:
+    st.session_state.action = "Next"
+
+
+if "history" not in st.session_state:
+    st.session_state.history = []
+
+if "questions" not in st.session_state:
+    st.session_state.questions = []
+
+for message in st.session_state.warmup_message:
+    if message['role'] == "user":
+        name = "user"
+        avatar = "user.png"
+    else:
+        name = "assistant"
+        avatar = "rex.png"
+
+    with st.chat_message(name, avatar=avatar):
+        st.markdown(f"{message['content']}")
+
+if inp := st.chat_input("Type here"):
+    with st.chat_message("user", avatar='user.png'):
+        st.markdown(inp)
+    st.session_state['warmup_message'].append({'role': 'user', 'content': inp})
+
+question = None
+
+if st.session_state.warmup_message != [] and st.session_state.warmup_message[-1]['role'] == "feedback":
+    option = st.radio(label="Which question would you like to do?", options=["Next", "Repeat"], index=None)
+    while option is None:
+        pass
+    st.session_state.action = option
+
+if st.session_state.action in ("Next", "Repeat") and (
+        st.session_state.warmup_message == [] or st.session_state.warmup_message[-1]['role'] == "feedback"):
+
+    if st.session_state.questions != []:
+        extracts = extract_chain.run(history=st.session_state.questions, text="")
+    else:
+        extracts = "No previous Questions"
+
+    chosen_q = choose_chain.run(action=st.session_state.action, questions=extracts, data=data, text="", details=st.session_state["Resume Info"], description=st.session_state['Job Description'])
+    response = question_chain.run(question=chosen_q, history=st.session_state.history[-2:], text=inp, details=st.session_state["Resume Info"])
+
+    with st.chat_message("assistant", avatar='rex.png'):
+
+        message_placeholder = st.empty()
+        full_response = ""
+        for chunk in response.split():
+            full_response += chunk + " "
+            time.sleep(0.05)
+            # Add a blinking cursor to simulate typing
|
167 |
+
message_placeholder.markdown(full_response + "▌")
|
168 |
+
message_placeholder.markdown(full_response)
|
169 |
+
|
170 |
+
#st.markdown(response)
|
171 |
+
|
172 |
+
st.session_state.action = "Feedback"
|
173 |
+
st.session_state['warmup_message'].append({'role': 'interviewer', 'content': response})
|
174 |
+
memory.save_context({"input": ""}, {"output": response})
|
175 |
+
st.session_state['history'].append(memory.buffer_as_messages[-2:])
|
176 |
+
st.session_state['questions'].append({'Question': response})
|
177 |
+
question = chosen_q
|
178 |
+
st.stop()
|
179 |
+
|
180 |
+
if st.session_state.warmup_message[-1]['role'] == "user" and st.session_state.action == "Feedback":
|
181 |
+
feedback = feedback_chain.run(question=question, response=inp, history=st.session_state.history[-2:], text=inp,asked=st.session_state.warmup_message[-2]['content'])
|
182 |
+
|
183 |
+
with st.chat_message("assistant", avatar='rex.png'):
|
184 |
+
|
185 |
+
message_placeholder = st.empty()
|
186 |
+
full_response = ""
|
187 |
+
for chunk in feedback.split():
|
188 |
+
full_response += chunk + " "
|
189 |
+
time.sleep(0.05)
|
190 |
+
# Add a blinking cursor to simulate typing
|
191 |
+
message_placeholder.markdown(full_response + "▌")
|
192 |
+
message_placeholder.markdown(full_response)
|
193 |
+
|
194 |
+
#st.markdown(feedback)
|
195 |
+
st.session_state['warmup_message'].append({'role': 'feedback', 'content': feedback})
|
196 |
+
memory.save_context({"input": inp}, {"output": feedback})
|
197 |
+
st.session_state['history'].append(memory.buffer_as_messages[-2:])
|
198 |
+
|
199 |
+
st.button("Continue")
|
200 |
+
|
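The word-by-word typing effect above appears twice (once for the question, once for the feedback). Extracted as a plain function, the accumulation logic is easy to check in isolation; this is a sketch, not part of the app, and the `stream_words` name is hypothetical:

```python
def stream_words(text: str) -> list[str]:
    """Return the progressive strings the placeholder would render,
    one entry per word, mirroring the chunking loop above (sans sleep)."""
    snapshots = []
    full_response = ""
    for chunk in text.split():
        full_response += chunk + " "
        snapshots.append(full_response)
    return snapshots
```

Each snapshot is what `message_placeholder.markdown` would show at that step, with the trailing space the loop always appends.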
pages/3_Interview Round.py
ADDED
@@ -0,0 +1,195 @@
import streamlit as st
from groq import Groq
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import os
from langchain_groq import ChatGroq

from secret_key import groq_api_key
import pandas as pd
from langchain.schema import (AIMessage, HumanMessage, SystemMessage)
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate
)
from langchain.memory import ConversationBufferMemory
from langchain.memory import ConversationBufferWindowMemory
import time, random
from templates import choose_template, extract_template, interview_feedback_template, interview_question_template
from utils import select_questions

st.set_page_config(page_icon='rex.png', layout='wide')

st.title("Interview Round : Skill Up Your Interviewing Prowess")

category = st.selectbox("Which type of questions do you want to practice?",
                        ['Technical', 'Behavioural', 'Culture Fit', 'STAR'], index=None)

if category is None:  # an `if` with st.stop(); a `while` loop here would never terminate
    st.info('Please select question category')
    st.stop()

data = select_questions(category=category)

if not st.session_state.get("groq_key"):
    st.info("Please add your API key to continue")
    st.stop()

if not st.session_state.get("Resume Info"):
    st.info("Please upload your Resume")
    st.stop()

if not st.session_state.get("Job Description"):
    st.info("Please add your job description")
    st.stop()

os.environ['GROQ_API_KEY'] = st.session_state.groq_key

# Initialize Groq client with Mixtral model
client = ChatGroq(
    groq_api_key=groq_api_key,
    model_name="mixtral-8x7b-32768"
)

### Extract previously asked questions from the history

memory = ConversationBufferMemory(
    memory_key="history",
    return_messages=True
)

system_template_e = extract_template

system_message_prompt_e = SystemMessagePromptTemplate.from_template(system_template_e)
human_template_e = "{text}"
human_message_prompt_e = HumanMessagePromptTemplate.from_template(human_template_e)

chat_prompt_e = ChatPromptTemplate.from_messages([system_message_prompt_e, human_message_prompt_e])

extract_chain = LLMChain(llm=client, prompt=chat_prompt_e)

### Choose question based on action

system_template_c = choose_template

system_message_prompt_c = SystemMessagePromptTemplate.from_template(system_template_c)
human_template_c = "{text}"
human_message_prompt_c = HumanMessagePromptTemplate.from_template(human_template_c)

chat_prompt_c = ChatPromptTemplate.from_messages([system_message_prompt_c, human_message_prompt_c])

choose_chain = LLMChain(llm=client, prompt=chat_prompt_c)

### Asking the questions

system_template_q = interview_question_template

system_message_prompt_q = SystemMessagePromptTemplate.from_template(system_template_q)
human_template_q = "{text}"
human_message_prompt_q = HumanMessagePromptTemplate.from_template(human_template_q)

chat_prompt_q = ChatPromptTemplate.from_messages([system_message_prompt_q, human_message_prompt_q])

question_chain = LLMChain(llm=client, prompt=chat_prompt_q)

### Provide Feedback

system_template_f = interview_feedback_template

system_message_prompt_f = SystemMessagePromptTemplate.from_template(system_template_f)
human_template_f = "{text}"
human_message_prompt_f = HumanMessagePromptTemplate.from_template(human_template_f)

chat_prompt_f = ChatPromptTemplate.from_messages([system_message_prompt_f, human_message_prompt_f])

feedback_chain = LLMChain(llm=client, prompt=chat_prompt_f)

if "messages" not in st.session_state:
    st.session_state.messages = []

if "action" not in st.session_state:
    st.session_state.action = "Next"

if "history" not in st.session_state:
    st.session_state.history = []

if "questions" not in st.session_state:
    st.session_state.questions = []

# Replay the conversation so far with the right avatar for each role
for message in st.session_state.messages:
    if message['role'] == "user":
        name = "user"
        avatar = "user.png"
    else:
        name = "assistant"
        avatar = "rex.png"

    with st.chat_message(name, avatar=avatar):
        st.markdown(f"{message['content']}")

if inp := st.chat_input("Type here"):
    with st.chat_message("user", avatar='user.png'):
        st.markdown(inp)
    st.session_state['messages'].append({'role': 'user', 'content': inp})

    question = None

    if st.session_state.messages != [] and st.session_state.messages[-1]['role'] == "feedback":
        option = st.radio(label="Which question would you like to do?", options=["Next", "Repeat"], index=None)
        if option is None:
            st.stop()  # wait for the user's choice on the next rerun instead of busy-waiting
        st.session_state.action = option

    # `action == "Next" or "Repeat"` was always truthy; test membership instead
    if st.session_state.action in ("Next", "Repeat") and (
            st.session_state.messages == [] or st.session_state.messages[-1]['role'] == "feedback"):

        if st.session_state.questions != []:
            extracts = extract_chain.run(history=st.session_state.questions, text="")
        else:
            extracts = "No previous Questions"
        chosen_q = choose_chain.run(action=st.session_state.action, questions=extracts, data=data, text="",
                                    details=st.session_state["Resume Info"], description=st.session_state['Job Description'])
        response = question_chain.run(question=chosen_q, history=st.session_state.history[-1:], text=inp,
                                      details=st.session_state["Resume Info"])

        with st.chat_message("assistant", avatar='rex.png'):
            message_placeholder = st.empty()
            full_response = ""
            for chunk in response.split():
                full_response += chunk + " "
                time.sleep(0.05)
                # Add a blinking cursor to simulate typing
                message_placeholder.markdown(full_response + "▌")
            message_placeholder.markdown(full_response)
        # st.markdown(response)

        st.session_state.action = "Feedback"
        st.session_state['messages'].append({'role': 'interviewer', 'content': response})
        memory.save_context({"input": ""}, {"output": response})
        st.session_state['history'].append(memory.buffer_as_messages[-2:])
        st.session_state['questions'].append({'Question': response})
        question = chosen_q

    if st.session_state.messages[-1]['role'] == "user" and st.session_state.action == "Feedback":
        feedback = feedback_chain.run(question=question, response=inp, history=st.session_state.history[-1:], text=inp,
                                      asked=st.session_state.messages[-2]['content'])

        with st.chat_message("assistant", avatar='rex.png'):
            message_placeholder = st.empty()
            full_response = ""
            for chunk in feedback.split():
                full_response += chunk + " "
                time.sleep(0.05)
                # Add a blinking cursor to simulate typing
                message_placeholder.markdown(full_response + "▌")
            message_placeholder.markdown(full_response)
        # st.markdown(feedback)

        st.session_state['messages'].append({'role': 'feedback', 'content': feedback})
        memory.save_context({"input": inp}, {"output": feedback})
        st.session_state['history'].append(memory.buffer_as_messages[-2:])

    st.button("Continue")
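The gating logic above is subtle: a fresh question is asked only when the action is `Next` or `Repeat` and the transcript is either empty or ends with feedback. A standalone sketch of that predicate (the helper name is hypothetical, not in the app) makes the corrected membership test explicit:

```python
def should_ask_question(action: str, messages: list) -> bool:
    """True when the app should pose a fresh question.

    Written as `action in ("Next", "Repeat")`; the looser
    `action == "Next" or "Repeat"` would be truthy for every action,
    because the bare string "Repeat" is always true.
    """
    transcript_ready = not messages or messages[-1]["role"] == "feedback"
    return action in ("Next", "Repeat") and transcript_ready
```

When the action has already flipped to `Feedback`, the predicate is false and control falls through to the feedback branch.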
requirements.txt
ADDED
Binary file (290 Bytes)
resumeicon.png
ADDED
rex.png
ADDED
secret_key.py
ADDED
@@ -0,0 +1,3 @@
openapi_key = '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'

groq_api_key = "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"  # Replace with your actual Groq API key
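Hardcoding credentials in a tracked file leaks them with every commit. A safer drop-in, assuming the keys are exported as environment variables before launching the app (the helper name is a suggestion, not something the app currently defines), would be:

```python
import os


def load_key(name: str, default: str = "") -> str:
    """Fetch a credential from the environment instead of source control."""
    return os.environ.get(name, default)
```

The two module-level constants could then become `openapi_key = load_key("OPENAI_API_KEY")` and `groq_api_key = load_key("GROQ_API_KEY")`, with the real values supplied via Space secrets.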
templates.py
ADDED
@@ -0,0 +1,136 @@
warm_up_question_template = """
You are to ask the user a question : {question} . You are being provided the intent, but do not explicitly
state that to the user.

In each response, make sure to address the user by their last name, and you may use other details : {details}

If the question is "Please provide your improved response", say that only.

It is necessary to provide the question, and to talk like an interviewer using friendly language.

Use this history : {history} of the chat if needed, where Human messages are by the user and AI messages are
previous messages by the interviewer. This history is important for referencing past conversation, as in a
natural conversation.

Clearly separate the question.

Embolden headings or keywords by enclosing them in **

NEVER answer the question yourself and NEVER provide any hints.
"""


interview_question_template = """You are to serve the role of an interviewer and engage in a professional interview for an entry-level data analyst position.

You are to ask the user a question : {question} . You are being provided the intent, but do not explicitly
state that to the user.

If the question is "Please provide your improved response", say that only.

Do not simply ask them the question; talk like an interviewer and use friendly language.

Use this history : {history} of the chat if needed, where Human messages are by the user and AI messages are
previous messages by the interviewer. This history is important for referencing past conversation, as in a
natural conversation.

If the history is blank, begin the conversation as if it's the first message of the conversation; otherwise,
converse as if it's a continued conversation.

Clearly separate the question.

NEVER answer the question yourself and NEVER provide any hints.

Embolden headings or keywords by enclosing them in **
"""

extract_template = """Extract only the questions from the history of a conversation : {history}

You should say "Do not ask any of these questions : " and provide a numbered list of previously asked questions.

If a question has been asked multiple times, still number it and mention it in the list.
"""


choose_template = """Based on the action : {action} , choose a question and corresponding intent from the dataset : {data}.

If the action is "Next", the chosen question should NOT be the same as any in this list of questions : {questions}. A question should not be asked more than once.
If the action is "Repeat", just say "Go ahead and provide your new response to the previous question".
Try to make the questions relevant to the user's details : {details} and job description : {description}

Your response should look like :

Question : <<<the chosen question that should be asked; do not mention the questions that shouldn't be asked here>>>
Intent : <<<the intent provided alongside the question in the dataset, complete with all the keywords>>>
Logic : <<<the logic for choosing the question based on the action, the previous questions, and the relevant job and user details>>>

You MUST choose a question, and provide its intent from the dataset.
"""

warmup_feedback_template = """Provide an assessment of the response : {response} to the question : {question} of an
interview.

Before giving the feedback, make sure that the response is relevant to the question. If the user gave an ambiguous
response, or it seems like the user did not understand the question (for example, "I don't know", "Not sure", and
other responses like these are not proper responses), do not provide feedback AT ALL. Instead, tell the user you are
not sure whether they understood the question correctly, and explain the question : {asked} to the user by
elaborating on it and explaining what it means.

If the response is a proper response to the question, then :

Use the following key to guide your final assessment and score :
1. Is the answer in clear, concise wording? Mark out of 10 points
2. Is the tone of the answer professional and formal? Mark out of 10 points
3. Does the answer satisfy the intent of the question? Mark out of 20 points
4. Is the response of sufficient depth? Mark out of 15 points
5. Does the user mention the right keywords in their answer? Mark out of 10 points
6. Is the answer of a sufficient length? Mark out of 15 points

Add these scores and give a final score out of the total.

Your response MUST give the user scores and tips on how they can improve their response in each category of the scoring, but keep it precise.

Do not give general tips; they must be specific to the user's response. Give a compact and precise response; do not provide long responses.

If the user's response is not a response to the question or they are unsure, do not provide feedback.

Embolden headings or keywords by enclosing them in ** and restrict your responses to below 300 words.
"""

interview_feedback_template = """Provide a critical assessment of the response : {response} to the question : {question} of an
interview.
You can use the following steps to guide your final assessment :

1. Is the answer in clear, concise wording?
2. Is it a good attempt to answer the question?
3. Is the tone of the answer professional and formal?
4. Does the answer satisfy the intent of the question?
5. Is the response of sufficient depth?
6. Does the user mention the right keywords in their answer?
7. Does the answer show curiosity to learn if the answer is unknown?
8. Is the answer of a sufficient length?
9. The choice of words, flow of the response, and confidence.

Your response MUST contain BOTH :

1. First, give a final assessment explaining how the response can be improved.
2. Then, provide an example improved response.
3. It should not be rigid, bullet-pointed feedback; make your feedback informative, clear, and precise based on the above criteria.

Give the response a score out of 100. Be critical. Do not be afraid to give 0 if the candidate's response does not
satisfy the intent or is not even related to the question. Answers that are too abrupt or short should also be marked
poorly.

Before giving the feedback, make sure that the response is relevant to the question. If the user gave an ambiguous
response, or it seems like the user did not understand the question (for example, "I don't know", "Not sure", and
other responses like these are not proper responses), tell the user you are not sure whether they understood the
question correctly, and ask them to choose the repeat option if that is the case.

Do not give feedback or assessments if you are unsure whether the user understood the question. Only assess responses that seem to address the question.

Embolden headings or keywords by enclosing them in ** and restrict your responses to below 300 words.
"""
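The warm-up rubric assigns 10 + 10 + 20 + 15 + 10 + 15 points across its six categories, so the "final score out of the total" the template asks for is out of 80. A small sketch of that arithmetic (the category names and function are illustrative, not used by the app) shows how category marks roll up:

```python
# Maxima taken from the warmup_feedback_template scoring key
WARMUP_RUBRIC = {
    "clarity": 10,
    "tone": 10,
    "intent": 20,
    "depth": 15,
    "keywords": 10,
    "length": 15,
}


def total_score(marks: dict) -> tuple:
    """Sum per-category marks, clamping each to its maximum,
    and return (score, total_possible)."""
    score = sum(min(marks.get(cat, 0), cap) for cat, cap in WARMUP_RUBRIC.items())
    return score, sum(WARMUP_RUBRIC.values())
```

Clamping guards against a model awarding more than a category's maximum, which LLM-generated scores occasionally do.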
user.png
ADDED
utils.py
ADDED
@@ -0,0 +1,12 @@
import json


def select_questions(category):
    """Load the question bank for the given category from '<category>.json'."""
    with open(str(category) + ".json", 'r') as file:
        data = json.load(file)
    return data


if __name__ == "__main__":
    # Quick manual check; guarded so importing this module doesn't print
    q = select_questions("Technical")
    print(q)
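Since `select_questions` resolves `<category>.json` relative to the current working directory, it can be exercised without the real question banks by writing a throwaway file first (the `Demo` category below is made up for illustration):

```python
import json
import os
import tempfile

# Same logic as utils.py, repeated so this sketch is self-contained
def select_questions(category):
    with open(str(category) + ".json", 'r') as file:
        return json.load(file)

# Work in a scratch directory and create a one-question bank
os.chdir(tempfile.mkdtemp())
with open("Demo.json", "w") as f:
    json.dump([{"question": "Tell me about yourself.", "type": "Warm Up"}], f)

questions = select_questions("Demo")
```

Passing a category with no matching JSON file raises `FileNotFoundError`, which is why the Interview Round page limits the selectbox to the four shipped categories.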