Inspiration
We realized that Gemini is remarkably accurate at recognizing facial expressions and wanted to find innovative ways to apply that. As college students who constantly need to hone our interviewing skills, we wanted to make that practice easier to tackle and more enjoyable. While interview assistant applications already exist, none of them provide specific, tailored feedback based on both speech and facial expressions.
What it does
Given a job title, company, and a question the user wants to answer, the user records a response. InterviewerAI then parses the video frame by frame and separates out the audio. We feed these components to Gemini individually and ask it to provide feedback in the context of the given role, company, and question.
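A minimal sketch of that pipeline is below, assuming the google-generativeai Python SDK, OpenCV for frame sampling, and moviepy for audio separation; the model name, sampling rate, file names, and prompt wording are illustrative, not our exact implementation.

```python
# Sketch of the analysis pipeline (library choices, model name, and prompt
# wording are assumptions for illustration, not the project's exact code).
import cv2
import google.generativeai as genai
from moviepy.editor import VideoFileClip
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

def sample_frames(video_path, every_n=30):
    """Grab every Nth frame so Gemini can evaluate facial expressions."""
    cap = cv2.VideoCapture(video_path)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
        i += 1
    cap.release()
    return frames

def analyze_response(video_path, role, company, question):
    # Separate the audio track so speech and visuals can be reviewed independently.
    VideoFileClip(video_path).audio.write_audiofile("response.mp3")
    audio = genai.upload_file("response.mp3")

    prompt = (f"You are an interview coach. The candidate is applying for {role} "
              f"at {company} and was asked: '{question}'. Give specific feedback "
              f"on their facial expressions (frames) and spoken answer (audio).")
    return model.generate_content([prompt, audio, *sample_frames(video_path)]).text
```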
How we built it
We built the front end in JavaScript and the backend with Flask and the Gemini API, developing in VS Code.
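As a rough illustration of how the pieces connect, here is a minimal Flask endpoint that accepts the recorded video from the JavaScript front end and returns Gemini's feedback; the route name, form fields, and reuse of the `analyze_response` sketch above are assumptions, not our exact setup.

```python
# Minimal Flask wiring sketch: route name, form fields, and file handling
# are illustrative assumptions; analyze_response is the sketch shown earlier.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/feedback", methods=["POST"])
def feedback():
    # The front end posts the recorded video plus the interview context.
    video = request.files["video"]
    video.save("response.webm")
    feedback_text = analyze_response(
        "response.webm",
        role=request.form["role"],
        company=request.form["company"],
        question=request.form["question"],
    )
    return jsonify({"feedback": feedback_text})

if __name__ == "__main__":
    app.run(debug=True)
```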
Accomplishments that we're proud of
We used the Gemini API to build a tool that gives users genuinely accurate feedback. We think it's super cool that we can provide feedback on visual cues, not just audio and speech.
What we learned
Full-stack development and how to integrate the Gemini API.
What's next for InterviewerAI
Continue building, innovating, and optimizing!