Inspiration

Early on, we realized that Gemini is remarkably good at recognizing facial expressions. We took three pictures of each other: one super sad, one neutral, and one happy. Gemini accurately described the expressions and ranked them on a 1-10 scale, with 1 being sad and 10 being happy. Going down this rabbit hole, we realized that it is hard to get interview feedback that incorporates visual elements. Current AI interview tools only provide feedback on what was said, not on facial expressions.

What it does

Given a job title, company, and a question the user wants to answer, we let the user record a response. InterviewerAI then parses the video frame by frame and separates out the audio. We feed these components to Gemini individually and ask it to provide feedback on each in the context of the given role, company, and question.
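The frame-by-frame step can be sketched as a sampling decision: rather than sending every frame to Gemini, pick evenly spaced frames across the clip. This helper is our own illustration (the function name and the one-frame-per-second default are assumptions, not the project's actual code):

```python
def sample_frame_indices(total_frames: int, fps: float,
                         samples_per_sec: float = 1.0) -> list[int]:
    """Return evenly spaced frame indices to extract from a video.

    total_frames / fps describe the source clip; samples_per_sec is how
    many frames per second of video we actually want to analyze.
    """
    if total_frames <= 0 or fps <= 0:
        return []
    # Distance between sampled frames, at least 1 so we never stall.
    step = max(1, round(fps / samples_per_sec))
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps, sampled once per second -> 10 frames.
print(sample_frame_indices(300, 30.0))  # [0, 30, 60, ..., 270]
```

Each returned index would then be decoded (e.g. with a video library of your choice) and sent to Gemini alongside the separated audio track.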

How we built it

We used the Gemini API, JavaScript, and Flask to develop the backend and front end, working in VS Code.
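A minimal sketch of what the Flask backend might look like. The route name and form fields here are our own guesses for illustration, not the project's actual API; the real handler would split the uploaded video into frames and audio and call Gemini:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/feedback", methods=["POST"])
def feedback():
    # The recorded response plus interview context would arrive here.
    role = request.form.get("role", "")
    company = request.form.get("company", "")
    question = request.form.get("question", "")
    # Placeholder response; the real version would return Gemini's
    # visual + audio feedback for this role/company/question.
    return jsonify({
        "role": role,
        "company": company,
        "question": question,
        "feedback": "placeholder",
    })

# app.run(debug=True)  # local dev server
```

The front end would POST the recorded video along with these fields and render the returned feedback.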

Accomplishments that we're proud of

Using the Gemini API to build a tool that gives users genuinely accurate feedback. We think it's super cool that we can provide feedback on visual aspects on top of just audio and speech.

What we learned

Full-stack development and how to integrate the Gemini API.

What's next for InterviewerAI

Continue building, innovating, and optimizing!

Built With

Gemini API, JavaScript, Flask