Inspiration
We saw a lack of assistive resources for people with visual impairments, so we wanted to build something that gives them greater autonomy in daily life.
What it does
An object-detection AI locates objects in the camera's view and tells a visually impaired user how to move in order to reach them.
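Conceptually, the pipeline pairs YOLOv8 detections with a simple rule about where each object sits in the frame. The sketch below is a minimal illustration of that idea, not our exact code; the weights file, the test frame, the 50-pixel threshold, and the print-instead-of-speech output are all assumptions.

```python
# Minimal sketch: detect objects with YOLOv8 (ultralytics) and turn each
# detection into a coarse left/right/straight-ahead cue for the user.
from ultralytics import YOLO
import cv2

model = YOLO("yolov8n.pt")        # pretrained weights (assumed)
frame = cv2.imread("frame.jpg")   # stand-in for a live camera frame

results = model(frame)
frame_center_x = frame.shape[1] / 2

for box in results[0].boxes:
    label = model.names[int(box.cls[0])]
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    box_center_x = (x1 + x2) / 2

    # Compare the object's horizontal position with the frame center
    # to produce a simple movement instruction.
    if box_center_x < frame_center_x - 50:
        direction = "to your left"
    elif box_center_x > frame_center_x + 50:
        direction = "to your right"
    else:
        direction = "straight ahead"

    print(f"{label} is {direction}")  # would be spoken aloud in the real app
```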
How we built it
We started with a basic MVP and iterated on it, adding small features until we had a finished product that works well and is easy to understand.
Challenges we ran into
The YOLO model we started with did not cover the objects we needed, so it had to be retrained on additional classes. The interface we attempted to build also caused the program to crash and restart unnecessarily.
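Retraining came down to fine-tuning a pretrained YOLOv8 checkpoint on a dataset that includes the missing classes. The snippet below is a rough sketch using the ultralytics training API; the dataset config name (`digitaleyes.yaml`) and the training settings are placeholders, not our actual values.

```python
# Hedged sketch: fine-tune YOLOv8 on a custom dataset that adds the object
# classes the pretrained model was missing. "digitaleyes.yaml" is a
# hypothetical dataset config; epochs and image size are illustrative only.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # start from pretrained weights
model.train(data="digitaleyes.yaml",  # custom classes + image paths
            epochs=50,
            imgsz=640)
model.val()                           # sanity-check the retrained classes
```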
Accomplishments that we're proud of
We are proud that, in 24 hours, we built a functional program capable of helping others in the future.
What we learned
We learned how to use YOLOv8, and how to use PowerPoint more effectively to present our ideas.
What's next for DigitalEyes
We want to create a wearable pendant and integrate the robotic dog so it can act as a service dog for the user.