Last weekend, my friends and I attended MV-Hacks, held at Hacker Dojo in Santa Clara. We built an app called NutriScan, designed to track your eating habits with as little friction as possible: manually entering nutrition data takes time and discourages frequent use, so we wanted to remove that step entirely.
It works by using Microsoft’s Cognitive Vision API to scan nutrition labels and add them to a list of foods you have scanned. The app then uses Firebase and a graphing API to show you a detailed breakdown of your calorie, fat, protein, and carbohydrate intake, all wrapped in a clean Material Design UI. The hardest problems were getting the OCR to work reliably on every photo and parsing each scan's raw text into usable data.
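To give a sense of that parsing step, here is a minimal sketch of how OCR text from a label might be turned into structured numbers. This is not our actual hackathon code: the class name, the field list, and the assumption that the OCR returns one "Field Value" pair per line are all illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: extract macro values from raw OCR text of a
// nutrition label. Field names and the line format are assumptions.
public class LabelParser {
    // Matches lines like "Calories 250" or "Total Fat 12g",
    // tolerating stray characters between the field name and the number.
    private static final Pattern FIELD = Pattern.compile(
        "(?i)(Calories|Total Fat|Protein|Total Carbohydrate)\\D*?(\\d+)");

    public static Map<String, Integer> parseLabel(String ocrText) {
        Map<String, Integer> result = new LinkedHashMap<>();
        Matcher m = FIELD.matcher(ocrText);
        while (m.find()) {
            // Lower-case the key so OCR casing quirks collapse together;
            // keep the first occurrence if a field appears twice.
            result.putIfAbsent(m.group(1).toLowerCase(),
                               Integer.parseInt(m.group(2)));
        }
        return result;
    }

    public static void main(String[] args) {
        String ocr = "Calories 250\nTotal Fat 12g\n"
                   + "Total Carbohydrate 30g\nProtein 5g";
        System.out.println(parseLabel(ocr));
    }
}
```

Real OCR output is messier than this (split lines, misread characters), which is exactly why this step took us so long at the event.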
We are proud of integrating a Microsoft API none of us had used before, and of wiring the OCR output into the graph plots.
The app opens with a simple one-button interface, then switches to an activity where the user can take a new picture or upload one from their gallery. Once the photo is analyzed, the user is taken to their current breakdown for the day, and the graphs update as new information is added.
In the end, we won 2nd place, though the judges said it was a close call for 1st. We all won Echo Dots, but traded them for Intel Compute Sticks.