What I learned from Hacking at Singapore’s Largest Student-Run Hackathon

Over the weekend, I hacked at HacknRoll 2020. We built smart glasses that use computer vision to help the visually impaired perceive emotions, and placed in the Top 8 out of over 120 teams with our solution.

The text below is taken from our DevPost submission, which Kerf Chang and I wrote.

Inspiration

In a world where relationships form a big part of our lives, people who are visually impaired often face a disadvantage. There are currently 285 million visually impaired individuals worldwide. We want to create a tool that helps them improve their interpersonal relationships and make better connections with others by understanding the emotions of the people around them.

We realised later in our research that people who have difficulty interpreting social cues and signals, such as those with Asperger’s Syndrome or autism, would also benefit greatly from such an innovation. In fact, 1 in 160 children has an autism spectrum disorder.

What SEENSE does

Our smart glasses, SEENSE, allow the visually impaired to recognise other people’s emotions through computer vision, emotion recognition, and audio feedback. This helps them improve their interpersonal relationships with family and friends by giving them an assistive technology for understanding the world around them.

To clarify, we see this not as a replacement for their current methods of perception, but as an augmentation.

How we built SEENSE

Our Solution

SEENSE Prototype (Photo credits: Vishnu Sarath)

The smart glasses are made up of four components: a Raspberry Pi 4 with a camera module, a buzzer, software (Python), and a button for actuation. When the button is pressed, the Raspberry Pi 4 captures an image and sends it to an API, which returns confidence scores for each emotion. We use the returned emotion to give feedback to the user, who may not ordinarily be able to perceive emotions as easily.
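
Below is a minimal sketch of that capture-and-classify loop, assuming gpiozero and picamera on the Pi; the API endpoint, key, and response format are placeholders for illustration rather than the exact service we called.

```python
# Minimal sketch of the SEENSE pipeline: button press -> capture -> API -> feedback.
# The API endpoint, key, and response format below are illustrative placeholders.
import time
import requests
from gpiozero import Button
from picamera import PiCamera

BUTTON_PIN = 17                                   # GPIO pin the push button is wired to (assumption)
EMOTION_API_URL = "https://example.com/emotion"   # placeholder for the emotion API
API_KEY = "YOUR_API_KEY"

button = Button(BUTTON_PIN)
camera = PiCamera()

def classify_emotion(image_path):
    """Send the captured frame to the API and return the most confident emotion label."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            EMOTION_API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=10,
        )
    resp.raise_for_status()
    scores = resp.json()["emotions"]              # e.g. {"happy": 0.91, "sad": 0.03, ...} (assumed format)
    return max(scores, key=scores.get)

while True:
    button.wait_for_press()                       # block until the user actuates the button
    camera.capture("/tmp/frame.jpg")
    emotion = classify_emotion("/tmp/frame.jpg")
    print(f"Detected emotion: {emotion}")         # the feedback mechanisms (TTS / buzzer) hook in here
    time.sleep(0.5)                               # simple pause between captures
```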

We implemented two feedback mechanisms: audio (text-to-speech clips) and a piezo buzzer. The piezo buzzer signal is encoded in Morse code, which has the advantage of also being intelligible to the deaf-blind, who primarily sense the world through tactile feedback. The audio option, meanwhile, lets the user blend in with those who wear AirPods 24/7, like one of our team members.
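
As an illustration, here is a rough sketch of how the Morse-code feedback could be driven on the piezo buzzer with gpiozero; the GPIO pin, tone, and timings are assumptions, and most of the Morse table is omitted for brevity.

```python
# Sketch of Morse-code feedback on a piezo buzzer (pin, tone, and timings are assumptions).
from time import sleep
from gpiozero import TonalBuzzer
from gpiozero.tones import Tone

MORSE = {
    "a": ".-", "h": "....", "p": ".--.", "y": "-.--",
    # remaining letters omitted for brevity
}
DOT = 0.15                     # dot duration in seconds; a dash is three dots

buzzer = TonalBuzzer(18)       # GPIO pin wired to the piezo buzzer (assumption)

def beep(duration):
    buzzer.play(Tone("A4"))    # any audible tone works for the piezo
    sleep(duration)
    buzzer.stop()
    sleep(DOT)                 # gap between symbols

def play_morse(word):
    """Buzz out a word (e.g. the detected emotion) in Morse code."""
    for letter in word.lower():
        for symbol in MORSE.get(letter, ""):
            beep(DOT if symbol == "." else 3 * DOT)
        sleep(2 * DOT)         # extra gap between letters

play_morse("happy")
```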

Due to hardware limitations, we were not able to procure a vibration/haptic buzzer, so we substituted a piezo buzzer, which can play tones. In an ideal scenario we would have used the haptic buzzer, as it adds another avenue for feedback.

Challenges we ran into

We initially wanted to build on the Raspberry Pi Zero WH because of its smaller form factor. The Pi Zero WH can also be powered by a lithium battery, which would allow fully wireless operation. However, we did not have the adapter cable that downsizes the standard Raspberry Pi camera ribbon to the Pi Zero’s smaller connector, so our prototype had to be wired, which limits movement. In future we aim to use microcontrollers or the RPi Zero WH, and to develop our hardware skills so our wiring can be cleaner. We also lacked hardware such as a haptic buzzer (which gives vibration feedback), so we substituted a piezo buzzer.

We also had some initial trouble getting the Raspberry Pi 4 connected and flashing the SD card with an appropriate OS. The time it took to download NOOBS and other OS images didn’t help, but thankfully Francis from NUS Hackers was able to help us flash Debian Buster onto the Pi 4. The organizing team and Major League Hacking’s Hardware Lab were also incredibly helpful in procuring components to ensure our solution worked (piezo buzzer, jumper cables, breadboards, miscellaneous power cables, SD cards, SD card readers, coffee, roti prata…).

Some background on how we framed the facial emotion recognition problem: an image is captured on demand rather than from a constant video stream, since streaming would incur exorbitant data costs, and we also did not want to call an API every 5 seconds and give the user constant feedback.

This meant we needed a way to reliably trigger the pipeline. Giving users a reliable way to control the pipeline heightens their sense of control over the device and lets them take an active role in running the emotion detection, for example in moments of anxiety, or in silences that create tension.

We realised that a voice interface wouldn’t be the best solution to the input problem, as it would interrupt the flow of the user’s conversation. Instead, we chose a button as the input, which avoids this problem and can be used discreetly. It also gives tactile feedback when it’s pressed, which is helpful for confirmation. This decision only came after trial and error and conversations with Ben from Xamariners, members of the NUS Hackers team in the MPH (Francis and Noel), John from MLH, and Christopher from Microsoft.
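
For the curious, a small sketch of interrupt-driven button handling with software debouncing using RPi.GPIO is below; the pin number and debounce window are assumptions, not our exact wiring.

```python
# Interrupt-driven button handling with software debouncing (pin and timings are assumptions).
import time
import RPi.GPIO as GPIO

BUTTON_PIN = 17

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # button pulls the pin to ground when pressed

def on_press(channel):
    # A single press triggers one run of the capture -> classify -> feedback pipeline.
    print("Button pressed: starting emotion-detection pipeline")

# bouncetime ignores spurious edges from switch bounce within 300 ms of a press
GPIO.add_event_detect(BUTTON_PIN, GPIO.FALLING, callback=on_press, bouncetime=300)

try:
    while True:
        time.sleep(1)          # main thread idles; the callback does the work
finally:
    GPIO.cleanup()
```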

Eventually, if we were able to keep working on this, we would move the solution to microcontrollers instead of Raspberry Pis, or failing that, a Raspberry Pi Zero.

Accomplishments that we’re proud of

Proud of having integrated the hardware solutions, which was definitely not trivial.

Proud of being able to ask for help from the organizers when we needed it to make this project happen. Knowing when and how to ask for help is a really important skill to have.

We’re also proud of having innovated to make it fully portable: by using a power bank to power the prototype, we were able to bring it around and showcase it to other participants at the hackathon, which is one of the best parts of being at a physical event! This ranks up there with seeing others’ cool projects 🙂

What we learned

How to hack with a Raspberry Pi, and how to wire things up with a breadboard and Arduino. Most of our team were first-time hardware hackers, and we even had first-time hackathon-goers, so building a functioning prototype was a really great experience for us!

More about the problems that the visually impaired and those with difficulty interpreting social cues and signals face, and how a solution such as this will help them in their daily lives with friends and family.

We also learnt what goes into an enjoyable hackathon: a constant supply of food, support on hardware and software when it’s needed, and amazing teammates.

What’s next for SEENSE

Potential Partners

API providers (such as Microsoft) would be potential partners, as we make many API calls to process the images. Other potential partners include hardware companies, which could help us manufacture cheaper glasses and bring SEENSE to market as a consumer product. It would also be good to partner with computer vision specialists, who could train custom emotion detection models that run entirely on the edge.

As we are also looking to expand SEENSE to serve people with Asperger’s Syndrome and autism spectrum disorder, we would look to partner with social service organisations to produce smart glasses that help teach and guide emotion recognition. These organisations, along with other educational institutions, could also help us design glasses catered towards educational use.

Hardware + Software Changes

Instead of using an API, we would consider using OpenCV or a solution that can run inferences reliably without an internet connection. This would greatly expand the reach of the application, as it allows users to perceive emotions on the go.
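
As a rough illustration of what fully offline inference could look like, the sketch below pairs OpenCV’s bundled Haar cascade face detector with a hypothetical local emotion classifier; the model file, label set, and input size are assumptions for illustration only.

```python
# Sketch of fully offline inference: OpenCV face detection plus a local emotion model.
# The model path, label list, and input size are illustrative assumptions.
import cv2
import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]   # example label set

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
# Hypothetical ONNX emotion classifier that takes a 64x64 grayscale face crop
emotion_net = cv2.dnn.readNetFromONNX("emotion_model.onnx")

def detect_emotion(frame):
    """Return the most likely emotion for the first face found in the frame, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                          # classify the first detected face
    crop = cv2.resize(gray[y:y + h, x:x + w], (64, 64)).astype(np.float32) / 255.0
    blob = crop.reshape(1, 1, 64, 64)              # NCHW input assumed by the model
    emotion_net.setInput(blob)
    scores = emotion_net.forward().flatten()
    return EMOTIONS[int(np.argmax(scores))]

frame = cv2.imread("frame.jpg")                    # or a frame captured from the Pi camera
print(detect_emotion(frame))
```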

We also would want to reduce the form factor of the solution. Right now, it is relatively clunky. Even though we found a way to make it portable by using a portable charger as a power source, we would like to improve further by having a battery onboard — this would be feasible with a Zero, or a microcontroller. We would also use a haptic buzzer, and perform the wiring better so as to reduce the clunkiness of the solution.

The same enabling hardware could also be extended to obstacle detection and avoidance, as well as navigation along common routes (from landing at an airport to reaching the taxi stand, for example).

Conclusion

We’re really happy to have spent the time hacking at Hack n Roll 2020. Thank you to the core team as well as the sponsors for pulling the hackathon together!