Google Glass is compelling technology and the first bold move in the world of wearable computing. But other than taking pictures and video, we have not seen too many practical uses for Glass. This is one of the reasons the level of fervor over Glass has gone down after some initial excitement. Then came news of an actual practical use that makes sense.
In Australia, Google Glass is being tested for lactation support. Moms who, for whatever reason, are having trouble breastfeeding can now use a new app for Glass that connects them to a live lactation support professional. The professional can view live video of the mom feeding and offer advice in real time. The app also includes instructional videos that can be viewed from within Glass. The key to much of this is the hands-free nature of the solution. Moms can focus on holding the baby while viewing videos or getting advice. The solution is also thought to be valuable for moms in remote locations who don't have immediate access to maternal health advice.
I’m not sure if Google Glass will ever be big in the mainstream, but I can definitely see more successful implementations in specific circumstances like these. How about the repair industry? How nice would it be to view a video or look at a schematic of what you are trying to work on without taking your eyes or hands off the object? People are getting hung up on the privacy issues surrounding Glass, but that’s a non-issue in use cases like these, which is where I think Glass will have its biggest success. To understand Google Glass, we need to think bigger, and smaller. Clearly, this is more than just an expensive wearable camera.