We tend to think of app development as a neat, carefully planned process. Product managers translate customer feature lists into prioritized projects, planning teams determine timelines and effort, developers code, and eventually a new app is released to customers. But sometimes, new ideas come out of experimentation and unplanned collaboration.

Case in point: the Microsoft Seeing AI app. This iOS smartphone app, available as a free download from the Apple App Store, helps users with vision impairments identify a scene, read short blocks of text, identify products by scanning a barcode, or even identify nearby people and describe their emotions.

But the app wasn’t developed from customer requests or as part of a broader project plan. It grew organically out of Microsoft-sponsored hackathons and the experimentation of a vision-impaired developer.

Saqib Shaikh lost his sight when he was seven years old. For years, he’s benefitted from computer speech and voice-recognition technologies that help him code, perform typical work tasks, and interact with the world. But he long dreamed of extending those capabilities into a more portable tool that could help with a wider range of tasks.

Figure 1. With the Seeing AI app, users can point a smartphone camera at a scene and hear a description

In 2015, three key ingredients converged to make it possible for Shaikh to realize his dream: (1) the ability to perform near-real-time processing of large amounts of data, (2) major advancements in machine learning technologies, and (3) the time and opportunity to experiment.

Modern smartphones are capable of processing at speeds that were available only to high-end computers a few years ago. At the same time, recent advancements in machine learning now make it possible to rapidly analyze scenes for known objects, people, and text. By combining those technological advancements, Shaikh was able to design an app that uses a smartphone’s camera to take a photo, analyze the scene to identify known objects, and then describe that scene to the user.
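Seeing AI’s source code isn’t public, but the capture-analyze-describe loop can be sketched against Azure’s Computer Vision “describe” endpoint, part of the same Microsoft Cognitive Services family. In the minimal Swift sketch below, the endpoint URL, subscription key, and function name are placeholders, not the app’s actual code:

```swift
import Foundation
import AVFoundation

// Placeholders for an Azure Computer Vision resource -- substitute your own.
// This is an illustrative sketch, not Seeing AI's implementation.
let endpoint = "https://YOUR_RESOURCE.cognitiveservices.azure.com/vision/v3.2/describe"
let apiKey = "YOUR_SUBSCRIPTION_KEY"

// Keep a reference so the synthesizer isn't deallocated mid-utterance.
let synthesizer = AVSpeechSynthesizer()

/// Sends a captured photo to the vision service and speaks the
/// highest-confidence caption aloud.
func describeAndSpeak(imageData: Data) {
    var request = URLRequest(url: URL(string: endpoint)!)
    request.httpMethod = "POST"
    request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
    request.setValue(apiKey, forHTTPHeaderField: "Ocp-Apim-Subscription-Key")
    request.httpBody = imageData

    URLSession.shared.dataTask(with: request) { data, _, error in
        // The "describe" response nests captions under "description".
        guard let data = data, error == nil,
              let json = (try? JSONSerialization.jsonObject(with: data)) as? [String: Any],
              let description = json["description"] as? [String: Any],
              let captions = description["captions"] as? [[String: Any]],
              let caption = captions.first?["text"] as? String
        else { return }

        // Read the scene description aloud, the way Seeing AI narrates a photo.
        synthesizer.speak(AVSpeechUtterance(string: caption))
    }.resume()
}
```

In a real app, the image bytes would come from a camera capture pipeline and the spoken output would be coordinated with VoiceOver; the sketch simply posts raw JPEG data and speaks the top caption.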

Shaikh was also given the opportunity to work with Pivothead, a designer of modular devices. Pivothead makes a line of SMART glasses that look like standard sunglasses but include a built-in camera, Bluetooth networking, and expandable memory. By connecting a pair of Pivothead SMART glasses to his smartphone software, Shaikh could swipe the side of his glasses to take a photo, which his smartphone could then analyze to describe the scene in front of him. (See a demonstration in this YouTube video.)

Today, Shaikh’s Seeing AI app is available exclusively for the Apple iPhone, with Pivothead SMART glasses integration still under development.

In the meantime, other Microsoft developers have been so inspired by Shaikh’s app that they’re integrating some of its code and design work into other areas. For example, hackathon teams in 2016 and 2017 experimented with extending the machine learning and scene-mapping functionality to work with a Microsoft HoloLens and to let an electric wheelchair drive autonomously through indoor environments.

What makes these projects unique isn’t only what they do; it’s how they were created. When developers are given opportunities to share ideas and experiences, to view each other’s prototypes, and to team up in hackathons, they can collectively solve some of the biggest challenges faced by users and businesses.

To keep up on the latest apps and technologies from companies like Microsoft, follow us on our blog, on Twitter, and on LinkedIn.
