“Smart glasses” are the most characteristic product that has emerged from the wearable computing “revolution.” They embody many of the communications capabilities of the smartphone (which they may one day replace) along with additional visual and other sense enhancements. They are also seen as a critical enabling technology for augmented reality. The current poster child for smart glasses is Google’s “Glass” product, but there are more than 20 firms offering smart glasses—or planning to do so.
Although early users of smart glasses buy them because they are “cool,” the primary selling feature of smart glasses—the one that is supposed to turn them into a mass-market consumer item—is their ability to display video, navigation, messaging, augmented reality (AR) applications, and games on a large virtual screen, all completely hands free. This is essentially the advantage that smart glasses (and wearables more generally) could ultimately have over smartphones.
Assuming that the smart glasses concept takes off, NanoMarkets believes that it will open up significant business opportunities for suppliers of components and subsystems, ranging from optical and audio devices through sensors to processors of various kinds. These opportunities are of two types, which we will call “volume sales” and “value-added.” We profile these below.
Optical Subsystems: First in the Value Chain
Smart glasses are optical systems using cameras, lenses and displays. An important part of the competition in the smart glasses market currently focuses on the optical subsystems that distribute optical signals within the smart glasses system. This is because these optical subsystems sit at the core of smart glasses products and define their most important characteristics—size of image, color quality, resolution and even the aesthetics of the glasses themselves.
Importance of visual experience: NanoMarkets believes that there are many ways that optical subsystems for smart glasses can succeed in the marketplace. As we see it, a key factor will be the visual experience they deliver, most notably how big the display appears to the viewer.
The reason is that a large “virtual” screen dramatically distinguishes how a pair of smart glasses looks and feels to the user from what that user would get with a smartphone. This may be quantified in terms of the field of view, that is, how big the screen appears to a user, as well as the resolution of the display.
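As a rough back-of-the-envelope illustration of how field of view translates into perceived screen size, the Python sketch below computes the width of a flat virtual screen from two assumed figures, a 30-degree horizontal field of view and a virtual image distance of 2.4 meters; these numbers are hypothetical and do not describe any shipping product.

```python
import math

def virtual_screen_width(fov_deg: float, distance_m: float) -> float:
    """Width of a flat virtual screen that subtends fov_deg at distance_m.

    Simple geometric approximation: w = 2 * d * tan(fov / 2).
    """
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

# Hypothetical numbers for illustration only: a 30-degree horizontal
# field of view with the virtual image focused at 2.4 m yields a screen
# about 1.29 m wide -- comparable to a large TV at couch distance.
print(f"virtual screen width: {virtual_screen_width(30.0, 2.4):.2f} m")
```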
Aesthetics of optical subsystems: We also think the aesthetics of smart glasses will be an important competitive factor for optical subsystems, since the choice of this subsystem can impact how the glasses look when worn.
This is important because spectacles—smart or not—are fashion accessories, and the general consensus is that the current generation of smart glasses—Google Glass in particular—look strange when they are worn. There is therefore a premium on any optical subsystem that can improve the appearance of smart glasses by, for example, pushing some of the electronics to the front or side of the head.
Emergence of the smart glasses opportunity for makers of optical subsystems: We think that all of this puts the makers of optical subsystems in an excellent position to capture an important share of the value inherent in smart glasses. Competition in this part of the smart glasses business is currently along two dimensions:
Rival technologies. Both curved-mirror and several kinds of waveguide technology have been tried over the years (see Exhibit); none are completely satisfactory. We note that the mindshare leader, Google Glass, has adopted a mirror-based approach, giving mirrors a kind of prominence. But there is no guarantee that Google will continue with its current approach, and NanoMarkets believes we are at an early enough stage that a start-up could quickly generate large revenues (and attract significant investment) by “building a better mousetrap”; that is, a smaller firm could come up with a smart glasses technology that ultimately dominates the market.
Internal versus external development. The optical subsystems currently used in smart glasses have often been developed internally by the smart glasses makers themselves, although in some cases they have licensed the technology from third parties. NanoMarkets expects to see more such licensing, with the strong possibility that independent third-party developers of optical subsystems may emerge. These firms could sell profitably to OEMs but would not themselves have to become involved in mass consumer marketing.
Sensors for Smart Glasses: Volume Opportunity and Beyond
Many kinds of sensors are already being used in smart glasses. These include time-of-flight sensors, accelerometers, gyroscopes, compasses, image sensors, thermometers, GPS, pressure and touch sensors, as well as medical and biosensors of various kinds. As far as NanoMarkets can tell, today smart glasses OEMs are simply buying off-the-shelf sensor components.
As discussed at the beginning of this chapter, there is an immediate volume opportunity for sensors as the smart glasses concept begins to take off. But NanoMarkets believes there is also potential for more profitable, higher value-added products aimed at the same market. Subsystems of this complexity have become possible primarily because of the ongoing LSI advances associated with Moore’s Law, which have made it practical to integrate MCUs, signal processors, sensors, amplifiers and wireless interfaces into something as small and light as a pair of spectacles.
Sensor fusion and sensor subsystems: “Sensor fusion” has become a hot topic within the sensor community and refers to the subsystems that combine data from different kinds of sensors to provide information that is more accurate, more complete, and/or more dependable than is obtainable from individual sensors.
At one level almost all smart glasses use sensor fusion, since they provide stereoscopic vision by combining the data from two or more image sensors/cameras at different locations. However, NanoMarkets believes that the potential for sensor fusion in the context of smart glasses has only just begun to be explored and opens up important opportunities for industrial designers to distinguish smart glasses products in the market.
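To make the idea concrete, the sketch below shows one of the simplest fusion techniques, a complementary filter that blends gyroscope and accelerometer readings into a single head-pitch estimate; the sample values and the 0.98 blending weight are illustrative assumptions, not parameters of any actual smart glasses product.

```python
import math

def complementary_filter(pitch_deg, gyro_rate_dps, accel_g, dt, alpha=0.98):
    """Fuse one gyro sample and one accelerometer sample into a pitch estimate.

    The gyroscope integrates smoothly but drifts over time; the
    accelerometer is noisy but drift-free. Weighting the gyro path by
    alpha keeps short-term smoothness and long-term stability at once.
    """
    ax, ay, az = accel_g                               # accelerations in g
    accel_pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    gyro_pitch = pitch_deg + gyro_rate_dps * dt        # integrate gyro rate
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

# Illustrative loop over fabricated 100 Hz samples:
pitch = 0.0
samples = [(1.5, (0.05, 0.0, 1.0))] * 100              # (deg/s, accel x/y/z in g)
for gyro_rate, accel in samples:
    pitch = complementary_filter(pitch, gyro_rate, accel, dt=0.01)
print(f"estimated head pitch: {pitch:.1f} degrees")
```

The same blending idea generalizes: each additional sensor contributes the error characteristics it is best at correcting, which is precisely the “more accurate, more complete, more dependable” combination described above.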
Communications modules and access to remote sensors: NanoMarkets believes that a more immediate opportunity for sensor subsystems will emerge as the result of a need to connect with external sensors or to send data from wearable sensors to remote processors—for example to a remote site such as a hospital server for further clinical analysis.
Again, exactly how these subsystems may be developed in the context of smart glasses is unclear and will be a matter for future industrial designers. However, the processor electronics necessary for such subsystems are well advanced (possibly more so than for sensor fusion subsystems), as are various appropriate communications protocols—most notably ZigBee.
NanoMarkets also thinks that a key enabler for this type of sensor subsystem will be cloud technology. Not only will the cloud make it easy for smart glasses to access remote sensors, it will also allow the sensing software in smart glasses to be upgraded without users having to install software on their monitoring devices, making health monitoring networks easier and cheaper to maintain.
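The sketch below illustrates the general pattern rather than any specific protocol stack: a glasses-side module batches sensor readings and posts them as JSON to a hypothetical cloud endpoint. The URL, field names and sample values are invented for illustration; in a real product the hop from the glasses might run over ZigBee or another low-power radio link before reaching an IP gateway.

```python
import json
import time
import urllib.request

ENDPOINT = "https://example.com/api/v1/readings"    # hypothetical endpoint

def post_readings(device_id: str, readings: list) -> int:
    """Send a batch of wearable sensor readings to a remote server as JSON."""
    payload = json.dumps({
        "device_id": device_id,
        "sent_at": time.time(),
        "readings": readings,
    }).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:       # blocks until server replies
        return resp.status                          # e.g. 200 on success

# Illustrative batch: one heart-rate and one skin-temperature sample.
batch = [
    {"sensor": "heart_rate", "value": 72, "unit": "bpm"},
    {"sensor": "skin_temp", "value": 33.1, "unit": "C"},
]
# post_readings("glasses-0001", batch)  # uncomment against a real endpoint
```

Because the analysis logic lives on the server, it (and even the expected payload format) can be revised centrally, which is the maintenance advantage noted above.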
Control of wearable sensors from a smartphone: A related concept—and perhaps a more immediate one—is a subsystem that provides connectivity among smartphones, smart glasses and other wearable electronics. In fact, some early smart glasses are specifically designed to work with smartphones. This somewhat refutes the notion that smartphones and smart glasses are in competition; although we think they will be in the long run, for now the situation is more fluid.
As an illustration of where we are now in terms of such opportunities, consider three recently published U.S. patent applications from Apple. These cover a method in which an iPhone 5, along with one or more remote wearable sensors, gathers and processes raw data to track a user’s activity level and to control certain scheduling functions such as alarms. After processing the data from the wearable, the iPhone can deduce what the user is doing—running, walking, sleeping and so on—and provide information on the user’s lifestyle and perhaps more. This is also an example of sensor fusion.
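As a toy illustration of this kind of inference (the thresholds and labels below are invented for the example, not drawn from the Apple filings), the sketch classifies coarse activity from the variance of accelerometer magnitude: near-constant readings around 1 g suggest rest, while increasingly energetic motion produces larger swings.

```python
import statistics

def classify_activity(accel_magnitudes):
    """Guess a coarse activity label from accelerometer magnitudes (in g).

    Low variance around 1 g suggests the wearer is at rest; moderate
    variance suggests walking; large variance suggests running.
    The thresholds here are illustrative only.
    """
    var = statistics.pvariance(accel_magnitudes)
    if var < 0.005:
        return "resting"
    if var < 0.1:
        return "walking"
    return "running"

print(classify_activity([1.00, 1.01, 0.99, 1.00]))        # resting
print(classify_activity([0.8, 1.2, 0.9, 1.3, 1.0]))       # walking
print(classify_activity([0.3, 2.1, 0.5, 2.4, 0.4, 2.0]))  # running
```

A production system would of course fuse more signals (gyroscope, GPS speed, heart rate) and use a trained classifier rather than fixed thresholds, but the pipeline shape, raw samples in and lifestyle inference out, is the one described above.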
NanoMarkets believes that all the business opportunities profiled above will be realized in the next five to eight years. However, strategic plans designed to capture this potential should recognize that the evolution of smart glasses is still at a very early stage. No one can be sure that any particular current product line or smart glasses company will survive for long; presumably many will not. In addition, smart glasses compete at some level with the capabilities provided by the Internet of Things (IoT). The balance among the IoT, wearable computing, and conventional smartphone/tablet computing has yet to be worked out by the marketplace.