Recently, two events converged in an interesting way for me. The first is my interest in and coverage of Google Glass, often mentioned in Friday's Particle Debris, for example, and in this exploration: "Okay, SciFi Fans, What Comes After iPad?"
The second event was meeting my namesake, John A. Martellaro, the co-founder and CTO of APX Labs in Herndon, Virginia. This fellow isn't the only other John Martellaro I know about, but we've compared notes and seem to be distantly related. When I found out that he is the co-founder of a company specializing in smart glasses (and smart buildings), an interview seemed instantly compelling, especially considering the technical community's intense interest in smart glasses and Google Glass.
In fact, this interview reaffirmed my belief that, someday soon, many of us will be wearing smart glasses both casually and for specific kinds of work, explained below. While we may still have some computational and wireless hardware in our pockets, the smart glass display's the thing wherein we'll do many of the things now handled by the small smartphone displays we carry around today.
With that, here's the interview with Mr. Martellaro. I think you'll find the technology fascinating, and the conversation provides great insights into how smart glass technology works as well as the amazing things APX Labs is doing with the U.S. military. It also provides that all-important glimpse into directions for our technological future.
John A. Martellaro, co-founder, APX Labs
TMO: Tell us about what APX Labs does.
JM: APX Labs is working on changing the way people interact with the digital and physical worlds through wearable computing. We build software products for smart glasses and smart buildings. Both of these areas are real-time, contextual, sensor-driven technologies, which is where we have unique expertise.
How we do it involves a lot of smart people and a variety of foundation technologies. We have engineers working on computer vision, biometrics, user interfaces and studying human computer interaction.
We’ve also tackled these areas in a platform kind-of way, delivering the underlying plumbing that makes them smart and the building blocks which enable customers and other companies to build specific applications using these smart systems.
Our APX blog has a great article on how the company was started.
TMO: What are smart glasses and what is augmented reality?
JM: These are two related concepts in our view:
Smart glasses are devices that have to do two things. The first is to act as a display, giving the user a transparent overlay on top of the real world. The second is to provide contextual information about the user: their location, orientation, and state.
Augmented reality (AR) is simply an application that overlays contextual information with a precise understanding of where the user is or what is in the scene being augmented. Glasses are by far the strongest devices for such apps, but not the only ones. However, it’s this combination of AR and smart glasses that really drove us to build our company.
TMO: Can you explain a little about how these devices work, optically?
JM: Around 80 percent of the physical hardware components of smart glasses are not much different from what goes into a smartphone. In short, the capacitive touch screen of a smartphone is replaced by a microprojector paired with a see-through optical lens.
Currently, there are two primary approaches to designing these optics: a prismatic beam splitter and an optical waveguide. A beam splitter takes the light coming out of the microprojector and typically bounces it twice inside a prism, mixing it 50/50 with the environmental light. Beam splitter designs typically place the projector at the top of the frame (along the eyebrows).
An optical waveguide typically employs a side-mounted (temple) projection source that bounces light several times within the lens to create the eyebox, which is effectively the area within the lens in which the user’s eye can see the projected image.
APX Labs team meeting. (With a Mac, front and center.)
Both approaches have tradeoffs. A beam splitter, due to its optical simplicity, is often more efficient (less light is lost in adding transparency) and easier to manufacture. A waveguide is significantly thinner and can be more easily implemented in a form factor closer to a typical set of eyeglasses but often at the expense of manufacturability.
In both cases, it’s important to note that these displays are additive-light based -- the content projected to the user adds to the light that the user’s eyes are already seeing. Therefore, the color black (pixel off) is not possible in smart glasses using these optics, which creates a unique set of human factors considerations.
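That additive behavior is easy to see numerically. Here is a minimal sketch (my own illustration, not APX code) of additive compositing on 8-bit RGB values: a projected "black" pixel contributes no light, so the scene passes through unchanged, while bright overlay content pushes the perceived pixel toward white.

```python
def additive_composite(env, overlay):
    """Additive see-through display: projected light adds to ambient light.

    env and overlay are (r, g, b) tuples of 0-255 intensities; the result
    saturates at 255, mimicking how bright overlays wash toward white.
    """
    return tuple(min(e + o, 255) for e, o in zip(env, overlay))

# A "black" overlay pixel (projector off) leaves the scene untouched,
# which is why these optics cannot render true black.
scene = (120, 130, 140)
print(additive_composite(scene, (0, 0, 0)))        # scene unchanged
print(additive_composite(scene, (200, 200, 200)))  # saturates to white
```

This is why HUD designers for additive optics favor bright, high-contrast glyphs rather than dark fills: anything dark simply disappears into the background.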
Once you are wearing the glasses, the focal lengths of different vendors’ optics each appear slightly different, which affects how you perceive overlaid content. We’ve seen resolutions ranging from 300 diagonal pixels to almost fully immersive lenses. This is really where we think the market will start to differentiate.
TMO: Give us some examples of how your smart glasses are used and the industries where they can be helpful. What are we looking at here in terms of cost?
JM: The military was the first adopter of smart glasses; in fact, they have been funding various forms of head-worn computing for decades. The soldier is perhaps the single person who would most benefit from the situational awareness smart glasses can provide -- just imagine all of the information in your favorite video games like Deus Ex or Halo being there in real life.
Our first software application for military smart glasses was face and voice biometrics. These weren’t AR per se but heads-up-display (HUD) applications. APX developed software that functioned much like the identification system used by the Terminator in the popular action movies. With our software mated to the Army’s smart glasses, a soldier at a checkpoint could look at an individual from a distance and determine whether the person was dangerous without ever taking their hands off their weapon. The smart glasses automatically detect faces, determine whether the subject is on a terrorist watch list or a harmless local, and display that in the soldier’s view.
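The shape of such a watchlist check can be sketched in a few lines. This is purely an illustrative toy, not APX's pipeline: it assumes a face detector has already reduced each face to an embedding vector, and the made-up enrolled vectors and threshold are placeholders.

```python
import math

# Hypothetical enrolled face embeddings (entirely made up for illustration).
WATCHLIST = {
    "subject-042": [0.12, 0.85, 0.33],
    "subject-107": [0.90, 0.10, 0.40],
}

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def check_watchlist(embedding, threshold=0.95):
    """Return the best-matching watchlist ID above threshold, else None.

    None corresponds to the "harmless local" outcome shown on the HUD.
    """
    best_id, best_score = None, 0.0
    for subject_id, enrolled in WATCHLIST.items():
        score = cosine_similarity(embedding, enrolled)
        if score > best_score:
            best_id, best_score = subject_id, score
    return best_id if best_score >= threshold else None
```

A real system would of course use a trained recognition model, a much larger gallery, and careful threshold calibration; the point here is only the hands-free detect-embed-match flow.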
That was the start. We built on the success of hands-free biometrics and started laying the groundwork for an augmented reality operating system. Smart glasses require a new paradigm in building user experiences, so APX began building an operating system for the military to enable user experiences that truly augmented the soldier’s life. Numerous use cases were added, including a true "Call of Duty"-like RADAR and real-time audio and video streaming.
The RADAR provides the wearer with the location of all other squad members wearing glasses or carrying a mobile device. “See what I see” audio and video streaming turned out to be one of the most exciting use cases for the soldier and their commanders. Other glasses wearers and commanders in a tactical operations center could, for the first time, see what an actual soldier was experiencing and have an unprecedented level of situational awareness to assist the soldier with their mission.
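Underneath a squad-radar overlay like this is a simple geometric step: converting a teammate's GPS fix into range and bearing from the wearer so an icon can be drawn at the right spot. The sketch below is my own illustration of that step (not APX's implementation), using an equirectangular approximation that is accurate enough for squad-scale distances.

```python
import math

def range_and_bearing(lat1, lon1, lat2, lon2):
    """Approximate range (meters) and compass bearing (degrees, 0 = north)
    from point 1 to a nearby point 2, via an equirectangular approximation.
    """
    R = 6_371_000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    x = math.radians(lon2 - lon1) * math.cos((phi1 + phi2) / 2)  # east-west
    y = phi2 - phi1                                              # north-south
    rng = R * math.hypot(x, y)
    brg = (math.degrees(math.atan2(x, y)) + 360) % 360
    return rng, brg

# A teammate ~111 m due north of the wearer:
print(range_and_bearing(38.0, -77.0, 38.001, -77.0))
```

Given range and bearing, the display layer only needs the wearer's own heading (from the glasses' orientation sensors) to place the teammate's icon on the radar or in the world overlay.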
While we continue to develop applications for the military, we’ve actively begun to move our technology into industrial and enterprise settings such as telemedicine, logistics, and sports. In particular, we look at industries or “fan experiences” where the technology can deliver large gains across many similar users.
Example of facial recognition in the user's visual field
The cost for smart glasses varies widely based on form factor and performance requirements, and can range from US$500 to several thousand. One of the things we are excited about is the continual innovation on the hardware side, which will not only make the glasses smaller and better looking but will keep driving down costs. Ultimately we think the costs will be on par with a smartphone.
Here’s a video of the current AR concepts we have been developing. You’ll quickly see that AR and smart glasses have endless uses in everyday life.
TMO: How does what you do with smart glasses differ from what Google is trying to achieve?
JM: We love the idea of Google Glass. From what we've seen so far, Google took a well thought out step toward "wearable" computers. If we were Google, we probably would have done the same thing by appealing to our Android user base, making devices that could be an adjunct to our phones, and making sure the first camera people access (the one on their head) is connected to Google+ rather than to Facebook.
However, we do believe that we (the industry players) are a few years away from a real consumer-ready device that could really replace normal glasses as an accessory or rival a phone. We’ve taken a slightly different route than Google -- first, we don’t make hardware, and we support many vendors. More importantly, we are focused on enabling different kinds of scenarios. We are laser focused on real-time, contextual problems -- overlaying and interacting with things in the real world.
Google Glass (credit: Google)
Our understanding is that Google is trying to deliver what is primarily a great consumer heads-up-display. From what the industry at large can tell, the Glass’ optics can’t render content directly in your line of sight. This really limits some of the use cases for the technology as you lose the ability to render content in context to the environment around the user. You can say we’re focused on the smart while they’re focused on the Glass. We’ll know more after Google’s hackathons, but our assumption is that their backend platform really adds most of the brains to their end device.
There is also the difference between AR-enabled smart glasses and Google Glass. We really are looking to have the capabilities of both in the markets we are pursuing today, which puts a lot more hardware dependency on the final form factor. Google’s Glass has a very small prism and a lower power processor than most systems we have dealt with. That means there is little onboard computer vision support and far more dependence on the server for information, which equates to consuming more wireless bandwidth. All of these applications require special care to balance the use of the device.
TMO: Do you see a conflict between what Google is trying to achieve and what you're doing? After all, they might go after your market and you might go after theirs.
JM: Honestly, every good idea has more than one player in the market, and Google's entry sent a great signal to the industry that a major player was willing to try to roll out an ecosystem.
We've been building software for smart glasses for almost three years now and have watched nearly every major mobile device vendor dip a toe into the market. Some of these made us more nervous than others but Google is a little different. They have a real focus on the consumer -- an area we aren't really focused on today.
Three years from now, I think we're going to collide. From a business perspective, we're working hard to make sure that we're building technology that the market will love. For us that means solving enterprise problems with smart glasses today and continuing to iterate on the technology as the price and form factor options begin to appeal to your everyday mobile device buyer.
TMO: Is there a patent race between you and Google?
JM: Actually yes, there is an intellectual property (IP) free for all right now. Not just us and Google, but many, many other players too. The surface area and the potential is tremendous which is why it has attracted everyone to the party. Dozens of companies are working on everything from optic techniques, applications, features, and so on.
The patent process as a whole is also challenging for small businesses. It forces us to be very careful that we aren't wasting time filing glamour patents and are instead focusing on really innovative ones. We can prove that we invented some things independent of their patents, and I'm sure they can claim the same in other areas. Frankly, I think a number of strong IP portfolios will develop over the next five years that will likely block new entrants from the market.
TMO: Who are some of the other players in this market worth mentioning?
JM: There are about 20 or so companies building full or partial smart glass systems, but only a handful are ready for commercial release. As a technologist, I'm excited to see what Project Fortaleza (Microsoft's rumored solution) will become. They're attacking the problem from the living-room gamer angle, which means they must innovate at an aggressive price point. That's challenging with current optics production techniques.
We also have some partners that prefer to stay out of the news that have produced mind bending solutions for the U.S. military. In addition, I'd like to give credit to LinkedIn. No, they aren't building glasses. But they have a number of great communities helping to move the overall wearable industry in a positive direction.
TMO: Where do you see the technology going? For example, will smart glasses eventually replace our current-day smartphones?
JM: At APX, we don’t believe that the current handheld mobile market (tablets and smartphones) is disappearing any time soon. However, smart glasses are logical extensions to the handhelds in that they allow access to real-time environmental information and consumption of content, enabled by the heads-up, hands-free nature of smart glasses. In any event, there is a tremendous market in wearable smart devices forming right now, and we believe smart glasses will play a big part in it.
In the near term, APX sees this technology really changing life for people we call “deskless workers”, those who work with their hands and for whom having access to a plethora of hands-free information is a major benefit.
Over time, as the form factors become more stylish and the capabilities increase, we absolutely see smart glasses moving into the consumer market. It’s easy to imagine a future where the technology is small enough and power efficient enough that it can completely replace hand-held devices.
As evidenced by this year’s CES, wearable computing is getting incredibly hot. The enterprise is starting to see the benefits of hands-free smart glass computing given hardware that is viable and available today. The public definitely wants this technology to come to market. Although we think the deskless worker can use the technology that’s ready now, we think that widespread consumer adoption is still a few years out.
John A. Martellaro is the CTO of APX Labs in Herndon, VA, which he co-founded in 2010. He directs the company’s technology development, aligning its high tech vision with business strategy. After five years of high performance network computing research within the United States Department of Defense, John joined Battlefield Telecommunication Systems as their Director of Special Projects. Mr. Martellaro holds an M.S. and a B.S. in Computer Engineering from the Rochester Institute of Technology.