August 19, 2016
John Haddick is the chief technology officer of the Osterhout Design Group (ODG).
John Haddick of ODG explains why the company prioritizes image quality over other tradeoffs to ensure its smartglasses can support real work without causing eyestrain.
PwC: John, can you please tell us about you and your company?
John Haddick: Sure. I’m the CTO for the Osterhout Design Group, also called ODG. ODG has worked in head-worn products for more than 30 years.
Head-worn products in the past have been limited to very specific needs or functions. For example, a diver's mask can show the pressure in your air tank while you're swimming, so you don't need to stop swimming to read the pressure. Another example is night vision goggles.
Today, head-worn products could be considered a platform rather than a specific solution for a given task. We recognized that transition really taking hold in about 2008, and we focused on a platform approach to head-worn computing or head-worn displays.
At ODG, we specifically prioritized hands-free information, and therefore we chose image quality over all other metrics. We've made sure all of our displays have sufficient image quality that a person can't see the individual pixels; the image looks almost photographic. We wanted our solution to be used for doing work rather than just for notifications, hence the priority on image quality and see-through clarity.
We feel high image quality will drive the largest volume market, especially with industrial and commercial customers. And then as we grow that image quality and field of view, more and more entertainment and consumer applications will become possible.
PwC: What use cases are you seeing for industrial and commercial customers?
John Haddick: We see applications that really value our image quality. For example, in various medical areas, the accuracy of the imagery is super critical to successfully performing the task. If the color accuracy or balance is wrong, doctors may not be able to distinguish live tissue from dead tissue during surgery.
There are many use cases where the stakes are high. Stakes are high in the medical industry because a patient's life depends on the accuracy of information. In the airline industry, thousands of planes, each costing hundreds of millions of dollars, carry billions of dollars of liability on board. Oil and gas is another high-stakes industry. The capital cost of the equipment is very high, so companies put a premium on deploying and using any device that improves the safety of people and assets.
“The accuracy of the imagery is super critical to successfully performing the task.”
PwC: What is stopping enterprises from rushing to adopt smartglasses?
John Haddick: Some challenges have to do with wearability and usability. It's easy to have a great demo, but deploying occupationally is very different. People work eight hours a day. When users must wear smartglasses for hours at a time, issues like eyestrain become very important. You would never notice that in a short demo or at a trade show.
We learned very early on that we cannot do anything that hinders the person's see-through abilities, because all of their day-to-day interactions with the world happen in the see-through view. You want the display experience to be as close to viewing a TV or a computer monitor as possible, and so we set that as our metric at the very beginning. Image quality factors such as sharpness, resolution, brightness matched to the environment, and, critically, distortion must not cause undue eyestrain during an eight-hour shift. That is immensely difficult in optical design.
We must get the high-quality image out to every corner of the display. If the alignment isn’t quite right, that will cause eyestrain, and people simply won’t use the device.
“Today, head-worn products (smartglasses) could be considered a platform rather than a specific solution for a given task.”
PwC: How big is the field of view your products offer?
John Haddick: The R-7 we're launching now is in the range of 30 degrees, which is the same field of view as when you're looking at a standard laptop, holding a tablet at bent-elbow distance, or looking at a 65-inch TV screen at the recommended distance. Also, 30 degrees is a magic number because that's about the range you can comfortably cover by moving only your eyes, without moving your head: the center 15 degrees plus about 7 degrees of movement on each side.
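As a rough sanity check on those comparisons, the horizontal field of view of a flat screen follows from simple trigonometry. The screen width and viewing distance below are my own assumed numbers chosen to illustrate the 30-degree figure, not values from the interview:

```python
import math

def horizontal_fov_deg(width, distance):
    """Horizontal field of view, in degrees, of a flat screen of the
    given width viewed head-on from the given distance (same units)."""
    return math.degrees(2 * math.atan(width / (2 * distance)))

# A 65-inch 16:9 TV is about 56.7 inches wide: 65 * 16 / sqrt(16^2 + 9^2).
tv_width_in = 65 * 16 / math.hypot(16, 9)

# Assumed viewing distance of about 106 inches (~2.7 m), picked so the
# screen subtends roughly the 30 degrees quoted in the interview.
print(round(horizontal_fov_deg(tv_width_in, 106), 1))  # roughly 30 degrees
```

The same function applies to any flat display, which is why a laptop, a tablet, and a large TV can all subtend the same 30-degree angle at their typical viewing distances.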
The experience drives the display choice. For example, when you're typing text, you want to move only your eyes, within a comfortable range, without moving your head. We will widen the field of view as resolution keeps up, so we can maintain the same image quality.
The situation is different for gaming or entertainment, where users need a wider field of view. We are working on larger field-of-view systems that are more immersive for an entertainment experience—the larger the field of view, the more immersive the experience. Virtual reality goggles, for example, have a field of view that’s more than 100 degrees, and some reach 200 degrees.
PwC: You mentioned color accuracy before. How easy is it to get the right color accuracy?
John Haddick: It’s the hardest thing you could choose to do first. People need the accuracy so they feel comfortable relying on the system. All consumers have been trained to expect high image quality. Today they have these amazing laptops, tablets, phones, and TVs. Consumers are accustomed to that and expect that.
When you’re developing a product, there’s always that moment when you realize it’s no longer just an idea but a real tool that anyone can use. For me, it happened with the glasses a while ago when I was working on integrating them with a customer’s remote video system. I didn’t have a remote camera source handy to test, so I plugged the glasses directly into my laptop by using an HDMI to USB connector. I had a bunch of drawings and diagrams and schematics open on my laptop. Those drawings popped into my view in the glasses, and I realized that I could comfortably use the glasses instead of my computer monitor to work on the drawings. I finished what I was working on right in the glasses. You can’t do that if you don’t have the image quality. That’s the difference between a tool and a notification system.
PwC: Which optical technology are you using—prisms, mirrors, waveguides, or something else?
John Haddick: Over the years, we have used everything imaginable. While we've delivered systems to customers with each of those technologies, we've focused on using conventional optics in very creative ways. We use refractive and reflective optics, and right now, for example, the R-7 has a flat mirror in front of the eye. The mirror has some very special coatings and other treatments, but it's essentially a flat element, because we don't want to distort a person's see-through view at all.
Anytime you use one of the other techniques [waveguides, freeform surfaces, and so on], it effectively spreads or interrupts the optical wavefront of the displayed image and interferes with a person's see-through view. All of those techniques also introduce manufacturing challenges, so the design involves tradeoffs.
Distortion versus sharpness is the most obvious tradeoff and always comes up. Another tradeoff is uniformity of color and brightness. With the other techniques, it is pretty much impossible for most manufacturers to maintain image quality across all temperature ranges, humidity levels, and manufacturing tolerances.
PwC: Is your system monocular or binocular?
John Haddick: It is binocular. What we are shipping today has a resolution of 1280 by 720, and each display is driven independently at 80 frames per second. It is a full see-through display: more than 80 percent transparent, with no see-through distortion.
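A quick back-of-the-envelope on those numbers: combining the 1280-pixel horizontal resolution with the roughly 30-degree field of view mentioned earlier gives the display's angular pixel density. The comparison against the common rule of thumb that 20/20 acuity resolves about one arcminute is my addition, not a claim from the interview:

```python
# Angular pixel density for the R-7 figures quoted in the interview:
# 1280 horizontal pixels spread over a roughly 30-degree field of view.
h_pixels = 1280
fov_deg = 30.0

pixels_per_degree = h_pixels / fov_deg       # about 42.7 pixels per degree
arcmin_per_pixel = 60.0 / pixels_per_degree  # about 1.4 arcminutes per pixel

print(round(pixels_per_degree, 1), round(arcmin_per_pixel, 2))
```

At about 1.4 arcminutes per pixel, each pixel sits in the neighborhood of the one-arcminute acuity ballpark, which is the regime where individual pixels become hard to pick out in normal use.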
We also can support both 2D and 3D experiences. We have two separate high-speed, low-latency display pipelines, one for each eye. For a 2D experience, we just mirror the display to the two eyes. For 3D, different signals are sent to each eye. If you are rendering augmented reality content, the developer typically wants it in 3D so it has depth to its placement as well as scale, and it automatically renders separately to the two eyes, from the two different viewpoints each eye would have.
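The two-pipeline idea can be sketched as follows. This is a generic illustration under assumed names and an assumed interpupillary distance, not ODG's actual rendering code: for 2D the same viewpoint is mirrored to both eyes, while for 3D each eye's camera is offset by half the interpupillary distance.

```python
# Assumed interpupillary distance in meters (~63 mm is a common average);
# this value and the function below are illustrative, not from ODG.
IPD = 0.063

def eye_positions(head_pos, right_axis, stereo):
    """Return (left_eye, right_eye) camera positions.

    head_pos: (x, y, z) of the head/camera center.
    right_axis: unit vector pointing to the viewer's right.
    stereo: False mirrors one viewpoint to both eyes (2D mode).
    """
    if not stereo:
        # 2D: identical viewpoint, mirrored to both display pipelines.
        return head_pos, head_pos
    half = IPD / 2
    left = tuple(p - half * r for p, r in zip(head_pos, right_axis))
    right = tuple(p + half * r for p, r in zip(head_pos, right_axis))
    return left, right

# 3D mode: two viewpoints 63 mm apart along the x axis.
left_eye, right_eye = eye_positions((0.0, 1.6, 0.0), (1.0, 0.0, 0.0), stereo=True)
```

Each returned position would feed its own view matrix, one per display pipeline, which is what gives rendered AR content depth and scale in its placement.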
“Having very clear cues that match the real world is a big part of how you help the user and reduce eye fatigue.”
PwC: How do you help the user focus on the digital content in relation to the physical world?
John Haddick: We know that if you don't match the person's perception with what they expect, it causes extra cognitive load, because users must work out whether they are looking at digital content or at the world. It fatigues people.
To address this issue, we take advantage of one of the brain's most impressive capabilities. For example, you're walking down the street and looking in a store window at something you really like. In the reflection of the window, you notice a person walking across the street or a car going by. Your brain has no problem interpreting that, as long as it's consistent, predictable, and well calibrated. People are very, very comfortable with that.
With very good image quality, the brain gets a very clear cue that something is out of focus because it's at a different focal plane; if the image in the display is bad, that cue is lost. I think having very clear cues that match the real world, like the reflection in the window, is a big part of how you help the user and reduce eye fatigue. The point is to preserve what the brain is already very comfortable doing. That's something we've prioritized greatly.