August 19, 2016
Sumanta Talukdar is the CEO and cofounder of WaveOptics.
Sumanta Talukdar shares how a new optics design opens up optical components to new manufacturing techniques and lower prices.
PwC: Sumanta, can you please provide a brief background on you and your company?
Sumanta Talukdar: Sure. I’m the CEO of WaveOptics. My background is in computer science and optics, or photonics, with a Bachelor of Engineering degree followed by a PhD. My experience in optics and photonics spans displays, medical systems, sensors, and head-up displays [HUD].
WaveOptics is in the augmented reality [AR] market, and our focus is on optics—more specifically, on a particular type of waveguide-based augmented reality display. Our team has more than three decades of deep experience in the field of augmented reality. WaveOptics is composed of professionals from the consumer electronics, medical, gaming, and defense industries. Our core optics team, for example, comprises individuals who come from a background of developing and deploying HUD solutions for defense applications.
PwC: Why waveguides?
Sumanta Talukdar: Since the start, our mission has been to deliver the AR experience that is necessary for successful use. From an optics point of view, the experience can be deconstructed into parameters such as eyebox, head freedom, field of view, and image quality.
With other methods, such as prisms and mirrors, when you want a higher field of view, for example, you will need a bigger eyebox, bigger lenses, and so on. That’s just following conventional laws of physics. So the device will be bigger, bulkier, and heavier, and it will limit head freedom. Using our patented waveguide technology, we can deliver on the optical specifications while having a device that is inherently wearable.
If the use case requires only limited head freedom and relatively small fields of view, such as 10 degrees to 12 degrees, then conventional prisms or mirrors might be adequate. With waveguides, we can deliver fields of view that are 40 degrees and higher with large eyeboxes, which means more head freedom without an equivalent increase in size and weight of the device.
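To put those field-of-view figures in rough perspective: the apparent width of a flat virtual screen grows with the tangent of half the field of view, so 40 degrees buys considerably more than three times the screen of 12 degrees. The sketch below is a minimal illustration assuming an arbitrary 2 m viewing distance; it is not based on any WaveOptics specification.

```python
import math

def apparent_width(fov_deg: float, distance_m: float) -> float:
    """Width of a flat virtual screen spanning fov_deg at the given distance."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

# Illustrative: virtual screen placed 2 m from the viewer.
for fov in (12, 20, 40):
    print(f"{fov:>2} deg FOV -> {apparent_width(fov, 2.0):.2f} m wide")
```

At 2 m, a 12-degree field of view yields roughly a 0.42 m wide screen, while 40 degrees yields about 1.46 m, which is why the larger field leaves so much more peripheral area for content.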
“A larger field of view lends itself intrinsically to a more engaging and immersive experience and is therefore of considerable value.”
PwC: Why is a larger field of view desirable?
Sumanta Talukdar: Through tests with our clients, we’ve learned and validated that end users need information that in most cases must be overlaid on the periphery of their field of view. They do not want the information overlaid directly in the center of the field of view, because rather than adding to the productivity of their task, it distracts them from what they actually need to pay attention to. In enterprise use cases, the only space you can really use to provide the information is on the outside, on the periphery.
If you can use only the periphery, then the size of the virtual screen you need so you can show more information becomes very important. In the trials we’ve run with our clients, we started with 20 degrees. The trials showed that we basically needed a larger virtual screen; that is, we needed a larger field of view so we could have a larger peripheral area to put information in.
Across enterprise and consumer markets where the value is heavily content driven, a larger field of view lends itself intrinsically to a more engaging and immersive experience and is therefore of considerable value.
PwC: What is your opinion with respect to monocular or binocular display of information?
Sumanta Talukdar: We strongly maintain that AR for only one eye is not just counterproductive but also uncomfortable. We have the capability to enable both monocular and binocular displays, and many members of our core team, myself included, come from a background of head-up displays in defense. We have experience deploying real solutions into very challenging environments.
People point to defense forces and say, “Well, monocular has been used in defense all the time.” That’s correct, but those are people who are highly trained to use monocular systems. Deploying a monocular system to non-defense personnel is something we found extremely challenging from the standpoint of ease of use and added value for the end user or operator. More often than not, a monocular approach is counterproductive. So we deploy binocular solutions to our customers.
“What people see as reality is not just a question of intensity, color, and angles but also all the nuances that make humans perceive reality.”
PwC: How about 2D versus 3D? What issues come up there? Will future solutions be a mix, or will they gravitate to 3D-capable devices?
Sumanta Talukdar: For most of our enterprise customers that have real problems, I’d say most if not all of those problems require 2D solutions. Most of these use cases are very simple and rely on the display of instructions, symbols, schematics, and things like that. So there is an argument to be made about whether 3D is actually needed for these use cases.
Our solution is 3D capable, and we can do light fields with our waveguides. We are not seeing any market pull from our enterprise clients for full 3D at the moment. The consumer market is different, and for that, yes, we are focusing on full 3D.
Obviously, first we need to have a conversation around what exactly 3D is. The term 3D is often used quite loosely. Just providing stereoscopic images to the left and right eyes isn’t necessarily the correct or the appropriate 3D experience.
PwC: Then what would be an appropriate experience?
Sumanta Talukdar: We would accomplish 3D by using light fields. It is a way to provide a visual experience that looks more like a real 3D object than the experience provided by using simple left and right stereoscopic images. The main differentiator between a light field camera and a typical camera is that the light field camera also captures angular information.
What reaches the eye isn’t just 2D images and a collection of intensity and color information, but also angular information. With angular information, we can create imagery that looks a whole lot more 3D than the imagery created by typical left and right stereo. Also, eyestrain is less of a problem because eyes do not need to accommodate separate left and right stereoscopic images.
PwC: What can you say about the future of AR experiences? Will they blend the physical and digital worlds?
Sumanta Talukdar: This question is very core to the way we work. We strongly believe that our capabilities and technology present the opportunity to insert digital information into the real world so that the digital augmentation is indistinguishable from what a person perceives as reality. This approach is contrary to virtual reality, for example, where the world is synthetic and a person is expected to accept that it’s a synthetic world.
What people see as reality is not just a question of intensity, color, and angles but also all the nuances that make humans perceive reality—nuances such as the slight imperfections in what humans see around them in day-to-day life that tell their physiology that the object they’re seeing is real. If imagery that was perfect in every detail were injected into your eye, you would know that it is not real, and the goal of suspension of disbelief would not have been achieved.
I believe there is a thin line between augmenting someone’s experience and causing a distraction. And what is a distraction? A distraction is not just something that is unpleasant. It’s something that directs your gaze when it shouldn’t. For example, if we create imagery that makes the user focus on it longer than necessary, maybe that’s not the effect we’re looking for.
“There is a balance between trying to create the perfect optical system and taking advantage of what human physiology is also doing.”
PwC: How can AR systems avoid causing such distractions?
Sumanta Talukdar: There is a balance between trying to create the perfect optical system and taking advantage of what human physiology is already doing. The human brain is highly adaptive; it can pull off a lot of tricks in how people perceive. We should take advantage of that, make the optics do only what they need to do, and let the brain do the rest.
Here is a simple example: I can make an augmented image seem more opaque just by increasing its relative brightness compared to its surroundings. If I have some text in front of your eyes—A, B, and C—it doesn’t matter what color the letters are, but for the sake of argument let’s say they are all red or all blue. If I increase the brightness of the B, the B will seem more occlusive than the A and the C, and that’s just because your eyes adapt to the higher intensity and therefore darken the background. To do the same thing using optical physics would be pretty difficult.
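The relative-brightness effect described above can be put in rough numbers using Weber contrast, a standard measure of how strongly a target stands out from its background. The luminance values below are purely illustrative, not measurements from any real display.

```python
def weber_contrast(target_luminance: float, background_luminance: float) -> float:
    """Weber contrast: how strongly a target stands out from its background."""
    return (target_luminance - background_luminance) / background_luminance

background = 100.0  # cd/m^2, illustrative ambient luminance
letter_a = background + 50.0   # baseline overlay brightness (letters A and C)
letter_b = background + 200.0  # boosted overlay brightness (letter B)

print(weber_contrast(letter_a, background))  # 0.5
print(weber_contrast(letter_b, background))  # 2.0
```

Quadrupling the added light raises the B’s contrast fourfold, and adaptation to that brighter target makes the surrounding scene appear dimmer, which is the perceived-opacity trick being described.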
PwC: Essentially, because something seems much brighter and the background is dimmer, you have the sense that you can’t see through it, right?
Sumanta Talukdar: Exactly. We are creating AR by adding light. The flip side, and ultimately one of the hard problems in AR, is creating the perception of a black image. If you’re creating AR by injecting light into people’s eyes, then how do you create an occluding image? That is the same question as how you create a black pixel: something that blocks out the real world.
There are ways to accomplish shades of gray, but creating an actual solid black pixel is very challenging. It is not like a computer or television screen, where you turn off the light but everything around it is lit, so it looks black in comparison. If you turn off a pixel in an AR display, you’ll still be able to see the outside world. There’s no such thing as a black wavelength. That’s one challenge that anyone who is serious in this business will need to solve. It is also one of the key nuances you need in your toolkit to create true suspension of disbelief.
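The black-pixel problem follows directly from the arithmetic of a see-through display: the optics can only add light to the scene, never subtract it. A minimal sketch of the two compositing models, using single normalized luminance values rather than any real display pipeline:

```python
def additive_display(real: float, overlay: float) -> float:
    """See-through AR adds the overlay's light on top of the real scene."""
    return real + overlay

def alpha_composite(real: float, overlay: float, alpha: float) -> float:
    """A hypothetical occluding display could attenuate the real scene
    (alpha = opacity of the overlay pixel)."""
    return (1.0 - alpha) * real + alpha * overlay

scene = 0.8  # bright real-world pixel (normalized luminance)

# A "black" overlay pixel (0.0) changes nothing in an additive display...
print(additive_display(scene, 0.0))      # 0.8 -- real world still visible
# ...whereas a truly occluding pixel could darken the scene completely.
print(alpha_composite(scene, 0.0, 1.0))  # 0.0 -- fully black
```

In the additive model, an overlay value of zero leaves the scene untouched, which is exactly why "turning off a pixel" in a see-through AR display cannot produce black.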
PwC: What changes do you see on the horizon that will be promising for the AR industry?
Sumanta Talukdar: The manufacturing of microdisplays is one area that can potentially disrupt this industry, as there likely will be a sharp decline in prices. Very soon we will deploy our waveguides in plastic. We have engineered the choice of plastic into the DNA of our designs.
We did not come up with a clever idea to do something in glass and then start to think, “Right. Now how do we do it in plastic?” We spent two years crossing off a whole bunch of concepts because we knew they would never transfer to plastic. We then came up with a generic concept that lends itself to a variety of designs we knew could be transferred into plastic, and that’s what we are doing right now. We have solved that problem.
“We are working with our partners this year to bring to market a number of smartglass products targeting various verticals at price points that are a lot more realistic than what is advertised today.”
PwC: What price points are possible in the future?
Sumanta Talukdar: Price points would depend on the use case. For example, a solution for a delivery driver would be simpler. The user likely sits in a cabin protected from the elements most of the time and has easy access to power, perhaps Wi-Fi as well. That scenario leads toward an optimized solution that does not need to adapt to various conditions, and the price tag would reflect that.
On the other hand, a use case like field service would be very demanding because the environment is totally unconstrained. Users might work around the globe in any kind of landscape that is not necessarily clean, is exposed to rain or dirt, and has varying lighting conditions. Such a demanding situation will add complexity to the device, and the price tag would reflect that.
Not counting the AR software in the headset and the display system, the other components of a headset for such a use case are not that dissimilar to what you find in an average smartphone. So the price is really driven by the cost to make the waveguides. We are working with our partners this year to bring to market a number of smartglass products targeting various verticals at price points that are a lot more realistic than what is advertised today.
I’ll let you draw your own conclusions about what kind of price benefit we would have after we start making waveguides in plastic and enjoying the economies of scale that come with that. We are looking at market entry products with certain clients in enterprise and consumer markets during the next 24 months.