April 15, 2016
by Chris Curran
The performance of optical systems in smartglasses is an important gating factor for the adoption of augmented reality. New methods, less-expensive manufacturing processes, and adapting to human perceptual capabilities are shaping the path to improved performance and lower costs.
Optics technology is at the core of defining the augmented reality (AR) experience delivered by smartglasses. The optics generates the display that the user perceives and interacts with. The adoption of AR smartglasses in the future will likely be gated by the evolution of optical components, their performance, and their cost.
As AR technology stands today, smartglasses with superb optics could be ideal for many enterprise uses but might be so heavy that no one could wear them as long as required. Or a company might find a high-quality device so expensive that adoption is not financially feasible. Issues like these limit the technology to a narrow set of use cases where the device can be worn for short durations and the high cost can be justified.
AR holds great promise as a productivity tool. Smartglasses are emerging as an important form factor for delivering AR solutions, especially in the enterprise. Their appeal stems from the fact that they are wearable, much like eyeglasses, and hands-free, allowing users to stay engaged with the real world while they work.
This article examines the trends in optics for smartglasses. The success of any AR application depends on the experience enabled by the smartglasses, including image quality, size, weight, and power consumption—all of which impact the cost. Design tradeoffs currently force many compromises.
The role of optics in AR
AR uses optics in two ways: to display information for users on the job and to acquire visual information for computer processing. The acquisition technology, originally developed for still and video cameras in various applications, is mature, sophisticated, and capable of delivering high quality in several device form factors.
The big challenge is image display, which has greater constraints and complexities. Image display in smartglasses is different from the display on a PC, TV, tablet, or any other device, because smartglasses must blend visual information coming in naturally with that generated by the computer and project it in a user’s field of view without being distracting. “There is a thin line between augmenting someone’s experience and causing a distraction,” cautions Sumanta Talukdar, CEO of WaveOptics.
Color, brightness, sharpness, or any other input that directs the user’s gaze away from where it should be focused can be a distraction. Smartglasses often must present complex visual information to users, and that presentation must have the quality necessary for text, images, and sometimes video.
The challenge is to take visual information generated by a light source, such as a mini projector generally placed above the eye or on the temple, and to display the information in front of the eye without blocking the view of the physical world. This approach is called an optical see-through (OST) display, as it allows users to see through any optical components in the line of sight.
A contrasting approach is video see-through (VST). VST captures the world through a video camera, combines it with digital content, and projects it to the user on an opaque display. (See Figure 1.) OST is primarily the choice for enterprise AR, because it allows engagement with the physical world, maintains peripheral vision, and weighs less. In contrast, virtual reality (VR) goggles do not need a see-through capability and therefore have fewer challenges from an optics standpoint than AR.
Figure 1: Smartglasses have two main configurations: optical see-through and video see-through. Optical see-through is primarily the choice for enterprise AR; it allows engagement with the physical world, maintains peripheral vision, and weighs less.
There are many optical methods that take information and present it to the user. Common methods include:
- Prisms and beam splitters: Prisms are transparent optical elements that bend and redirect light. Beam splitters use similar technology to split incoming light and send it in two directions simultaneously. The reflected light is projected directly onto the user’s retina. Google Glass uses a prism to redirect the image into the eye.
- Mirrors: The basis for many optical instruments, mirrors can be used to redirect and focus light. Depending on how they are designed and manufactured, they can transmit light from one direction and reflect light from another. Osterhout Design Group (ODG) uses a mirror with a special coating in its R-7 smartglasses.
- Waveguides: These devices channel light along a path as in an optical fiber, and they are used widely in telecommunications and electronics. In smartglasses, waveguides direct light from tiny displays housed in the temples of the glasses toward the lenses in front of the eye. Vuzix was the first to use waveguides in 2013.
- Diffraction gratings: Closely etched lines on a substrate manipulate light waves, and their interactions can create 3-D images, among other things. The image is formed in the user’s field of view rather than projected into the user’s eyes. The Microsoft HoloLens uses diffraction gratings.
- Light field: This term is defined as the amount of light flowing in every direction through every point in space. It is emerging as an alternative method for displaying 3-D objects that appear more realistic than those created by providing different left and right images in a stereoscopic display. Magic Leap states it is using light field technology in its smartglasses.
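The light-field concept can be made concrete with a toy model. The sketch below, in Python with NumPy, assumes the common two-plane parameterization, in which L[u, v, s, t] is the radiance of the ray through aperture point (u, v) and image point (s, t); it illustrates the idea only and is not any vendor’s implementation:

```python
import numpy as np

# Toy two-plane light field: L[u, v, s, t] is the radiance of the ray
# passing through aperture point (u, v) and image point (s, t).
rng = np.random.default_rng(0)
U, V, S, T = 3, 3, 4, 4                      # 3x3 viewpoints, 4x4 pixels each
light_field = rng.random((U, V, S, T))

# Fixing (u, v) gives one sub-aperture view: the scene as seen from a
# single point on the aperture plane (one candidate eye position).
center_view = light_field[U // 2, V // 2]    # shape (4, 4)

# Averaging over all (u, v) approximates the image formed when the full
# aperture contributes, as in a conventional flat display.
full_aperture = light_field.mean(axis=(0, 1))
```

Because every ray direction is represented, a light-field display can in principle present correct depth cues to each eye rather than a single flat image, which is what makes the resulting 3-D objects appear more realistic than stereoscopic pairs.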
All of these methods are in use today, as no single method covers the wide range of use cases and situations. All are evolving through ongoing innovation and R&D, and together they are narrowing the gap between the prevailing and desired characteristics of optical components. (See Figure 2.)
Figure 2: The key trends in optics are narrowing the optical capabilities gap.
Trends shaping progress
For any user, the experience of using smartglasses depends on a few key optical characteristics: eyebox (which determines how freely the head can move), field of view, and image quality. Ideally, each should be as large as possible. However, increasing any of them leads to bigger optical components, greater size, more weight, discomfort, and higher cost. Therefore, vendors routinely make tradeoffs.
Below is a list of many of these tradeoffs. Several approaches are improving the ability of designers to make fewer tradeoffs, and these approaches hold promise for much better smartglasses in the future. The actual tradeoffs made depend on the particular application or use case.
- Display size: The greater an area the user needs to see at one time, the larger the display must be. But the larger the display, the more awkward and heavier the smartglasses.
- Weight: A direct function of display size, the weight of smartglasses is typically carried at the user’s temple. The heavier the display, the more uncomfortable it is to wear smartglasses for extended periods. There is also the issue of weight balance. For a monocular system, which presents the image to one eye, all the weight is on one side. Too much weight can make balance difficult to achieve. Binocular systems have an advantage in balance, but they double the amount of optics, which increases the weight.
- Image quality: Sharpness, resolution, distortion, brightness, and color accuracy are the basic aspects of image quality. How much is needed depends on the application: a lower-end display might support text annotation, while video would require something more sophisticated. As the image quality increases, so does the cost.
- Field of view: Humans can see approximately 120 degrees vertically with normal binocular vision. That is much wider than the range of 14 degrees to 40 degrees in commercially available smartglasses. Wider fields of view force larger displays, which affect size, weight, and comfort. Narrower fields of view may cut off important visual information, particularly if a system uses a gesture interface and users cannot see their hands to confirm they are making the correct motions.1
- Eyebox: Also sometimes called a head motion box, the eyebox is the region within which the eye can move and still receive the projected light. The larger the eyebox, the more freedom users have to move their heads; movement too far up, down, left, or right causes the display to disappear because no light reaches the eye. But the larger the eyebox, the bigger the optics and the heavier and bulkier the device.
- 2-D vs. 3-D: Designers and manufacturers now can create the perception of 3-D in various ways, whether using binocular techniques to replicate how the eyes see slightly different angles on the same object or integrating holographic projections.
- Cost: Nearly every factor that improves the quality of the image and experience drives up the cost. As the cost increases, so does the need to get higher levels of return on the investment to justify the expenditure.
- Power: The two considerations are delivering power and sustaining it. The larger the display or the brighter the image, the more power is consumed, which affects the amount of power that must be delivered. As for sustaining the needed level, the more power, the larger the battery pack needed and the greater likelihood that batteries or smartglasses will need to be swapped during the day, which increases cost.
- Comfort: People only use tools that make a job easier, not harder. Smartglasses that must be used for hours at a time must also be comfortable. Increased weight and size reduce comfort.
- Environment: The location where people use smartglasses can add complications to the optics choice. Ambient light levels affect how bright the display must be. Environmental conditions may require a degree of ruggedness that can affect weight, size, comfort, and power.
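The geometry behind the display-size and field-of-view tradeoffs above can be sketched with simple trigonometry. A minimal Python example follows; the 18mm image width and 25mm eye distance are illustrative values, not any product’s specification:

```python
import math

def field_of_view_deg(image_width_mm: float, eye_distance_mm: float) -> float:
    """Angle subtended at the eye by a virtual image of the given width
    viewed at the given distance (simple pinhole geometry)."""
    return math.degrees(2 * math.atan(image_width_mm / (2 * eye_distance_mm)))

# An image ~18mm wide at ~25mm from the eye subtends roughly 40 degrees,
# near the top of today's smartglasses range. Widening the image (or
# moving it closer) widens the field of view, and the optics grow with it.
print(round(field_of_view_deg(18, 25), 1))   # ~39.6
```

The formula makes the tradeoff explicit: field of view scales with the size of the optical aperture in front of the eye, which is why wider views have historically meant bigger, heavier devices.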
Improving tradeoffs: New methods on the horizon
The typical field of view for smartglasses today is between 20 degrees and 40 degrees. Increasing the field of view or eyebox has usually increased the size (and weight) of the device. That is why VR goggles, which have a field of view greater than 100 degrees, are much bulkier.
Waveguide technology has made good progress in changing this tradeoff, making it possible to achieve a larger field of view with smaller optical components. Waveguide lenses can be very thin (less than 2mm, compared with about 19mm for prisms).
On the horizon are new optical methods that promise even better tradeoffs. Innovega has developed a solution that combines a contact lens with embedded optical components and a pair of glasses to generate a see-through virtual display. Bringing the optical component close to the eye creates the potential to achieve a field of view of 120 degrees at significantly smaller size and weight.
The University of North Carolina and Nvidia have collaborated to develop what they call pinlight display, a technique in which transparent point light sources project directly into the eye, creating the potential for a field of view greater than 100 degrees.
Freeform optics is an emerging method that could achieve a sunglass-like form factor. In freeform optics, optical elements do not need to be symmetrical and can take seemingly arbitrary shapes, giving engineers more flexibility in their designs. Although manufacturing freeform optical elements is a challenge, such elements can achieve optical performance previously impossible while being lighter, smaller, and less expensive than other methods. Freeform optics also offers the potential to add more capabilities into the optical system. Researchers have used freeform optics to add eye illumination and imaging, so that eye tracking is integrated with the display in a compact package. (See Figure 3.)
Figure 3: A compact optical system that uses freeform optics and integrates illumination (NIR LED) and sensing (NIR sensor) to have eye-tracking capabilities along with a virtual display and see-through view.
Source: Adapted from Hong Hua, “Sunglass-like displays become a reality with free-form optical technology,” SPIE Newsroom, August 20, 2012.
Reducing eyestrain and weight: Extending wearability
To be successful, smartglasses should be comfortable to wear for long periods of time. “When users must wear smartglasses for hours at a time, then issues like eyestrain become very important. You would never notice that in a short little demo or at a trade show,” says John Haddick, CTO of ODG.
Eyestrain can happen when workers must shift focus between the augmented information and the task at hand or when image quality is poor. Vendors are trying many approaches to eliminate eyestrain. One approach takes inspiration from TVs and PCs. “You want the display experience to be as close to viewing a TV or a computer monitor as possible,” Haddick says. Improving image quality over the full display (including corners) and focusing the information at the appropriate depth (when 3-D) are some ways to achieve a TV-like experience.
Another approach for reducing eyestrain is to take advantage of a human’s natural perceptual capabilities. “The human brain is highly adaptive; it can pull off a lot of tricks in how people perceive. We should take advantage of that and make the optics do only what it needs to do and let the brain do the rest of it,” Talukdar says. For instance, if a person is looking into a store window and someone walks behind them, that person will not be confused by the reflection they will see in the window. “Your brain has no problem interpreting [the reflection] as long as it’s very consistent and predictable and calibrated well,” Haddick explains.
In addition, the slight imperfections that humans see and expect in day-to-day life actually tell their physiology that the object they’re seeing is real. “If imagery that was perfect to the detail were injected into your eye, then you would know that it is not real,” cautions Talukdar.
Providing clear cues that match the real world will reduce eye fatigue. The broader optical goal is to make augmented information or objects appear indistinguishable from their physical surroundings, so the experience matches what humans are already accustomed to in the physical world.
Reducing the weight of smartglasses has a long runway for improvement. Smartglasses today weigh between 100 grams and more than 500 grams (not including cables and accessories). In comparison, typical eyeglasses are an order of magnitude lighter, between 20 grams and 40 grams. Rimless eyeglasses can be less than 10 grams. Magnifiers that dentists and jewelry technicians use can range from 50 grams to 250 grams.
While the optics system is a fraction of the total weight, it is an important driver when adding up the weight of the lenses, mini projectors, shields, and so on. Enterprises will likely pay a premium for lighter smartglasses, just as consumers do for lighter eyeglasses. The new methods discussed previously will help to reduce the weight and improve the wearability of smartglasses.
Reducing cost: New materials and new manufacturing processes
Many factors affect the cost of the overall device, and optical components are a significant contributor. Even as optical components have become smaller, manufacturing has continued to rely on grinding glass, an approach that is expensive and keeps costs high.
“The manufacturing of microdisplays is one area that can potentially disrupt this industry, as there likely will be a sharp decline in prices,” suggests Talukdar. WaveOptics has developed a design that allows the company to use a widely available material—plastic, for which there are mature manufacturing processes—to manufacture at high volume and low cost. The company spent two years developing a design that would work with plastics. “We have engineered the choice of plastic into the DNA of our designs,” Talukdar says.
Carl Zeiss has developed a lens design that uses internal reflection and a display at the edge of the lens to project directly into the eye. The form factor is closer to prescription eyeglasses. The company’s lens design can be manufactured from injection-molded polycarbonate, making possible mass production at much lower costs.
Over the longer time horizon, developments in materials science could open up other possibilities for combining sophisticated optical capabilities in compact packages. Nanomaterials have already been shown in the lab to enable invisible metal–semiconductor photodetectors.
The bold promise of smaller, cheaper, better
The optical system size, weight, performance, and cost are important gating factors for the adoption of smartglasses and their future success. The systems today are good, much better than the previous generations, but significant improvements are still needed. The good news is that innovation and progress are robust, and many alternative approaches are in play. While it is difficult to know which approach will win in the long run, progress is inevitable.
The Holy Grail is to fuse digital content with the physical world in a manner so they are indistinguishable from each other. That will reduce eyestrain, tap into well-evolved human perceptual capabilities, and allow users to perform real work while interacting richly with the physical and digital worlds.
Ultimately, optical advancements will expand the depth and breadth of use cases that smartglasses can support, and smartglasses will become a compelling computing platform that redefines the experience of engaging with the physical and digital worlds.
- Bettina L. Beard, Willa A. Hisle, and Albert J. Ahumada, Jr., Occupational Vision Standards: A Review, NASA, 2002.