August 22, 2016
Paul Davies is an associate technical fellow in Boeing Research and Technology.
Paul Davies of Boeing shares the AR projects in progress at Boeing, the lessons they have learned, and the benefits they expect.
PwC: Paul, would you please describe your role at Boeing and how augmented reality [AR] is important to Boeing?
Paul Davies: I’m an associate technical fellow in Boeing Research and Technology. My background is electrical engineering, specializing in digital signal processing, image processing, and machine vision technologies.
I’m part of the advanced production systems group. We work on a range of technologies that are aimed at improving quality, reducing cycle time, and reducing manufacturing costs. My team is working on a project called digitally assisted assembly technology, and within that project is our augmented reality work.
We’re using or piloting AR to help mechanics understand work instructions and the design intent of what they’re building, and to help them build it right the first time, faster, and with high quality and less rework required. We’re also piloting AR to train new mechanics, so they can learn the job faster and more efficiently.
PwC: How long has Boeing been working with AR, and what is the outlook for the future?
Paul Davies: Our history in AR goes back to 1989, when we had our first prototype system. In fact, Boeing researcher Tom Caudell is credited with coining the term augmented reality. He was creating a system to help aircraft electricians by combining virtual graphics with physical reality. We have been advancing and refining our solutions and exploring their use in many situations.
Today there’s more awareness of the technology and lots of discussion in mainstream media. As an industry, AR is on the cusp of enabling some really powerful and unique AR applications that wouldn’t have been possible a couple of years ago. In the next five to ten years, there will be a big increase in the use and adoption of AR.
PwC: Can you please describe the AR efforts you have under way?
Paul Davies: Sure. There are two threads to our work. One thread is basic technology development. We’ve built our own prototype AR systems, and we maintain those. For example, we’re developing shape-based or edge-based tracking approaches and using CAD [computer-aided design] models to track parts, so we can use AR in the factory.
“As an industry, AR is on the cusp of enabling some really powerful and unique AR applications that wouldn’t have been possible a couple of years ago.”
The other thread is doing pilot projects. We have five active pilot projects around the company. Four of the pilots are for our commercial and defense business units, and the goals are the improvements I mentioned earlier—to reduce the build time and to achieve higher quality. The fifth pilot focuses on training to help mechanics learn a job faster.
PwC: What information is being used to augment the user’s experience?
Paul Davies: What we display in the AR solutions varies. If the task is to install a unit, we take the CAD model for the unit and show it in its installed position. That can help a mechanic understand the orientation of something without having to figure it out or potentially make an error. For example, waveguides on satellites are like pipes; they have very odd shapes, and it’s not immediately clear which way they go. That can be confusing.
In another use case, augmentation helps the user through a long process. For example, one particular job has about 50 steps to install a certain part. The mechanic can hold a tablet device and click on the step number. The display will then show the tooling in 3D, such as drills, clamps, or whatever. Some of the displays are animated and show a mechanic how to attach a clamp. In parallel, the mechanic is getting audio cues to explain the task.
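The stepwise workflow Davies describes, where a mechanic selects a step number on a tablet and gets 3D tooling displays, animations, and audio cues, could be represented with a simple data model. This is an illustrative sketch only; every field and function name here is an assumption, not Boeing’s actual system:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class InstructionStep:
    """One step of an AR work instruction (hypothetical schema)."""
    number: int
    text: str
    tooling_models: List[str]          # CAD model IDs to render in 3D (drills, clamps, ...)
    animation: Optional[str] = None    # optional animation clip, e.g. how to attach a clamp
    audio_cue: Optional[str] = None    # narration played in parallel with the display

def step_for(steps, number):
    """Return the step the mechanic selected on the tablet."""
    return next(s for s in steps if s.number == number)
```

A roughly 50-step job would then be a list of `InstructionStep` objects, and the tablet UI would call `step_for(steps, n)` when the mechanic taps step `n`.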
PwC: What benefits are you seeing?
Paul Davies: We have done studies to understand processes such as assembly with and without AR. We gave several trainees the task of assembling a mock airplane wing. We split them into three groups. One group had instructions in PDF format on a desktop PC. Another group had the same instructions in PDF on a tablet. The third group had animated AR instructions on a tablet.
“When people see instructions through AR, they absorb them and learn them much faster and more accurately.”
These studies have shown that when people see instructions through AR, they absorb them and learn them much faster and more accurately. The trainees using the AR solution assembled the airplane wing 30 percent faster and with a 90 percent improvement in accuracy. Such evidence is very promising, and that is what we’re trying to capitalize on in our AR efforts.
PwC: You mentioned tracking. What methods are you using to recognize and track objects in the environment?
Paul Davies: There are a handful of ways to recognize objects, and we do all of them. In one of our pilot projects we use a marker, like a QR code. In another, we also use infrared motion capture systems. These systems are available from many manufacturers, they are really accurate and stable, and they produce really reliable data. But they’re not as scalable, and they’re quite expensive to install. So, we can’t put them everywhere.
We are also developing our own method, shape-based tracking. We’ve had some good success with this method and feel pretty confident that that’s the future of AR. We have CAD models for a lot of what we build. With the geometry and the shape of the assembly available to us as CAD models, if we can leverage them for tracking, then we can go anywhere in the factory and do AR. That is what we do with shape-based tracking, and it will be the key that unlocks a lot of use cases for us.
PwC: Would you tell us more about shape-based tracking? Is it different from edge-based tracking or mapping?
Paul Davies: Edge-based tracking is one type of shape-based tracking. As the name suggests, in shape-based tracking we are trying to match the shape and not just edges. We match color images of the target objects with predefined 3D CAD models. This does include matching the edges.
To do shape-based tracking, you need a 2D or a 3D camera—whichever technique you’re using—on a tablet or a head-mounted display; the camera captures the scene in front of the user. If the user is building an assembly, the camera is looking at the assembly.
The matching allows you to align what the sensor sees to the CAD model, so you can show relevant content. That’s generally how shape-based tracking works.
The difference between shape-based tracking and mapping is that mapping generates the model of the environment on the fly. Some systems do that well, but of course there’s room for error. When you use the shape of the CAD model, you’re bounding the error. You have many anchors that you can use to locate from, and you’re not generating a new map—you’re using the CAD design as the map.
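In a much-simplified form, the alignment Davies describes (matching what the sensor sees to the CAD model) can be sketched as a rigid 2D registration between sampled model points and observed image points, assuming correspondences are already known. Real shape-based tracking solves the harder 3D pose problem and establishes correspondences itself; this toy version only illustrates the idea of recovering a pose that maps the model onto the observation:

```python
import math

def estimate_pose_2d(model_pts, observed_pts):
    """Rigid 2D alignment (rotation + translation) of model points to
    observed points with known correspondences (2D Procrustes)."""
    n = len(model_pts)
    mcx = sum(x for x, _ in model_pts) / n
    mcy = sum(y for _, y in model_pts) / n
    ocx = sum(x for x, _ in observed_pts) / n
    ocy = sum(y for _, y in observed_pts) / n
    # Accumulate dot and cross products of centred point pairs;
    # the optimal rotation angle is atan2(cross, dot).
    dot = cross = 0.0
    for (mx, my), (ox, oy) in zip(model_pts, observed_pts):
        ax, ay = mx - mcx, my - mcy
        bx, by = ox - ocx, oy - ocy
        dot += ax * bx + ay * by
        cross += ax * by - ay * bx
    theta = math.atan2(cross, dot)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated model centroid onto the observed centroid.
    tx = ocx - (c * mcx - s * mcy)
    ty = ocy - (s * mcx + c * mcy)
    return theta, (tx, ty)
```

Once the pose is known, the system can render the CAD content in the same frame as the camera image, which is the anchoring step that both marker-based and shape-based approaches ultimately need.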
“We’re always looking to get the ROI from the technology we’re implementing. Some of the benefits are in terms of cost avoidance, because you avoid making mistakes.”
PwC: What are you learning about content for AR? Does much of it exist or do you need to create it from scratch?
Paul Davies: Content is an important issue. While we have a lot of content in terms of CAD models, assembly steps, and so on, the flashy, textured, animated tooling content that we also need is not available. For the pilot projects, we work with another group here at Boeing—called virtual manufacturing—to develop the content. In some cases, we show the digital design in its installed location. No new content creation is required: the design exists, and we just display that.
In terms of talent, you need 3D skills to develop the content. You need skills and knowledge in coordinate frame translations, mapping, engineering math, and matrix algebra. In general, the 3D mindset is very helpful in AR content development.
PwC: Where are smartglasses on your roadmap?
Paul Davies: We have several sets of smartglasses in my lab and in other labs around the company. We’ve interfaced one set of them with our AR system only in the lab for trials. They work fine, but they have some issues that prevent us from putting them on the shop floor just yet.
Field of view, battery life, and safety implications are a few examples of the shortcomings. A head-mounted display usually occludes some part of the person’s vision. There are ergonomic issues, such as the weight of smartglasses, and sometimes they get hot. Some of our mechanics spend a lot of their time crawling around in the belly of an aircraft, and that scenario is just not conducive to today’s head-mounted display technology. But I am encouraged by all the innovation and advancement that is going on in smartglasses. I expect that many of the issues will be mitigated over time.
PwC: To what extent are you collaborating with other teams within Boeing?
Paul Davies: We work closely with our IT group and the business partner organization. The IT group focuses on the security aspects and getting the devices—whatever they are—on our corporate network. We also talk with them about how to interface these new cool display technologies with legacy systems and MES [manufacturing execution systems]. How do we get data out of MES into a display? How does that connection happen securely? Those are the issues we work with on the IT side.
In the last year or two, our IT group has gotten switched on to AR. They are really partnering with us now to make it a scalable tool and move beyond the pilot phase.
PwC: What are you learning about how to measure the impact of AR technology?
Paul Davies: We have a lot of historical data on building one particular assembly, and we just kicked off an AR pilot on that assembly. We’ll focus on the next five to ten units we build and answer questions such as: What was the quality like? How long did it take? And we’ll compare that with the historical data.
We’re always looking to get the ROI [return on investment] from the technology we’re implementing. Some of the benefits are in terms of cost avoidance, because you avoid making mistakes. Typically it’s difficult to calculate the impact of cost avoidance. How do you prove that this technology avoided a cost? You didn’t experience any rework, so it’s difficult to measure and claim that.
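The before/after comparison Davies describes, pilot units measured against historical data, can be sketched as a simple baseline metric. The function and the sample figures below are illustrative assumptions, not actual Boeing data:

```python
def percent_change(baseline, pilot):
    """Relative change versus the historical baseline, in percent.
    Negative means improvement when lower is better (build hours, defects)."""
    return (pilot - baseline) / baseline * 100.0

def average(values):
    return sum(values) / len(values)

# Illustrative numbers only; not actual Boeing figures.
historical_build_hours = [52.0, 50.0, 54.0]   # past units, from historical data
pilot_build_hours = [40.0, 38.0, 42.0]        # next units built with AR

hours_change = percent_change(average(historical_build_hours),
                              average(pilot_build_hours))
```

The same comparison can be run on quality metrics such as defect or rework counts; the cost-avoidance piece remains hard to quantify because, as Davies notes, the rework that did not happen leaves no data behind.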
PwC: When do you think you might use some AR in a production environment?
Paul Davies: We’re fairly close on one of our programs. We’re a few months away from using this tool in production in that program. It will be a bounded system. People must understand that this is a long journey; we can’t just flip a switch and roll it out to all of our programs.
PwC: What are some lessons you’ve learned that others should be cognizant of as they look at augmented reality, mixed reality, or this range of technologies?
Paul Davies: The biggest lesson would be to involve the end users as you develop and implement systems like this. You need to consider the technical readiness level, or TRL, and you also should consider the cultural readiness level, or CRL. Don’t underestimate the importance of that. In our population, where people are building airplanes, the mechanics don’t want to be seen as geeks for wearing this equipment. I think CRL is very powerful, and I think a lot of people underestimate that.
PwC: AR often is described as the merging of the physical and the digital. Would you talk about what that means?
Paul Davies: AR is only a medium to get people better connected to digital data. It’s a complicated piece of technology, but that is its main goal. Think about how people have retrieved information over time. Twenty or thirty years ago, people would go to a library and find a book. Later, people could do searches at the library by using a computer. Now people can do searches online, and everyone has a cell phone. If someone wants to find some information, they can do it in a few seconds. But all of those techniques are still confining; the digital world is on the other side of a screen from where a person is.
AR will bring the two together. It will put that content into a person’s world in a way such that the person doesn’t even need to search. It will be predictive. It will say, here’s the information you need for where you’re at, and it will display it to you.
I’m pretty excited about this. I can’t begin to think of all the applications that AR will enable. It’s going to be very exciting, and we’re only scratching the surface here.