August 22, 2016
Barry Po is the senior director of product and business development at NGRAIN.
Barry Po of NGRAIN forecasts how authoring solutions will redefine AR interfaces in the future.
PwC: Barry, can you please introduce yourself and your company?
Barry Po: Sure. I’m the senior director of product and business development at NGRAIN, which is an industrial augmented reality [AR] and virtual reality [VR] company.
NGRAIN started as an interactive 3D volumetric rendering engine company, offering solutions for training and simulation to the aerospace and defense industries. During the last five or six years, NGRAIN has transitioned into the emerging area of AR and VR for enterprise applications.
We saw the opportunity for NGRAIN to take all the technology and IP [intellectual property] it created for virtual maintenance training and technical support in the field and then apply that to wearable technology like smartglasses, as well as functions such as visual recognition and tracking.
PwC: In your marketing material, NGRAIN makes reference to “voxel” as something only you use. What is a voxel and how is it different from other methods?
Barry Po: In short, a voxel is a 3D pixel. One of the ways that NGRAIN’s technology is unique compared with most 3D graphics technologies is that we build our 3D content in voxels or as a collection of 3D pixels. You can think of voxels as tiny grains of sand. The same way that you can build a sand castle out of grains of sand, you can build 3D content in voxels.
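The grains-of-sand idea can be sketched in a few lines: a voxel grid is just a dense 3D array in which each cell is one "3D pixel." This is a minimal illustration, not NGRAIN's engine; the grid size and shape are arbitrary choices.

```python
import numpy as np

# A voxel grid is a dense 3D array: each cell is one "3D pixel".
# Here a 32x32x32 boolean grid records which cells contain material.
grid = np.zeros((32, 32, 32), dtype=bool)

# "Build a sand castle": fill a solid sphere of voxels around the center.
# Note the object is solid from surface to core, not just a surface shell.
cx, cy, cz, radius = 16, 16, 16, 10
x, y, z = np.indices(grid.shape)
dist2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
grid[dist2 <= radius ** 2] = True

print(grid.sum())  # number of filled voxels
```

Because every interior cell is filled, cutting the sphere open at any plane still shows material, which is the property polygonal surface models lack.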
The use of voxels has many benefits for people working in 3D graphics. One is that voxels are naturally suited to 3D printing a design. A 3D scanner typically captures a physical object as a 3D point cloud, and generally you must translate that point cloud into a polygon mesh, a process called tessellation in 3D graphics. We instead translate the point cloud directly into voxels, a format that can be rendered on a PC in real time without any geometric conversion. That helps retain all the features that were present in the original geometry.
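One way to see why no tessellation step is needed: converting a point cloud to voxels can be as simple as quantizing each point's coordinates into a grid cell. The sketch below is a hypothetical simplification under an assumed uniform voxel size, not NGRAIN's actual pipeline.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Quantize a point cloud (N x 3 array) into a boolean voxel grid.

    No mesh is ever built: each point simply marks the grid cell
    that contains it.
    """
    idx = np.floor(points / voxel_size).astype(int)
    idx -= idx.min(axis=0)               # shift indices to start at zero
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

points = np.random.rand(10_000, 3)        # synthetic cloud in the unit cube
grid = voxelize(points, voxel_size=0.1)   # roughly a 10x10x10 grid
```

Since quantization is a direct per-point operation, no surface-fitting or triangulation decisions are made, which is why fine features of the scan survive into the voxel representation.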
“For AR to take off, the ability to rapidly create, manage, edit, and deploy 3D content is one of the problems that should be addressed.”
Also, volumetric rendering represents not just the surfaces of objects but also their interiors, so objects are fully solid from surface to core. Polygonal models in general will model only surfaces. We can model any equipment or material that has an interior density of some kind.
PwC: What are some use cases that your solutions support?
Barry Po: One use case we support is part familiarization. For a novice technician who needs to work on a piece of equipment, our technology will provide access to information about individual parts that the technician might have never seen or touched before.
That’s very valuable in the field, as workers now have the ability to perform at a level of expertise that’s much higher than what they otherwise would be able to do. Also, they don’t need to carry around technical manuals or rely on outdated publications. Current information is available to them on demand. They can also collect and document information as they’re working on the equipment.
Another use case we support is automated damage assessment and visual inspection. This capability is particularly useful in manufacturing situations where products require very tight levels of tolerance. Lockheed Martin is one of NGRAIN’s customers, and the company deployed damage assessment technology operationally for use in the F-35 and the F-22 fighter programs. The maintenance folks can perform diagnostics and damage assessment in real time after fighter jets come in from a flight.
PwC: Are these solutions on tablets or smartglasses?
Barry Po: Today they are used on tablets. We have been exploring how to bring them to smartglasses. Data processing and overall functionality pose no real barriers to porting the software.
Porting interaction is the interesting issue, because the way people work or annotate on a tablet will not be the same as the way they annotate using a pair of smartglasses. That’s certainly one of the areas where further work must be done.
PwC: What is the challenge with authoring AR content?
Barry Po: If you look at the history of 3D computer graphics, you’ll see that creating 3D content is a very difficult problem. You don’t need to look much further than work in game development or visual special effects in the entertainment industry. The amount of production effort that’s required to create meaningful content is enormous. For AR to take off, the ability to rapidly create, manage, edit, and deploy 3D content is one of the problems that should be addressed.
PwC: How are you addressing this challenge?
Barry Po: Our authoring solution, NGRAIN Vergence, pulls from our years of experience building 3D content for simulation and training systems. Our objective was to create a tool that people other than 3D graphics authoring experts and 3D modelers can use. The reality is that enterprises probably don’t have very many 3D graphics experts, but they do have a lot of domain experts or subject matter experts.
“The system will know what you are doing, and it will anticipate what you will do next. This capability will move the interaction from deterministic to probabilistic scenarios.”
Vergence allows you to create and deploy AR and VR content without writing any code. You can import 3D content from a variety of sources, such as a CAD [computer-aided design] model, 3D point cloud scans, and other methods, and then link it to enterprise knowledge. This knowledge can be instructional content, step-by-step directions, checklists, videos, charts, or real-time sensor information. All of that can be tied into a 3D asset.
Then you can deploy the application to the device of your choice, whether that’s a mobile device, a desktop, or perhaps a pair of smartglasses.
PwC: You mentioned that porting interfaces is one sticky point. How do you expect the interfaces for AR to evolve?
Barry Po: Fundamentally, the UI [user interface] has not changed very much since the mouse and keyboard. There has been movement toward touch interfaces during the last five or six years. There has been some advancement in speech interfaces as well.
There has been some work in the use of gesture interfaces in AR and VR, such as six-degrees-of-freedom input or other multidimensional input methods. But the problem of building an interaction language that can be used on a device that works in the real world is not fully solved yet. In addition, the solution must be robust enough to work in a general-purpose setting, including low light or bright light, indoors or outdoors, and so on.
I believe we as an industry will evolve to a point where the interface isn’t necessarily about an interaction language. Instead, we will create systems that are smart enough or intelligent enough to interpret the context in which you’re working. The system will know what you are doing, and it will anticipate what you will do next. This capability will move the interaction from deterministic to probabilistic scenarios.
Ultimately, the deterministic and probabilistic will combine, so the system will rely on not just gesture alone, but a combination of context, voice, gestures, touch, and so forth as the means of rich interaction. The interactions will become truly natural. I think such technology is on its way.
PwC: Where is the industry today on this journey?
Barry Po: We’re getting there. Today we have the ability to do what is called optically transparent AR—the ability to use the tracking technology in conjunction with our rendering engine to overlay graphics on top of the physical pieces of equipment. That is possible today.
“For AR, the industry will move away from the notion of apps or bite-sized solutions that work in isolation from one another, and toward a new paradigm of a fully integrated system that is interconnected and available through common natural interfaces.”
However, we don’t quite yet have the ability to overlay information in context and dynamically respond to the environment as it changes. As the tracking capability gets better—such as tracking a 3D object in 3D space—then the ability to figure out the state the object is in gets better and so does our ability to render the right content on top of the objects.
PwC: NGRAIN creates the authoring tool to create AR applications. What are you learning about the nature of AR applications? Will they be like mobile apps or will they be very different?
Barry Po: I think the very nature of what an app is probably will change. The reality is that for people working in the field, there is no notion of an app. What they’re really looking for is a personal assistant.
For AR, the industry will move away from the notion of apps or bite-sized solutions that work in isolation from one another, and toward a new paradigm of a fully integrated system that is interconnected and available through common natural interfaces.
Obviously, the engineering work that’s required to make that happen will be a significant investment. But the tools and technologies that are necessary to build such an integrated system already exist.