Augmented reality will empower more than 110 million deskless workers

August 19, 2016

Ketan Joshi is the vice president of marketing for Atheer.

Ketan Joshi of Atheer shares how the smartglasses form factor will bring the digital revolution to the deskless workforce.

PwC: Ketan, can you please tell us about Atheer and your role there?

Ketan Joshi: Sure. I am the head of marketing for Atheer. Atheer was started in 2011, and the whole idea was to bring rich, context-specific information to users on the move, without them needing to hold a device. The primary way to access information today is through smartphones or, in some cases, tablets, which are small-display form factors. Your hands are engaged in holding and operating the device, which limits your experience: they are not free to do the hands-on work.

Customers have consistently complained that they move back and forth between using a tablet or a paper manual and doing the work. That back and forth is very inefficient. We wanted to solve that.

PwC: Is that a big opportunity?

Ketan Joshi: Our analysis has found that there are more than 110 million deskless workers in the world whose job is to be knowledgeable about the technology of the physical equipment they service or about the medical history of the person they’re treating. In the field service industry, the equipment is constantly changing, often becoming more complex as businesses integrate more and more emerging technologies into their products. Just imagine how cars and the technology in them have changed.

These deskless workers need access to rich information: schematics, videos, pictures, flows, lists, instructions, charts, and so on. The variety and velocity of what these workers do, and how they do it, are changing rapidly. Existing methods of training and shipping manuals or PDFs are too slow.

The opportunity we saw was to bring information into their field of view in a hands-free, seamless manner or to support collaboration with a remote expert while exchanging rich information. How can we do all of that? We call it AiR computing, short for Augmented interactive Reality. It’s an ability to pull information out of thin air, so to speak, to make people in the field efficient and productive.

“Visual ergonomics is essential for the device to be worn for long periods of time.”

PwC: How are you solving this challenge?

Ketan Joshi: We concluded that the most logical solution is a smartglasses form factor. By keeping very small displays in front of a user’s eyes, we can create much larger virtual images. So we started creating the smartglasses platform.
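The geometry behind creating much larger virtual images is simple: at a fixed angular field of view, a virtual image placed farther away appears proportionally wider. The 40-degree field of view and 0.6 m viewing distance below are assumed values for illustration, not Atheer’s published specifications.

```latex
w = 2d\,\tan\!\left(\tfrac{\theta}{2}\right)
  \approx 2 \times 0.6\,\mathrm{m} \times \tan(20^{\circ})
  \approx 0.44\,\mathrm{m}
```

So a microdisplay only millimeters across can be made to appear as a screen nearly half a meter wide, floating where the user is looking.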

We see early opportunities for use cases in the enterprise in what we call FAST workflows. FAST stands for Fixing, Assembling, Surveying, and Treating. We believe organizations performing these tasks will be the early adopters of smartglasses-based augmented reality [AR] solutions.

PwC: How do you expect the next few years to evolve?

Ketan Joshi: Today, smartglasses solutions are in the early introduction phase. Some customers are confused and wonder where they should use these solutions. We are engaged with Fortune 500 companies to identify specific use cases, such as the FAST workflows. We help clients understand how they will get a return on investment.

I expect that will continue throughout 2016 as enterprises digest the idea, trying it out in small quantities and proving the concept. In 2017, we expect adoption to take off in a much more significant fashion. Consumer use is probably three to five years away.

PwC: What should smartglasses do?

Ketan Joshi: Many smartglasses today are not that smart. They are essentially a notification solution, comparable to smart watches, in that they display a line of text or a number or two. In warehousing applications, for example, they guide the picker to the next item: aisle 3, shelf 4. That’s useful, for sure, but limited in nature.

Smartglasses must do a lot more. We asked many users: What do you really need? Where is the pain? Consistently, the feedback was that they want rich information, not just notifications. They want information such as videos, models, 3D content, task flows, checklists, and so on. All this content requires significantly more display space to be useful. Plus, users want the information in their field of view, because they don’t want to look out of the corner of their eye all the time; that causes fatigue. Having information in the field of view was the number one piece of feedback.
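To make the contrast concrete, here is a minimal Python sketch of a notification-style payload versus the kind of rich task payload described above. The field names and structure are hypothetical, invented for illustration rather than drawn from Atheer’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class TaskStep:
    instruction: str                      # e.g., "Isolate and drain the pump"
    video_url: str | None = None          # optional demonstration clip
    model_url: str | None = None          # optional 3D model to inspect
    checklist: list[str] = field(default_factory=list)

@dataclass
class WorkOrder:
    title: str
    steps: list[TaskStep]

# A notification-style device carries a line of text or a number or two...
notification = "Next pick: aisle 3, shelf 4"

# ...while a rich work order carries the videos, models, task flows, and
# checklists that field workers said they actually need.
work_order = WorkOrder(
    title="Replace pump seal",
    steps=[TaskStep(
        instruction="Isolate and drain the pump",
        video_url="https://example.com/drain-pump.mp4",  # hypothetical URL
        checklist=["Lockout applied", "Pressure reads zero"],
    )],
)
```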

Number two was the need for interaction, whether that is checking off items from a list, navigating a 3D model, playing a video, or something else. They want rich and natural interactivity.

The third most common piece of feedback was that they want an enterprise-grade ecosystem. In other words, all the investment they have in content and infrastructure—including cloud infrastructure, mobile apps, enterprise applications, and others—ideally should be usable or accessible on the smartglasses, so they are not starting from scratch.

PwC: Are smartglasses for use during short periods of time or long periods of time?

Ketan Joshi: Both scenarios are possible, so we must design for use over long periods of time. The smartglasses need to be comfortable. We have created and patented capabilities that we call visual ergonomics. So far we have 15 issued patents, and many of them relate to visual ergonomics—how you present content to the visual system in the optimal fashion so that it feels comfortable. Visual ergonomics is essential for the device to be worn for long periods of time.

“Head motion can also support interaction. It’s just like having multiple screens on your desk and switching between them by turning your head.”

PwC: What is an example of visual ergonomics?

Ketan Joshi: Visual ergonomics is concerned with where you display, how you display, and at what distance you display. For example, most stereoscopic displays are tuned to focus at infinity. The assumption is that the content being displayed is at a distant location. However, workers are interacting with the physical world within arm’s length. They are focusing there, a few feet away. So with some glasses they must focus at infinity to get information and then refocus at a few feet to work. They go back and forth, and it can cause strain. What we chose to do—and this is part of our intellectual property, based on the studies we did—was to place our focus at arm’s length, the same area where the person is working or looking at objects. Now we don’t fatigue a user’s eyes by constantly changing focus back and forth.
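A quick way to quantify the strain he describes: accommodation demand, measured in diopters, is the reciprocal of the viewing distance in meters. The 0.6 m arm’s-length figure below is an assumed value for illustration.

```latex
D = \frac{1}{d}
\qquad
D_{\text{infinity}} = \frac{1}{\infty} = 0\ \mathrm{D}
\qquad
D_{\text{arm's length}} = \frac{1}{0.6\,\mathrm{m}} \approx 1.7\ \mathrm{D}
```

With a display focused at infinity, every glance between the content and the workpiece forces a refocus of roughly 1.7 diopters; placing the display focus at the working distance reduces that shift to nearly zero.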

PwC: Earlier you mentioned interactivity as a key requirement. How will that happen in a smartglasses form factor?

Ketan Joshi: Indeed, interaction is very important. The more natural the interaction, the better it will be. Gestures will be a big part of it. Hand gestures can map to what you do on a touch screen, so you can get equivalent functionality to navigate, click, and so on.
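A minimal sketch of that mapping, assuming hypothetical gesture names and event labels rather than Atheer’s actual API:

```python
# Map recognized hand gestures to the touch-screen events they stand in for.
# Gesture and event names are illustrative assumptions.
GESTURE_TO_TOUCH = {
    "air_tap":     "click",        # select the item under the cursor
    "swipe_left":  "scroll",       # page through content
    "swipe_right": "scroll_back",
    "pinch_open":  "zoom_in",      # enlarge a schematic or 3D model
    "pinch_close": "zoom_out",
}

def dispatch(gesture: str) -> str:
    """Translate a recognized gesture into its touch-equivalent event."""
    return GESTURE_TO_TOUCH.get(gesture, "ignore")

print(dispatch("air_tap"))  # -> "click"
```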

In some cases, hands will be busy working or holding objects. In those situations, voice-based interaction can allow users to step through instructions. In a noisy environment, gaze tracking or eye tracking can enable some interaction.

Head motion can also support interaction. It’s just like having multiple screens on your desk and switching between them by turning your head. We can create a virtual space around the user. For example, we could have a checklist on the left-hand side and the specific instructions of assembly on the right-hand side. You turn to the left to see the checklist and to the right for the instructions.
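A sketch of how that head-motion model might work, with panel names and angle thresholds chosen purely for illustration:

```python
def panel_for_yaw(yaw_degrees: float) -> str:
    """Select the virtual panel anchored in the direction the user is facing."""
    if yaw_degrees < -15:      # head turned left
        return "checklist"
    if yaw_degrees > 15:       # head turned right
        return "assembly_instructions"
    return "main_view"         # looking straight ahead

assert panel_for_yaw(-30) == "checklist"
assert panel_for_yaw(25) == "assembly_instructions"
assert panel_for_yaw(0) == "main_view"
```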

We have seamlessly combined all three methods of interaction—gestures, voice, and head motion—into a single device.

PwC: What are some challenges in overlaying information on the physical world? Are best practices evolving?

Ketan Joshi: Yes, indeed, there are lessons we are learning. For instance, accuracy of the information as it relates to the physical world is a big concern, and we must design interfaces to avoid the possibility of error or confusion. We had an oil and gas customer who experimented with some smartglasses. They faced a challenge when things were close together, such as two conduits next to each other: one carries the “hot” electricity, and the other is neutral. Because of the swaying effect—small movements of the head causing movements in the overlaid information—the smartglasses might overlay the wrong label on the conduits, showing the hot conduit as neutral or vice versa. Rather than overlaying information, the way to address the problem in this case is to show a labeled schematic wiring diagram that identifies which side is hot and which is neutral. Users can glance at that, perform the right action, and avoid costly and potentially unsafe situations.
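The design rule implied here can be sketched as a simple check: overlay labels in place only when the registration error is small relative to the spacing of adjacent targets, and otherwise fall back to the schematic. The threshold and numbers are illustrative assumptions, not a description of Atheer’s product logic.

```python
def choose_presentation(jitter_mm: float, target_spacing_mm: float) -> str:
    """Overlay labels only when head-motion jitter cannot swap them."""
    if jitter_mm < 0.5 * target_spacing_mm:
        return "overlay_labels"   # labels reliably stay on the right conduit
    return "show_schematic"       # safer: a labeled wiring diagram

# Two conduits 10 mm apart with 8 mm of overlay sway: too risky to overlay.
print(choose_presentation(jitter_mm=8.0, target_spacing_mm=10.0))
# -> "show_schematic"
```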

“In the future, people would want the interactions with digital objects to be as easy as picking up the object and manipulating it in the same way as a physical object.”

PwC: Often, AR is described as merging the physical and digital worlds. What does that mean to you?

Ketan Joshi: There are many aspects to the concept of merging the physical and digital worlds. The first is how you represent information in the visual system. Can you display digital information associated with physical objects, such that the information is overlaid on the physical objects in the real world? That marries the two domains.

The second aspect is being able to show virtual objects in 3D digital form alongside physical objects. The promise is to render virtual objects so that they are indistinguishable from physical objects.

The third aspect relates to interaction. Is the digital interaction as easy as interacting with physical objects? For instance, I’m accustomed to picking up a business card and interacting with it, like flipping the card over. In the digital world, can I interact with objects the same way? This capability has a lot to do with understanding natural gestures. The way we, at Atheer, have implemented gestures is very similar to the touch-screen actions of tapping, swiping, and zooming, which people are accustomed to already. I’m sure that in the future, people would want the interactions with digital objects to be as easy as picking up the object and manipulating it in the same way as a physical object. Today, no gesture system in the world understands these complex gestures completely. We’ll get there—sooner than you think.
