Service robots: The next big productivity platform

September 8, 2016


Innovative new capabilities in robot cognition, the physical manipulation of objects, and interaction with humans—delivered in loosely coupled, modular packages—define the dawn of an emerging market in service robots.

The delivery of room service in a hotel typically takes about 20 minutes, of which 15 to 18 minutes are spent walking hallways and riding elevators. The remaining time is spent interacting with the guest and making the delivery. “Does anybody write on their resume that they’re skilled at walking down hallways without bumping into things and they know how to ride elevators?” asks Steve Cousins, CEO of Savioke, a company that makes service robots for hospitality and other industries. He uses the example to point out how many tasks entail mundane, non-differentiating actions.

If technology can automate some of these mundane tasks, it will free humans to focus on higher-value activities. That goal is becoming attainable. Until recently, robots were not considered for such tasks because they were confined to highly engineered, restricted-access manufacturing environments. But robots are breaking free of their cages. Today they can explore and build a model of their world, make plans for achieving targeted goals based on that model, and respond appropriately to changes and exceptions.

“Service robots can drive down the cost of deliveries dramatically, like one or two orders of magnitude cheaper,” Cousins says. The keys to higher levels of service productivity are innovations that allow robots to operate autonomously, augmenting human capability and capacity. Autonomy opens up the business, hotels in this case, to innovations and process improvements not previously possible. “The hotel can ask: What else could I deliver? What other services could I offer?” Cousins adds.

Service robots are at an inflection point, opening new contexts for productivity gains beyond what industrial robots have done. Driving the innovation are solutions emerging from two directions. First, advances in three technical domains enable robots that are more intelligent, capable of complex manipulations, and able to work in diverse environments. From a second direction, ecosystem forces are lowering the barrier to innovation, engaging a much broader community of innovators while making it much easier to teach and train robots.

How robots are evolving

For more than five decades, robots have been used to automate dirty, dangerous, and dull tasks in manufacturing operations. They work in highly controlled and engineered spaces, in part to prevent harm to humans or damage to other operations. Their behavior, while precise, cannot adapt to changes in the environment. The spaces they operate in must be modeled to the finest detail, and their actions must be programmed to an equally fine level of detail, removing all possibilities of unanticipated variation.

This situation is changing. The robot market, already large for industrial applications, is poised for radical growth in a broad range of service applications. These applications are transforming manufacturing and non-manufacturing operations alike, and they build on new capabilities for working in changing, uncertain, and uncontrolled environments, such as alongside humans without endangering them.

The heightened interest in robots is also accompanied by unreasonable hype. Although improving in many ways, robots for the next five years will fall far shy of Hollywood’s depictions of their capabilities, which have led many people to form unrealistic expectations. Tasks that are trivial for humans, such as opening a room door, folding clothes, or performing other domestic chores, would baffle most robots today.

The almost-human Hollywood depictions can be ignored, because far simpler robots optimized for a class of tasks, such as deliveries or packaging, have significant market potential. Delivering on this potential depends on overcoming several fundamental technical challenges, including:

  • Giving robots the ability to perceive, understand, and act in a wide range of dynamic environments
  • Making robots simpler to program and use, and thereby engaging a wider group of innovators and users
  • Improving the manipulation capability of robots to handle a diverse array of tasks
  • Reducing the cost and size of robots
  • Enabling robots to work with and around humans while keeping everyone safe

The good news is that robust progress is happening on all fronts, and the pace at which innovations come to market seems to be accelerating. Although these challenges are diverse, a small set of emerging technology areas holds the keys to advancing robotics into the broad services economy.

“Service robots are at an inflection point, opening new contexts for productivity gains beyond what industrial robots have done.”

Technology trends shaping the future of robotics

Innovations are emerging from many scientific and engineering disciplines. Some come from within the robotics ecosystem, such as algorithms for manipulation and motion planning. Others come from outside it, such as artificial intelligence, machine learning, machine vision, and 3D sensors. Communities have been forming around these research topics, incorporating academics, R&D staff of high-tech companies, and early-stage startups.

As PwC reviewed the fundamental challenges for advancing robotics, it became clear that they aren’t isolated topics. Advancements in three technical domains are addressing them. (See Figure 1.)

  • Cognition: The robot’s ability to perceive, understand, plan, and navigate in the real world. Improved cognitive ability means robots can work in diverse, dynamic, and complex environments autonomously. Some key developments and trends in robot cognition are highlighted in the sidebar “Cognition.”
  • Manipulation: Precise control and dexterity for manipulating objects in the environment. Significant improvement in manipulation means robots can take on a greater diversity of tasks and use cases. Some key developments and trends in manipulation are highlighted in the sidebar “Dexterous manipulation.”
  • Interaction: The robot’s ability to learn from and collaborate with humans. Improved interaction—including support for verbal and nonverbal communications, observing and copying human behavior, and learning from experiences—means robots will increasingly be able to work alongside humans. Keeping humans and the environment safe is an absolute requirement. Some key developments and trends in interaction are highlighted in the sidebar “Interaction.”

Figure 1: Technological progress in three emerging domains is moving the robotics industry toward service robots


Independent and interrelated development across these domains is moving the robotics ecosystem forward.

In addition, two forces are expanding the footprint of robots and making them mainstream. (See Figure 2.) First, autonomous learning methods are anchoring progress in the three core areas and are probably the single most important technology for the success of service robots. Autonomous learning expands the variety and diversity of tasks that robots can perform.

Second, the rise of modular platforms across all the important technology domains is dramatically lowering the barrier to developing robots and associated innovations. Platforms establish horizontal solutions to common technical challenges with robots, allowing developers to focus on differentiating components that can be bolted on to these standard elements. Modular platforms also make it possible to engage a much larger pool of innovators, thereby expanding the potential use cases for robots.

Altogether these forces will become the next big driver of productivity in the enterprise. “This opportunity with robots will be like combining the impact of electricity, communications, and the Internet,” predicts Eugene Izhikevich, CEO of Brain Corporation.


Figure 2: Two forces, autonomous learning and modular platforms, are greatly expanding the number of innovators and the tasks that robots can take on


Cognition: From perception to action to learning

Cognition is the process by which intelligent entities receive and handle information. It is not one discrete thing, but a synergistic combination of multiple capabilities. For robotics, cognition is a combination of perception, understanding, motion planning, and automated learning. (See Figure 3.) Cognition is the key to how service robots will deal with nonengineered, unconstrained environments, learn from their encounters, and apply the new knowledge to similar situations in the future.

Accurately sensing the environment and recognizing objects in it has been a huge challenge for robotics. Accuracy was limited when 2D sensors were the primary source of information for mapping the environment and recognizing objects. With the rise of inexpensive 3D depth sensors, such as Microsoft’s gaming accessory Kinect, perception has moved from two dimensions to three. The Kinect motion detector, released in 2010, generates a 3D image by projecting a dense pattern of infrared laser points and analyzing their reflections with a custom chip. It retails for about $150.
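To make the 3D data concrete: a depth camera such as the Kinect returns, for each pixel, a distance in meters. Back-projecting those pixels through a pinhole camera model yields the 3D point cloud the robot reasons over. Below is a minimal sketch in Python; the intrinsic parameters are illustrative stand-ins, not calibrated values.

```python
import numpy as np

def depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth image (meters) into 3D points using the
    pinhole camera model. The intrinsics here are illustrative values
    in the ballpark of the original Kinect; real values come from
    calibration."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # column offset from principal point, scaled by depth
    y = (v - cy) * z / fy   # row offset, likewise
    return np.dstack((x, y, z)).reshape(-1, 3)

# Example: a synthetic 480x640 depth frame with everything 2 m away
points = depth_to_points(np.full((480, 640), 2.0))
```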

Perceptual information is analyzed to model the topology of the environment (geometry, shape, and so on) and to recognize objects.

The information is also used in techniques such as simultaneous localization and mapping (SLAM) to generate a real-time map of an environment and locate the robot within that map. With a map and location, a robot can develop an action plan to move toward a desired goal, while avoiding other objects and people and refreshing the plan in real time as necessary when the environment changes.
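A minimal sketch of the mapping half of this process appears below: folding one range reading into a 2D occupancy grid, assuming the robot's pose is already known. Full SLAM estimates the pose and the map simultaneously (with a particle filter or pose-graph optimization, for example); the grid size, resolution, and evidence weights here are arbitrary.

```python
import numpy as np

GRID = np.zeros((200, 200))   # log-odds occupancy, 0 = unknown
RES = 0.05                    # meters per cell

def integrate_beam(x, y, theta, rng):
    """Mark cells along a range-sensor beam as free and the endpoint
    as occupied, given the robot's (known) pose x, y, theta."""
    steps = int(rng / RES)
    for i in range(steps):
        cx = int((x + np.cos(theta) * i * RES) / RES)
        cy = int((y + np.sin(theta) * i * RES) / RES)
        GRID[cy, cx] -= 0.4   # evidence the cell is free
    ex = int((x + np.cos(theta) * rng) / RES)
    ey = int((y + np.sin(theta) * rng) / RES)
    GRID[ey, ex] += 0.9       # evidence the endpoint is occupied

# One beam: robot at (5 m, 5 m), heading 0.3 rad, obstacle at 2.5 m
integrate_beam(x=5.0, y=5.0, theta=0.3, rng=2.5)
```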

Understanding often must go beyond mere topology to the semantic details of the environment and the objects in it. Robot developers have found that affordances (a concept borrowed from psychology) can advance the process of understanding and supply a direct link between perception and action. The handle on a teacup affords the possibility of raising the cup without burning one’s fingers—it is the tea drinker’s affordance. For a robot, this framework means it can use its perception and understanding to find the affordances in its environment, select those that match its repertoire of manipulations and motions, and decide which to use to accomplish its mission.
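The affordance idea can be illustrated with a toy selection step: filter the affordances perception reports down to those the robot's own skills can act on. Everything in this sketch (object names, skill labels) is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Affordance:
    obj: str       # the object the affordance belongs to
    action: str    # what the affordance makes possible

# The robot's repertoire of manipulations (hypothetical skill labels)
ROBOT_SKILLS = {"grasp_handle", "push_button", "pull_drawer"}

perceived = [
    Affordance("teacup", "grasp_handle"),
    Affordance("elevator_panel", "push_button"),
    Affordance("window", "open_latch"),   # not in the repertoire
]

# Keep only the affordances the robot can actually act on
actionable = [a for a in perceived if a.action in ROBOT_SKILLS]
print([a.obj for a in actionable])   # ['teacup', 'elevator_panel']
```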

Advances in artificial intelligence, particularly the deep learning discussed in the main article, allow robots to autonomously learn complex skills directly from humans via demonstration, observation, and feedback as well as from the robots’ own actions.

“This opportunity with robots will be like combining the impact of electricity, communications, and the Internet.”


Figure 3: Robot cognition is a synergistic combination of perception, understanding, motion planning, and automated learning


Autonomous learning: Gateway to service robots

Unlike their manufacturing ancestors, service robots must operate in offices, homes, hospitals, and warehouses—all nonengineered, real-world environments. Even service robots used in manufacturing will have to work in nonengineered environments. This requirement poses particular challenges. “It is very difficult and expensive to [pre-] program the robot to respond appropriately in every possible environment,” Izhikevich explains. With industrial robots, which repeat a certain motion over and over, the programming task is simpler (although it can be expensive) because the robot is explicitly instructed what to do in a controlled environment. Explicitly programming a robot to behave appropriately in every situation is an impossible task.

Service robots require a new approach. “Today, there’s a huge renaissance in neural networks and machine learning,” Izhikevich says. “They bring a different approach, based on autonomous learning and feedback from the environment.” Izhikevich has built an operating system for robots, and learning capability is at its core. “In this approach, the model of the environment and robot behavior is very simple, but the approach will work because we can teach a robot by giving it lots and lots of data in the form of images, speech, sounds, actions, or something else.”

A combination of factors—inexpensive computing hardware available in small sizes; the ability to sense, capture, and store large amounts of data; and sophisticated algorithms that process and understand the data in real time—makes learning-based approaches a good fit for robots in real-world environments. Rather than being explicitly instructed by programmers, the robot continuously learns during training and work, adapting its behavior in new situations based on past experiences.

Autonomous learning implies that the robot will learn what works and what does not, without constant supervision, although it occasionally might receive feedback from humans. This ability to learn autonomously from experience, just as humans do, will be a game changer in the dynamic environments associated with robot-provided services.

With automated learning, programming takes a backseat to training, which can be accomplished with far less expense and expertise. Anyone, even children or students, will be able to use a remote control interface to demonstrate how a robot should carry out a particular task. With each training session, the robot will learn what action to take in which circumstances and what is expected of it. Ultimately, learning-based approaches should open up many more use cases for robots.
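One classical way to realize this learn-from-feedback loop is tabular Q-learning, sketched below. The states, actions, and reward signal are placeholders; a real service robot's states would come from its perception stack, and its rewards from task success or human feedback.

```python
import random
from collections import defaultdict

Q = defaultdict(float)          # (state, action) -> estimated value
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2
ACTIONS = ["forward", "left", "right", "wait"]   # placeholder action set

def choose(state):
    if random.random() < EPS:                        # explore occasionally
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])  # otherwise act greedily

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    # Nudge the estimate toward reward plus discounted future value
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Training loop: the trainer's feedback arrives as the reward signal
update(state="at_door", action="wait", reward=1.0, next_state="door_open")
```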

New techniques are also being applied to how robots learn. The latest generation of robot cognition relies on deep learning, a form of machine learning patterned after human cognition.

“Over the past couple of years, we’ve seen a ton of advancement in AI [artificial intelligence], specifically in deep learning, for things like image recognition and object recognition,” says Duy Huynh, CEO of Robotbase. The deep learning approach has improved the accuracy of software in tasks such as voice and object recognition. In some cases, deep learning is performing better than humans.
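As a concrete illustration, the sketch below runs an off-the-shelf pretrained image classifier of the kind Huynh refers to. It assumes torch and torchvision are installed, and "photo.jpg" is a placeholder path.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for a pretrained network
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(pretrained=True)   # weights learned on ImageNet
model.eval()

img = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # add batch dimension
with torch.no_grad():
    logits = model(img)
print(int(logits.argmax()))   # index of the predicted ImageNet class
```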

Deep learning–style algorithms typically require computational power found in the cloud rather than within the confines of a robot. Cloud-connected robots are becoming common, which means robots can take advantage of deep learning resources in the cloud, a phenomenon also called cloud robotics. Via the cloud, robots will increasingly share knowledge and experience just as people do. Keeping capabilities resident in the cloud also means new robots can be put into service with little effort, benefiting from the training of the robots that preceded them.
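In a deliberately simplified form, cloud-resident knowledge sharing might look like the following: one robot publishes its trained weights, and a newly provisioned robot pulls them down. The endpoint URL is hypothetical, and a real system would add authentication, versioning, and model metadata.

```python
import io
import torch
import requests

HUB = "https://example.com/robot-models/grasping"  # hypothetical cloud service

def share_model(model):
    """Upload a robot's learned weights so its fleet can reuse them."""
    buf = io.BytesIO()
    torch.save(model.state_dict(), buf)            # serialize the weights
    requests.put(HUB, data=buf.getvalue())

def adopt_model(model):
    """A new robot downloads the fleet's accumulated knowledge."""
    weights = requests.get(HUB).content
    model.load_state_dict(torch.load(io.BytesIO(weights)))
```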


Table 1: Modular platforms are emerging for many important functional characteristics of a robot


“If a robot breaks down, you can instantly have another robot take over with all the knowledge that the first robot had,” says Roger Matus, vice president at Neurala. Neurala’s solution, which the company also calls “brains for bots,” uses the cloud to capture and share robot experiences. “For example, you have a company with 20 robots, and you just bought your 21st robot. You connect it to the Neurala Intelligence Engine, and suddenly it knows everything the other 20 robots do.”

A new ecosystem: The rise of modular platforms

During most of the industrial phase of robotics, each vendor, research lab, or innovator typically developed proprietary hardware and software components to build their robot. Today, the robotics ecosystem is advancing through modular platforms that are emerging across key areas of functionality in the form of operating systems, packaged libraries, and cloud-resident services. (See Table 1.) These platforms promote the reuse of successful innovations, and they make robot development dramatically faster and easier.

“Five years ago, you would have needed to write all of your own software from scratch. Now, if you have an idea for what a robot could do in some entrepreneurial way, you have a much better starting point and a much larger community upon which to rely,” says Brian Gerkey of the Open Source Robotics Foundation (OSRF). OSRF supports the Linux-based Robot Operating System (ROS), widely used in academia, government labs, enterprises, and startups.

Operating systems (OS) for robots provide the communications infrastructure to program, operate, debug, and control the robot as a system of systems. They take care of many low-level details, such as message passing, memory sharing, device drivers, and resource allocation. The details of the hardware, such as the particular sensor, camera, or motor the robot uses, are also abstracted away, so the engineer can ignore them. Developers are freed from the need to create their own control framework. “Using ROS, we were able to build both the hardware and the software and put a robot in the field in 10 months. That is amazing,” Cousins says.
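A minimal ROS 1 node gives a feel for how little plumbing a developer must write. Topic and node names below are arbitrary, and the sketch assumes a ROS installation with a running roscore.

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String

def talker():
    # Register this process with the ROS master
    rospy.init_node("delivery_status")
    # ROS handles message passing; we just declare a topic to publish on
    pub = rospy.Publisher("status", String, queue_size=10)
    rate = rospy.Rate(1)                 # publish at 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data="en route to room 212"))
        rate.sleep()

if __name__ == "__main__":
    talker()
```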

ROS is an open source OS. So is Tekkotsu, which was originally written for Sony’s AIBO robots and today is maintained by Carnegie Mellon University. More recently, startups have entered the field: Brain Corporation offers BrainOS, an operating system designed to facilitate learning by combining advances in machine learning, computer vision, and computational neuroscience. Another startup, Neurala, offers its “brains for bots” OS, which uses deep learning techniques to support cognitive requirements in robot applications.


Dexterous manipulation: Be like people

For service robots, manipulation is related to two objectives. One is grasping—the act of gripping an object appropriately so it does not slip or break. The second is moving the grasped object through the environment to perform the desired task. These actions could be pick-and-place operations, pouring a beverage, assembling an object, or many others. Advances in manipulation will rely on progress in cognitive capabilities, because the ability to perceive, act, and learn are common to both cognition and manipulation.

Broadly, advances in manipulation involve the dexterity with which a robot can grip any object and move it through space without bumping into things or harming the environment or the object. In the process, the robot might need to decide where to grip, how much force to apply, whether to push an object or pick it up, and how to orient the object (it can’t turn over a glass of water, for instance).
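The sketch below illustrates the kind of decision logic just described, scoring candidate grasps on simple heuristics. The features and weights are invented for illustration; real grasp planners rely on geometry, force-closure analysis, or learned models.

```python
def grasp_score(candidate):
    """Score a candidate grasp; higher is better."""
    score = 0.0
    score += 2.0 if candidate["on_handle"] else 0.0    # prefer designed grip points
    score -= candidate["required_force"] / 10.0        # gentler is safer
    score -= 5.0 if candidate["tips_object"] else 0.0  # never tip a full glass
    return score

candidates = [
    {"on_handle": True,  "required_force": 4.0, "tips_object": False},
    {"on_handle": False, "required_force": 9.0, "tips_object": True},
]
best = max(candidates, key=grasp_score)   # picks the handle grasp
```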

So far, manipulation tasks have been kept simple by relying on a combination of known objects, simplified objects, controlled environments, or narrowly defined tasks. Recent advances exploit the fact that the environments service robots work in are built for humans. For example, grippers modeled on the multi-fingered human hand will allow robots to grip most everyday objects the way they are meant to be held. Multi-fingered hands, tactile sensing, and physically realistic simulators are important enablers of dexterous manipulation.

Robots have been able to perform complex manipulation tasks when operated by humans through tele-operation, such as grasping everyday objects, using a power drill, and retrieving items from a refrigerator. Combining this dexterity with the advances in automated learning discussed in the main article suggests that robots will demonstrate steady progress in performing complex manipulation tasks on their own.

 


Other modular platforms gaining widespread adoption are two algorithm libraries used for robot perception: the Open Source Computer Vision library (OpenCV) and the Point Cloud Library (PCL). Both are free for commercial use. OpenCV originated at Intel as an open-source library for computer vision. PCL includes algorithms for recognizing objects within 3D arrays of geometric coordinates (that is, point clouds) that represent the surfaces of scanned objects or scenes.
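As a small example of what OpenCV provides out of the box, the sketch below matches ORB features between a known object and a camera view, a classic recognition recipe. The image paths are placeholders, and the opencv-python package is assumed.

```python
import cv2

query = cv2.imread("object.jpg", cv2.IMREAD_GRAYSCALE)   # known object
scene = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)    # robot's camera view

# Detect keypoints and compute binary descriptors in both images
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(query, None)
kp2, des2 = orb.detectAndCompute(scene, None)

# Match descriptors; many good matches suggest the object is in view
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} feature matches")
```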

“PCL has become the most popular library for 3D perception on the planet,” says Radu Rusu, CEO of Fyusion, a 3D visual systems startup.

Likewise, the ROS-based MoveIt! software package, now supported by SRI International, is a platform for motion planning that handles a robotic arm’s movements during manipulation tasks.
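A few lines against MoveIt!'s Python interface convey the level of abstraction it offers: the developer states a goal pose and the library plans a collision-free path. The group name "arm" and the target pose are placeholders that depend on a specific robot's MoveIt! configuration.

```python
import sys
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
group = moveit_commander.MoveGroupCommander("arm")   # planning group name varies by robot

# Where the end effector should go (placeholder coordinates, meters)
target = Pose()
target.position.x, target.position.y, target.position.z = 0.4, 0.1, 0.4
target.orientation.w = 1.0               # identity orientation

group.set_pose_target(target)
group.go(wait=True)                      # plan a collision-free path and execute it
group.stop()
```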

OSRF’s Gazebo is another important platform for designing and building robots. It provides a simulation environment in which developers can test software that controls robot movement without deploying it on hardware, speeding development.

Modular platforms can radically accelerate innovation in robotics. They make the world of robotics accessible to developers and innovators who currently might be outside the robotics ecosystem. “We want to provide a pretty easy way for iOS and Android developers to build robot applications by using AI in their apps, things like facial or object recognition. They can do it with a simple line of code or with an API [application programming interface] call,” Huynh says. Robotbase is creating a modular platform that will make deep learning techniques for object recognition, face recognition, and other tasks available to mobile app developers. The large base of mobile app developers will be in a position to use their skills to develop new behaviors for robots.
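The one-call style of integration Huynh describes might look like the sketch below. The endpoint, parameters, and response format are invented for illustration; they do not describe Robotbase’s actual API.

```python
import requests

# Send one camera frame to a hypothetical recognition service
resp = requests.post(
    "https://api.example.com/v1/recognize",     # hypothetical endpoint
    files={"image": open("frame.jpg", "rb")},   # placeholder image path
    data={"task": "face_recognition"},
)
print(resp.json())   # e.g., {"faces": [{"name": "guest_42", "confidence": 0.93}]}
```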

Platforms also open robot development to hardware in common use every day, rather than staying limited to proprietary chips used in a small number of expensive devices. “We’re focused on industry-standard hardware components that are cheap. At the end we want to be able to run on everything,” says Neurala’s Matus. “Our current design spec is the iPad Air, so our solutions will run on [its hardware].”

The future outlook for the relative boundaries of individual platforms is far from settled. “The debate on what constitutes a robotics system and how you split up into components is still very active,” notes Rusu. What seems clear is that their rise and adoption will continue. “Within subdomains that are large enough that people work on a library of capabilities, I expect they will emerge as platforms with their own identities,” suggests Gerkey.

Ultimately these platforms redefine ecosystem dynamics. They lower the barriers to innovation, create the potential for a much larger community of researchers and developers, and set the stage for more mainstream and diverse use cases.

Service robots open up new contexts for productivity impact

CEOs have been upbeat about the productivity potential from robotics; 94 percent who already use robots say they increased productivity in their business. (See Figure 4.) This productivity has come largely from industrial robots in circumstances where a robot’s speed, strength, and consistency are ideal for working on dull, dirty, and dangerous tasks. With service robots, the potential for productivity impact can be extended into many more areas.


Figure 4: CEOs are generally positive about the impact of robots — of CEOs whose companies use robots, 94 percent believe that introducing robotics has led to increased productivity


Figure 5: The productivity context over time has expanded from physical to digital to a blend of physical and digital


Growth in productivity during the past century or more has been fostered by many different technologies. At the same time, the contexts that drive this productivity have been evolving. (See Figure 5.) Technologies in a physical context amplify human and animal muscle. The industrial revolution was principally about productivity gains from the physical augmentation of muscle power.

Technologies in a digital context amplify and augment cognitive processing and communication capability. The computing and communications revolution has been principally about productivity gains from the power achieved by making information and analytics widely available.

What is emerging today is at the intersection of technologies that blend the physical and digital contexts. Broader, more expansive information capture and processing, combined with smarter manipulation and movement of physical materials, can deliver new benefits. The Internet of Things, cheap and powerful sensors, and 3D printing are examples of technologies in this combined context. It is here that service robots, which process digital information and do physical work, are poised to create impact.


Interaction: Toward human-centric design

Service robots will work among and with humans. Their adoption and success are therefore tied to the development of interaction methods and modes that are intuitive, easy to use, and learnable without extensive training for either the human or the robot. The aspiration is for robots to interact in all the modes humans are accustomed to using with each other: audio, visual, tactile, and social. (See Figure 6.)

Modes of interaction are already expanding beyond mere programmatic control using a keyboard and mouse. Speech-based solutions allow humans to provide instructions and commands by voice. Vision-based solutions are evolving to interpret human facial expression, body movement, eye movement, and gestures to communicate intent and responses. Advances in understanding social interaction are making it possible to model emotions and communicate emotional states in a nonverbal manner. There are also advances in brain-computer interfaces that can allow paralyzed people to make robots move as the users intend.

An important area of progress is how robots learn new behaviors and actions by interacting with humans. As discussed in the main article, advances in autonomous learning methods bring the same repertoire of human learning methods to robots, such as learning by demonstration, feedback, and repetition.

As methods of interaction expand, it is equally important to keep humans safe around robots working in the same spaces. Lightweight, human-friendly designs are making robots less imposing and physically safer to be around. Robots are also becoming capable of monitoring their speed and proximity to humans and modulating the power or force exerted in case of contact. For instance, variable impedance actuators help a robot be rigid when moving slowly and soft when moving fast, so its behavior softens to absorb force (such as when coming in contact with a human or an unintended object) and stiffens to transfer energy (to carry out its task).
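The idea behind variable impedance can be sketched as a stiffness schedule driven by speed, as below. The gains and limits are illustrative rather than tuned values, and a real controller would act per joint with proper dynamics.

```python
K_MAX, K_MIN, V_REF = 300.0, 30.0, 1.0   # stiffness range (N·m/rad), reference speed (rad/s)

def joint_torque(q_des, q, q_dot):
    """Simple impedance law with speed-dependent stiffness:
    stiff at low speed (transfer force), soft at high speed (absorb contact)."""
    speed = abs(q_dot)
    # Interpolate stiffness: high when slow, low when fast
    k = max(K_MIN, K_MAX * (1.0 - min(speed / V_REF, 1.0)))
    d = 2.0 * (k ** 0.5)                 # damping scaled with stiffness
    return k * (q_des - q) - d * q_dot

print(joint_torque(q_des=0.5, q=0.4, q_dot=0.05))   # slow motion: stiff response
print(joint_torque(q_des=0.5, q=0.4, q_dot=1.50))   # fast motion: compliant response
```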

The design of robots and their interaction methods is evolving to be more human-centric. The promise is that robots will emerge as true partners, augmenting human capability and capacity. The respective strengths of humans and robots can be combined for productivity that neither could attain alone.


Figure 6: Evolving methods and modes of interaction between humans and robots


In fact, a broader cognition revolution is under way, thanks to the resurgence of advances in artificial intelligence, machine learning, machine vision, deep learning, and so on. In many ways, service robots are the first technology that can apply cognition to physical tasks such as manipulation and movement in varied and diverse domains. This capability opens up a new spectrum of potential productivity advances.

“In many ways, service robots are the first technology that can apply cognition to physical tasks such as manipulation and movement in varied and diverse domains. This capability opens up a new spectrum of potential productivity advances.”

Conclusion

While the Hollywood cliché of human-like robots remains out of reach, robots are approaching a technological inflection point that will let them operate ever more reliably in dynamic, unscripted environments. Today they offer enhanced productivity when used in cages in factories. In the foreseeable future, provided certain challenges are met, they will become prevalent in warehouses, hospitals, hotels, and many new contexts.

PwC sees great promise in a number of technologies designed to meet these challenges. Increasingly, robots can sense the details of their environments, recognize objects, and respond to information and objects with safe, useful behaviors. This ability will increase the number and complexity of the tasks that they can perform.

Service robots are in the early stages of a long development cycle. Incremental, evolutionary gains can be expected in the next three to five years as the field gets past several technological challenges. After that, rapid gains can be expected, especially for service robots in real-world environments, as new robot models take advantage of increasingly powerful and standard modular platforms combined with increasingly powerful autonomous learning capabilities.

For businesses, the coming wave of service robots is an opportunity to look at productivity challenges in creative ways. While many technological advances are falling into place, what the robotics industry needs most is great ideas for what robots should do. “We still lack the winning application ideas,” says Gerkey. “We need people who are probably coming from outside of the robotics community, who have a great idea for how to apply all this technology to compelling business challenges.” Many of the most compelling innovations will be inspired by a deep understanding of the new context that service robots will occupy—a context that combines physical and digital technologies and thereby alters the competitive dynamics in an industry.

 


Contacts

Chris Curran
Principal and Chief Technologist, PwC US
Tel: +1 (214) 754 5055

Vicki Huff Eckert
Global New Business & Innovation Leader
Tel: +1 (650) 387 4956

Mark McCaffrey
US Technology, Media and Telecommunications (TMT) Leader
Tel: +1 (408) 817 4199