September 8, 2016
Brian Gerkey is CEO of the Open Source Robotics Foundation (OSRF), an organization supporting the development, distribution, and adoption of open source software in robotics.
Brian Gerkey of the Open Source Robotics Foundation explains how platform technologies in robotics will accelerate innovation.
PwC: Brian, you have been involved in robotics for many years. Can you please tell us about your background and your work?
Brian Gerkey: Sure. I got into robotics in the mid-1990s when I was in grad school at the University of Southern California. I encountered the problem that many people encountered at that time and still do today: to develop a robot, you must build a lot of infrastructure that is largely orthogonal to the robotics task at hand.
Essentially, a robot is a computer with a bunch of peripherals that don’t want to cooperate. You need to write a lot of tools and libraries to access the peripherals, the sensors, and the actuators and then debug the entire system. As a graduate student, I spent a lot of time building software tools before I could do the robot-related experiments.
I’ve dedicated my career to building an open source platform that will take care of all those common tasks and infrastructure details. Before OSRF, I was at Willow Garage, a robotics incubator in Silicon Valley, where we developed a platform that became known as ROS, for Robot Operating System.
ROS has everything, from device drivers for all of the sensors and actuators that you might want to use to the developer tools that let you visualize the state of the system, debug the system, describe it, bring it up and shut it down, and other things. It also has algorithms for capabilities such as navigation, path planning, and perception, which are all common subdomains within robotics.
PwC: What is the genesis of OSRF?
Brian Gerkey: When I was at Willow Garage, we put ROS out for everyone’s use, and a lot of people started using it and adding to it. Around 2012, we decided that this burgeoning open source community would be best served by an independent nonprofit R&D organization to act as the neutral hub of the community. That’s why we founded OSRF. Our mission is to support the development and distribution of open source software for robotics.
Right now, half of our work is on the ROS platform and the other half is on the Gazebo platform, which is a robot simulator. Robots are difficult enough to work with that it’s very important to have an effective simulation, so you can test all of your software without using the hardware each time.
PwC: What have you learned about platforms for robotics? How will they evolve?
Brian Gerkey: Right now, I think users perceive the platform from the perspective of their own use case. There’s ROS in the very narrow sense, which could be just the internal message passing system and developer tools. Then there is ROS in the very broad sense, which is all the software that can conceivably interact with those tools. Somewhere in the middle is how most people would think of it. These differences are mainly semantic.
I would expect to see more of what’s already happening today. Within subdomains of robotics there are enough people working on libraries of capabilities—such as planning and navigation, as well as perception—that I expect they will emerge as platforms with their own identities.
Gazebo, a robot simulator, is a great example of that. A project called MoveIt!, currently being developed at SRI International, is all about high-degree-of-freedom motion planning. A perception library called the Point Cloud Library originally was developed in the ROS community and broke out on its own. I think we’ll see more and more of that. You’ll get a focused community working on solving the problems that it cares about.
PwC: What feedback have you received about ROS from the industry?
Brian Gerkey: As we work more with different industries, we’re getting a demand from them to be a little more selective and draw a circle around a subset of that overall platform. This conversation is evolving, because the different industries have different concerns. For instance, the automotive industry is using ROS on self-driving car projects, the industrial automation community is using ROS to control robot arms in factories, and the Department of Defense community is using it for self-driving military vehicles. These different groups have overlapping but distinct concerns about what they’re looking for in a production-ready system.
Other feedback during the last couple of years has been to establish standards that take away some of the flexibility. Industrial engineers need to deploy a system, and they want us to tell them what the best practices are. They don’t want all the freedom that academic users might want.
PwC: How will you address this feedback?
Brian Gerkey: We’re rolling these concerns into the development of ROS 2.0. For example, 2.0 specifically includes the concept of lifecycle management. Instead of saying, “You just write your own main function and call into these libraries,” we’ll say, “The best practice is to wrap up your functionality as a component and then our system will execute it for you.”
There’s a tradeoff here. When we execute the program for them, that places a bit of a constraint on a developer, but it allows us to introspect the system in a much deeper and more powerful way and reflect that state back to the rest of the system.
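To make the component idea concrete, here is an illustrative sketch in plain Python (not the actual ROS 2 API) of the managed-node lifecycle Gerkey describes: a component moves through a fixed set of states, and the framework, rather than the developer’s own main function, drives the transitions and can introspect the current state at any time. The state and transition names follow the published ROS 2 lifecycle design; the class itself is hypothetical.

```python
class LifecycleComponent:
    """Toy model of a ROS 2 managed node's lifecycle state machine."""

    # (current state, transition) -> next state, per the ROS 2 lifecycle design.
    TRANSITIONS = {
        ("unconfigured", "configure"): "inactive",
        ("inactive", "activate"): "active",
        ("active", "deactivate"): "inactive",
        ("inactive", "cleanup"): "unconfigured",
        ("unconfigured", "shutdown"): "finalized",
        ("inactive", "shutdown"): "finalized",
        ("active", "shutdown"): "finalized",
    }

    def __init__(self, name):
        self.name = name
        self.state = "unconfigured"

    def trigger(self, transition):
        key = (self.state, transition)
        if key not in self.TRANSITIONS:
            raise ValueError(
                f"illegal transition {transition!r} from state {self.state!r}"
            )
        self.state = self.TRANSITIONS[key]
        return self.state


# The executor, not the component author, sequences bring-up and shutdown,
# which is what lets the system introspect and report component state.
node = LifecycleComponent("camera_driver")
node.trigger("configure")   # unconfigured -> inactive
node.trigger("activate")    # inactive -> active
```

Because every component exposes the same small set of states, the system can bring nodes up in a known order, report which are active, and shut them down cleanly.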
Another significant change we are making is to the underlying messaging system. ROS 1.0 uses a custom system we developed for doing discovery—how different components find each other—and for transporting the messages between components over a network. We’re replacing that with a new service built on top of an open middleware standard called DDS, the Data Distribution Service. It’s already used very broadly in industry and by government, and it is a proven industrial-strength communications layer.
One key benefit will be better behavior in environments that were outside the scope of ROS 1.0, such as over very weak Wi-Fi links or delayed links where you need to wait for the satellite to come into view before you forward the message. There are also benefits in interacting with real-time systems and in quality of service and survivability.
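The following is a minimal in-process sketch in plain Python (not DDS itself, and not the DDS API) of two ideas from this answer: discovery by topic name, where publishers and subscribers find each other through a named channel rather than an address, and a per-subscription quality-of-service choice between reliable delivery, which queues every sample, and best-effort delivery, which keeps only the latest sample, as might make sense over a weak Wi-Fi link. The `Bus` class and its methods are hypothetical.

```python
from collections import defaultdict, deque


class Bus:
    """Toy topic-based pub/sub bus illustrating DDS-style QoS choices."""

    def __init__(self):
        self.subs = defaultdict(list)  # topic name -> subscriber queues

    def subscribe(self, topic, qos="reliable", depth=10):
        # "reliable": bounded queue holding up to `depth` samples;
        # "best_effort": keep only the most recent sample.
        q = deque(maxlen=depth if qos == "reliable" else 1)
        self.subs[topic].append(q)
        return q

    def publish(self, topic, msg):
        # Discovery by topic name: the publisher never names its subscribers.
        for q in self.subs[topic]:
            q.append(msg)


bus = Bus()
reliable = bus.subscribe("/scan", qos="reliable", depth=10)
lossy = bus.subscribe("/scan", qos="best_effort")
for i in range(5):
    bus.publish("/scan", i)
# reliable now holds all five samples; lossy holds only the latest one.
```

Real DDS adds much more, including network transport, reliability negotiation, and deadline and liveliness policies, but the per-subscription QoS contract shown here is the core idea that makes the same middleware usable on both a fast LAN and a delayed or lossy link.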
The main benefit is that it’s a production-ready platform that enterprises can use commercially, rather than only to do R&D.
PwC: What’s being done to extend the appeal of robotics to the large Android or iOS developer communities to reduce the learning curve for engineers?
“That’s been a long-term goal of mine and of the ROS project—to make the ability to program a robot possible for anyone who has solid software engineering know-how.”
Brian Gerkey: That’s been a long-term goal of mine and of the ROS project—to make the ability to program a robot possible for anyone who has solid software engineering know-how. We’ve gone a long way in that direction, but we’re not to the point where it’s as easy to program a robot as it is to program a mobile phone application.
We’re moving quickly, but I think the inherent complexity of the robot versus the mobile phone means that it’s taking longer to get to that broader developer community.
PwC: What emerging technologies are having the most impact on robotics today?
Brian Gerkey: During the past five years, one of the biggest changes has been the availability of low-cost 3D sensing. This capability was pioneered by the PrimeSense technology that went into the Microsoft Kinect. Now many similar systems are available.
For years before that, if you wanted to get a 3D view of your environment, you had to put a very expensive laser on an expensive pan-and-tilt mechanism and actuate it. Suddenly Microsoft came along with the Kinect gaming accessory, which cost only $150 and gave you better data than you’d ever been able to get before.
PwC: It took the robotics world from 2D to 3D very quickly, right?
Brian Gerkey: It did. We had been working in 3D before, but in a limited way with a low update rate. Now the camera provides data at 30 hertz, and the cost is just unbeatable.
Another key development has been robot arms that are safe to operate around people. Historically, robot arms have been used only in cages and factories; you couldn’t have people near them, because they’re dangerous. Today, a new generation of robot arms is safe to use around people, such as the robot arms from Universal Robots and Baxter from Rethink Robotics, which is based on ROS, by the way.
Other smaller companies are developing novel robot arms. Altogether these efforts are providing a new generation of manipulation capabilities that can be brought into the regular world. Today, most of these solutions are tailored for manufacturing and logistics settings, which have had a need for manipulation. In the not-too-distant future, the same technology will get into offices, hospitals, and eventually homes.
These are some key trends for hardware innovations. On the software side, at the risk of tooting our own horn, I think the evolution of open platforms such as ROS has really lowered the entry barrier for people to get into robotics. Five years ago, you would have needed to write all of your own software from scratch. Now, if you have an idea for what a robot could do in some entrepreneurial way, you have a much better starting point and a much larger community upon which to rely.
PwC: What is your advice to CTOs and other executives who impact strategy? How should they take advantage of the developments in robotics?
Brian Gerkey: Depending on what their business is, my first message is that now is a great time to act. There’s a lot of excitement about robotics now that wasn’t there before. Many young, creative people are jumping into this field. There is great talent, and they are ready to apply themselves. Combine that with advances in sensors, manipulation capability, and open platforms, and most of the building blocks are in place.
“We [the robotics community] still lack the winning application ideas. We need people who are probably coming from outside of the robotics community, who have a great idea for how to apply all this technology to compelling business challenges.”
Missing in most cases are the great ideas for what the robot should do. We still lack the winning application ideas. We need people who are probably coming from outside of the robotics community, who have a great idea for how to apply all this technology to compelling business challenges. CTOs and other business executives can fill this gap and unlock many great business opportunities, whether they are in improving internal operations, creating new products, or tapping new markets.
In addition, I would strongly recommend that they build on top of an existing open platform. It doesn’t have to be ours.