Brain Corporation: Lowering the barrier to innovation in robotics

September 8, 2016

Eugene Izhikevich is the founder, chairman, and CEO of Brain Corporation.

Eugene Izhikevich of Brain Corporation explains why programming robots is a key barrier to their success and forecasts how brain-inspired learning will change the game.

PwC: Eugene, can you please tell us about your background and your company?

Eugene Izhikevich: Sure. I spent most of my career in the academic world until 2009. I was a senior fellow in theoretical neurobiology at the Neurosciences Institute, where I built biologically detailed models of the brain. In 2005, I implemented the largest thalamocortical model, simulating the same number of neurons and synapses as the human brain; that is, 100 billion neurons and 1 quadrillion synapses.

I started Brain Corporation in 2009. We did contract research with Qualcomm to build a biologically detailed model of the visual system. After finishing the contract, we’ve spent the last few years applying what we learned and developed to robots.

PwC: Why robots? What challenges or opportunities are you seeing?

Eugene Izhikevich: Almost 100 years ago, science fiction writers promised us robots. So where are the robots? It is not difficult to build the body of a robot; many toy companies make amazing toy robots. The problem is that it is very difficult and expensive to program a robot to respond appropriately in every possible environment.

The conventional approach to robotics is that you build the body, put the computer inside, and then write software programs that tell the robot what to do in any particular situation. This task is relatively easy to accomplish for industrial robots, where a robot must repeat the same movement 1 million times and there are no uncertainties. Even here, the cost of programming can be higher than the cost of the installation and the hardware, but the programming itself is straightforward, because you explicitly tell the robot what to do.

When you put a robot in a home or office, there are floors, chairs, doors, people walking around, and things that move—all of which can be extremely confusing to robots. With all these uncertainties, explicitly telling a robot what to do in all situations is almost impossible. Conventional robotics is failing us for that reason.

PwC: What is the answer to this challenge?

“Today, there’s a huge renaissance in neural networks and machine learning.”

Eugene Izhikevich: Today, there’s a huge renaissance in neural networks and machine learning. They bring a different approach, based on autonomous learning and feedback from the environment. In this approach, the model of the environment and robot behavior is very simple, but the approach works because we can teach a robot by giving it lots and lots of data in the form of images, speech, sounds, actions, and so on. The next time the robot comes across a new situation, it will know what to do because the situation will approximate one it has seen before and responded to.
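The interview does not name a specific algorithm, but the core idea (responding to a new situation by matching it against the closest situation seen during training) can be sketched as a nearest-neighbor lookup; the sensor features and action labels below are invented purely for illustration.

```python
import numpy as np

# Toy illustration, not Brain Corporation's actual algorithm: respond to a new
# situation by recalling the most similar situation seen during training.
# Each situation is a made-up sensor feature vector paired with the action
# that was demonstrated at the time.
demonstrated_situations = np.array([
    [0.9, 0.1, 0.0],   # hypothetical "obstacle on the left" reading
    [0.1, 0.9, 0.0],   # hypothetical "obstacle on the right" reading
    [0.0, 0.0, 1.0],   # hypothetical "clear path ahead" reading
])
demonstrated_actions = ["turn_right", "turn_left", "go_straight"]

def choose_action(current_sensors):
    """Return the action paired with the nearest previously seen situation."""
    distances = np.linalg.norm(demonstrated_situations - current_sensors, axis=1)
    return demonstrated_actions[int(np.argmin(distances))]

# A reading the robot has never seen exactly, but close to a known one:
print(choose_action(np.array([0.8, 0.2, 0.1])))  # -> "turn_right"
```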

PwC: How does your solution work?

Eugene Izhikevich: In our product, we put together advances from machine learning, computer vision, and a lot of computational neuroscience. We have a simplified model of the mammalian brain that abstracts capabilities of the cerebellum, the basal ganglia, and cortical structures.

We take inputs from a vision sensor [camera] and other sensors, for example, and feed them into this model. Developers can use BrainOS to guide the robot’s actions by using a remote control application. Since you know what the robot can see, you just perform the actions for the robot, such as avoiding obstacles and picking up things. After a while, the robot says, uh-huh, every time I see this, my arm must move this way, my joint must move that way, my wheels must turn over there, and so on. With every new example, the robot learns how to behave and what action to take. The robot is merely repeating learned responses. It has no intentionality; that is, it has no idea why it is doing the task.
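A minimal sketch of this teach-by-remote-control loop, under the assumption that we simply log pairs of sensor readings and operator commands and fit a placeholder model to them. The real BrainOS model is brain-inspired and far more sophisticated; the logging interface and linear fit below are invented for illustration.

```python
import numpy as np

# Hypothetical teaching loop: while a person drives the robot, log pairs of
# (sensor snapshot, motor command). A simple least-squares fit then stands in
# for the learned mapping from sensors to commands.
sensor_log = []   # sensor readings at each moment (camera features, joint angles, ...)
command_log = []  # the joint/wheel command the human issued at that moment

def record(sensors, operator_command):
    sensor_log.append(sensors)
    command_log.append(operator_command)

# Simulated teaching session (random stand-ins for real sensor data).
rng = np.random.default_rng(0)
for _ in range(200):
    s = rng.random(8)
    record(s, np.array([s[0] - s[1], s[2]]))  # stand-in for the human's command

# Fit sensors -> commands with ordinary least squares (placeholder model).
X, Y = np.asarray(sensor_log), np.asarray(command_log)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def predict_command(sensors):
    """After training, propose the command a human would likely have given."""
    return sensors @ W

print(predict_command(rng.random(8)))
```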

PwC: What role is visual perception playing in your solution?

Eugene Izhikevich: You can divide visual perception into two different tasks. One is called vision for recognition, which is essentially visual search. If you show a robot an object, it will recognize what the object is. The second task is vision for action. If I ask you to please pick up this cup, you don’t need to know whether it is a plastic cup, a paper cup, or a can of soda. You know exactly how to pick it up. You need to know how much force or torque you must apply, which comes from experience.

Most of the algorithms and applications on the Internet today are vision for recognition. They’re trying to find patterns—to tell you what’s in the picture. For robotics, you also need to solve the problem of vision for action, which is what we are doing. A robot has a camera, an object in front of it, and a gripper. How will the robot pick up the object? The robot doesn’t need to name the object, but it needs to know what it can do with the object.
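A toy contrast of the two tasks. The interfaces below are invented; a real vision-for-action system would learn the grasp parameters from experience rather than use the simple heuristic shown here.

```python
import numpy as np

def vision_for_recognition(image):
    # Answers "what is it?": a classifier returning a label such as "paper cup".
    ...

def vision_for_action(image):
    # Answers "how do I act on it?": returns grasp parameters the arm can execute.
    # Here we fake it by grasping at the brightest pixel with a fixed force.
    grasp_row, grasp_col = np.unravel_index(np.argmax(image), image.shape)
    grip_force_newtons = 2.0
    return grasp_row, grasp_col, grip_force_newtons

frame = np.random.default_rng(1).random((64, 64))  # stand-in for a camera frame
print(vision_for_action(frame))  # the robot never needs to name the object
```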

PwC: How does BrainOS compare with other operating systems for robots, such as ROS [Robot Operating System]?

Eugene Izhikevich: As operating systems, they are similar. They are all sets of libraries and communications infrastructure to program, operate, and control the robot. BrainOS is different in that it is centered on learning. We start from learning as the core and build everything else around it. In other operating systems, the learning capability is an add-on.

The current version of BrainOS is based on Ubuntu Linux, with additional capabilities on top of that. We do connect to other operating systems; we have a bridge from BrainOS to ROS.
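A sketch of what such a bridge node might look like on the ROS side, assuming Python and rospy; the `brainos` client and its `get_velocity_command()` call are hypothetical names used only to show where BrainOS would plug in.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist
# import brainos  # hypothetical BrainOS client library

def main():
    rospy.init_node("brainos_bridge")
    pub = rospy.Publisher("cmd_vel", Twist, queue_size=10)
    rate = rospy.Rate(20)  # 20 Hz relay loop
    while not rospy.is_shutdown():
        # linear, angular = brainos.get_velocity_command()  # hypothetical call
        linear, angular = 0.0, 0.0  # placeholder so the sketch runs standalone
        msg = Twist()
        msg.linear.x = linear
        msg.angular.z = angular
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()
```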

PwC: What is the state of robot cognition, and what do you expect in the future?

“We’re still in the very early stage. Right now is the cusp or the inflection point, in the sense that finally the computing hardware has become fast enough that we can use very sophisticated capabilities inside robots or in the cloud.”

Eugene Izhikevich: We’re still in the very early stage. Right now is the cusp or the inflection point, in the sense that finally the computing hardware has become fast enough that we can use very sophisticated capabilities inside robots or in the cloud. This speed wasn’t possible even five years ago. Today, for $30, we have in our pockets or in our robots something that is 10 times faster than a Cray-1 and can run on something like AA batteries. Thanks to these capabilities, we can run very sophisticated algorithms.

I believe these new capabilities will create a revolution in robotics, because robots will be able to do many tasks that have been beyond their ability. For example, you can have robots in agriculture picking strawberries, tomatoes, and other produce. You could have robots in your home, not just vacuuming, but picking up stuff from the floor, putting it away, cooking, cleaning, and so on.

For this revolution to happen, I am not thinking of a universal humanoid robot that performs many different tasks. I am thinking of single-task robots, each performing a simple task. Right now in California, the manual labor to pick strawberries costs $0.5 billion per year. To pick a strawberry, all you need is a three- or four-wheeled platform that moves along the berm, sees the strawberry, extends a small arm with a gripper, and picks the strawberry without crushing or smashing it.

PwC: How would someone actually build a learning robot as you describe? If someone wants to build a trash-picking robot, for example, how could they use your technology?

Eugene Izhikevich: Let’s say a group of innovators wants to build a trash-picking robot. They must create a design. Let’s assume it uses an arm to pick up trash, instead of a little bulldozer that pushes the trash along the floor.

First you build a robotic platform and put an arm with a gripper on it. Next you take our solution, bStem, a developer kit and integrated platform with cameras for vision, along with BrainOS, and integrate them with the robot. Our kit includes the ability to remotely control the robot. You can move individual joints by sending commands to bStem, which passes them to the motors and so on.

You have a remote-control toy at this point. Next you put the robot in the environment and use the remote control to make it pick up trash as you would do it yourself. You move to the trash, position the gripper, grab, move to the garbage container, and release. As you do this, our system is looking at all the sensory inputs, the camera input, the accelerometer input, and any other sensors, and is learning the actions to take in any situation. Once it is confident, because it has seen a situation multiple times, it will start issuing commands even before the developer uses the remote control. It will anticipate the intended actions.
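A hedged sketch of that confidence gating, assuming a simple distance threshold over recorded demonstrations. The features, thresholds, and action names are invented; the point is only that the robot defers to the trainer until it has seen a similar situation enough times.

```python
import numpy as np
from collections import Counter

demos = []  # (sensor_vector, action) pairs recorded during teaching

def record_demo(sensors, action):
    demos.append((np.asarray(sensors, dtype=float), action))

def propose_action(sensors, max_distance=0.2, min_examples=3):
    """Return an action if confident enough, else None (defer to the trainer)."""
    sensors = np.asarray(sensors, dtype=float)
    nearby = [a for s, a in demos if np.linalg.norm(s - sensors) < max_distance]
    if len(nearby) < min_examples:
        return None  # unfamiliar situation; the human keeps driving
    return Counter(nearby).most_common(1)[0][0]

# Teaching session: the same kind of situation is demonstrated several times.
for _ in range(4):
    record_demo([0.5, 0.5], "close_gripper")

print(propose_action([0.52, 0.48]))  # -> "close_gripper" (robot acts on its own)
print(propose_action([0.9, 0.1]))    # -> None (wait for the remote control)
```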

After a while, the robot becomes easier and easier to train; it feels as if somebody else is playing with the remote control, predicting what the trainer intends to do and doing it. Initially, the robot might pick up only half the trash. As you continue teaching, its accuracy will increase and its mistakes will decrease. When it makes a mistake, such as picking up something that is not trash or ignoring trash, the trainer can intervene in real time and provide a corrective action.

Whether it’s a good product and whether it will be successful with consumers is a separate issue. But this example illustrates that developers can take a robot body and our software, teach the robot what needs to be done, and then just copy and paste the result to 10,000 units and sell them.

PwC: How big is the commercial opportunity with robots?

Eugene Izhikevich: The opportunity is huge. I believe everybody wants robots. When we succeed in this field, the impact on society will be greater than the impact of the Internet. This opportunity with robots will be like combining the impact of electricity, communications, and the Internet. There are all sorts of boring, dirty, and dull duties and chores that we would rather someone else do.

The reason we don’t have robots yet, as I mentioned earlier, is that it’s difficult to program them. The only solution we see is through training—through learning.
