Why machines need humans to learn

June 20, 2016


Devices get smarter all the time, but for them to become truly intelligent, humans must create the models these gadgets “learn” from.

Think back to when you were a child. You were in a data-rich environment, but only starting to make sense of it all. You barely knew the names for things, much less how one thing related to another.

Now fast forward to adulthood. By this point, most of your experiences fit into webs of different but familiar contexts. You understand your co-workers’ roles, their strengths and weaknesses, where they come from, what their families are like, who they work best with. Every news item you read fits into a category. Every business task you tackle has a process and associated procedures, many of which you could do in your sleep.

In data terms, as you’ve matured, you’ve developed a model of your world so you can interact with it. That model contains numerous nested, overlapping, and sometimes conflicting models. When you react to new data, you’re constantly refining and adjusting those models to make sense of what’s new and to interact effectively with the people, places, and things in the world you know. There’s a representation in your mind of how the world works—a model of models.

Big representation is the act of creating a model of models for each specific use case, according to John Sviokla of PwC. To perform data modeling, intelligent assistants need a lot more help than children do. How can people help them build a model of models to make sense of a vast data landscape? By framing and breaking each problem into component parts, one at a time.

To date, the utility of big data has been delivered primarily within the context of simple, standalone data models—the small representation precursor to big representation. Companies now routinely collect conversations about their products in a long, skinny table of social media mentions and compare and contrast them over time. What was once a novelty has become a necessity for protecting corporate brand health. Large data sets also can be brought together, John notes, to create a pricing continuum for insurance buyers that might consist of a million price points.
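As an illustration of what such a “long, skinny” table looks like, here is a minimal sketch: one row per mention with just a few columns, which makes period-over-period comparison a one-line aggregation. The product name, dates, and sentiment labels are all hypothetical.

```python
# Hypothetical long, skinny table of social media mentions:
# one row per mention, only a few columns.
from collections import Counter

mentions = [  # (date, product, sentiment) — illustrative rows only
    ("2016-05-01", "WidgetPro", "positive"),
    ("2016-05-02", "WidgetPro", "negative"),
    ("2016-06-01", "WidgetPro", "positive"),
    ("2016-06-02", "WidgetPro", "positive"),
]

# Compare and contrast sentiment month over month by counting
# (month, sentiment) pairs; date[:7] keeps just "YYYY-MM".
by_month = Counter((date[:7], sentiment) for date, _, sentiment in mentions)

print(by_month[("2016-06", "positive")])  # → 2
```

The tall, narrow shape is the point: because every mention conforms to the same simple schema, standalone aggregations like this need no elaborate model of the business to be useful.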

But now, to get more out of big data, enterprises need big representation. Today’s artificial intelligence (AI) challenge is not just big data, but rather the discovery, construction, and evolution of a model of models for each intelligent assistant. Each intelligent assistant must make sense of its own world, its own business context.

Ideally, machines teach themselves and the learning is continual, but that can’t happen in a vacuum. Humans must be in the loop, particularly to make model development and evolution possible. How is Google able to make advances in self-driving cars? As this video about the application of AI in business shows, at the heart of a self-driving car is an intelligent assistant with an evolving model of models that makes sense of the changing environment surrounding a moving car.



One model Google developed proved critical. Google Maps, GPS, and radar together achieved only 3-foot accuracy—not enough to keep a car from hitting a guardrail or the median. The car needs accurate, real-time distances to the objects it approaches—hence LIDAR, a laser remote sensing system that casts laser beams in all directions from the roof of the car, then captures and times their reflections to calculate distances.
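The distance calculation at the core of LIDAR is time-of-flight: a pulse travels to the object and back at the speed of light, so the one-way distance is half the round-trip path. A minimal sketch (not Google’s actual pipeline, just the underlying physics):

```python
# Time-of-flight distance: a LIDAR unit emits a laser pulse, times how
# long the reflection takes to return, and converts that round-trip
# time into a distance.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_pulse(round_trip_seconds: float) -> float:
    """Distance in meters to the reflecting object.

    The pulse travels out and back, so the one-way distance is half
    the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A reflection returning after roughly 6.7 nanoseconds corresponds to
# an object about 1 meter away.
print(distance_from_pulse(6.67e-9))
```

The nanosecond timescale hints at why this is a big data problem: a scanner firing hundreds of thousands of pulses per second, in all directions, produces a dense 3-D point cloud that the car’s models must process continuously.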

With the help of LIDAR, engineers at Google developed a model of a 3-D space a car is moving through. Think about the big data the system must process to avoid collisions and allow the car to move through cities in heavy traffic. In this case, the dynamic model of 3-D space—which humans identified as essential and put in place—is one of at least four models that allow data collected while driving to be processed in a contextualized way, creating the intelligence of Google’s self-driving cars.

Bottom line: Some things can be represented, but others can’t—at least not yet

Business, John points out, follows the edge of automation as that edge moves forward and enables enterprises to structure and operationalize parts of the data environment that hadn’t been structured or operationalized before. But there are always large sections of the business decision and operations landscape that remain unstructured and nonautomated.

In 2016, a major form of competitive advantage has emerged: the use of big representation in places it hasn’t been used before. What’s the new model of models your industry can leverage?

 

