Diving deeper into the realm of AI

July 18, 2017


Deep learning is a crucial step toward achieving true AI—but the human brain still reigns supreme.

This may be the first time in AI’s history when a majority of experts agree the technology has practical value. Ever since the field’s conceptual beginnings in the 1950s, led by legendary computer scientists such as Marvin Minsky and John McCarthy, AI’s viability has been the subject of fierce debate. As recently as 2000, the most proficient AI system was roughly comparable in complexity to the brain of a worm. Then, as high-bandwidth networking, cloud computing, and powerful graphics processing units emerged, researchers began building multilayered neural networks: still extremely slow and limited compared with the human brain, but useful in practical ways.

The best-known AI milestones, in which software systems beat expert human players at Jeopardy!, chess, Go, poker, and soccer, differ from most day-to-day business applications. These games have prescribed rules and well-defined outcomes; every game ends in a win, loss, or tie. They are also closed-loop systems: they affect only the players, not outsiders. The software can be tweaked and allowed to fail repeatedly, with no serious consequences. That’s not the case with other AI applications, where the risks include autonomous vehicle crashes, factory failures, and inaccurate translations.

There are currently two main schools of thought on how to develop the inference capabilities necessary for AI programs to navigate through the complexities of everyday life. In both, programs learn from experience—that is, the responses and reactions they get influence the way the programs act thereafter. The first approach uses conditional instructions (also known as heuristics) to accomplish this. For example, an AI bot would interpret the emotions in a conversation by following a program that instructed it to start by checking for emotions that were evident in the recent past.
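
To make the heuristic approach concrete, here is a minimal sketch in Python. Everything in it is illustrative rather than drawn from any real system: the emotion labels, keyword lists, and the size of the “recent past” window are all assumptions, chosen only to show what conditional instructions look like in practice.

    # Minimal sketch of the heuristic (rule-based) approach: interpret the
    # emotion in a new message by first checking for emotions evident in
    # the recent past, then falling back to keyword matching.
    # All rules and labels here are illustrative, not from a real system.

    RECENT_WINDOW = 3  # how many past turns count as "the recent past"

    KEYWORD_RULES = {
        "angry": ["furious", "outraged", "annoyed"],
        "happy": ["glad", "delighted", "thanks"],
        "sad": ["unhappy", "disappointed", "sorry"],
    }

    def classify_by_keywords(message):
        """Return the first emotion whose keywords appear in the message."""
        text = message.lower()
        for emotion, keywords in KEYWORD_RULES.items():
            if any(word in text for word in keywords):
                return emotion
        return None

    def interpret_emotion(message, history):
        # Rule 1: start by checking for emotions evident in the recent
        # past; if the new message does not contradict them, carry them over.
        recent = [e for e in history[-RECENT_WINDOW:] if e is not None]
        detected = classify_by_keywords(message)
        if recent and detected in (None, recent[-1]):
            return recent[-1]
        # Rule 2: otherwise trust the keyword match, defaulting to neutral.
        return detected or "neutral"

    print(interpret_emotion("I'm still annoyed about the delay",
                            ["angry", None, "angry"]))  # -> "angry"

The point of the sketch is that every response follows from rules a programmer wrote down in advance; the program’s “experience” enters only through the conversation history it is handed.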

The second approach is known as machine learning. The machine is taught, using specific examples, to make inferences about the world around it; it then builds its understanding through this inference-making ability, without following specific instructions to do so. The Google search engine’s “next-word completion” feature is a good example of machine learning. Type in the word artificial, and several suggestions for the next word will appear, perhaps intelligence, selection, and insemination. No one has explicitly programmed the system to offer those particular complements; Google’s strategy is simply to surface the three words most frequently typed after artificial. With huge amounts of data available, machine learning can identify patterns of behavior with uncanny accuracy.
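
A toy version of that frequency-counting strategy fits in a few lines of Python. The miniature query log below is invented for illustration; a production system would compute the same statistics over billions of real queries.

    from collections import Counter, defaultdict

    # Tiny stand-in for a log of typed queries (illustrative data only).
    queries = [
        "artificial intelligence", "artificial intelligence",
        "artificial selection", "artificial insemination",
        "artificial intelligence", "artificial selection",
    ]

    # For each word, count which words most often follow it in the log.
    next_word_counts = defaultdict(Counter)
    for query in queries:
        words = query.split()
        for first, second in zip(words, words[1:]):
            next_word_counts[first][second] += 1

    def suggest(word, k=3):
        """Suggest the k words most frequently typed after `word`."""
        return [w for w, _ in next_word_counts[word].most_common(k)]

    print(suggest("artificial"))  # -> ['intelligence', 'selection', 'insemination']

No rule in this code mentions intelligence or insemination; the suggestions emerge entirely from counts over the data, which is the essential difference from the heuristic approach above.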

The type of machine learning called “deep learning” has become increasingly important. A deep learning system is a multilayered neural network that learns representations of the world and stores them as a nested hierarchy of concepts, many layers deep. For example, when processing thousands of images of human faces, it recognizes objects through a hierarchy of simpler building blocks: straight and curved lines at the most basic level; then eyes, mouths, and noses; then entire faces; and finally the features that distinguish one face from another. Besides image recognition, deep learning appears to be a promising way to approach complex challenges such as speech comprehension, human–machine conversation, language translation, and vehicle navigation.
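
The layered idea can be sketched in a few lines of Python with NumPy. The layer sizes and random weights below are assumptions made purely for illustration: a real deep learning system has far larger layers and learns its weights from data (typically by backpropagation) rather than drawing them at random.

    import numpy as np

    # Minimal sketch of a multilayered network: each layer transforms the
    # output of the layer below, so deeper layers can represent
    # compositions of simpler features (lines -> facial parts -> faces,
    # in the image example). Weights are random here; a trained system
    # would learn them from thousands of examples.

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0.0, x)  # a common layer nonlinearity

    layer_sizes = [64, 32, 16, 8]  # e.g., pixels -> edges -> parts -> faces
    weights = [rng.normal(0.0, 0.1, (m, n))
               for m, n in zip(layer_sizes, layer_sizes[1:])]

    def forward(x):
        """Pass an input up the stack, keeping each layer's representation."""
        activations = [x]
        for w in weights:
            activations.append(relu(activations[-1] @ w))
        return activations

    for i, rep in enumerate(forward(rng.normal(size=64))):
        print(f"layer {i}: representation of size {rep.shape[0]}")

Each successive layer produces a smaller, more abstract representation of the input, which is the sense in which the network stores concepts as a nested hierarchy.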

[Infographic: Five industry applications of deep learning]



Although it is the closest approximation of the human brain that scientists have yet built, a deep learning neural network cannot solve every problem. Currently, it requires multiple processors with enormous computing power, far beyond conventional IT architecture; it learns only by processing enormous amounts of data; and its decision processes are not transparent. For now, at least, true AI, which would simulate the brain’s fully autonomous decision-making, remains a product of our imaginations.

 


This content first appeared in strategy+business magazine as part of its Strategist Guide to Artificial Intelligence.

© 2017 PwC. All rights reserved. PwC refers to the PwC network and/or one or more of its member firms, each of which is a separate legal entity. Please see www.pwc.com/structure for further details. Mentions of Strategy& refer to the global team of practical strategists that is integrated within the PwC network of firms. For more about Strategy&, see www.strategyand.pwc.com. No reproduction is permitted in whole or part without written permission of PwC. “strategy+business” is a trademark of PwC.

