Facebook Pitches In on Intel's Coming Artificial Intelligence Chip -- Update
October 17, 2017 - 3:40PM
Dow Jones News
By Ted Greenwald and Jack Nicas
LAGUNA BEACH, Calif. -- Intel Corp. on Tuesday said it is
working with Facebook Inc. and other firms on a coming chip
specially designed for artificial intelligence, as the
semiconductor giant moves to capitalize on a fast-growing market
and aims a direct shot at rival Nvidia Corp.
Intel's new chip will be among the first of a new breed of
processors designed from the ground up to accelerate the popular AI
technique known as deep learning, which enables computers to
recognize objects in photos, words in spoken statements, and other
features that otherwise would require human judgment.
Called the Nervana Neural Network Processor, the chip is the
fruit of Intel's acquisition last year of startup Nervana Systems.
Intel expects to ship the initial version on a limited basis later
this year and make it widely available next year through Intel
Nervana Cloud, a cloud-computing service, and as an appliance that
customers can install in their own data centers.
Facebook is "starting to take a look at (the chip's) early phase
and say, 'Hey, this really could change the way we think about
artificial intelligence, and help us really steer how we build
software and hardware,'" Intel Chief Executive Brian Krzanich said
at The Wall Street Journal's WSJ D.Live technology conference
Tuesday. "This is the first piece of silicon. We have a whole
family of silicon planned."
Intel said it is working with a select group of companies to
fine-tune the chip. The company said the technology could
contribute to advances in medical diagnoses, financial fraud
detection, weather prediction, self-driving cars and other
areas.
Estimates vary widely on the potential size of the market for
AI-specific hardware in data centers. Karl Freund, an analyst at
Moor Insights & Strategy, estimates the market is worth at
least $500 million this year and could grow to as much as $9
billion by 2020.
Nvidia serves that market virtually single-handedly. Its chips
were designed to process graphics but proved more efficient in some
deep-learning tasks than Intel's conventional processors, such as
its Xeon line.
Deep learning has emerged as an effective way for computers to
find useful information in the floods of data washing over the
internet and corporate networks, especially imagery, sounds,
documents, and other data that isn't in strictly organized formats,
such as spreadsheets and databases. However, it requires huge
quantities of computing power to process immense stores of
data.
With deep learning, computers study large volumes of example data
for patterns, in a phase called training, and then apply what
they've learned to make decisions about new data.
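The two phases described above can be sketched in miniature: a single logistic neuron trained by gradient descent stands in for the far larger networks used in practice. All function names and values here are illustrative, not drawn from Intel's or Facebook's software.

```python
# Minimal sketch of the two phases: "training" fits parameters to
# labeled example data; inference applies them to unseen inputs.
import math

def train(samples, labels, steps=1000, lr=0.5):
    """Learn a weight and bias separating 1-D samples into two classes."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid output
            w += lr * (y - p) * x                      # gradient step on weight
            b += lr * (y - p)                          # gradient step on bias
    return w, b

def predict(w, b, x):
    """Inference: apply the learned parameters to new data."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0

# Training phase: small values labeled 0, large values labeled 1.
w, b = train([0.0, 1.0, 3.0, 4.0], [0, 0, 1, 1])
# Decision phase: classify new, unseen inputs.
print(predict(w, b, 0.5), predict(w, b, 3.5))
```

Real deep-learning training repeats this kind of update across millions of parameters and examples, which is where the "huge quantities of computing power" come in.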
The Nervana NNP is designed to speed up the training phase by
taking shortcuts specific to neural networks, the software
structures that drive deep learning. For instance, training
calculations can occur at low precision, saving processing power
for further calculations. The chip is also designed to be ganged,
so large numbers of NNPs can work together on a single task.
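The low-precision idea can be illustrated generically: the same matrix multiply in 16-bit floats uses half the memory of 32-bit and yields a nearly identical result, which is the trade AI chips make for throughput. This is a NumPy sketch of the general technique, not code for the Nervana chip itself.

```python
# Compare a matrix multiply at 32-bit and 16-bit precision: half the
# memory, nearly the same answer -- the trade-off behind low-precision
# training hardware.
import numpy as np

rng = np.random.default_rng(0)
a32 = rng.standard_normal((64, 64)).astype(np.float32)
b32 = rng.standard_normal((64, 64)).astype(np.float32)

full = a32 @ b32                        # 32-bit reference result
half = (a32.astype(np.float16) @ b32.astype(np.float16)).astype(np.float32)

print(a32.nbytes // a32.astype(np.float16).nbytes)  # memory saving factor
print(float(np.max(np.abs(full - half))))           # small rounding error
```

Halving the bits per number doubles how much data fits in a chip's memory and on its wires, which is why "the amount of data we can go look at" grows along with raw speed.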
Intel declined to provide metrics for evaluating the performance
of the new chip. Late last year, Intel announced a goal of boosting
training speed to 100 times that achieved by graphics processors.
Since then, however, Nvidia has multiplied the speed of its own
chips.
"Intel will be competitive but is unlikely to have a huge
advantage," Mr. Freund said.
Mr. Krzanich said at the conference Tuesday that the Nervana
chip "is not only about speed, but it's the amount of data we can
go look at." He said such chips will enable new use cases, such as
more accurate weather forecasting, which require analyzing enormous
data sets.
Several other companies are working on chips designed to
accelerate AI tasks. For example, Alphabet Inc.'s Google division
has introduced two generations of AI chips it calls Tensor
Processing Units, or TPUs, for use in Google's own data
centers.
Write to Ted Greenwald at Ted.Greenwald@wsj.com and Jack Nicas
at jack.nicas@wsj.com
(END) Dow Jones Newswires
October 17, 2017 15:25 ET (19:25 GMT)
Copyright (c) 2017 Dow Jones & Company, Inc.