Intel buys an Artificial Intelligence Startup

Deep learning has been a hot topic this year, with high-profile announcements from the likes of IBM, Google, Facebook, Nvidia, Qualcomm, and Tesla. Now, Intel is tossing its own hat into the ring by purchasing the deep learning software and hardware developer Nervana Systems.

Nervana Systems has a cloud-based AI product that it sells to customers who want to tailor deep learning to their own specific use cases and businesses, as well as a proprietary, GPU-optimized framework dubbed Neon. The company's third product hasn't actually launched yet, but it may be the principal reason why Intel purchased this company in particular. The Nervana Engine is an ASIC that focuses on the benefits GPUs bring to the table while stripping out the not-insignificant amounts of hardware that ultimately aren't relevant to deep learning problems.

Nervana hasn't published much information on its upcoming ASIC, but we know the chip uses HBM.

The reason GPUs are attractive for these kinds of workloads is that they contain enormous arrays of cores that can be put to work on specialized problems. Resources like ROPs, texture caches, and FP64 (or even FP32) support aren't very important for deep learning, however; that's why Pascal's 16-bit half-precision mode was something Nvidia talked up when it unveiled GP100 earlier this year. Nervana's existing Neon framework already runs on Nvidia hardware, but Intel's decision to purchase the company will likely put an end to more permissive licensing arrangements.
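As a rough illustration (a hypothetical sketch, not code from Nervana or Nvidia), a few lines of NumPy show the storage savings half precision buys for a layer's weights. On hardware with a native FP16 path, the halved footprint also means roughly twice the values moved per memory transaction, which is where much of the advertised throughput gain comes from:

```python
import numpy as np

# Weights for a toy fully connected layer: 4096 inputs x 4096 outputs.
# Layer size is arbitrary, chosen only to make the numbers round.
w32 = np.random.rand(4096, 4096).astype(np.float32)
w16 = w32.astype(np.float16)  # same values, stored at half precision

# Half precision cuts the memory footprint (and bus traffic) in two.
print(w32.nbytes // (1024 * 1024))  # 64 (MB)
print(w16.nbytes // (1024 * 1024))  # 32 (MB)
```

The trade-off is reduced numeric range and precision, which deep-learning training and inference tolerate far better than the HPC workloads FP64 hardware was built for.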

Why Intel wants in

Right now, Intel is stuck in a bit of a tight spot. The company's consumer revenues have declined alongside the PC market's downturn, but its data center and HPC markets remain quite healthy. Intel missed out on the entire mobile and tablet market, and already had to cancel its plans to create new business for itself in those spaces (a failure we chronicled in a two-part article earlier this year).

This is about more than not wanting to miss an emerging market, however. Intel has been experimenting with product lines and markets that stretch beyond its own dominance of the data center, consumer, and high performance computing markets. Even though hardware like Xeon Phi could theoretically be used for deep learning, Xeon Phi is designed to perform massive vector calculations, not the half-precision operations that a deep learning network uses. It also packs far fewer cores than an Nvidia Tesla or even an equivalent AMD card, though we'd caution against treating core counts as indicative of deep learning performance.

If deep learning is as central to the future of AI and computing as the field has claimed, entering the market by acquiring a company with specialized ASIC hardware and proven designs is an excellent way for Intel to ensure that it remains relevant as computing continues to evolve. It could also be seen as a tacit admission that Intel isn't necessarily sure how to drive the evolution of microprocessors further than it already has.

I've talked before about how Intel isn't just dragging its feet on Moore's law; there are fundamental limits to silicon engineering, and they aren't going away. Moves like this could be taken to mean that even Intel recognizes that the era of huge improvements in general purpose compute performance is mostly over. Machines will continue to draw less power and become somewhat more capable over time, but the last major leap for Intel's CPUs was Sandy Bridge over Nehalem. Haswell and Skylake were much more modest improvements.

Moving into markets like this gives Intel the opportunity to explore other types of compute architectures, not as replacements for x86, but as high-performance supplements to it.