You may have never heard of Syntiant, a machine learning chip startup targeting voice interfaces. But you have probably heard of the technology companies that on Wednesday invested $25 million in the startup's second funding round. Behind the investment were the venture capital subsidiaries of Microsoft, Amazon, Intel, Motorola, Robert Bosch and Applied Materials.
The Irvine, California-based Syntiant is tapping into the growing voice assistant market powered by Amazon and Google. Over the next few years, microphones could be embedded in everything from thermostats and refrigerators to wearables and headphones. The company’s hardware is designed to support voice recognition tasks such as identifying voice commands and detecting sounds like a baby crying or glass breaking.
“Its technology has enormous potential to drive continued adoption of voice services like Alexa, especially in mobile scenarios that require devices to balance low power with continuous, high-accuracy voice recognition,” said Paul Bernard, director of Amazon’s Alexa Fund. Syntiant last year raised $5 million in a first funding round led by Intel Capital, with participation from Seraph Group, Danhua Capital and Embark Ventures.
Syntiant, which is headed by former engineering executives from Broadcom, plans to enter volume production in the first half of 2019. Chief executive officer Kurt Busch said that some of the new funding would be used to more than double its headcount, from 25 to around 60 employees, over the next year. “We will be doubling our engineering team and building out a sales force,” Busch told Electronic Design.
The company has positioned its processor-in-memory architecture as faster and more efficient than CPUs, GPUs and DSPs. But unlike Nvidia, which dominates the training market, Syntiant is focused on running inference inside electronic devices without shipping the work to the cloud. That could improve privacy and lower the latency of voice control and other inference applications, which today usually run inside data centers.
As artificial intelligence moves out of research and development, the market for inferencing is forecast to outgrow training. Competition is mounting around inference chips that can be embedded in edge nodes—for instance, servers installed on factory floors—and edge devices—such as wearables and smart speakers. Syntiant is focused on edge devices, where it will have to battle for customers with major corporations like NXP Semiconductors.
The company's first chip is based on 40-nanometer NOR memory, and it stores the neural network in the same place where inference takes place. That limits the distance that data needs to travel around the processor, saving power that typically would be used accessing neural networks stored in DRAM. Using 8-bit precision, chips based on Syntiant's architecture can handle 20 trillion operations per second per watt (20 TOPS/W), Busch said.
In contrast, chips based on Nvidia’s Volta architecture can supply 0.4 TOPS per watt, he said.
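Those efficiency claims can be restated as energy per operation, which makes the gap easier to see. A minimal sketch, using the figures quoted above and assuming both numbers are per-watt efficiencies:

```python
def picojoules_per_op(tops_per_watt: float) -> float:
    """Convert efficiency in TOPS/W to energy per operation in picojoules.

    At 1 W, a chip sustaining (tops_per_watt * 1e12) ops/s spends
    1 / (tops_per_watt * 1e12) joules per op; 1 J = 1e12 pJ.
    """
    return 1e12 / (tops_per_watt * 1e12)

syntiant = picojoules_per_op(20.0)   # Syntiant's claimed 20 TOPS/W
volta = picojoules_per_op(0.4)       # Volta figure quoted by Busch
print(f"{syntiant:.2f} pJ/op vs {volta:.2f} pJ/op "
      f"({volta / syntiant:.0f}x energy per operation)")
```

On those numbers, Syntiant's architecture spends about 0.05 pJ per operation against roughly 2.5 pJ for Volta, a factor of 50 in energy per operation.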
Since it was founded last year, Syntiant has raised $30 million of total funding, no small amount given the high development costs involved with building silicon. But other startups targeting training in data centers, such as Graphcore and Cerebras Systems, have drawn significantly more money to compete with Nvidia, Intel and Xilinx. Designing with the 40-nanometer node allows Syntiant to keep costs down.
The development of a chip based on 7-nanometer technology costs about $550 million, while the average 40-nanometer chip is one-tenth of the cost, according to International Business Strategies. Completing chips based on the 16-nanometer node takes slightly more than $100 million. “By focusing on the architecture, we do not have to rely on Moore’s Law to get better power, performance and die area cost,” Busch said.
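The cost figures above work out as follows; a back-of-the-envelope sketch using the IBS numbers cited in the text:

```python
# Design-cost figures cited in the text (International Business Strategies).
cost_7nm = 550e6                 # ~$550 million for a 7 nm design
cost_40nm = cost_7nm / 10        # "one-tenth of the cost" -> ~$55 million
cost_16nm = 100e6                # "slightly more than $100 million" for 16 nm

print(f"40 nm design: ~${cost_40nm / 1e6:.0f}M")
print(f"Savings vs a 16 nm design: ~${(cost_16nm - cost_40nm) / 1e6:.0f}M")
```

That puts a 40-nanometer design around $55 million, well under both the 16-nanometer and 7-nanometer price tags, which is the cost argument behind Syntiant's choice of node.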