CSE alum startup SambaNova collects $56M in funding for AI chip research
As artificial intelligence applications grow and multiply, researchers are racing to design a new generation of hardware that meets their unique computational needs. The market for these “AI chips” is booming even in its infancy, and this week Google’s parent company made its first-ever investment in the space.
Startup SambaNova Systems, co-founded by alumnus Kunle Olukotun (BSE EE ’85; MSE PhD CSE ’87 ’91), raised $56M in its Series A funding round to develop a computing platform that may reimagine how we power machine learning and data analytics.
A crisis exists in the world of computer architecture as the industry works to overcome the limits of silicon. AI applications add a layer of complexity to the issue, as their fundamental operations are built on mathematics that CPUs aren’t designed to handle efficiently. Currently, graphics processing units (GPUs) are a favorite alternative in the industry for their ability to handle these operations more quickly, but researchers in the field believe there’s room to improve even more by rethinking the actual substrate where the computations happen.
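To see why this math strains a CPU, consider a simplified sketch (illustrative only, not SambaNova's design): the core arithmetic of a neural-network layer is a dense matrix multiply, thousands to millions of independent multiply-accumulate operations that a CPU largely executes one at a time, while GPUs and specialized AI chips perform many in parallel.

```python
# Illustrative sketch: one dense neural-network layer is just
# many independent multiply-accumulates followed by an activation.
# Each output element below could be computed in parallel, which is
# what GPUs and AI-specific chips exploit.

def layer(inputs, weights, bias):
    """output[j] = relu(sum_i inputs[i] * weights[i][j] + bias[j])"""
    out = []
    for j in range(len(bias)):
        # Multiply-accumulate: the dominant operation in AI workloads.
        acc = bias[j] + sum(inputs[i] * weights[i][j] for i in range(len(inputs)))
        out.append(max(acc, 0.0))  # ReLU activation
    return out

# Tiny example: 3 inputs mapped to 2 outputs.
x = [1.0, 2.0, 3.0]
W = [[0.5, -1.0],
     [0.25, 0.5],
     [-0.5, 1.0]]
b = [0.0, 0.1]
print(layer(x, W, b))  # [0.0, 3.1]
```

Real models repeat this pattern across layers with millions of weights, which is why hardware that parallelizes the inner loop can deliver large speed and power gains.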
SambaNova’s approach, stemming from work by Olukotun and co-founder Christopher Ré at Stanford University, seeks to create a new platform from scratch that is optimized specifically for AI operations. In doing so, its founders hope to outclass GPUs in speed, power usage, and even the size of the chip.
Olukotun, the “father of multi-core processors” and Cadence Design Systems Professor of Electrical Engineering and Computer Science at Stanford, revolutionized computing in the 1990s with his work on Stanford’s Hydra chip multiprocessor (CMP). This project brought about multi-core technology as we know it today, now commonplace in consumer and high-end computing systems alike.
Now Olukotun wants SambaNova to build the new standard for applications ranging from image processing aboard self-driving vehicles to training models for complex medical problems.
“All sorts of approaches have been tried, but they’re all facing the same fundamental limit: Moore’s Law is slowing down,” Olukotun said in a statement. “To get more performance going forward, we need to think about a much more efficient way of doing computations.”
The funding comes as Google CEO Sundar Pichai and other execs have called Google an “AI-first” company. Alongside Apple and Amazon, they’ve taken an intense interest in designing hardware optimized for consumer products that increasingly rely on AI.
Dave Munichiello of GV (formerly Google Ventures) will take a seat on SambaNova’s board of directors, and predicts that these large early investments are a sign that tech companies large and small will be seeking similar solutions in as little as five years.
In addition to the research of Kunle Olukotun and Christopher Ré, SambaNova benefits from the experience of co-founder and CEO Rodrigo Liang, who ran a team of nearly 1,000 chip designers at Oracle as Senior Vice President responsible for SPARC Processor and ASIC Development. The company has already assembled a team of over 50 employees since its founding in November 2017.
“SambaNova’s software-defined infrastructure anticipates and supports a rapidly evolving ecosystem,” says Munichiello in SambaNova’s first press release. “We firmly believe that over time this computing approach will lead the industry in distributed machine learning and data analytics infrastructure.”
More About Kunle Olukotun
Following his development of the Hydra CMP system, Olukotun founded Afara Websystems, a company that designed and manufactured low-power server systems with chip multiprocessor technology. With Afara he developed the multi-core processor Niagara; the company was later acquired by Sun Microsystems. Niagara-derived microprocessors currently power all Oracle SPARC-based servers.
In other research, Olukotun made significant advances in the development of transactional memory technology to simplify multicore programming, and he also pioneered the use of domain-specific languages for programming heterogeneous computer systems. While at Michigan, he was advised by Bredt Family Professor of Computer Science and Engineering Trevor Mudge.
Olukotun now leads the Stanford Pervasive Parallelism Lab, which focuses on making heterogeneous parallel computing easy to use, and he is a member of the Data Analytics for What’s Next (DAWN) Lab, which is developing infrastructure for usable machine learning.
EE Times, 3/21/2018
SambaNova Press Release, 3/15/2018