Tripled product revenues, comprehensive developer tools, and scalable inference IP for vision and LLM workloads position Quadric as the platform for on-device AI.
BURLINGAME, Calif., Jan. 14, 2026 /PRNewswire/ — Quadric®, the inference engine that powers on-device AI chips, today announced an oversubscribed $30 million Series C funding round, bringing total capital raised to $72 million.
ACCELERATE Fund, managed by BEENEXT Capital Management, led the round. Uncork Capital returned with one of the largest insider commitments through its opportunity fund, joined by insider Pear VC. New investors include Volta, Gentree, Wanxiang America, Pivotal, and Silicon Catalyst Ventures.
The funding comes as Quadric hits a revenue inflection: product revenues more than tripled in 2025 vs. 2024. Quadric is entering 2026 with accelerating design-win momentum, driven by growing adoption of the General Purpose NPU (GPNPU) processor IP across edge LLM, automotive, and enterprise vision applications.
“We’ve been deeply impressed by Quadric’s innovative architecture, its disruptive approach to AI inference at the edge, and its strong market traction, particularly in Asian markets,” said Hero Choudhary, Managing Partner at BEENEXT. “Those attributes indicate a very clear path for further growth with strong potential to be a generational business. We believe Quadric is poised to revolutionize the edge AI hardware sector, and we look forward to supporting their journey as they continue to push the boundaries of what is possible.”
Quadric’s Platform for On-Device AI
Making a good AI inference chip is hard. Making one that stays good is harder.
Most edge AI chips today are legacy architectures with NPU accelerators bolted on as an afterthought. The supporting software toolchains are often hack jobs stitched together to validate a handful of models and then considered “done.” These stacks work fine for the models they were built for, but when a developer tries to run inference on a new model and obtain good performance, they break down.
Meanwhile, building an AI inference chip costs hundreds of millions of dollars. Customers can’t afford to bet on an architecture that becomes obsolete when models shift—and in AI, models always shift.
Quadric Chimera™ processor IP is designed for this reality. Unlike fixed-function NPUs locked to today’s model architectures, Chimera is fully programmable: it runs any AI model—current or future—on a single unified architecture. This future-proofs the silicon investment against model-driven obsolescence.
Combined with a toolchain built from the ground up—not bolted on—Chimera enables chip designers to deploy computer vision and on-device LLM applications, including models up to 30 billion parameters, with industry-leading inference performance per watt. Customers can go from engagement to production-ready LLM-capable silicon in under six months.
Chimera GPNPU cores scale from 1 tera operations per second (TOPS) to 864 TOPS and are available in both commercial-grade and automotive safety-enhanced (ASIL-ready) configurations.
Proven Traction, Platform Potential
“Quadric is the only AI processor IP company we’ve seen reach this level of product revenue, and that traction is a direct result of real customer adoption—not hype,” said Jeff Clavier, Founding Partner at Uncork Capital, a seed investor that has participated in every round. “What makes this especially compelling is the entrenched on-device AI software ecosystem forming around Chimera; that ecosystem has the makings of a generational platform.”
Quadric licensees now span automotive, edge LLM, office automation, and autonomous driving use cases. Coincident with this funding, Quadric announced two new license wins: an edge-server LLM silicon provider in Asia (name withheld pending product announcement), and Tier IV of Japan, a pioneer in self-driving software.
Growth Capital for Customer Success
“I want our customers to have the best AI inference chips in the market. Chips with world-class software, leading performance per watt, and immunity to the model obsolescence plaguing AI accelerators,” said Veerbhan Kheterpal, CEO and co-founder of Quadric. “This is growth capital, and we’re putting it behind the teams and technology that make our customers successful.”
About Quadric
Quadric is the inference engine inside on-device AI chips. Trusted by leading chip designers, Quadric’s General Purpose NPU (GPNPU) processor IP and end-to-end toolchain enable customers to go from engagement to production-ready AI silicon in under six months. Chimera scales to 864 TOPS, with automotive-grade options. Quadric is headquartered in Burlingame, California, with teams across North America, Asia, and Europe. Learn more at https://quadric.ai
SOURCE Quadric, Inc.
