Neurophos Raises $7.2 Million Round to Create Super-fast, Energy-Efficient Metamaterial-Based Optical AI Chips for Data Centers

Neurophos has raised a $7.2 million seed round to productize a breakthrough in metamaterials and optical AI inference chips, and has joined the Silicon Catalyst incubator program to accelerate product development.

Neurophos, a spinout from Duke University and Metacept Inc., has raised a $7.2 million seed round to productize a breakthrough in both metamaterials and optical AI inference chips.

The round was led by Gates Frontier, with participation from MetaVC Partners, Mana Ventures, and others. The seed funding will enable production of a proprietary metasurface whose advanced optical properties allow it to serve as a tensor core processor. The company will also hire a team of engineers in Austin, Texas, a major silicon engineering hub.

While GPUs have had massive success in accelerating AI workloads, digital approaches are typically limited by power consumption. Proponents of optical computing, by contrast, claim that photonics can vastly reduce power consumption and thereby push compute speeds far beyond what is possible with modern GPUs.

Unfortunately, despite the vast amounts of capital recently invested in optical compute for AI, the field has seen limited success, largely because existing optical devices are too large and bulky to scale. Metamaterials, however, enable new paradigms for controlling the flow of light. Their discovery has unleashed an enormous burst of creativity, leading to demonstrations of invisibility cloaks, negative-refractive-index materials, and many other exotic devices.

Neurophos’ optical metasurfaces are designed for use in data centers, and the company’s approach is already shattering world records in computational energy efficiency. Neurophos plans to use high-speed silicon photonics to drive a metasurface in-memory processor capable of fast, efficient AI compute.

Global data centers were estimated to consume 240-340 TWh of electricity in 2022, around 1-1.3% of global electricity demand, and the exponential growth of AI inference workloads threatens to push this demand to unsustainable levels. Neurophos’ technology will deliver significantly more compute per dollar of CAPEX and OPEX at the same energy consumption, reducing the total cost of ownership of AI accelerator chips and data centers.
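
The cited percentage follows directly from the consumption figure: dividing the 240-340 TWh estimate by the 1-1.3% share implies a global electricity demand of roughly 24,000-26,000 TWh for 2022. A minimal arithmetic check (the implied global totals below are our derivation, not figures stated in the article):

    # Consistency check of the data-center electricity figures cited above.
    dc_low_twh, dc_high_twh = 240, 340       # estimated data-center consumption, TWh
    share_low, share_high = 0.010, 0.013     # cited share of global electricity demand

    # Implied global electricity demand consistent with each end of the range:
    print(dc_low_twh / share_low)            # 24000.0 TWh
    print(dc_high_twh / share_high)          # ~26,154 TWh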

Says Patrick Bowen, Neurophos CEO: “The most important factor in optical processors is scaling. Optical processors become both exponentially faster and more energy efficient on a per-operation basis as you make them larger. This means that in a finite chip area, the most important factor is how small you can build the optical devices that compose your processor. By leveraging metamaterials in a standard CMOS process, we have figured out how to shrink an optical processor by 8000X, which will give us orders of magnitude improvement over GPUs today.”

Alexander Hayes, co-founder of Metacept Inc., says: “Leaving the speed and energy use bottlenecks behind by deviating far from the Von Neumann architecture represents one of the most exciting and potentially important metamaterial and photonic applications we’ve ever considered.”

MetaVC Partners provided Neurophos’ initial funding and an exclusive license to the fund’s metamaterials IP portfolio for optical computing. Neurophos was spun out of Metacept, an incubator led by David R. Smith, James B. Duke Professor of Electrical and Computer Engineering, focused on creating metamaterials-based companies and collaborating with Professor Smith’s research group at Duke University. Neurophos CTO Tom Driscoll previously founded metamaterials-based radar company Echodyne.

Says David Smith, Duke University: “The Neurophos team has realized that the really fundamental problems of analog inference processing require a breakthrough at the level of the fundamental physics of the optical modulators. Their metamaterial is a ground-up breakthrough enabling an extraordinarily dense computing chip for next-generation AI applications.”

Neurophos’ AI chips can be fabricated using standard CMOS processes, giving the company ready access to volume manufacturing.

The company is also joining Silicon Catalyst, the world’s only incubator + accelerator focused on semiconductor solutions (including photonics, MEMS, sensors, IP, materials, and life sciences), which accelerates startups from idea through prototype and onto a path to volume production. Silicon Catalyst has developed an unparalleled support ecosystem for its semiconductor start-ups, providing a strong network of financiers, business advisors, and industry professionals who help companies launch and scale in the market. In addition, the incubator provides privileged access to services, expertise, and intellectual property that can empower its companies’ technological innovation.

Paul Pickering, Managing Partner, Silicon Catalyst says: “Neurophos represents much-needed progress in analog optical computing, bringing the performance of silicon photonics to the existing manufacturing infrastructure of CMOS foundries. We are confident that they will be one of the leaders of the next generation of AI hardware. This is how you get to tomorrow quickly and without wasted capital. We are thrilled to have them in the program.”

Neurophos Breakthroughs In Depth

Neurophos’ advancements will decrease the size and energy needs of silicon photonic optical chips, making them better suited to running artificial intelligence platforms such as large language models (LLMs).

Neurophos’ metamaterial-based optical modulators are more than 1000 times smaller than those from a standard foundry PDK (Process Design Kit). This enables a technology roadmap to deliver over 1 million TOPS (Trillions of Operations Per Second) of performance. For comparison, an Nvidia H100 SXM5 today delivers at most 4000 TOPS of DNN (Deep Neural Network) performance.
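
Taking these figures at face value, the headroom over today’s GPUs is straightforward to estimate. The sketch below is our arithmetic on the article’s numbers; in particular, reading the “1000 times smaller” figure as a per-device area reduction is an assumption, not something the article specifies.

    # Back-of-the-envelope comparison using the figures quoted above.
    modulator_shrink = 1000        # metamaterial modulator vs. standard foundry PDK modulator
    roadmap_tops = 1_000_000       # Neurophos roadmap target, TOPS
    h100_sxm5_tops = 4_000         # peak DNN throughput cited for an H100 SXM5

    # If the shrink is per-device area, ~1000x more modulators fit in the same chip area.
    devices_per_area_gain = modulator_shrink
    throughput_headroom = roadmap_tops / h100_sxm5_tops
    print(f"modulators per unit chip area: ~{devices_per_area_gain}x more")
    print(f"roadmap throughput vs. H100 SXM5: ~{throughput_headroom:.0f}x")   # ~250x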

Optical chips have the potential to increase processor speed while massively reducing power, and Neurophos will enable this technology to be used in AI data centers. That market, which is dominated by Intel and Nvidia, currently relies on traditional silicon semiconductors that generate enormous amounts of heat and are struggling to scale to the performance demands of LLMs.

Neurophos combines two breakthroughs. The first is an optical metasurface that enables silicon photonic computing capable of ultra-fast AI inference, outstripping the density and performance of both traditional silicon computing and conventional silicon photonics.

The second is a Compute-In-Memory (CIM) processor architecture fed by high-speed silicon photonics to deliver fast, efficient matrix-matrix multiplication, the operation that makes up the overwhelming majority of all operations when running, for instance, a neural net.
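
To see why matrix multiplication dominates, consider a small fully connected network: nearly all of the arithmetic is multiply-accumulate (MAC) work inside matrix products, with only a sliver spent on biases and activations. The sketch below is a hypothetical illustration, not Neurophos code:

    # Counting operations in a hypothetical fully connected network
    # to show that matrix-multiply MACs dominate inference work.
    layer_sizes = [784, 1024, 1024, 10]   # e.g. a small MNIST-style classifier

    matmul_macs = sum(m * n for m, n in zip(layer_sizes[:-1], layer_sizes[1:]))
    elementwise_ops = 2 * sum(layer_sizes[1:])   # one bias add and one activation per output

    print(f"matrix-multiply MACs per inference: {matmul_macs:,}")       # 1,861,632
    print(f"element-wise ops per inference: {elementwise_ops:,}")       # 4,116
    print(f"matmul share of all operations: "
          f"{matmul_macs / (matmul_macs + elementwise_ops):.1%}")       # 99.8%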

The metasurface-enabled optical CIM elements are thousands of times smaller than traditional silicon photonics modulators, enabling the processing of vastly larger matrices on-chip. This results in an unprecedented increase in the computational density. In optical computing, energy efficiency is proportional to array size, so Neurophos’ processor is hundreds of times more energy efficient than alternatives.
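
The scaling argument can be made concrete with a toy model: an N x N optical array performs N^2 multiply-accumulates per pass but needs only on the order of N input modulations and N detections, so the electro-optic overhead is spread over ever more operations as the array grows. The sketch below uses assumed per-event energies purely for illustration and is not Neurophos’ energy model:

    # Toy model of energy per MAC versus optical array size.
    E_MOD_PJ = 1.0   # assumed energy per input modulation event, picojoules
    E_DET_PJ = 1.0   # assumed energy per detection event, picojoules

    def energy_per_mac_pj(n: int) -> float:
        """Electro-optic energy per MAC for an n x n optical array (toy model)."""
        macs = n * n                              # MACs performed per pass
        overhead = n * E_MOD_PJ + n * E_DET_PJ    # modulation + detection energy per pass
        return overhead / macs

    for n in (64, 512, 4096):
        print(f"N={n:5d}: {energy_per_mac_pj(n) * 1000:.3f} fJ per MAC")
    # N=   64: 31.250 fJ per MAC
    # N=  512: 3.906 fJ per MAC
    # N= 4096: 0.488 fJ per MAC

In this toy model, growing the array from 64 x 64 to 4096 x 4096 cuts the energy per MAC by the same factor of 64 by which N grew, which is the leverage the company attributes to packing thousands of times more CIM elements on-chip.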
