Numem Previews Edge AI Chiplet-Based Solution at 2025 Chiplet Summit
What you’ll learn:
- The 2025 Chiplet Summit is on January 21-23 at the Santa Clara Convention Center.
- Numem will preview its innovative chiplets that aid the transition of AI processing from the data center to the edge.
- Numem will demonstrate its patented NuRAM/SmartMem technology, which reduces standby power by up to 100X compared to SRAM and delivers up to 4X faster performance than HBM, all while operating at ultra-low power.
The rapid growth of AI workloads and AI processors/GPUs is exacerbating the memory bottleneck created by the slowing performance gains and limited scalability of SRAM and DRAM, presenting a major obstacle to maximizing system performance. Thus, there’s a pressing need for intelligent memory solutions that offer higher power efficiency and greater bandwidth, coupled with a reevaluation of traditional memory architectures.
At this week’s Chiplet Summit, being held at the Santa Clara Convention Center in California from January 21-23, Numem will preview its innovative chiplets. These nonvolatile, high-speed, ultra-low-power solutions leverage MRAM to overcome memory challenges in chiplet architectures.
“Numem is fundamentally transforming memory technology for the AI era by delivering unparalleled performance, ultra-low power, and non-volatility,” said Max Simmons, CEO of Numem. “Our solutions make MRAM highly deployable and help address the memory bottleneck at a fraction of the power of SRAM and DRAM. Our approach facilitates and accelerates the deployment of AI from the data center to the edge, opening up new possibilities without displacing other memory architectures.”
Key features and benefits include:
- Unparalleled bandwidth: Delivers up to 4 TB/s per 8-die memory stack, exceeding existing HBM-based AI memory solutions.
- High capacity: Supports 4 GB per stack package, enabling scalability for demanding AI workloads.
- Nonvolatile with SRAM-like performance: Combines ultra-low read/write latency with persistent data retention, offering high reliability and efficiency. Provides the scalability and power efficiency needed to address the demands of future AI and data-centric workloads.
- Power smart: Game-changing power efficiency for both AI-edge and data-center solutions, with the ability to implement multi-state flex power functions (active/standby/deep sleep); see the sketch after this list.
- Broad application compatibility: Optimized for AI applications across OEMs, hyperscalers, and AI accelerator developers to drive the adoption of chiplet-based designs in high-growth markets. Designed with standard industry interfaces such as UCIe to facilitate ecosystem compatibility.
- Advanced integration: Complements other chiplet components (e.g., CPUs, GPUs, and accelerators), enhancing the performance and efficiency of the overall system.
- In-compute intelligence: Makes memory smarter by helping manage incoming data and read/write timing, and by providing dynamically programmable multi-state flex power and self-testability.
- Proven technology: State-of-the-art memory subsystem IP based on a proven foundry MRAM process. Offers improved radiation performance that mitigates susceptibility to soft errors.
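To make the multi-state flex power idea in the "Power smart" bullet more concrete, here is a minimal, purely illustrative sketch in C of how a host-side controller might choose among active, standby, and deep-sleep states based on idle time. The state names, thresholds, and `pick_power_state` helper are hypothetical placeholders for the sake of the example, not Numem's actual interface.

```c
/*
 * Illustrative sketch only: a generic policy for selecting an
 * active/standby/deep-sleep power state from how long a memory
 * region has been idle. Names and thresholds are hypothetical
 * and do not reflect Numem's actual API.
 */
#include <stdio.h>
#include <stddef.h>
#include <inttypes.h>

typedef enum {
    MEM_STATE_ACTIVE,      /* full-speed reads/writes                     */
    MEM_STATE_STANDBY,     /* reduced power, fast wake-up                 */
    MEM_STATE_DEEP_SLEEP   /* lowest power; nonvolatile cells retain data */
} mem_power_state_t;

/* Hypothetical idle-time thresholds, in microseconds. */
#define STANDBY_THRESHOLD_US     100u
#define DEEP_SLEEP_THRESHOLD_US  10000u

/* Pick a power state from the time elapsed since the last access. */
static mem_power_state_t pick_power_state(uint32_t idle_us)
{
    if (idle_us >= DEEP_SLEEP_THRESHOLD_US)
        return MEM_STATE_DEEP_SLEEP;
    if (idle_us >= STANDBY_THRESHOLD_US)
        return MEM_STATE_STANDBY;
    return MEM_STATE_ACTIVE;
}

static const char *state_name(mem_power_state_t s)
{
    switch (s) {
    case MEM_STATE_ACTIVE:     return "active";
    case MEM_STATE_STANDBY:    return "standby";
    case MEM_STATE_DEEP_SLEEP: return "deep sleep";
    }
    return "unknown";
}

int main(void)
{
    /* Sample idle times: busy region, briefly idle, long idle. */
    const uint32_t samples[] = { 5, 250, 50000 };

    for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
        printf("idle %6" PRIu32 " us -> %s\n", samples[i],
               state_name(pick_power_state(samples[i])));
    }
    return 0;
}
```

Because the underlying MRAM cells are nonvolatile, the deep-sleep state in a policy like this can cut standby power without losing data, which is the standby-power advantage over SRAM claimed above.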
Demonstrations will be given in Numem’s booth #322 on the show floor. Sampling is expected to begin in late Q4 of this year. Meet up with Electronic Design’s Bill Wong at the summit.