Samsung Brings LPDDR5X to AI Data Centers with SOCAMM2 Memory Module
As artificial intelligence adoption accelerates globally, data centers are facing a surge in computational workloads driven by the shift from large-scale model training to continuous AI inference. This evolution has elevated energy efficiency to a critical priority alongside performance, spurring demand for low-power memory solutions capable of sustaining always-on AI workloads while reducing power consumption.
Responding to this need, Samsung Electronics has developed SOCAMM2 (Small Outline Compression Attached Memory Module), an LPDDR-based server memory module designed specifically for AI data centers. The company has already begun supplying customer samples, signaling readiness for real-world deployment. SOCAMM2 combines the low-power advantages of LPDDR technology with a modular, detachable design, delivering higher bandwidth, improved energy efficiency, and flexible system integration for next-generation AI servers.
Built on Samsung’s latest LPDDR5X DRAM, SOCAMM2 expands the role of memory in data-center environments by bridging the gap between traditional DDR-based server modules and the demands of AI-accelerated systems. While RDIMM modules remain central to general-purpose servers, SOCAMM2 offers a complementary alternative optimized for AI workloads that require fast responsiveness and lower power consumption. According to Samsung, SOCAMM2 delivers more than twice the bandwidth of conventional RDIMM while consuming over 55 percent less power, maintaining stable performance under intensive AI inference operations.
The module’s modular architecture provides significant benefits for system operators. Unlike soldered LPDDR solutions, SOCAMM2’s detachable design allows for easy upgrades or replacements without modifying the mainboard, reducing downtime and lowering total cost of ownership. Its enhanced power efficiency also simplifies thermal management, easing cooling requirements in high-density AI data centers where heat control is a growing challenge.
At the system level, SOCAMM2’s horizontal orientation improves space utilization compared to the vertical layout of traditional RDIMM modules. This design supports more flexible airflow and heat-sink placement, enabling smoother integration with CPUs and AI accelerators while remaining compatible with both air- and liquid-cooling systems.
Samsung is also strengthening ecosystem partnerships to accelerate adoption of LPDDR-based server memory. The company is working closely with NVIDIA to optimize SOCAMM2 for NVIDIA’s accelerated AI infrastructure, ensuring the module meets the performance and efficiency demands of next-generation inference platforms. NVIDIA has emphasized that as AI workloads increasingly shift toward rapid inference and complex reasoning, memory solutions like SOCAMM2 will be essential for future data centers.
In parallel, the industry has initiated formal JEDEC standardization efforts for LPDDR-based server modules. Samsung is actively contributing alongside key partners to help establish consistent design guidelines, paving the way for broader adoption and smoother integration across future AI platforms.
With SOCAMM2, Samsung is positioning LPDDR technology as a mainstream solution for AI servers, supporting the industry’s transition toward more compact, power-efficient, and high-bandwidth AI infrastructure. As AI workloads continue to grow in scale and complexity, the company says it will further expand its LPDDR-based server memory portfolio, reinforcing its commitment to enabling the next generation of AI data centers.