Generative AI is creating a lot of excitement, showing up in games, creativity software like Adobe Express, and car infotainment systems. But there is still a hitch: it demands massive computing power. As the technology gains traction for its creative abilities, the hardware behind it needs to be equally powerful and scalable to handle the heavy lifting.
Against this backdrop, SK Hynix has officially announced its latest memory solution, HBM3E, capable of moving data at over 1TB/s. The new memory is aimed squarely at the demands of next-generation AI applications.
NVIDIA’s Hyperscale and HPC Computing unit, led by Vice President Ian Buck, plans to leverage SK Hynix’s HBM3E to accelerate highly demanding workloads, ranging from weather forecasting and energy exploration to computational fluid dynamics and life sciences, all of which lean heavily on AI processing.
The upgraded memory can move data at speeds of up to 1.15 terabytes per second. To put that into perspective, it can transfer the equivalent of more than 230 Full HD videos of 5GB each in a single second.
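As a quick back-of-the-envelope check, the comparison holds if you assume decimal units (1 TB = 1,000 GB) and a 5GB file per video; both are assumptions about how SK Hynix frames the figure:

```python
# Rough sanity check of the "230 videos per second" comparison.
# Assumes decimal units (1 TB = 1,000 GB) and a 5 GB Full HD video file.
bandwidth_tb_per_s = 1.15   # HBM3E peak bandwidth in TB/s
video_size_gb = 5           # assumed size of one Full HD video

videos_per_second = (bandwidth_tb_per_s * 1000) / video_size_gb
print(f"{videos_per_second:.0f} videos per second")  # -> 230
```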
Additionally, SK Hynix highlights a 10% improvement in heat dissipation with the new memory, attributed to its latest Mass Reflow Molded Underfill (MR-MUF) process technology.
An interesting feature of SK Hynix’s HBM3E is its backward compatibility: it can serve as a drop-in memory upgrade for CPU and GPU systems that currently use HBM3.
SK Hynix is aiming to begin mass production in the first half of 2024. Sampling is already under way, with major customers such as NVIDIA among those receiving early access to the new memory.