Apple has announced the release of its latest M3 chips, which are designed to empower AI developers with enhanced capabilities for large transformer models.
The M3 chips offer an expanded memory capacity with up to 128GB of unified memory, double the capacity of the M1 and M2 chips. This expanded memory capacity is especially critical for AI/ML workloads, which demand extensive memory resources to train and execute large language models and complex algorithms.
The M3 chip also features a redesigned GPU architecture that is purpose-built for superior performance and efficiency in AI/ML workloads. This architecture incorporates dynamic caching, mesh shading, and ray tracing capabilities to expedite AI/ML workflows and optimize overall computational efficiency.
Its Neural Engine is up to 60% faster than the previous generation's, further accelerating machine learning workloads on-device while prioritizing user privacy.
Beyond these AI-focused enhancements, the M3 chip delivers roughly 15% better overall performance than the M2 and 60% better than the M1. That makes it the most powerful Apple chip to date and a compelling choice for AI developers and other users who need the highest levels of performance and efficiency.
Apple’s M3 chips are currently available in the 14-inch MacBook Pro. The 16-inch MacBook Pro is expected to receive the M3 Pro and M3 Max chips in the near future.
The sources for this piece include an article in AnalyticsIndiaMag.