Apple is fully embracing AI for Apple Silicon – here’s the first proof

Published Dec 6th, 2023 7:09AM EST
Siri on the Vision Pro headset. Image: Apple Inc.


We all know Apple moves in its own way. While some were disappointed that the company didn’t mention AI at WWDC or during any of its keynotes this year, several reports – and Apple itself – have shown that Cupertino is betting heavily on the technology, pouring money into research and building it into its products under the hood.

Now, the company is giving a little taste of its AI capabilities by quietly releasing MLX, its deep learning framework, as open source. The news was spotted by X user Delip Rao. MLX runs natively on Apple Silicon, installs with a single pip install and no other dependencies, and was built by Apple’s machine learning research team. Here are the top features of MLX:

  • Familiar APIs: MLX has a Python API that closely follows NumPy. MLX also has a fully featured C++ API, which closely mirrors the Python API. MLX has higher-level packages like mlx.nn and mlx.optimizers with APIs that closely follow PyTorch to simplify building more complex models.
  • Composable function transformations: MLX has composable function transformations for automatic differentiation, automatic vectorization, and computation graph optimization.
  • Lazy computation: Computations in MLX are lazy. Arrays are only materialized when needed.
  • Dynamic graph construction: Computation graphs in MLX are built dynamically. Changing the shapes of function arguments does not trigger slow compilations, and debugging is simple and intuitive.
  • Multi-device: Operations can run on any of the supported devices (currently, the CPU and GPU).
  • Unified memory: A notable difference between MLX and other frameworks is the unified memory model. Arrays in MLX live in shared memory. Operations on MLX arrays can be performed on any of the supported device types without moving data.

On GitHub, Apple researchers explain who the framework is for: “MLX is designed by machine learning researchers for machine learning researchers. The framework is intended to be user-friendly but still efficient to train and deploy models. The design of the framework itself is also conceptually simple. We intend to make it easy for researchers to extend and improve MLX with the goal of quickly exploring new ideas.”

With that in mind, 2024 looks like the year Tim Cook won’t stop talking about AI – or at least machine learning – during Apple keynotes.

José Adorno Tech News Reporter

José is a Tech News Reporter at BGR. He has previously covered Apple and iPhone news for 9to5Mac, and was a producer and web editor for Latin America broadcaster TV Globo. He is based out of Brazil.