Apple has released a new AI model for instruction-based image editing. As first spotted by VentureBeat, the open-source model, called MGIE (MLLM-Guided Image Editing), can edit images based on natural language instructions.
According to a paper published by Apple researchers, instruction-based image editing improves the controllability and flexibility of image manipulation via natural commands, without requiring elaborate descriptions or regional masks. Investigating how multimodal large language models (MLLMs) facilitate edit instructions and guide image editing, the study found “promising capabilities in cross-modal understanding and visual-aware response generation via LM.”
The researchers conclude: “Extensive experimental results demonstrate that expressive instructions are crucial to instruction-based image editing, and our MGIE can lead to a notable improvement in automatic metrics and human evaluation while maintaining competitive inference efficiency.”
With that, Apple’s image-editing model can derive concise, clear instructions for the editing process, make Photoshop-style modifications, optimize photo quality, and edit specific parts of a picture, such as faces, eyes, hair, clothes, and accessories.
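The core idea in the paper is a two-stage flow: an MLLM first expands a terse command into an expressive, visually grounded instruction, which then conditions the actual image edit. The Python sketch below illustrates that flow only conceptually; the function names, the string-based stand-ins for the MLLM and the editor, and the example instructions are all hypothetical and are not Apple’s actual MGIE API.

```python
# Conceptual sketch of MLLM-guided editing (hypothetical names,
# not Apple's MGIE API). The real system uses an MLLM and a
# diffusion-based editor; strings stand in for both here.

def expand_instruction(instruction: str) -> str:
    """Stand-in for the MLLM step: turn a terse command into an
    expressive, visually grounded edit instruction."""
    expansions = {
        "brighten the photo": "increase overall exposure and lift the shadows",
        "make the sky dramatic": "deepen the blues and add contrast to the clouds",
    }
    return expansions.get(instruction.lower(), instruction)

def edit_image(image: dict, instruction: str) -> dict:
    """Stand-in for the editing model: record the expressive
    instruction that would condition the edit."""
    expressive = expand_instruction(instruction)
    edited = dict(image)
    edited["applied_edits"] = edited.get("applied_edits", []) + [expressive]
    return edited

photo = {"path": "sunset.jpg"}
result = edit_image(photo, "brighten the photo")
print(result["applied_edits"][0])
# → increase overall exposure and lift the shadows
```

The point of the two stages, per the paper, is that expressive instructions are what make instruction-based editing work well: the editor receives a concrete description of the change rather than an ambiguous command.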
At this moment, it’s unclear whether Apple will bring this image-editing capability to the iPhone. In addition to this research, Apple has also recently shown how to run large language models on devices with limited memory.
That said, these are some of the AI features we could actually get with iOS 18:
- Auto-summarizing and auto-complete features for core apps and productivity software (Pages, Keynote)
- Better playlist creation in Apple Music
- Siri (a big overhaul with a focus on AI)
- Code completion in a new version of Xcode for developers
- AppleCare tools to assist employees in helping customers with troubleshooting
BGR will keep following Apple’s latest efforts on AI, including all rumors and reports about iOS 18 and iPhone 16.