One reason I’ve been underwhelmed by AI is that companies consistently frame it as a solution to every problem under the sun. That’s why Meta’s new Segment Anything Model 2 (SAM 2) is so intriguing to me. SAM 2 doesn’t answer questions, write code, generate images, or compose music. Instead, as its name suggests, the new AI model does just one thing: it segments objects in images and videos. But it does that one job really, really well.
Meta describes SAM 2 as “the first unified model for real-time, promptable object segmentation in images and videos.” You can use it to select multiple objects and track them in real time, even as they move erratically through a video.
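In practice, “promptable” means you steer the model with hints like clicks or boxes instead of text. Meta released SAM 2’s code and weights as open source alongside the announcement, so you can try this outside the demo. Here’s a minimal sketch using the `sam2` Python package from that release; the checkpoint and config names follow the initial release and may differ in your install, and the image path and click coordinates are placeholders:

```python
import numpy as np
import torch
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Checkpoint and config names from the SAM 2 release; adjust to your install.
checkpoint = "./checkpoints/sam2_hiera_large.pt"
model_cfg = "sam2_hiera_l.yaml"
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

# "photo.jpg" is a placeholder input image (RGB, as a NumPy array).
image = np.array(Image.open("photo.jpg").convert("RGB"))

with torch.inference_mode():
    predictor.set_image(image)
    # A single positive click (label 1) is the "prompt": it tells the
    # model which object in the image to segment.
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),
        point_labels=np.array([1]),
    )

# The predictor returns several candidate masks; keep the highest-scoring one.
best_mask = masks[np.argmax(scores)]
```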
Meta has published a handful of example clips showing the new Segment Anything Model in action.
According to Meta, SAM 2’s improvements over the original SAM include more accurate segmentation in images, more reliable object tracking in videos, and interactive video segmentation that requires roughly a third of the interaction time of existing methods.
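That reduced interaction time largely reflects the video workflow: instead of correcting the mask frame by frame, you click the object once on a single frame and SAM 2 propagates the mask through the rest of the clip. Here’s a minimal sketch of that workflow, again assuming the open-source `sam2` package; the frame directory, object ID, and click coordinates are placeholders:

```python
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

# Checkpoint and config names from the SAM 2 release; adjust to your install.
checkpoint = "./checkpoints/sam2_hiera_large.pt"
model_cfg = "sam2_hiera_l.yaml"
predictor = build_sam2_video_predictor(model_cfg, checkpoint)

with torch.inference_mode():
    # "video_frames/" is a placeholder directory of the video's JPEG frames.
    state = predictor.init_state(video_path="video_frames/")

    # One positive click on the first frame selects the object to track;
    # obj_id is an arbitrary ID you assign to that object.
    predictor.add_new_points(
        inference_state=state,
        frame_idx=0,
        obj_id=1,
        points=np.array([[210, 350]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),
    )

    # The model propagates that single prompt through the remaining frames,
    # yielding a mask for the tracked object on every frame.
    for frame_idx, object_ids, mask_logits in predictor.propagate_in_video(state):
        frame_masks = (mask_logits > 0.0).cpu().numpy()
```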
If you want to see for yourself what the AI model is capable of, Meta has a demo on its site that lets you track several objects in a short video and then apply various edits, such as changing the background or adding effects to the selected objects.
If you want to read more about SAM 2, check out Meta’s latest blog post.