
Meet the research team using AI to help catch suspects in mass shootings

Published Apr 30th, 2018 2:09PM EDT
Image: Cultura/REX/Shutterstock


Data in the public domain, like tweets and social media video, is increasingly relied on to make sense of the immediate aftermath of tragedies like mass shootings. A team of researchers, meanwhile, is in the early stages of developing a digital tool that takes that analysis a step further, automating the data collection and more quickly pinpointing relevant information to save time that would otherwise be spent manually combing through things like tweets and Facebook posts.

Alex Hauptmann, a research professor in the Language Technologies Institute at the Carnegie Mellon School of Computer Science, along with a team of academics, has already helped out behind the scenes in the investigations into tragedies like the Boston Marathon bombing and the Las Vegas concert shooting. He’s been working on tools that use a combination of speech recognition, image understanding, natural language processing and machine learning to process data from readily accessible public sources, and then to put that data into a useful form, such as one that could let first responders know what’s happening on the ground.

One of the things he and his team are working on now is automating the analysis and extraction of insights from that public data. “It would otherwise take untold person-hours of sifting through all sorts of video,” Hauptmann tells BGR. “I don’t think we’re doing anything that can’t be done by hand. But it’s incredibly tedious to do that. And so especially as you have more and more video, going through it efficiently means doing very fine-grained analysis. So we’re building tools to automate the parts that we can.”

He’s working with a group of around half a dozen people, mostly grad students and post-docs. They’re focused on tasks like counting the number of people in a video at a given location, reconstructing scenes in 3D, and placing people within that reconstruction based on where they appear in the footage.
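
A rough sense of what automated people-counting involves can be given with a pretrained pedestrian detector, such as the one bundled with OpenCV. The sketch below is purely illustrative rather than the team’s actual pipeline, and the video filename is a placeholder:

```python
import cv2

# Illustrative only: counting people in a single video frame with OpenCV's
# built-in HOG + SVM pedestrian detector. The CMU team's real pipeline is
# not described in this article; this just shows the kind of task involved.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def count_people(frame):
    """Return an approximate person count for one video frame."""
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    return len(boxes)

cap = cv2.VideoCapture("eyewitness_clip.mp4")  # placeholder filename
ok, frame = cap.read()
if ok:
    print("people detected in first frame:", count_people(frame))
cap.release()
```

On grainy eyewitness footage detections like these are noisy, so a per-frame count would only be a starting point for the kind of analysis the team describes.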

He walked through how some of that would work, such as detecting where in a grainy user-generated video gunshots can be heard.
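
Real gunshot detection in a system like this would typically rely on trained audio classifiers. As a crude illustration of the underlying idea only, the sketch below assumes the audio has already been extracted from the video to a placeholder WAV file, and simply flags moments where short-term energy spikes far above the clip’s typical level:

```python
import numpy as np
from scipy.io import wavfile  # audio assumed already extracted from the video

# Illustrative only, not the team's actual detector: flag moments where
# short-term audio energy spikes far above the clip's typical level, a crude
# proxy for impulsive sounds like gunshots in noisy eyewitness footage.
rate, samples = wavfile.read("clip_audio.wav")        # placeholder filename
if samples.ndim > 1:                                  # mix stereo down to mono
    samples = samples.mean(axis=1)
samples = samples.astype(np.float64)

win = int(0.05 * rate)                                # 50 ms analysis windows
frames = samples[: len(samples) // win * win].reshape(-1, win)
energy = (frames ** 2).mean(axis=1)                   # short-term energy per window

threshold = np.median(energy) + 8 * energy.std()      # loud-outlier threshold (tunable)
for i in np.where(energy > threshold)[0]:
    print(f"possible impulsive sound at {i * win / rate:.2f} s")
```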

“We’re also looking at analyzing the Twitter stream and taking out which tweets are relevant to this event,” Hauptmann continues. “Which tweets link to footage of the event and also which tweets provide relevant contextual info about the event.

“For example, with the Las Vegas shooting there was a tweet at a particular time saying the shooting is still going on, and we’re crouched safe. It was really relevant if, for example, you were a first responder going there, and there was a tweet saying there’s still shooting going on. Often only a few tweets are relevant, so filtering those out of the stream could be helpful.”

The team is still working out the practicality of that, doing research to explore how such filtering could work and how much of a difference it could make.
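
As a purely illustrative sketch of what a first pass at that filtering might look like, the snippet below keeps tweets that combine situational language with mentions of the affected area, and flags ones that link to footage. The keyword lists are made up for the example; the team’s actual approach would presumably lean on trained models rather than hand-written rules:

```python
import re

# Illustrative keyword-based triage of tweets during an unfolding event.
# The terms below are invented for this example.
SITUATION_TERMS = re.compile(r"\b(shots?|shooter|shooting|gunfire|hiding|crouched|sheltering)\b", re.I)
LOCATION_TERMS = re.compile(r"\b(las vegas|mandalay|route 91|strip)\b", re.I)  # example event terms
MEDIA_LINK = re.compile(r"https?://\S+", re.I)

def triage(tweet_text):
    """Return (looks_relevant, links_to_footage) for one tweet."""
    looks_relevant = bool(SITUATION_TERMS.search(tweet_text)) and bool(LOCATION_TERMS.search(tweet_text))
    links_to_footage = bool(MEDIA_LINK.search(tweet_text))
    return looks_relevant, links_to_footage

print(triage("Still hearing shots near the Strip, we're crouched and safe http://t.co/xyz"))
```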

In a paper he helped write (“Reconstructing Human Rights Violations Using Large Eyewitness Video Collections”) for the Journal of Human Rights Practice, the project is laid out this way:

“Our system takes a video collection from a major event as input and then puts all videos into a global timeline by synchronization and onto a map by localization. Given the synchronization and location result, users can utilize our powerful tool for various kinds of event information retrieval, including gunshot detection, crowd size estimation, 3D reconstruction and person tracking. Once extracted, data can be expressed in a prose summary or as entries in a database. Such analysis is vital, but incredibly time-consuming and very expensive if people have to be paid to do the work. It is also emotionally challenging to watch numerous videos and extract information from them if the data being gathered deals with issues such as torture, rape, mass death, or extrajudicial killings.”
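
One common way to put clips from different phones onto a shared timeline is to line up their audio tracks. The excerpt above doesn’t spell out the team’s synchronization method, so the sketch below, which uses toy signals in place of real recordings, should be read as just one plausible approach:

```python
import numpy as np
from scipy.signal import correlate

# Illustrative audio-based synchronization: cross-correlating two clips'
# audio tracks gives the offset at which they line up best. This is one
# common technique, not necessarily the method used in the paper.
def seconds_between(audio_a, audio_b, rate):
    """How many seconds later the shared sound occurs in audio_a than in audio_b."""
    a = (audio_a - audio_a.mean()) / (audio_a.std() + 1e-9)
    b = (audio_b - audio_b.mean()) / (audio_b.std() + 1e-9)
    corr = correlate(a, b, mode="full", method="fft")  # cross-correlate the tracks
    lag = int(corr.argmax()) - (len(b) - 1)            # best-aligning sample lag
    return lag / rate

# Toy data: the same two-second burst of sound captured by two phones that
# started recording at different moments.
rate = 8000
burst = np.random.default_rng(0).standard_normal(2 * rate)
clip_a = np.concatenate([np.zeros(1 * rate), burst, np.zeros(rate)])  # burst at 1.0 s
clip_b = np.concatenate([np.zeros(3 * rate), burst, np.zeros(rate)])  # burst at 3.0 s
print(f"shared sound occurs {seconds_between(clip_b, clip_a, rate):.2f} s later in clip B")
```

Pairwise offsets like this one can then be chained together so that every clip receives a position on the event’s global timeline.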

He stresses that the project is still quite preliminary. It’s an example of how AI and sophisticated digital capabilities could be used to reach important conclusions in the wake of major public tragedies. But there’s still a long way to go. As Hauptmann puts it, “this is not a solved problem yet.”

Andy Meek, Trending News Editor

Andy Meek is a reporter based in Memphis who has covered media, entertainment, and culture for over 20 years. His work has appeared in outlets including The Guardian, Forbes, and The Financial Times, and he’s written for BGR since 2015. Andy's coverage includes technology and entertainment, and he has a particular interest in all things streaming.

Over the years, he’s interviewed legendary figures in entertainment and tech ranging from Stan Lee to John McAfee, Peter Thiel, and Reed Hastings.