Artificial intelligence system learns concepts shared across video, audio, and text

A machine-learning model can identify the action in a video clip and label it, without the help of humans.

Humans observe the world through a combination of modalities, such as vision, hearing, and language understanding. Machines, on the other hand, interpret the world through data that algorithms can process.

So, when a machine “sees” a photo, it must encode that photo into data it can use to perform a task like image classification. This process becomes more complicated when inputs come in multiple formats, like videos, audio clips, and images.

“The main challenge here is, how can a machine align those different modalities? As humans, this is easy for us. We see a car and then hear the sound of a car driving by, and we know these are the same thing. But for machine learning, it is not that straightforward,” says Alexander Liu, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and first author of a paper tackling this problem.
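One common way to frame this alignment problem (a minimal sketch, not necessarily the specific method in Liu's paper) is to train a separate encoder per modality that projects each input into a shared embedding space, where matching pairs, such as a video of a car and the sound of a car, end up close together. All dimensions, projections, and names below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(features, projection):
    """Project modality-specific features into a shared embedding space
    and L2-normalize, so embeddings from different modalities are
    directly comparable."""
    z = features @ projection
    return z / np.linalg.norm(z)

# Hypothetical feature vectors: a video clip of a car and the sound of a car.
video_features = rng.normal(size=512)   # e.g., output of a video encoder
audio_features = rng.normal(size=128)   # e.g., output of an audio encoder

# Projections (random here; learned in practice) map each modality
# into the same 64-dimensional shared space.
video_proj = rng.normal(size=(512, 64))
audio_proj = rng.normal(size=(128, 64))

video_emb = encode(video_features, video_proj)
audio_emb = encode(audio_features, audio_proj)

# Cosine similarity of the two embeddings: training would push this
# toward 1 for matching video-audio pairs and toward 0 for mismatches.
similarity = float(video_emb @ audio_emb)
print(f"video-audio similarity: {similarity:.3f}")
```

With learned projections rather than random ones, the same similarity score can be used to label a clip: the system compares a video's embedding against the embeddings of candidate labels or sounds and picks the closest match.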