IMOTION - Intelligent Multi-Modal Augmented Video Motion Retrieval System (Finished)

Video is increasingly gaining importance as a medium to capture and disseminate information, not only for personal use but also (and most importantly) for professional and educational applications. With the enormous growth of video collections, effective yet efficient content-based retrieval of (parts of) videos is becoming more and more essential. Conventionally, video retrieval relies on metadata such as manual annotations or on inherent features extracted from the video. However, the most decisive information that distinguishes video content from static content, the movement of individual objects across subsequent frames, has so far been largely ignored. This is particularly the case for so-called augmented video, where additional spatio-temporal data on the movement of objects (e.g., captured by dedicated sensor systems) is available in addition to the actual video content.

The IMOTION project will develop and evaluate innovative multi-modal user interfaces for interacting with augmented videos. Starting from extensions of existing query paradigms such as keyword search in manual annotations and image search (query by example on key frames), IMOTION will consider novel sketch- and speech-based user interfaces. In particular, novel types of motion queries will be supported in which users can specify the motion paths of objects via sketches, gestures, natural language interfaces, or combinations thereof. Several types of user interfaces (voice, tablets, multi-touch tables, interactive paper) will be supported and seamlessly combined so that a session can smoothly migrate from one type of user interface to another while a query is being specified and refined. This will build on novel approaches to representation learning and on the extraction of high-level motion descriptors from augmented videos, based on a motion ontology. In addition, IMOTION will develop novel index structures that jointly support traditional video features and the additional motion metadata.
A major contribution will be the quantitative and qualitative evaluation, including user studies, of the intelligent multi-modal interfaces and query paradigms in two concrete use cases. Sample applications from which the project will select include, but are not limited to, augmented sports videos where users search on the basis of trajectories of player or ball movements, educational videos from the natural sciences where users search for animal movements within a herd or a swarm, and sketch-based searches for sea currents captured by sensors integrated into buoys. The IMOTION consortium will openly publish the augmented video collections and the motion metadata created in the course of the project's evaluation activities.
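
As a purely illustrative sketch of the motion-query idea described above (not taken from the IMOTION or vitrivr code base; all function and variable names are hypothetical), a user-sketched motion path could be ranked against object trajectories extracted from an augmented video using a simple trajectory distance such as dynamic time warping:

# Illustrative only: ranking stored object trajectories by their dynamic
# time warping (DTW) distance to a sketched query path. Names and data
# are hypothetical and do not reflect the actual IMOTION implementation.
from math import dist

def dtw_distance(path_a, path_b):
    """DTW distance between two trajectories given as lists of (x, y) points."""
    n, m = len(path_a), len(path_b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(path_a[i - 1], path_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # advance in path_a only
                                 cost[i][j - 1],      # advance in path_b only
                                 cost[i - 1][j - 1])  # advance in both paths
    return cost[n][m]

# A query path sketched e.g. on a tablet, in normalized frame coordinates,
# ranked against trajectories of tracked objects in an augmented video.
query = [(0.1, 0.1), (0.3, 0.2), (0.5, 0.5), (0.8, 0.6)]
stored = {
    "player_7": [(0.1, 0.1), (0.4, 0.3), (0.6, 0.5), (0.8, 0.6)],
    "ball":     [(0.9, 0.9), (0.7, 0.7), (0.4, 0.4), (0.1, 0.1)],
}
ranking = sorted(stored, key=lambda k: dtw_distance(query, stored[k]))
print(ranking)  # trajectories most similar to the sketch come first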

Awards

VBS Award
MMM Demo Award

Demonstration

Parts of IMOTION have been released as open source within the vitrivr project, which has participated in Google Summer of Code.

In the Media

Start / End Dates

01.01.2014 - 31.05.2018

Partners

Funding Agencies

IMOTION is a CHIST-ERA project, funded by the national research funding agencies of the participating countries.

Funding

Total project funding: 770'065.- EUR
Project funding UNIBAS: 558'934.- CHF
Infrastructure support provided by Microsoft Azure (a blog post on the IMOTION project can be found here).

Project extension until May 31, 2018 (additional funds at UNIBAS: 126'092.- CHF).

Staff

Research Topics

Publications

2019
2018
2017
2016
2015
2014