Multimedia Question Answering

A simple attention-based deep learning model that answers questions about a given video by returning the most relevant video intervals as answers.

There is an increasing trend in the research community toward video processing using artificial intelligence. Trending tasks include:

  • Video classification.
  • Video content description.
  • Video question answering (VQA).

Main Idea

The main idea of the project is to search for the part of a video that is most relevant to a given query ("question").
Instead of watching the complete video to find the interval you want, you give our model the video and a query describing the part you are looking for; the model then returns the video intervals sorted by relevance to the query, as sketched below.
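
As a toy illustration of this ranking step (not the authors' code; the rank_intervals helper and its threshold are hypothetical), per-frame relevance scores can be grouped into contiguous intervals and sorted by their peak score:

```python
import numpy as np

def rank_intervals(scores, fps=1.0, threshold=0.5):
    """scores: 1-D array of per-frame relevance values in [0, 1]."""
    intervals = []
    start = None
    # Group consecutive frames whose score clears the threshold.
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i
        elif s < threshold and start is not None:
            intervals.append((start, i - 1))
            start = None
    if start is not None:
        intervals.append((start, len(scores) - 1))
    # Sort intervals by their best frame score, highest first.
    intervals.sort(key=lambda ab: scores[ab[0]:ab[1] + 1].max(), reverse=True)
    # Convert frame indices to (start, end) times in seconds.
    return [(a / fps, (b + 1) / fps) for a, b in intervals]

print(rank_intervals(np.array([0.1, 0.9, 0.8, 0.2, 0.7, 0.6])))
# [(1.0, 3.0), (4.0, 6.0)]
```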

Examples

Watch the video

Dataset

We use the Microsoft Research Video to Text (MSR-VTT) dataset.
An example from the dataset is shown below.
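
As a rough illustration of how such annotations might be consumed, here is a minimal sketch that loads an MSR-VTT-style annotation file; the "videos"/"sentences" JSON layout reflects the public MSR-VTT release, so adjust the keys if your copy differs:

```python
import json
from collections import defaultdict

# Hypothetical path to the MSR-VTT annotation file.
with open("train_val_videodatainfo.json") as f:
    data = json.load(f)

# Group the natural-language captions by video id.
captions = defaultdict(list)
for sent in data["sentences"]:
    captions[sent["video_id"]].append(sent["caption"])

# Print a few videos with their first two captions.
for video in data["videos"][:3]:
    vid = video["video_id"]
    print(vid, video["url"], captions[vid][:2])
```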

Extracted Visual Features

We extracted visual features from the dataset using three different models: ResNet, NASNet, and Inception-ResNet-v2.
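
As a minimal sketch of per-frame extraction (assuming TensorFlow/Keras and OpenCV; the frame count of 28 is an arbitrary choice, not the project's setting), features can be pulled from a pretrained Inception-ResNet-v2 backbone like this:

```python
import cv2
import numpy as np
from tensorflow.keras.applications.inception_resnet_v2 import (
    InceptionResNetV2, preprocess_input)

# Global average pooling yields one 1536-d feature vector per frame.
backbone = InceptionResNetV2(include_top=False, weights="imagenet",
                             pooling="avg")

def extract_features(video_path, num_frames=28):
    """Sample frames uniformly and encode each with the CNN backbone."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, max(total - 1, 0), num_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            continue
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frames.append(cv2.resize(frame, (299, 299)))
    cap.release()
    batch = preprocess_input(np.array(frames, dtype=np.float32))
    return backbone.predict(batch, verbose=0)  # shape: (num_frames, 1536)
```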

Architecture

Here is the base architecture, which follows the one used in the paper here.
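
As a rough sketch of a temporal-attention model of this kind (our reading, with hypothetical hyperparameters, not the authors' exact code), an LSTM encodes the question and its summary vector attends over the per-frame visual features, so the attention weights can serve as per-frame relevance scores:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_FRAMES, VIS_DIM = 28, 1536         # matches the extractor sketch above
VOCAB, EMB_DIM, HID = 10000, 300, 512  # hypothetical hyperparameters

frames = layers.Input(shape=(NUM_FRAMES, VIS_DIM), name="frame_features")
question = layers.Input(shape=(None,), dtype="int32", name="question_tokens")

# Encode the question into a single summary vector.
q = layers.Embedding(VOCAB, EMB_DIM, mask_zero=True)(question)
q = layers.LSTM(HID)(q)

# Project frames into the same space and score each against the question.
v = layers.TimeDistributed(layers.Dense(HID, activation="tanh"))(frames)
logits = layers.Dot(axes=(2, 1))([v, q])            # (batch, NUM_FRAMES)
relevance = layers.Softmax(name="frame_relevance")(logits)

model = Model(inputs=[frames, question], outputs=relevance)
# Hypothetical training objective against ground-truth frame relevance.
model.compile(optimizer="adam", loss="kl_divergence")
```

At inference time, the per-frame relevance weights can be grouped into contiguous intervals and ranked, as in the sketch under Main Idea.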

Checkpoints

We trained the model using different visual feature extractors and made small changes to the model architecture.

  • Using the ResNet visual feature extractor (as in the paper): gdrive link

  • Using the NASNet visual feature extractor: gdrive link

  • Using the Inception-ResNet-v2 visual feature extractor: gdrive link

  • Using the Squeeze-and-Excitation technique with Inception-ResNet-v2 (see the sketch after this list): gdrive link

  • Using the Dropout technique: gdrive link

  • Using Squeeze-and-Excitation along with Dropout: gdrive link

  • Using the Squeeze-and-Excitation technique and increasing the hidden dimension of the LSTMs: gdrive link
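
For reference, here is a sketch of how a Squeeze-and-Excitation (SE) gate could be applied to the per-frame features (our illustration of the technique named above, not the authors' exact layer):

```python
from tensorflow.keras import layers

def se_block(frame_features, reduction=16):
    """Re-weight feature channels of a (batch, num_frames, channels) tensor."""
    channels = frame_features.shape[-1]
    # Squeeze: summarize each channel across the temporal axis.
    squeezed = layers.GlobalAveragePooling1D()(frame_features)
    # Excite: a bottleneck MLP produces per-channel gates in (0, 1).
    gates = layers.Dense(channels // reduction, activation="relu")(squeezed)
    gates = layers.Dense(channels, activation="sigmoid")(gates)
    gates = layers.Reshape((1, channels))(gates)
    # Scale: apply the gates to every frame's features.
    return layers.Multiply()([frame_features, gates])
```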

Results

From the experiments described above, we found that the best results are obtained by using Inception-ResNet-v2 as the visual feature extractor.
Our model outperforms the original paper's model on all the metrics used, as shown in the following table:

These results were obtained on the test set, which contains 2,990 videos.

You can see the comparison between all the models in the following figure:

Authors

Contribute

Contributions are always welcome!

Please read the contribution guidelines first.

License

This project is licensed under the GNU General Public License v3.0; see the LICENSE file for details.
