Conventional Transformer-based Video Question Answering (VideoQA) approaches generally encode frames independently through one or more image encoders, followed by interaction between the frames and the question. However, such a scheme incurs significant memory use and inevitably slows down training and inference. In this work, we present a highly efficient approach for VideoQA based on existing
vision-language pre-trained models, in which we concatenate the video frames into an n × n matrix and then convert it into a single image. By doing so, we reduce the number of image-encoder passes from n² to 1 while maintaining the temporal structure of the original video.
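As a rough illustration of the frame-to-grid conversion described above, the sketch below arranges n² uniformly sampled frames into a single composite image; the function name, NumPy representation, and row-major layout are our own assumptions for illustration and are not taken from the released implementation.

```python
import numpy as np

def frames_to_grid_image(frames: np.ndarray, n: int) -> np.ndarray:
    """Tile n*n video frames (each H x W x C) into one n x n grid image.

    frames: array of shape (n*n, H, W, C), sampled in temporal order.
    Returns an image of shape (n*H, n*W, C); row-major placement keeps
    the temporal ordering of the frames within the composite image.
    """
    assert frames.shape[0] == n * n, "expected exactly n*n sampled frames"
    _, h, w, c = frames.shape
    grid = frames.reshape(n, n, h, w, c)   # (row, col, H, W, C)
    grid = grid.transpose(0, 2, 1, 3, 4)   # (row, H, col, W, C)
    return grid.reshape(n * h, n * w, c)   # single composite image
```

In this scheme, the composite image would then be resized to the encoder's input resolution and passed through the pre-trained image encoder once, rather than once per frame.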
Experimental results on MSRVTT and TrafficQA show that our proposed approach achieves state-of-the-art performance with nearly 4× faster speed and only 30% of the memory use. We show that by integrating our approach into VideoQA systems, we can achieve comparable, and even superior, performance with a significant speed-up in training and inference. We believe the proposed approach can facilitate VideoQA-related research by reducing the computational requirements for those with limited budgets and resources. Our code is publicly available at https://github.com/lyuchenyang/Efficient-VideoQA for research use.