CoS: Chain-of-Shot Prompting for Long Video Understanding

Jian Hu 1
Zixu Cheng 1
Chenyang Si 2
Wei Li 2
Shaogang Gong 1
1CV Lab, Queen Mary University of London, 2S-Lab, Nanyang Technological University
{jian.hu, zixu.cheng, s.gong}@qmul.ac.uk, {chenyang.si, wei.l}@ntu.edu.sg
Code [GitHub]
arXiv [Paper]


Motivation: The critical problem of how to select shots in video understanding. In a video that depicts how a boy gradually gains a dragon's trust, different sampling methods create two distinct narratives: split video A shows the boy being attacked by the dragon, while split video B shows him happily sharing food with the dragon. This shows that minor differences in video sampling lead to significant variations in semantic understanding.


Abstract

Multi-modal Large Language Models (MLLMs) struggle with long videos due to the need for excessive visual tokens. These tokens massively exceed the context length of MLLMs, resulting in a context filled with redundant, task-irrelevant shots. How to select shots is an unsolved critical problem: sparse sampling risks missing key details, while exhaustive sampling overwhelms the model with irrelevant content, leading to video misunderstanding. To solve this problem, we propose \textbf{C}hain-\textbf{o}f-\textbf{S}hot prompting (\textbf{CoS}). The key idea is to frame shot selection as test-time visual prompt optimisation, choosing shots adaptive to the semantic task of video understanding by optimising shot-task alignment. CoS has two key parts: (1) a binary video summary mechanism that performs pseudo temporal grounding, discovering a binary coding to identify task-relevant shots, and (2) a video co-reasoning module that deploys the binary coding to pair (learning to align) task-relevant positive shots with irrelevant negative shots. It embeds the optimised shot selections into the original video, facilitating a focus on relevant context to optimise long video understanding. Experiments across three baselines and five datasets demonstrate the effectiveness of CoS.



Framework

The overall framework of CoS. It first utilises LLaVA to perform mosaicing binary coding that bootstraps video summarisation for temporal grounding on a long video. Specifically, every four shots are aggregated into one mosaicing composition image. LLaVA determines whether task-related elements exist within each composition image by encoding a binary value of 1 or 0 ('yes' or 'no'), thereby identifying sparsely distributed task-related shots and achieving pseudo temporal grounding. Given this binary video summary, task-related positive shots \( S^p \) and irrelevant negative shots \( S^n \) are generated and represented by binary codes. \( S^p \), \( S^n \) and the original frame sequence \( X \) sampled from the original video \( V \) are then fed into the MLLM for co-reasoning, minimising interference from irrelevant video content.
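For illustration only, below is a minimal Python sketch of this shot-selection pipeline, not the authors' released code. It assumes a user-supplied ask_binary callable that wraps the MLLM (e.g. LLaVA) yes/no query; the function names (make_mosaic, binary_video_summary, split_shots) and the 2x2 mosaic layout are hypothetical choices for this sketch.

# Minimal sketch of the CoS shot-selection idea (assumptions noted above).
from typing import Callable, List, Tuple
from PIL import Image

def make_mosaic(frames: List[Image.Image], grid: int = 2) -> Image.Image:
    """Tile up to grid*grid shots into one composition image (2x2 = 4 shots)."""
    w, h = frames[0].size
    canvas = Image.new("RGB", (w * grid, h * grid))
    for i, f in enumerate(frames[: grid * grid]):
        canvas.paste(f.resize((w, h)), ((i % grid) * w, (i // grid) * h))
    return canvas

def binary_video_summary(
    frames: List[Image.Image],
    question: str,
    ask_binary: Callable[[Image.Image, str], bool],
) -> List[int]:
    """Pseudo temporal grounding: code a 4-shot mosaic as 1 if the MLLM says it
    contains task-related content, else 0, and broadcast the code to its shots."""
    codes: List[int] = []
    for start in range(0, len(frames), 4):
        chunk = frames[start : start + 4]
        relevant = ask_binary(make_mosaic(chunk), question)
        codes.extend([int(relevant)] * len(chunk))
    return codes

def split_shots(
    frames: List[Image.Image], codes: List[int]
) -> Tuple[List[Image.Image], List[Image.Image]]:
    """Build task-related positive shots S^p and irrelevant negative shots S^n."""
    positives = [f for f, c in zip(frames, codes) if c == 1]
    negatives = [f for f, c in zip(frames, codes) if c == 0]
    return positives, negatives

# Co-reasoning step (schematic): S^p, S^n and the original sampled sequence X
# would all be passed to the MLLM so that it focuses on task-relevant context,
# e.g. answer = mllm_generate(prompt=question, videos=[X, positives, negatives])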

Experiments


Qualitative Evaluation




BibTeX

@article{hu2025cos,
  title={CoS: Chain-of-Shot Prompting for Long Video Understanding},
  author={Hu, Jian and Cheng, Zixu and Si, Chenyang and Li, Wei and Gong, Shaogang},
  journal={arXiv preprint arXiv:2502.06428},
  year={2025}
}