CVPR Findings 2026

VIDEOP2R

Video Understanding from Perception to Reasoning

Yifan Jiang1*, Yueying Wang2, Rui Zhao2†, Toufiq Parag3*, Zhimin Chen2,
Zhenyu Liao2, Jayakrishnan Unnikrishnan2
1USC    2Amazon    3Keystone AI
*Work done while at Amazon   †Corresponding author
VIDEOP2R Main Figure
Comparison between GRPO-based video RFT framework (process-agnostic) and VIDEOP2R (process-aware). VIDEOP2R models perception and reasoning as distinct processes with separate reward signals, enabling more effective credit assignment during reinforcement learning.

Abstract

Reinforcement fine-tuning (RFT), a two-stage framework consisting of supervised fine-tuning (SFT) followed by reinforcement learning (RL), has shown promising results in improving the reasoning ability of large language models (LLMs). Yet extending RFT to large video language models (LVLMs) remains challenging. We propose VIDEOP2R, a novel process-aware video RFT framework that enhances video reasoning by modeling perception and reasoning as distinct processes. In the SFT stage, we develop a three-step pipeline to generate VIDEOP2R-CoT-162K, a high-quality, process-aware chain-of-thought (CoT) dataset for perception and reasoning. In the RL stage, we introduce a novel process-aware group relative policy optimization (PA-GRPO) algorithm that supplies separate rewards for perception and reasoning. Extensive experiments show that VIDEOP2R achieves state-of-the-art (SotA) performance on six out of seven video reasoning and understanding benchmarks. Ablation studies further confirm the effectiveness of our process-aware modeling and PA-GRPO.

Process-Aware Framework

Models perception and reasoning as distinct processes with separate supervision signals.

VIDEOP2R-CoT-162K

High-quality process-aware chain-of-thought dataset built through a three-step generation pipeline with automatic verification.

PA-GRPO

Process-aware RL algorithm providing separate perception and reasoning rewards for fine-grained credit assignment.

SotA on 6/7 Benchmarks

Consistent 1.9%–9.1% accuracy gains over base models across seven diverse video benchmarks.

Two-Stage RFT Framework

VIDEOP2R follows the standard two-stage RFT setup, with a specific focus on decomposing video reasoning into perception and reasoning as two distinct processes.

VIDEOP2R Framework
Overall Framework. Illustration of the VIDEOP2R RFT framework (left) and the three-step CoT generation pipeline (right). The SFT stage constructs process-aware CoT data; the RL stage refines the model with PA-GRPO.

1 SFT Stage: Process-Aware CoT Annotation Pipeline

To address the lack of process-aware CoT data, we design a process-aware CoT template that explicitly disentangles perception from reasoning: <observation>...</observation> for extracting visual evidence, followed by <think>...</think><answer>...</answer> for reasoning and the final answer. We then build a three-step pipeline to generate high-quality annotations at scale.
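The template lends itself to simple programmatic checks. As an illustrative sketch (not the authors' code), a response can be split into its perception and reasoning segments with a regular expression; the tag names come from the paper, while `split_trace` and everything else here are assumptions:

```python
import re

# Process-aware CoT template from the paper:
# <observation>...</observation><think>...</think><answer>...</answer>
TEMPLATE = re.compile(
    r"<observation>(?P<obs>.*?)</observation>\s*"
    r"<think>(?P<think>.*?)</think>\s*"
    r"<answer>(?P<ans>.*?)</answer>",
    re.DOTALL,
)

def split_trace(response: str):
    """Split a model response into (observation, think, answer).

    Returns None when the response deviates from the template,
    which is one of the rejection criteria in verification (step 2).
    """
    m = TEMPLATE.fullmatch(response.strip())
    if m is None:
        return None
    return (m.group("obs").strip(),
            m.group("think").strip(),
            m.group("ans").strip())
```

The same splitter is what PA-GRPO would need later to assign perception rewards to the `<observation>` tokens and reasoning rewards to the `<think>`/`<answer>` tokens.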

Step 1: Process-Aware CoT Generation

For each VQA sample, Qwen2.5-VL-72B-Instruct generates an initial CoT trace with explicit <observation> and <think><answer> segments following our process-aware template.

Step 2: CoT Verification

We evaluate each generated response with task-specific metrics (exact match, word error rate, ROUGE), discarding samples with low-quality answers or template deviations.

Step 3: Observation Sufficiency Verification

Claude 3.7 Sonnet validates the <observation> in a text-only setting, assessing whether the visual evidence is sufficient to support the correct answer without the raw video.

Applying the pipeline to 260K VQA samples produces VIDEOP2R-CoT-162K: 162K high-quality, process-aware CoT samples with perception and reasoning traces for SFT warm-up.
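The rule-based filtering of step 2 can be sketched in a few lines. This is a minimal illustration, assuming an exact-match check for categorical tasks and a unigram-F1 score as a lightweight stand-in for ROUGE; `keep_sample`, the task names, and the 0.5 threshold are hypothetical:

```python
def token_f1(pred: str, ref: str) -> float:
    """Unigram F1 over whitespace tokens, a cheap stand-in for ROUGE-1."""
    p, r = set(pred.lower().split()), set(ref.lower().split())
    if not p or not r:
        return 0.0
    overlap = len(p & r)
    if overlap == 0:
        return 0.0
    prec, rec = overlap / len(p), overlap / len(r)
    return 2 * prec * rec / (prec + rec)

def keep_sample(answer: str, gold: str, task: str,
                threshold: float = 0.5) -> bool:
    """Step-2-style filter: exact match for categorical (e.g. multiple
    choice) tasks, similarity score for open-ended QA. Samples below
    threshold are discarded from the SFT dataset."""
    if task == "mcq":
        return answer.strip().upper() == gold.strip().upper()
    return token_f1(answer, gold) >= threshold
```

Step 3 (observation sufficiency) is not rule-based: it sends only the `<observation>` text, without the video, to an LLM judge (Claude 3.7 Sonnet).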

2 RL Stage: Process-Aware Group Relative Policy Optimization (PA-GRPO)

Standard GRPO assigns a single scalar reward to the entire trajectory, blurring credit assignment between perception and reasoning. PA-GRPO provides separate rewards for each process and normalizes them independently, enabling fine-grained credit assignment during RL.

PA-GRPO Algorithm
PA-GRPO Algorithm. Each sampled response is split into perception tokens (oP) and reasoning tokens (oR). Separate accuracy, format, and length rewards are computed for each process, then normalized within their respective groups to yield process-aware advantages.
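The normalization step above can be sketched compactly. Following the GRPO convention of standardizing rewards within a group of sampled responses, PA-GRPO normalizes the perception and reasoning rewards in separate groups; this sketch assumes mean/std normalization and omits how the accuracy, format, and length terms are combined into each reward:

```python
from statistics import mean, pstdev

def group_advantages(rewards, eps: float = 1e-6):
    """GRPO-style advantage: standardize each reward within its group
    of G sampled responses for the same prompt."""
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

def pa_grpo_advantages(perc_rewards, reas_rewards):
    """PA-GRPO sketch: perception and reasoning rewards are normalized
    independently, yielding one advantage applied to the <observation>
    tokens (oP) and another applied to the <think>/<answer> tokens (oR)
    of each sampled response."""
    return group_advantages(perc_rewards), group_advantages(reas_rewards)
```

The point of the separate normalization is credit assignment: a response with accurate observations but flawed reasoning gets a positive perception advantage and a negative reasoning advantage, instead of one blurred scalar.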

Perception Reward (Racc,P)

LLM-as-Judge evaluation: Claude 3.7 Sonnet assesses whether the <observation> segment contains sufficient visual evidence to support the correct answer in a text-only setting.

Reasoning Reward (Racc,R)

Task-specific rule-based evaluation: exact word match for categorical tasks, ROUGE-based similarity for open-ended QA, and error-based scores for numerical problems.

Format & Length Rewards

Separate format rewards enforce template adherence for each process. Length rewards favor concise yet informative outputs within target ranges (128–320 tokens for perception, 320–512 for reasoning).
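A minimal sketch of the length reward, assuming full reward inside the target range with a linear decay outside it (the decay shape and slope are assumptions; the paper only specifies the target ranges):

```python
def length_reward(n_tokens: int, lo: int, hi: int,
                  slope: float = 0.01) -> float:
    """Full reward inside [lo, hi], linearly decaying outside.
    The decay is illustrative, not the paper's exact formula."""
    if lo <= n_tokens <= hi:
        return 1.0
    gap = (lo - n_tokens) if n_tokens < lo else (n_tokens - hi)
    return max(0.0, 1.0 - slope * gap)

# Target ranges stated in the paper:
PERCEPTION_RANGE = (128, 320)   # <observation> tokens
REASONING_RANGE = (320, 512)    # <think> + <answer> tokens
```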

Main Results

SotA performance on 6 out of 7 video reasoning and understanding benchmarks

Video reasoning benchmarks: VSI., VideoMMMU, MMVU, VCR.  Video understanding benchmarks: MV., TempCom., VideoMME.

Model                 VSI.   VideoMMMU  MMVU   VCR.   MV.    TempCom.  VideoMME  Avg

Open-Source 7B Models
LLaVA-OneVision-7B    32.4   33.8       49.2   -      56.7   -         58.2      -
LongVA-7B             29.2   23.9       -      -      -      56.9      52.6      -
Qwen2.5-VL-7B         30.1   48.1       60.0   44.3   59.0   72.6      56.6      52.9

RFT on Qwen2.5-VL-7B
Video-R1              35.8   52.3       63.8   49.0   63.9   73.2      59.3      56.8
VideoChat-R1          33.9   54.0       63.0   49.0   67.9   72.5      57.7      56.9
Time-R1               29.0   51.0       62.9   49.6   63.1   73.7      59.3      55.5
VersaVid-R1           33.7   51.9       64.3   49.8   62.9   74.0      58.8      56.5
VideoRFT              36.8   51.1       68.5   49.6   62.1   73.7      59.8      57.4
VIDEOP2R (Ours)       36.8   55.0       65.4   51.0   68.1   74.5      60.0      58.7

All numbers in %.

Ablation Study

Validating the contribution of each process-aware component

Model Variant                     VSI.   VideoMMMU  MMVU   VCR.   MV.    TempCom.  VideoMME  Avg    Δ

Two-stage Training
VIDEOP2R (Ours)                   36.8   55.0       65.4   51.0   68.1   74.5      60.0      58.7   +5.8
  - SFT-only                      35.2   53.7       61.6   46.9   62.3   72.4      57.2      55.6   +2.7
  - RL-only                       35.8   54.6       64.6   46.3   60.8   73.8      55.9      56.0   +3.1

Process-aware Modeling
VIDEOP2R (Ours)                   36.8   55.0       65.4   51.0   68.1   74.5      60.0      58.7   +5.8
  - process-agnostic RL (GRPO)    37.4   53.6       62.8   48.3   63.8   73.3      55.4      56.4   +3.5
  - process-agnostic SFT (no RL)  34.3   48.9       61.6   47.3   59.0   69.7      54.0      53.5   +0.6

Reward Design
VIDEOP2R (Ours)                   36.8   55.0       65.4   51.0   68.1   74.5      60.0      58.7   +5.8
  - without R_R (reasoning)       36.0   51.6       60.3   46.8   62.1   72.5      57.9      55.3   +2.4
  - without R_P (perception)      37.4   53.6       62.8   48.3   63.8   73.3      55.4      56.4   +3.5
  - without R_L (length)          40.0   52.7       63.2   48.4   65.5   73.9      60.0      57.7   +4.8
  - without separation            37.1   53.2       64.9   48.8   65.0   73.2      59.7      57.4   +4.5

Baseline: Qwen2.5-VL-7B           30.1   48.1       60.0   44.3   59.0   72.6      56.6      52.9

Δ = average accuracy gain over the Qwen2.5-VL-7B baseline (52.9%).

Analysis

Understanding why process-aware modeling works

Effect of Perception
Effect of perception on downstream reasoning. Feeding VIDEOP2R's perception output alone to the model (text-only, 55.5%) surpasses feeding the raw video (52.9%), demonstrating that the generated observations capture semantically rich information for downstream reasoning.
Reward Analysis
Training dynamics & think-answer mismatch analysis. PA-GRPO exhibits fewer advantage collapse samples and significantly lower think-answer mismatch rates compared to standard GRPO.
Case Study
Qualitative results. Left: A success case showing an Aha Moment where VIDEOP2R performs process-aware inference by accurately describing visual cues and reasoning over them. Right: A failure case where the model identifies correct visual details but lacks domain-specific knowledge (molar volume = 22.4).