TrajectoryMover: Generative Movement of Object Trajectories in Videos
Abstract
TrajectoryAtlas is a pipeline for generating large-scale synthetic paired video data, used to fine-tune the video generator TrajectoryMover, which moves an object's 3D motion trajectory within a video while preserving plausibility and identity.
Generative video editing has enabled several intuitive editing operations for short video clips that would previously have been difficult to achieve, especially for non-expert editors. Existing methods focus on prescribing an object's 3D or 2D motion trajectory in a video, or on altering the appearance of an object or a scene, while preserving both the video's plausibility and identity. Yet a method to move an object's 3D motion trajectory within a video, i.e., to relocate an object while preserving its relative 3D motion, is still missing. The main challenge lies in obtaining paired video data for this scenario. Previous methods typically rely on clever data generation schemes to construct plausible paired data from unpaired videos, but this approach fails when one video in a pair cannot easily be constructed from the other. Instead, we introduce TrajectoryAtlas, a new data generation pipeline for large-scale synthetic paired video data, and TrajectoryMover, a video generator fine-tuned on this data. We show that this combination successfully enables generative movement of object trajectories. Project page: https://chhatrekiran.github.io/trajectorymover
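To make the core editing operation concrete, the sketch below illustrates what "moving a trajectory while preserving relative 3D motion" means geometrically: the trajectory's starting point is translated to a new location while the frame-to-frame displacements are kept unchanged. This is a minimal illustrative sketch, not the paper's actual pipeline; the (T, 3) array representation, the numpy implementation, and the function name move_trajectory are assumptions made for this example.

import numpy as np

def move_trajectory(points: np.ndarray, new_start: np.ndarray) -> np.ndarray:
    """Translate a 3D trajectory to a new starting point.

    points:    (T, 3) array of per-frame 3D object positions (assumed layout).
    new_start: (3,) target position for the first frame.

    The per-frame displacements points[t] - points[0] are kept unchanged,
    so the object's relative 3D motion is preserved; only its location
    in the scene moves.
    """
    relative_motion = points - points[0]   # (T, 3) motion relative to frame 0
    return new_start + relative_motion     # broadcast the new start over all frames

# Example: an object drifting along +x, relocated to a new start point.
traj = np.array([[0.0, 0.0, 2.0],
                 [0.1, 0.0, 2.0],
                 [0.2, 0.0, 2.0]])
moved = move_trajectory(traj, np.array([1.0, 0.5, 3.0]))
# moved[1] - moved[0] equals traj[1] - traj[0]: the relative motion is preserved.

The hard part the paper addresses is not this geometric translation itself but producing paired videos that realize it plausibly, which is what the TrajectoryAtlas data pipeline and the fine-tuned TrajectoryMover generator provide.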
Community
TrajectoryMover is a video-to-video editing method that moves an object’s trajectory while preserving its relative 3D motion, enabled by a large-scale synthetic paired video data generation pipeline for this task.
This is an automated message from Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- TRACE: Object Motion Editing in Videos with First-Frame Trajectory Guidance (2026)
- Search2Motion: Training-Free Object-Level Motion Control via Attention-Consensus Search (2026)
- PISCO: Precise Video Instance Insertion with Sparse Control (2026)
- Tri-Prompting: Video Diffusion with Unified Control over Scene, Subject, and Motion (2026)
- FaceCam: Portrait Video Camera Control via Scale-Aware Conditioning (2026)
- FlexAM: Flexible Appearance-Motion Decomposition for Versatile Video Generation Control (2026)
- HorizonForge: Driving Scene Editing with Any Trajectories and Any Vehicles (2026)
Get this paper in your agent:
hf papers read 2603.29092

Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash