MergeVLA: Cross-Skill Model Merging Toward a Generalist Vision-Language-Action Agent

UQMM Lab, The University of Queensland

Teaser figure: Overview of the MergeVLA architecture.

Abstract

Recent Vision-Language-Action (VLA) models repurpose vision-language models by fine-tuning them on millions of robotic demonstrations. While they perform well when fine-tuned for a single embodiment or task family, extending them to multi-skill settings remains challenging: directly merging VLA experts trained on different tasks yields near-zero success rates. This raises a fundamental question: what prevents VLAs from mastering multiple skills within one model? Through an empirical decomposition of the learnable parameters during VLA fine-tuning, we identify two key sources of non-mergeability: (1) fine-tuning drives the LoRA adapters in the VLM backbone toward divergent, task-specific directions that existing merging methods cannot unify; and (2) the action experts develop inter-block dependencies through self-attention feedback, which spreads task information across layers and prevents modular recombination. To address these challenges, we present MergeVLA, a merging-oriented VLA architecture that preserves mergeability by design. MergeVLA introduces sparsely activated LoRA adapters gated by task masks, which keep shared parameters consistent and reduce irreconcilable conflicts in the VLM. Its action expert replaces self-attention with cross-attention-only blocks, keeping specialization localized and composable. When the task is unknown, a test-time task router adaptively selects the appropriate task mask and expert head from the initial observation, enabling unsupervised task inference. Across LIBERO, LIBERO-Plus, RoboTwin, and multi-task experiments on a real SO-101 robotic arm, MergeVLA achieves performance comparable to, or even exceeding, that of individually fine-tuned experts, demonstrating robust generalization across tasks, embodiments, and environments.
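
To make the first design concrete, here is a minimal PyTorch sketch of a task-masked LoRA adapter. It is illustrative only: the class name, shapes, and the random mask construction are our assumptions, not the released MergeVLA implementation. The point it demonstrates is that each task activates a sparse subset of rank directions, so per-task updates occupy largely disjoint subspaces and can be summed at merge time with few conflicts.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TaskMaskedLoRA(nn.Module):
    """Hypothetical sketch: a LoRA adapter whose rank directions are gated
    by per-task binary masks, so each task's update touches only a sparse,
    largely disjoint slice of B @ diag(mask_t) @ A."""

    def __init__(self, d_in, d_out, rank=16, num_tasks=4, alpha=32.0):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, rank))        # up-projection
        self.scale = alpha / rank
        # Fixed per-task binary masks over the rank dimension; random here
        # purely for illustration (the real masks are a design choice).
        self.register_buffer("task_masks", (torch.rand(num_tasks, rank) < 0.5).float())

    def forward(self, x, task_id):
        h = F.linear(x, self.A) * self.task_masks[task_id]  # activate a sparse subset
        return self.scale * F.linear(h, self.B)             # task-specific delta added to Wx


layer = TaskMaskedLoRA(d_in=1024, d_out=1024)
x = torch.randn(2, 7, 1024)
delta = layer(x, task_id=0)  # low-rank update for task 0, shape (2, 7, 1024)

# Merging intuition: the combined update across tasks is a sum of masked
# low-rank terms; wherever the masks are disjoint, the tasks cannot collide.
with torch.no_grad():
    delta_w = sum(
        layer.scale * (layer.B * layer.task_masks[t]) @ layer.A  # (d_out, d_in)
        for t in range(layer.task_masks.shape[0])
    )
```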

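In the same spirit, here is a hedged sketch of the other two components described above: a cross-attention-only action-expert block, in which action queries read from the VLM tokens but never attend to each other, so no self-attention feedback couples the blocks, and a test-time task router that infers which task mask and expert head to activate from the initial observation. All names, shapes, and the mean-pooling choice are our assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


class CrossAttnActionBlock(nn.Module):
    """Hypothetical sketch of a cross-attention-only action-expert block:
    action queries attend to VLM features but never to one another, so
    there is no self-attention feedback to entangle the blocks."""

    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.norm_q = nn.LayerNorm(d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_ff = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, action_q, vlm_feats):
        # Queries: action tokens; keys/values: VLM tokens only.
        attn_out, _ = self.cross_attn(self.norm_q(action_q), vlm_feats, vlm_feats)
        x = action_q + attn_out
        return x + self.ff(self.norm_ff(x))


class TaskRouter(nn.Module):
    """Hypothetical test-time router: pool features of the initial
    observation and predict which task mask / expert head to activate."""

    def __init__(self, d_model=512, num_tasks=4):
        super().__init__()
        self.head = nn.Linear(d_model, num_tasks)

    def forward(self, obs_feats):            # (B, T, d_model)
        pooled = obs_feats.mean(dim=1)       # mean-pool over observation tokens
        return self.head(pooled).argmax(-1)  # predicted task id per episode
```
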
Results

Experiment 1: LIBERO Benchmark

LIBERO results across task splits. Comparison between fine-tuned and merged variants of MergeVLA. All numbers are success rates (%). $\mathbf{S}$ indicates that task masks are used during merging. "Params (B)" denotes the total number of model parameters (in billions) required to evaluate all four task splits, including the LLM backbone and the action expert. Gray-highlighted rows correspond to per-task fine-tuned checkpoints evaluated on their own tasks, serving as upper-bound references for model merging.
Experiment 1 Results

Experiment 2: LIBERO-Plus Benchmark

Robustness of different models under visual and language shifts on LIBERO-Plus. All results are success rates (%) averaged over four task suites. Gray-highlighted rows correspond to per-task fine-tuned checkpoints evaluated on their own tasks, serving as upper-bound references for model merging. Shift definitions: S1 - Background Textures; S2 - Camera Viewpoints; S3 - Language Instructions; S4 - Lighting Conditions; S5 - Object Layout; S6 - Robot States; S7 - Sensor Noise.
Experiment 2 Results

Experiment 3: RoboTwin Benchmark

RoboTwin success rates (%) of different variants of MergeVLA across embodiments and tasks. $\mathbf{T}_1$: Place Container Plate, $\mathbf{T}_2$: Handover Block, $\mathbf{T}_3$: Open Microwave. Gray-highlighted rows correspond to per-task fine-tuned checkpoints evaluated on their own tasks, serving as upper-bound references for model merging.
Experiment 3 Results

Experiment 4: Real-World Robot

Real-world SO-101 robot performance, reported as success rates (%) over 20 rollouts per task.
Experiment 4 Results

Rollout Videos

LIBERO Benchmark

LIBERO-Plus Benchmark

RoboTwin Benchmark

Real-World Robot Experiments

(a) Pick Cube
(b) Push Cube
(c) Stack Cube

BibTeX

      
@misc{fu2025mergevlacrossskillmodelmerging,
  title={MergeVLA: Cross-Skill Model Merging Toward a Generalist Vision-Language-Action Agent},
  author={Yuxia Fu and Zhizhen Zhang and Yuqi Zhang and Zijian Wang and Zi Huang and Yadan Luo},
  year={2025},
  eprint={2511.18810},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2511.18810}
}