This paper studies how to retrieve and reuse a pool of LoRA adapters for new tasks, focusing on composition and auditing rather than training a fresh adapter. Its residual merging and view-reliability analysis should interest teams building adapter libraries and post-training reuse pipelines.
arXiv:2605.01429v1 Announce Type: new Abstract: Libraries of Low-Rank Adaptation (LoRA) adapters are becoming a practical by-product of parameter-efficient adaptation. Once such adapters accumulate, a natural question is no longer how to train one adapter for one task, but how to reuse an open pool of adapters for a new task given only a small support set. Prior work has shown that LoRA modules can be composed at the task level and dynamically selected at the instance level. However, open-pool…
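As a rough illustration of the task-level composition the abstract refers to, the sketch below merges several retrieved LoRA adapters into a single low-rank weight update via a weighted sum of their individual updates. The function name, the shapes, and the fixed-weight scheme are all assumptions for illustration; the paper's actual residual merging and selection procedure may differ.

```python
import numpy as np

def compose_lora_updates(adapters, weights):
    """Merge a pool of LoRA adapters into one weight update.

    adapters: list of (A, B) pairs, where A has shape (r, d_in) and
              B has shape (d_out, r), so each adapter's update is B @ A.
    weights:  one scalar per adapter (e.g. from retrieval scores).
    Returns the merged update sum_i w_i * (B_i @ A_i).
    """
    assert len(adapters) == len(weights)
    A0, B0 = adapters[0]
    delta = np.zeros((B0.shape[0], A0.shape[1]))
    for (A, B), w in zip(adapters, weights):
        delta += w * (B @ A)  # accumulate each adapter's low-rank update
    return delta

# Toy pool: three rank-2 adapters for an 8x8 weight matrix.
rng = np.random.default_rng(0)
d_in, d_out, r = 8, 8, 2
pool = [(rng.normal(size=(r, d_in)), rng.normal(size=(d_out, r)))
        for _ in range(3)]
merged = compose_lora_updates(pool, [0.5, 0.3, 0.2])
print(merged.shape)
```

In practice the weights would come from the small support set (e.g. by scoring each adapter's fit), and the merged update would be added to the frozen base weights.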