
TeleStyle: Content-Preserving Style Transfer in Images and Videos

Institute of Artificial Intelligence, China Telecom (TeleAI)

Abstract

Content-preserving style transfer, i.e., generating stylized outputs conditioned on a content reference and a style reference, remains a significant challenge for Diffusion Transformers (DiTs) due to the inherent entanglement of content and style features in their internal representations. In this technical report, we present TeleStyle, a lightweight yet effective model for both image and video stylization. Built upon Qwen-Image-Edit, TeleStyle leverages the base model’s robust capabilities in content preservation and style customization. To facilitate effective training, we curated a high-quality dataset of distinct styles and further synthesized triplets covering thousands of diverse, in-the-wild style categories. We introduce a Curriculum Continual Learning framework to train TeleStyle on this hybrid dataset of clean (curated) and noisy (synthetic) triplets, which enables the model to generalize to unseen styles without compromising precise content fidelity. We additionally propose a video-to-video stylization module to enhance temporal consistency and visual quality. TeleStyle achieves state-of-the-art performance across three core evaluation metrics: style similarity, content consistency, and aesthetic quality.
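The report does not spell out how the curriculum over clean and noisy triplets is scheduled. As a rough illustration only, the sketch below shows one plausible way to mix curated (clean) and synthetic (noisy) style triplets during training, ramping up the share of noisy data over a warmup period. All names here (StyleTriplet, CurriculumTripletSampler, warmup_steps, max_noisy_ratio) are hypothetical and are not taken from the TeleStyle codebase.

```python
import random
from dataclasses import dataclass
from typing import List


@dataclass
class StyleTriplet:
    content_path: str   # content reference image
    style_path: str     # style reference image
    target_path: str    # stylized target image


class CurriculumTripletSampler:
    """Mixes clean (curated) and noisy (synthetic) style triplets.

    Early in training only clean triplets are drawn; the probability of
    drawing a noisy synthetic triplet grows linearly until it reaches
    `max_noisy_ratio`, so the model can first learn precise content
    preservation and later broaden its style coverage.
    """

    def __init__(self, clean: List[StyleTriplet], noisy: List[StyleTriplet],
                 warmup_steps: int = 10_000, max_noisy_ratio: float = 0.5):
        self.clean = clean
        self.noisy = noisy
        self.warmup_steps = warmup_steps
        self.max_noisy_ratio = max_noisy_ratio

    def sample(self, step: int) -> StyleTriplet:
        # Linear ramp from 0 to max_noisy_ratio over the warmup period.
        noisy_ratio = min(step / self.warmup_steps, 1.0) * self.max_noisy_ratio
        pool = self.noisy if random.random() < noisy_ratio else self.clean
        return random.choice(pool)
```

In an actual training loop, a sampler like this would feed one content/style/target triplet to the DiT fine-tuning objective at each optimization step; the specific ratios and schedule used by TeleStyle are not described in this section.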

Video Stylization

afk arena style
victorian style
tintoretto style
diego rivera style
atompunk style
chibi anime style
post-impressionist style
minimalist design style
cookierun kingdom style
stained glass window style
collage montage style
architectural sketching style
tasha tudor style
post-impressionism style
rubber hose animation style
low poly 3d style
henri matisse style
neo-expressionism style
synchromism style
jeffrey catherine jones style
warframe style

BibTeX

@article{teleai2026telestyle,
    title={TeleStyle: Content-Preserving Style Transfer in Images and Videos}, 
    author={Shiwen Zhang and Xiaoyan Yang and Bojia Zi and Haibin Huang and Chi Zhang and Xuelong Li},
    journal={arXiv preprint arXiv:2601.20175},
    year={2026}
}