- 🚩 News (Oct 2025): We release new VisionTSpp models on [Hugging Face]. Based on the original VisionTS-base checkpoint (visiontspp_model.ckpt), we have added a VisionTS-large version (visiontspp_large.ckpt). Furthermore, we provide versions of both models trained on the [GiftEvalPretrain] dataset (visiontspp_base_gifteval_no_leakage.ckpt and visiontspp_large_gifteval_no_leakage.ckpt). These models are intended for evaluation on the [Gift-Eval] leaderboard and are designed to prevent data leakage.
- 🚩 News (Sep 2025): We release the Hugging Face Space [VisionTSpp on Huggingface Space]! You can now quickly experience the forecasting capability of VisionTS++ directly in your browser, and you can also upload your own time series CSV file for zero-shot forecasting!
- 🚩 News (Aug 2025): The inference code is now released! Please try [demo.ipynb] to run VisionTS++ on multivariate and probabilistic time series forecasting.
- 🚩 News (Aug 2025): The VisionTS++ (also called VisionTSpp) preprint is now available on [arXiv], along with the training code and pre-trained models. Specifically, we provide the VisionTSpp-1.0-base model on [Huggingface], obtained by continual pre-training on the Large-scale Open Time Series Archive (LOTSA data) with a Masked AutoEncoder (MAE) visual backbone.
 
🎉 We’re excited to release the Huggingface Space for VisionTS++!
👉 Try it now: VisionTSpp on Huggingface Space
Experience the powerful forecasting capabilities of VisionTS++ instantly — no setup or environment configuration required. Simply open the Space in your browser and start exploring!
Want to test it on your own data? Just upload a custom time series CSV file, and VisionTS++ will perform zero-shot forecasting out of the box.
- In this work, we propose VisionTS++, a SOTA time series foundation model built by continually pre-training a visual MAE on large-scale time series data, supporting both multivariate and probabilistic forecasting!
 
- Compared to VisionTS, VisionTS++ is equipped with three key innovations: a vision-model-based filtering mechanism that identifies high-quality time series data for pre-training, a colorized multivariate conversion method that transforms multivariate time series into multi-subfigure RGB images (see the sketch below), and a multi-quantile forecasting approach that uses parallel reconstruction heads to generate forecasts at different quantile levels.
- Therefore, VisionTS++ more effectively supports multivariate and probabilistic time series forecasting.
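To make the colorized multivariate conversion concrete, here is a minimal, self-contained NumPy sketch: each variate is folded by its period into a grayscale patch, and the patches are tiled as sub-figures of one 3-channel image. The function name, grid layout, and the plain grayscale-to-RGB broadcast are illustrative assumptions; the actual conversion in VisionTS++ (resizing to the MAE input resolution, color assignment, masking of the forecast horizon) is described in the paper and implemented in the released code.

```python
import numpy as np

def multivariate_to_image(series: np.ndarray, period: int, n_rows: int = 2) -> np.ndarray:
    """Conceptual sketch only: fold each variate by its period into a grayscale
    patch and tile the patches into a grid of sub-figures forming one image.
    The real VisionTS++ conversion is more involved; see the paper and code."""
    n_vars, length = series.shape
    n_segments = length // period
    patches = []
    for var in series:
        x = var[: n_segments * period]
        x = (x - x.mean()) / (x.std() + 1e-8)          # per-variate normalization
        patch = x.reshape(n_segments, period).T        # fold by period -> (period, n_segments)
        patch = (patch - patch.min()) / (patch.max() - patch.min() + 1e-8)
        patches.append(patch)

    # Tile the per-variate patches into an (n_rows x n_cols) grid of sub-figures.
    n_cols = int(np.ceil(n_vars / n_rows))
    h, w = patches[0].shape
    canvas = np.zeros((n_rows * h, n_cols * w))
    for i, patch in enumerate(patches):
        r, c = divmod(i, n_cols)
        canvas[r * h:(r + 1) * h, c * w:(c + 1) * w] = patch

    # Broadcast to 3 channels as a stand-in for the colorization step.
    return np.stack([canvas] * 3, axis=-1)

# Example: 3 variates, 168 time steps, daily period of 24.
image = multivariate_to_image(np.random.randn(3, 168), period=24)
print(image.shape)  # (48, 14, 3): a 2x2 grid of 24x7 sub-figures (one cell left empty)
```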
 
The VisionTS++ model is also uploaded to our package on PyPI. Please run the following command to install VisionTS++ and VisionTS:

```bash
pip install visionts
```

If you want to develop the inference code, you can also build from source:

```bash
git clone https://github.com/Keytoyze/VisionTS.git
cd VisionTS
pip install -e .
```

Then, you can refer to [demo.ipynb] for forecasting time series using VisionTS++, with clear visualizations of image reconstruction.
In this demo, we show VisionTS++'s capability of effectively handling multivariate and probabilistic time series forecasting.
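For a quick picture of what a forecasting call looks like, here is a hypothetical sketch. The class name, constructor arguments, and method names (VisionTS, ckpt_dir, update_config, forward) are assumptions about the package interface and may differ from the actual API; demo.ipynb is the authoritative reference.

```python
import torch
from visionts import VisionTS  # assumed import path; verify against demo.ipynb

# Hypothetical usage sketch -- names and signatures below are assumptions.
context_len, pred_len, periodicity = 576, 96, 24        # illustrative settings
model = VisionTS(arch="mae_base", ckpt_dir="./ckpt")    # assumed constructor arguments
model.update_config(context_len=context_len,
                    pred_len=pred_len,
                    periodicity=periodicity)            # assumed configuration call

x = torch.randn(1, context_len, 7)    # (batch, context length, number of variates)
y_pred = model.forward(x)             # expected: (batch, pred_len, number of variates)
print(y_pred.shape)
```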
- Clone repository:

```bash
git clone https://github.com/HALF111/VisionTSpp.git
cd VisionTSpp
```

- Create virtual environment:

```bash
virtualenv venv --python=python3.10
. venv/bin/activate
```

- Build from source:

```bash
pip install -e '.[notebook]'
```

- Create a .env file:

```bash
touch .env
```

We also provide scripts for continual pre-training on the Large-scale Open Time Series Archive (LOTSA data) based on the Masked AutoEncoder base (MAE-base) visual backbone. If you want to perform continual pre-training yourself, please follow the instructions below.
- You should start by preparing the data for pre-training: download the Large-scale Open Time Series Archive (LOTSA data).
Assuming you've already created a .env file, run the following commands:

```bash
huggingface-cli download Salesforce/lotsa_data --repo-type=dataset --local-dir PATH_TO_SAVE
echo "LOTSA_V1_PATH=PATH_TO_SAVE" >> .env
```

- Afterwards, you should download the MAE-base model from the following link: MAE-base. You can also choose to download MAE-large or MAE-huge.
 
You should also write the path where you saved the MAE models into the .env file, for example:

```bash
echo "VISIONTS_CHECKPOINT_PATH=./project/benchmarks/ckpt" >> .env
```

- Finally, you can simply run the following script to start the continual pre-training (the same as in run.sh):
 
```bash
# base model
python -m cli.train -cp conf/pretrain run_name=VisionTSpp_base model=visionts data=lotsa_v1_weighted
```

You can also try continual pre-training on MAE-large or MAE-huge:

```bash
# large model
python -m cli.train -cp conf/pretrain run_name=VisionTSpp_large model=visionts_large data=lotsa_v1_weighted
# huge model
python -m cli.train -cp conf/pretrain run_name=VisionTSpp_huge model=visionts_huge data=lotsa_v1_weighted
```

Additionally, if you want to modify some configurations during training, you can refer to the settings in default.yaml, model/visionts.yaml, and data/lotsa_v1_weighted_image.yaml; an illustrative override example is given below.
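Since the training entry point already accepts key=value settings on the command line (as the run_name, model, and data arguments above show), configuration values can also be overridden without editing the YAML files. The extra option names below (trainer.max_epochs, train_dataloader.batch_size) are illustrative assumptions only; check default.yaml and the model/data configs for the keys that actually exist.

```bash
# Sketch of additional command-line overrides. The keys trainer.max_epochs and
# train_dataloader.batch_size are assumptions for illustration; consult
# conf/pretrain/default.yaml for the real configuration keys.
python -m cli.train -cp conf/pretrain run_name=VisionTSpp_base \
  model=visionts data=lotsa_v1_weighted \
  trainer.max_epochs=100 train_dataloader.batch_size=256
```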
If you're using VisionTS++ or VisionTS in your research or applications, please cite them using this BibTeX:
```bibtex
@misc{chen2024visionts,
      title={VisionTS: Visual Masked Autoencoders Are Free-Lunch Zero-Shot Time Series Forecasters},
      author={Mouxiang Chen and Lefei Shen and Zhuo Li and Xiaoyun Joy Wang and Jianling Sun and Chenghao Liu},
      year={2024},
      eprint={2408.17253},
      archivePrefix={arXiv},
      url={https://arxiv.org/abs/2408.17253},
}

@misc{shen2025visiontspp,
      title={VisionTS++: Cross-Modal Time Series Foundation Model with Continual Pre-trained Visual Backbones},
      author={Lefei Shen and Mouxiang Chen and Xu Liu and Han Fu and Xiaoxue Ren and Jianling Sun and Zhuo Li and Chenghao Liu},
      year={2025},
      eprint={2508.04379},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2508.04379},
}
```

We deeply appreciate the following GitHub repos for their valuable code base or datasets:



