Scaling Video Prediction with Spatio-Temporal Patches

Authors

  • L. R. Kulyk, Vinnytsia National Technical University
  • O. B. Mokin, Vinnytsia National Technical University

Keywords:

machine learning, neural networks, natural language processing (NLP), transformers, computer vision (CV), convolutional neural networks (CNN), variational autoencoder (VAE), synthetic data, optimization, artificial neural networks

Abstract

The article presents a new architecture for video data processing, the Vision Byte Latent Transformer (V-BLT), which adapts the principles of successful byte-level language models to the visual modality. Unlike standard approaches based on fixed-size patching, which are computationally inefficient because they allocate resources uniformly regardless of visual content complexity, V-BLT operates directly on the video byte stream. This avoids the information loss associated with prior tokenization and increases processing flexibility. The key contributions include the concept of spatiotemporal latent patches, the implementation of N-dimensional Rotary Positional Embeddings to preserve data coherence in the flattened byte stream, and a multi-level transformer architecture for hierarchical processing. To validate the hypothesis and test the model, a new synthetic dataset of rotating 2D and 3D shapes was developed for a controlled evaluation of the model’s spatiotemporal reasoning capabilities. It is experimentally demonstrated that V-BLT effectively predicts future frames, achieving high MSE, SSIM, and PSNR scores compared to the ViViT and UNet3D baselines while offering better computational efficiency. By design, the architecture generates per-pixel entropy maps that visualize prediction uncertainty and correlate with the dynamically complex regions of the scene. This paves the way for dynamic, content-dependent, on-the-fly allocation of computational resources and represents a promising direction for creating more efficient and scalable foundation models for video analytics.
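
As an illustration of the positional-encoding idea described in the abstract, the following sketch applies axial (N-dimensional) rotary embeddings to a video volume flattened into a single token sequence. The function names, the channel-splitting scheme, and the half-rotation convention are assumptions made for this example, not the exact V-BLT implementation, which is available in the project repository.

    import torch

    def rope_1d(x, pos, base=10000.0):
        # Standard rotary embedding along a single axis.
        # x: (seq, d) features with d even; pos: (seq,) integer coordinates.
        half = x.shape[-1] // 2
        freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)
        angles = pos[:, None].float() * freqs[None, :]
        cos, sin = angles.cos(), angles.sin()
        x1, x2 = x[..., :half], x[..., half:]
        return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

    def rope_3d(x, t, h, w):
        # Axial 3D RoPE: split the channels into three groups and rotate each
        # group by its own (time, height, width) coordinate.
        d_axis = x.shape[-1] // 3  # assumes the model dimension is divisible by 6
        parts = torch.split(x, [d_axis, d_axis, x.shape[-1] - 2 * d_axis], dim=-1)
        return torch.cat([rope_1d(p, c) for p, c in zip(parts, (t, h, w))], dim=-1)

    # Flatten a (T, H, W) grid of byte/patch embeddings into one sequence,
    # keeping each token's original coordinates for the rotation.
    T, H, W, d = 4, 8, 8, 96
    tokens = torch.randn(T * H * W, d)
    grid = torch.stack(torch.meshgrid(
        torch.arange(T), torch.arange(H), torch.arange(W), indexing="ij"), dim=-1)
    coords = grid.reshape(-1, 3)
    rotated = rope_3d(tokens, coords[:, 0], coords[:, 1], coords[:, 2])

With this construction, two tokens that are neighbors in time or space keep a fixed relative rotation even when they end up far apart in the flattened sequence, which is what the abstract refers to as preserving data coherence.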

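The per-pixel entropy maps mentioned in the abstract can likewise be sketched as the Shannon entropy of the model’s next-byte distributions. The helper below is a hypothetical illustration that assumes one predicted byte per grayscale pixel; the actual model operates on RGB byte streams with its own byte-to-pixel layout.

    import torch
    import torch.nn.functional as F

    def entropy_map(logits, frame_shape):
        # logits: (num_bytes, 256) raw scores for each predicted byte value;
        # frame_shape: (H, W), assuming one byte per grayscale pixel.
        log_p = F.log_softmax(logits, dim=-1)
        ent = -(log_p.exp() * log_p).sum(dim=-1)  # Shannon entropy in nats
        return ent.reshape(frame_shape)

    # Example with random logits; in practice high-entropy regions mark
    # hard-to-predict, dynamically complex areas of the scene.
    fake_logits = torch.randn(64 * 64, 256)
    uncertainty = entropy_map(fake_logits, (64, 64))
    print(uncertainty.shape, float(uncertainty.mean()))

Such maps make prediction uncertainty visible frame by frame and are the basis for the content-dependent allocation of computational resources discussed above.
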
Author Biographies

L. R. Kulyk, Vinnytsia National Technical University

Post-Graduate Student of the Chair of System Analysis and Information Technologies

O. B. Mokin, Vinnytsia National Technical University

Dr. Sc. (Eng.), Professor, Professor of the Chair of System Analysis and Information Technologies

References

A. Arnab, et al., “ViViT: A Video Vision Transformer,” in ArXiv e-prints, 2021. [Online]. Available: https://arxiv.org/abs/2103.15691. Accessed: September 26, 2025.

A. Dosovitskiy, et al., “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale,” in ArXiv e-prints, 2020. [Online]. Available: https://arxiv.org/abs/2010.11929. Accessed: September 26, 2025.

Z. Liu, et al., “Video Swin Transformer,” in ArXiv e-prints, 2022. [Online]. Available: https://arxiv.org/abs/2106.13230. Accessed: September 26, 2025.

A. Pagnoni, et al., “Byte Latent Transformer: Patches Scale Better than Tokens,” in ArXiv e-prints, 2024. [Online]. Available: https://arxiv.org/abs/2412.09871. Accessed: September 26, 2025.

L. Xue, et al., “ByT5: Towards a Token-Free Future with Pre-trained Byte-to-Byte Models,” in ArXiv e-prints, 2021. [Online]. Available: https://arxiv.org/abs/2105.13626. Accessed: September 26, 2025.

L. R. Kulyk and O. B. Mokin, “Creation of a synthetic dataset for evaluating neural network model architectures,” in Proceedings of the LIV Scientific and Technical Conference of VNTU Departments, Vinnytsia, Ukraine, March 24-27, 2025. (in Ukrainian).

G. Aleksandrowicz, and G. Barequet, “Counting polycubes without the dimensionality curse,” Discrete Mathematics, vol. 309, no. 13, pp. 4576-4583, 2009. https://doi.org/10.1016/j.disc.2009.02.023. Accessed: September 26, 2025.

D. Tran, et al., “A Closer Look at Spatiotemporal Convolutions for Action Recognition,” in ArXiv e-prints, 2018. [Online]. Available: https://arxiv.org/abs/1711.11248. Accessed: September 26, 2025.

W. Yan, et al., “VideoGPT: Video Generation using VQ-VAE and Transformers,” in ArXiv e-prints, 2021. [Online]. Available: https://arxiv.org/abs/2104.10157. Accessed: September 26, 2025.

J. Ho, et al., “Video Diffusion Models,” in ArXiv e-prints, 2022. [Online]. Available: https://arxiv.org/abs/2204.03458. Accessed: September 26, 2025.

A. Blattmann, et al., “Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models,” in ArXiv e-prints, 2023. [Online]. Available: https://arxiv.org/abs/2304.08818. Accessed: September 26, 2025.

J. Su, et al., “RoFormer: Enhanced Transformer with Rotary Position Embedding,” in ArXiv e-prints, 2021. [Online]. Available: https://arxiv.org/abs/2104.09864. Accessed: September 26, 2025.

A. F. Bobick, and J. W. Davis, “The recognition of human movement using temporal templates,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 3, pp. 257-267, 2001.

Python Software Foundation, Python Language Reference, version 3.12. [Online]. Available: https://www.python.org. Accessed: September 26, 2025.

C. Sullivan, and B. E. A. Larson, PyVista: 3D plotting and mesh analysis through a streamlined interface for the Visualization Toolkit (VTK). [Online]. Available: https://pyvista.org. Accessed: September 26, 2025.

Simple Shape Dataset Toolbox, GitHub repository. [Online]. Available: https://github.com/leo27heady/simple-shape-dataset-toolbox. Accessed: September 26, 2025.

A. Vaswani, et al., “Attention Is All You Need,” in ArXiv e-prints, 2017. [Online]. Available: https://arxiv.org/abs/1706.03762. Accessed: September 26, 2025.

I. Loshchilov, and F. Hutter, “Decoupled Weight Decay Regularization,” in ArXiv e-prints, 2017. [Online]. Available: https://arxiv.org/abs/1711.05101. Accessed: September 26, 2025.

I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.

A. Paszke, et al., “PyTorch: An Imperative Style, High-Performance Deep Learning Library,” in Advances in Neural Information Processing Systems 32, 2019, pp. 8024-8035.

Vision Byte Latent Transformer, GitHub repository. [Online]. Available: https://github.com/leo27heady/visionBLT. Accessed: September 26, 2025.

W. Kay, et al., “The Kinetics Human Action Video Dataset,” in ArXiv e-prints, 2017. [Online]. Available: https://arxiv.org/abs/1705.06950. Accessed: September 26, 2025.

K. Soomro, A. R. Zamir, and M. Shah, “UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild,” in ArXiv e-prints, 2012. [Online]. Available: https://arxiv.org/abs/1212.0402. Accessed: September 26, 2025.

C. Tan, et al., “OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning,” in ArXiv e-prints, 2023. [Online]. Available: https://arxiv.org/abs/2306.11249. Accessed: September 26, 2025.

Rope-Nd, GitHub repository. [Online]. Available: https://github.com/limefax/rope-nd. Accessed: September 26, 2025.

Ö. Çiçek, et al., “3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation,” in ArXiv e-prints, 2016. [Online]. Available: https://arxiv.org/abs/1606.06650. Accessed: September 26, 2025.

M. Havrylovych, and V. Danylov, “Research on hybrid transformer-based autoencoders for user biometric verification,” System Research and Information Technologies, no. 3, pp. 42-53, 2023. [Online]. Available: https://doi.org/10.20535/SRIT.2308-8893.2023.3.03. Accessed: September 26, 2025.

V. Lytvyn, et al., “Detection of Similarity Between Images Based on Contrastive Language-Image Pre-Training Neural Network,” in Machine Learning Workshop at CoLInS, 2024. [Online]. Available: https://doi.org/10.31110/COLINS/2024-1/008. Accessed: September 26, 2025.

Published

2025-12-11

How to Cite

[1] L. R. Kulyk and O. B. Mokin, “Scaling Video Prediction with Spatio-Temporal Patches”, Вісник ВПІ, no. 5, pp. 129–139, Dec. 2025.

Section

Information technologies and computer sciences
