Batched Training Comparison of Quantum Sequence Models for Time Series Forecasting
Analysis
This paper provides a systems-oriented comparison of two quantum sequence models, QLSTM and QFWP, for time series forecasting, focusing on how batch size affects accuracy and training runtime. Its value lies in a practical benchmarking pipeline and in the insights it offers on the speed-accuracy trade-off and scalability of these models. The equal-parameter-count (EPC) setup and adjoint differentiation make the comparison fair, and the component-wise runtime breakdown is crucial for locating performance bottlenecks. The main contribution is practical guidance on batch size selection and a clear characterization of the Pareto frontier between speed and accuracy.
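The component-wise runtime analysis described above can be sketched as a simple timing harness. This is a minimal illustration, not the paper's actual pipeline: the `forward` and `backward` functions here are hypothetical placeholders standing in for the quantum circuit evaluation and adjoint-differentiation gradient pass, and the batch sizes are assumed for demonstration.

```python
import time

# Hypothetical stand-ins for a quantum model's forward pass and its
# adjoint-differentiation backward pass; here they just do trivial
# per-element arithmetic so the harness is runnable.
def forward(batch):
    return [x * 2.0 for x in batch]

def backward(batch):
    return [x * 0.5 for x in batch]

def component_runtimes(batch_sizes, repeats=5):
    """Time the forward and backward passes separately per batch size."""
    results = {}
    for bs in batch_sizes:
        batch = [float(i) for i in range(bs)]

        t0 = time.perf_counter()
        for _ in range(repeats):
            forward(batch)
        t_fwd = (time.perf_counter() - t0) / repeats

        t0 = time.perf_counter()
        for _ in range(repeats):
            backward(batch)
        t_bwd = (time.perf_counter() - t0) / repeats

        # Samples processed per second across one forward+backward step.
        results[bs] = {
            "forward_s": t_fwd,
            "backward_s": t_bwd,
            "throughput": bs / (t_fwd + t_bwd),
        }
    return results

runtimes = component_runtimes([8, 16, 32, 64])
```

Separating the two passes is what exposes the asymmetry the paper reports: the forward pass batches efficiently, while the backward pass scales only modestly and so dominates the step time at large batch sizes.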
Key Takeaways
- Batched forward pass scales well, but backward pass scaling is modest, limiting overall training speedup.
- QFWP generally outperforms QLSTM in accuracy (RMSE and directional accuracy).
- QLSTM achieves the highest throughput at larger batch sizes, demonstrating a speed-accuracy trade-off.
- The paper provides a practical benchmarking pipeline and guidance on batch size selection for these quantum models.
“QFWP achieves lower RMSE and higher directional accuracy at all batch sizes, while QLSTM reaches the highest throughput at batch size 64, revealing a clear speed-accuracy Pareto frontier.”
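The speed-accuracy Pareto frontier mentioned in the quote can be made concrete with a small dominance check: a configuration is on the frontier if no other configuration is at least as fast (throughput) and at least as accurate (RMSE), with one of the two strictly better. The numbers below are invented for illustration only, not results from the paper.

```python
def pareto_frontier(configs):
    """Return configs not dominated on (throughput: higher is better,
    rmse: lower is better)."""
    frontier = []
    for c in configs:
        dominated = any(
            o["throughput"] >= c["throughput"]
            and o["rmse"] <= c["rmse"]
            and (o["throughput"] > c["throughput"] or o["rmse"] < c["rmse"])
            for o in configs
        )
        if not dominated:
            frontier.append(c)
    return frontier

# Hypothetical measurements, for illustration only (not from the paper).
runs = [
    {"model": "QFWP",  "batch": 16, "rmse": 0.021, "throughput": 120.0},
    {"model": "QFWP",  "batch": 64, "rmse": 0.023, "throughput": 300.0},
    {"model": "QLSTM", "batch": 64, "rmse": 0.031, "throughput": 450.0},
    {"model": "QLSTM", "batch": 16, "rmse": 0.033, "throughput": 200.0},
]
front = pareto_frontier(runs)
```

In this toy data, QLSTM at batch 16 is dominated (QFWP at batch 64 is both faster and more accurate), while the other three configurations form the frontier, mirroring the pattern the paper describes: QFWP anchors the accuracy end and QLSTM at batch 64 anchors the throughput end.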