
Analysis

This paper investigates the compositionality of Vision Transformers (ViTs) by using Discrete Wavelet Transforms (DWTs) to create input-dependent primitives. It adapts a framework originally developed for language tasks to analyze how ViT encoders structure information. The DWT-based primitives offer a novel lens on ViT representations, and the analysis suggests that ViTs may exhibit compositional behavior in their latent space.
Reference

Primitives from a one-level DWT decomposition produce encoder representations that approximately compose in latent space.
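The claim above can be illustrated with a short sketch. The paper's actual encoder and composition operator are not specified here, so the snippet below stands in a linear projection for the ViT encoder (`toy_encoder`) and uses simple additive composition; both are assumptions for illustration. It shows the mechanics only: a one-level DWT splits an image into four subband primitives, each is mapped back to image space, and the encodings of the primitives are summed and compared against the encoding of the full image.

```python
# Minimal sketch: one-level DWT "primitives" and an approximate-composition check.
# `toy_encoder` is a placeholder for a ViT encoder; the composition operator (sum)
# is an assumption, not the paper's definition.
import numpy as np
import pywt

def dwt_primitives(image, wavelet="db2"):
    """Split an image into four one-level DWT primitives (LL, LH, HL, HH),
    each mapped back to image space by zeroing the other subbands."""
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    bands = [cA, cH, cV, cD]
    primitives = []
    for i in range(4):
        parts = [np.zeros_like(b) for b in bands]
        parts[i] = bands[i]
        recon = pywt.idwt2((parts[0], (parts[1], parts[2], parts[3])), wavelet)
        primitives.append(recon[: image.shape[0], : image.shape[1]])
    return primitives

def toy_encoder(image, proj):
    """Placeholder for a ViT encoder: a fixed linear projection of the flattened image."""
    return proj @ image.ravel()

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))
proj = rng.standard_normal((16, 32 * 32))

primitives = dwt_primitives(image)
z_full = toy_encoder(image, proj)
z_composed = sum(toy_encoder(p, proj) for p in primitives)  # additive composition

# For a linear encoder the DWT is exactly invertible, so composition is near-exact;
# a real ViT encoder would only compose approximately, which is the paper's point.
print(np.linalg.norm(z_full - z_composed) / np.linalg.norm(z_full))
```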

Analysis

This article introduces DB2-TransF, a time series forecasting approach built on learnable Daubechies wavelets, which are used for feature extraction and representation learning. The paper presumably reports experiments comparing DB2-TransF against existing forecasting methods. The wavelet basis points to a design that captures both temporal and frequency-domain structure in the series.
Reference

The article likely discusses the advantages of learnable Daubechies wavelets, such as their ability to adapt to the characteristics of the specific time series and to capture both local and global patterns efficiently.
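To make the idea of "learnable Daubechies wavelets" concrete, the sketch below implements a one-level decomposition layer whose filter taps are initialized from the db2 wavelet (via PyWavelets) and then left trainable. The class name, padding scheme, and downstream use are assumptions for illustration; the actual DB2-TransF architecture is not reproduced here.

```python
# Hedged sketch: a learnable wavelet decomposition layer in the spirit of DB2-TransF.
# Filters start as db2 taps and become trainable parameters; everything else
# (module name, padding, how the outputs feed a forecaster) is assumed.
import pywt
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableDB2Decomposition(nn.Module):
    """One-level wavelet decomposition with trainable db2-initialized filters.

    Splits a univariate series into an approximation (low-pass) and a detail
    (high-pass) branch, each downsampled by 2, which can feed a forecaster.
    """
    def __init__(self):
        super().__init__()
        w = pywt.Wavelet("db2")
        # conv1d is cross-correlation, so flip the taps to apply true convolution
        lo = torch.tensor(w.dec_lo, dtype=torch.float32).flip([0])
        hi = torch.tensor(w.dec_hi, dtype=torch.float32).flip([0])
        # shape (out_channels=1, in_channels=1, kernel_size=4), learnable
        self.lo = nn.Parameter(lo.view(1, 1, -1))
        self.hi = nn.Parameter(hi.view(1, 1, -1))

    def forward(self, x):                          # x: (batch, 1, length)
        pad = self.lo.shape[-1] - 1
        x = F.pad(x, (pad, 0), mode="replicate")   # causal-style edge padding
        approx = F.conv1d(x, self.lo, stride=2)    # low-frequency trend
        detail = F.conv1d(x, self.hi, stride=2)    # high-frequency fluctuations
        return approx, detail

# Example: decompose a batch of length-96 series into two length-48 feature streams.
series = torch.randn(8, 1, 96)
approx, detail = LearnableDB2Decomposition()(series)
print(approx.shape, detail.shape)  # torch.Size([8, 1, 48]) torch.Size([8, 1, 48])
```

Because the taps are ordinary parameters, gradient descent can adapt them to the data at hand, which is the usual motivation for making a fixed wavelet basis learnable.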