Physics#Higgs Physics, 2HDM · 🔬 Research · Analyzed: Jan 3, 2026 08:37

Correlating Resonant Di-Higgs and Tri-Higgs Production in 2HDM

Published: Dec 31, 2025 13:56
1 min read
ArXiv

Analysis

This paper investigates the Two-Higgs-Doublet Model (2HDM) and explores correlations between different Higgs boson production processes. The key finding is a relationship between the branching ratios of H decaying to hh and VV, and the potential for measuring tri-Higgs production at the High-Luminosity LHC. This is significant because it provides a way to test the 2HDM and potentially discover new heavy scalars.

Reference

For heavy scalar masses between 500 GeV and 1 TeV, we find that Br($H\to hh$)/Br($H\to ZZ$) $\approx 9.5$.

Probing Dark Jets from Higgs Decays at LHC

Published: Dec 31, 2025 12:00
1 min read
ArXiv

Analysis

This paper explores a novel search strategy for dark matter, focusing on a specific model where the Higgs boson decays into dark sector particles that subsequently produce gluon-rich jets. The focus on long-lived dark mesons decaying into gluons and the consideration of both cascade decays and dark showers are key aspects. The paper highlights the importance of trigger selection for detection and provides constraints on the branching ratios at the high-luminosity LHC.
Reference

The paper finds that appropriate trigger selection constitutes a crucial factor for detecting these signal signatures in both tracker system and CMS muon system. At the high-luminosity LHC, the exotic Higgs branching ratio to cascade decays (dark showers) can be constrained below $\mathcal{O}(10^{-5}-10^{-1})$ [$\mathcal{O}(10^{-5}-10^{-2})$] for dark meson proper lifetimes $c\tau$ ranging from $1$ mm to $100$ m.
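
The $c\tau$-dependent reach quoted above is driven by detector geometry: a particle with lab-frame decay length $d = \beta\gamma \, c\tau$ decays inside a radial shell $[r_{\rm in}, r_{\rm out}]$ with probability $e^{-r_{\rm in}/d} - e^{-r_{\rm out}/d}$. A minimal sketch of that standard formula (the boost and radii below are illustrative numbers, not CMS specifications):

```python
import math

def decay_prob_in_region(ctau_m, beta_gamma, r_in_m, r_out_m):
    """Probability that a long-lived particle decays between radii r_in and
    r_out, given proper decay length c*tau and boost beta*gamma.
    Lab-frame mean decay length: d = beta*gamma * c*tau."""
    d = beta_gamma * ctau_m
    return math.exp(-r_in_m / d) - math.exp(-r_out_m / d)

# A dark meson with c*tau = 1 m and boost beta*gamma = 3, decaying inside
# a tracker-like region between 0.05 m and 1.1 m (illustrative geometry):
p = decay_prob_in_region(ctau_m=1.0, beta_gamma=3.0, r_in_m=0.05, r_out_m=1.1)
```

Scanning `ctau_m` from 1 mm to 100 m reproduces the familiar shape of such limits: acceptance peaks when $d$ is comparable to the detector radius and falls off on both sides.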

Model-Independent Search for Gravitational Wave Echoes

Published: Dec 31, 2025 08:49
1 min read
ArXiv

Analysis

This paper presents a novel approach to search for gravitational wave echoes, which could reveal information about the near-horizon structure of black holes. The model-independent nature of the search is crucial because theoretical predictions for these echoes are uncertain. The authors develop a method that leverages a generalized phase-marginalized likelihood and optimized noise suppression techniques. They apply this method to data from the LIGO-Virgo-KAGRA (LVK) collaboration, specifically focusing on events with high signal-to-noise ratios. The lack of detection allows them to set upper limits on the strength of potential echoes, providing valuable constraints on theoretical models.
Reference

No statistically significant evidence for postmerger echoes is found.

Analysis

This paper introduces RGTN, a novel framework for Tensor Network Structure Search (TN-SS) inspired by physics, specifically the Renormalization Group (RG). It addresses limitations in existing TN-SS methods by employing multi-scale optimization, continuous structure evolution, and efficient structure-parameter optimization. The core innovation lies in learnable edge gates and intelligent proposals based on physical quantities, leading to improved compression ratios and significant speedups compared to existing methods. The physics-inspired approach offers a promising direction for tackling the challenges of high-dimensional data representation.
Reference

RGTN achieves state-of-the-art compression ratios and runs 4-600$\times$ faster than existing methods.
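
The compression-ratio figure of merit is simple parameter counting: dense-tensor entries divided by the total size of the tensor-network cores. A hedged illustration assuming a tensor-train layout (one point in the structure space that TN-SS methods search over; the shapes and ranks below are made up):

```python
import numpy as np

def tt_compression_ratio(shape, ranks):
    """Compression ratio of a tensor-train versus the dense tensor.

    shape: mode sizes (n_1, ..., n_d)
    ranks: internal TT ranks (r_1, ..., r_{d-1}); boundary ranks are 1.
    """
    full_ranks = [1, *ranks, 1]
    dense = np.prod(shape)                        # entries in the dense tensor
    cores = sum(full_ranks[k] * shape[k] * full_ranks[k + 1]
                for k in range(len(shape)))       # sum of core sizes r_k * n_k * r_{k+1}
    return dense / cores

# A 10x10x10x10 tensor with TT ranks (4, 4, 4): 10,000 dense entries vs.
# 400 core parameters.
ratio = tt_compression_ratio((10, 10, 10, 10), (4, 4, 4))
```

Structure search then amounts to choosing the topology and ranks that maximize this ratio subject to a reconstruction-error budget.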

Analysis

This paper addresses the limitations of traditional methods (like proportional odds models) for analyzing ordinal outcomes in randomized controlled trials (RCTs). It proposes more transparent and interpretable summary measures (weighted geometric mean odds ratios, relative risks, and weighted mean risk differences) and develops efficient Bayesian estimators to calculate them. The use of Bayesian methods allows for covariate adjustment and marginalization, improving the accuracy and robustness of the analysis, especially when the proportional odds assumption is violated. The paper's focus on transparency and interpretability is crucial for clinical trials where understanding the impact of treatments is paramount.
Reference

The paper proposes 'weighted geometric mean' odds ratios and relative risks, and 'weighted mean' risk differences as transparent summary measures for ordinal outcomes.
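
As an illustration of the first proposed summary measure, a weighted geometric mean odds ratio is the exponential of a weighted average of per-threshold log odds ratios. A sketch assuming cumulative odds ratios at each ordinal threshold and user-chosen weights (the paper's specific weighting scheme and Bayesian estimators are not reproduced here):

```python
import numpy as np

def weighted_geometric_mean_or(odds_ratios, weights):
    """Weighted geometric mean of per-threshold odds ratios:
    exp( sum_k w_k * log(OR_k) / sum_k w_k )."""
    odds_ratios = np.asarray(odds_ratios, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.exp(np.sum(weights * np.log(odds_ratios)) / np.sum(weights)))

# Under proportional odds all per-threshold ORs coincide, and the summary
# collapses to that common value regardless of the weights:
wg = weighted_geometric_mean_or([2.0, 2.0, 2.0], [0.2, 0.5, 0.3])
```

When the proportional odds assumption fails, the per-threshold ORs differ and the weighted geometric mean remains a transparent, well-defined summary, which is the point the paper emphasizes.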

Temperature Fluctuations in Hot QCD Matter

Published: Dec 30, 2025 01:32
1 min read
ArXiv

Analysis

This paper investigates temperature fluctuations in hot QCD matter using a specific model (PNJL). The key finding is that high-order cumulant ratios show non-monotonic behavior across the chiral phase transition, with distinct structures potentially linked to the deconfinement phase transition. The results are relevant for heavy-ion collision experiments.
Reference

The high-order cumulant ratios $R_{n2}$ ($n>2$) exhibit non-monotonic variations across the chiral phase transition... These structures gradually weaken and eventually vanish at high chemical potential as they compete with the sharpening of the chiral phase transition.
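
As a numerical illustration, taking $R_{n2} = c_n/c_2$ (an assumption about the paper's notation), the ratios can be estimated from fluctuation samples via central moments, using $c_2 = m_2$, $c_3 = m_3$, $c_4 = m_4 - 3m_2^2$:

```python
import numpy as np

def cumulant_ratios(samples):
    """Sample cumulants c2..c4 from central moments (c4 = m4 - 3*m2^2)
    and the ratios R_32 = c3/c2, R_42 = c4/c2."""
    x = np.asarray(samples, dtype=float)
    d = x - x.mean()
    m2, m3, m4 = (np.mean(d**k) for k in (2, 3, 4))
    c2, c3, c4 = m2, m3, m4 - 3.0 * m2**2
    return c3 / c2, c4 / c2

# For purely Gaussian fluctuations all cumulants beyond c2 vanish, so both
# ratios are consistent with zero; nonzero, non-monotonic R_n2 across a
# temperature/chemical-potential scan is the transition signal discussed above.
rng = np.random.default_rng(0)
r32, r42 = cumulant_ratios(rng.normal(0.0, 1.0, 1_000_000))
```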

Analysis

This paper investigates the impact of the momentum flux ratio (J) on the breakup mechanism, shock structures, and unsteady interactions of elliptical liquid jets in a supersonic cross-flow. The study builds upon previous research by examining how varying J affects atomization across different orifice aspect ratios (AR). The findings are crucial for understanding and potentially optimizing fuel injection processes in supersonic combustion applications.
Reference

The study finds that lower J values lead to greater unsteadiness and larger Rayleigh-Taylor waves, while higher J values result in decreased unsteadiness and smaller, more regular Rayleigh-Taylor waves.

Analysis

This paper addresses the challenge of training efficient remote sensing diffusion models by proposing a training-free data pruning method called RS-Prune. The method aims to reduce data redundancy, noise, and class imbalance in large remote sensing datasets, which can hinder training efficiency and convergence. The paper's significance lies in its novel two-stage approach that considers both local information content and global scene-level diversity, enabling high pruning ratios while preserving data quality and improving downstream task performance. The training-free nature of the method is a key advantage, allowing for faster model development and deployment.
Reference

The method significantly improves convergence and generation quality even after pruning 85% of the training data, and achieves state-of-the-art performance across downstream tasks.

Analysis

This paper addresses the fairness issue in graph federated learning (GFL) caused by imbalanced overlapping subgraphs across clients. It's significant because it identifies a potential source of bias in GFL, a privacy-preserving technique, and proposes a solution (FairGFL) to mitigate it. The focus on fairness within a privacy-preserving context is a valuable contribution, especially as federated learning becomes more widespread.
Reference

FairGFL incorporates an interpretable weighted aggregation approach to enhance fairness across clients, leveraging privacy-preserving estimation of their overlapping ratios.
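
The summary does not spell out FairGFL's aggregation rule; as a hedged sketch of what overlap-weighted aggregation could look like, the snippet below down-weights clients whose subgraphs overlap heavily with others. The rule $w_i \propto 1/(1+o_i)$ is purely illustrative, not the paper's method:

```python
import numpy as np

def overlap_weighted_aggregate(client_params, overlap_ratios):
    """Aggregate client parameter vectors, down-weighting clients with
    large estimated subgraph overlap (illustrative rule: w_i ~ 1/(1+o_i))."""
    params = np.stack(client_params)              # (n_clients, n_params)
    o = np.asarray(overlap_ratios, dtype=float)
    w = 1.0 / (1.0 + o)
    w /= w.sum()                                  # normalize to a convex combination
    return w @ params                             # weighted average of parameters

# Three clients; the third shares most of its subgraph with the others,
# so its update counts for less:
agg = overlap_weighted_aggregate(
    [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])],
    overlap_ratios=[0.1, 0.1, 0.8],
)
```

In the actual system the overlap ratios would come from the privacy-preserving estimation step the quote mentions, not be supplied in the clear.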

Analysis

This paper addresses a critical issue in machine learning, particularly in astronomical applications, where models often underestimate extreme values due to noisy input data. The introduction of LatentNN provides a practical solution by incorporating latent variables to correct for attenuation bias, leading to more accurate predictions in low signal-to-noise scenarios. The availability of code is a significant advantage.
Reference

LatentNN reduces attenuation bias across a range of signal-to-noise ratios where standard neural networks show large bias.
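
LatentNN itself is not reproduced here, but the classical attenuation (regression-dilution) bias it corrects is easy to demonstrate: when the inputs carry noise, an ordinary least-squares slope shrinks by the factor $\mathrm{var}(x)/(\mathrm{var}(x)+\mathrm{var}(\epsilon))$, so extreme values are systematically pulled toward the mean:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = 2.0
x_true = rng.normal(0.0, 1.0, n)          # latent, noise-free feature
y = beta * x_true                          # noiseless target, for clarity
x_obs = x_true + rng.normal(0.0, 1.0, n)  # observed feature at SNR = 1

# OLS slope of y on the *noisy* x: shrinks toward zero by
# var(x)/(var(x) + var(noise)) = 1/(1+1) = 0.5, giving ~1.0 instead of 2.0.
slope = np.cov(x_obs, y)[0, 1] / np.var(x_obs)
```

A latent-variable model that treats `x_true` as unobserved and infers it jointly with the regression, as LatentNN does with neural networks, removes this shrinkage.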

Analysis

This paper introduces SPECTRE, a novel self-supervised learning framework for decoding fine-grained movements from sEMG signals. The key contributions are a spectral pre-training task and a Cylindrical Rotary Position Embedding (CyRoPE). SPECTRE addresses the challenges of signal non-stationarity and low signal-to-noise ratios in sEMG data, leading to improved performance in movement decoding, especially for prosthetic control. The paper's significance lies in its domain-specific approach, incorporating physiological knowledge and modeling the sensor topology to enhance the accuracy and robustness of sEMG-based movement decoding.
Reference

SPECTRE establishes a new state-of-the-art for movement decoding, significantly outperforming both supervised baselines and generic SSL approaches.

Analysis

This paper investigates the formation of mesons, including excited states, from coalescing quark-antiquark pairs. It uses a non-relativistic quark model with a harmonic oscillator potential and Gaussian wave packets. The work is significant because it provides a framework for modeling excited meson states, which are often overlooked in simulations, and offers predictions for unconfirmed states. The phase space approach is particularly relevant for Monte Carlo simulations used in high-energy physics.
Reference

The paper demonstrates that excited meson states are populated abundantly for typical parton configurations expected in jets.

Analysis

This paper addresses a crucial experimental challenge in nuclear physics: accurately accounting for impurities in target materials. The authors develop a data-driven method to correct for oxygen and carbon contamination in calcium targets, which is essential for obtaining reliable cross-section measurements of the Ca(p,pα) reaction. The significance lies in its ability to improve the accuracy of nuclear reaction data, which is vital for understanding nuclear structure and reaction mechanisms. The method's strength is its independence from model assumptions, making the results more robust.
Reference

The method does not rely on assumptions about absolute contamination levels or reaction-model calculations, and enables a consistent and reliable determination of Ca$(p,p\alpha)$ yields across the calcium isotopic chain.

Analysis

This post introduces S2ID, a novel diffusion architecture designed to address limitations in existing models like UNet and DiT. The core issue tackled is the sensitivity of convolution kernels in UNet to pixel density changes during upscaling, leading to artifacts. S2ID also aims to improve upon DiT models, which may not effectively compress context when handling upscaled images. The author argues that pixels, unlike tokens in LLMs, are not atomic, necessitating a different approach. The model achieves impressive results, generating high-resolution images with minimal artifacts using a relatively small parameter count. The author acknowledges the code's current state, focusing instead on the architectural innovations.
Reference

Tokens in LLMs are atomic, pixels are not.

Analysis

This paper investigates the impact of non-local interactions on the emergence of quantum chaos in Ising spin chains. It compares the behavior of local and non-local Ising models, finding that non-local couplings promote chaos more readily. The study uses level spacing ratios and Krylov complexity to characterize the transition from integrable to chaotic regimes, providing insights into the dynamics of these systems.
Reference

Non-local couplings facilitate faster operator spreading and more intricate dynamical behavior, enabling these systems to approach maximal chaos more readily than their local counterparts.

Analysis

This paper investigates the processing of hydrocarbon dust in galaxies, focusing on the ratio of aliphatic to aromatic hydrocarbon emission. It uses AKARI near-infrared spectra to analyze a large sample of galaxies, including (U)LIRGs, IRGs, and sub-IRGs, and compares them to Galactic HII regions. The study aims to understand how factors like UV radiation and galactic nuclei influence the observed emission features.
Reference

The luminosity ratios of aliphatic to aromatic hydrocarbons ($L_{ali}/L_{aro}$) in the sample galaxies show considerably large variations, systematically decreasing with $L_{IR}$ and $L_{Br\alpha}$.

Research#llm · 📝 Blog · Analyzed: Dec 25, 2025 22:17

Octonion Bitnet with Fused Triton Kernels: Exploring Sparsity and Dimensional Specialization

Published: Dec 25, 2025 08:39
1 min read
r/MachineLearning

Analysis

This post details an experiment combining Octonions and ternary weights from Bitnet, implemented with a custom fused Triton kernel. The key innovation is reducing multiple matmul kernel launches into a single fused kernel, along with Octonion head mixing. Early results show rapid convergence and good generalization, with validation loss sometimes dipping below training loss. The model exhibits a natural tendency towards high sparsity (80-90%) during training, enabling significant compression. Furthermore, the model appears to specialize in different dimensions for various word types, suggesting the octonion structure is beneficial. However, the author acknowledges the need for more extensive testing to compare performance against float models or BitNet itself.
Reference

Model converges quickly, but hard to tell if would be competitive with float models or BitNet itself since most of my toy models have only been trained for <1 epoch on the datasets using consumer hardware.
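
The ternary-weight ingredient can be sketched with a standard BitNet-style absmean quantizer; the post's fused Triton kernel and octonion head mixing are not reproduced here, and this plain-NumPy version is for illustration only:

```python
import numpy as np

def ternary_quantize(w, eps=1e-8):
    """BitNet b1.58-style absmean ternarization: scale by mean |w|,
    then round to {-1, 0, +1}. Returns quantized weights and the scale."""
    scale = np.mean(np.abs(w)) + eps
    w_q = np.clip(np.rint(w / scale), -1, 1)
    return w_q, scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, (256, 256))      # a toy weight matrix
w_q, scale = ternary_quantize(w)
sparsity = float(np.mean(w_q == 0))        # fraction of exact zeros
```

For Gaussian-initialized weights this quantizer already zeroes out roughly a third of the entries; the 80-90% sparsity the post reports is something the model drifts toward during training, not a property of the quantizer alone.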

Research#Image Compression · 🔬 Research · Analyzed: Jan 10, 2026 09:17

SLIM: Diffusion-Powered Image Compression for Machines

Published: Dec 20, 2025 03:48
1 min read
ArXiv

Analysis

This research explores a novel approach to image compression using diffusion models, potentially enabling more efficient data storage and transmission for machine learning applications. The use of semantic information to inform the compression process is a promising direction for achieving higher compression ratios.
Reference

The paper focuses on Semantic-based Low-bitrate Image compression for Machines.

Analysis

This research utilizes machine learning to predict reactivity ratios in radical copolymerization, potentially accelerating materials discovery and optimization. The chemically-informed approach suggests a focus on interpretability and physical understanding, which is a positive trend in AI research.
Reference

The research focuses on the prediction of reactivity ratios.

Research#Compression · 🔬 Research · Analyzed: Jan 10, 2026 14:35

Context Cascade Compression: Pushing Boundaries in Text Compression

Published: Nov 19, 2025 09:02
1 min read
ArXiv

Analysis

This ArXiv paper likely presents a novel approach to text compression, possibly leveraging context to achieve higher compression ratios. The stated aim of pushing the upper limits of text compression suggests significant technical advances.
Reference

No key excerpt is available; extracting one would require access to the full ArXiv paper.

Research#llm · 🏛️ Official · Analyzed: Jan 3, 2026 09:47

Sora is here

Published: Dec 9, 2024 10:00
1 min read
OpenAI News

Analysis

The article announces the availability of OpenAI's video generation model, Sora. It highlights key features like resolution (1080p), duration (up to 20 seconds), and aspect ratios (widescreen, vertical, square). It also mentions the ability to use existing assets and generate content from text.
Reference

Users can generate videos up to 1080p resolution, up to 20 sec long, and in widescreen, vertical or square aspect ratios.

Research#image compression · 👥 Community · Analyzed: Jan 3, 2026 06:49

Stable Diffusion based image compression

Published: Sep 20, 2022 03:58
1 min read
Hacker News

Analysis

The article highlights a novel approach to image compression leveraging Stable Diffusion, a powerful AI model. The core idea likely involves using Stable Diffusion's generative capabilities to reconstruct images from compressed representations, potentially achieving high compression ratios. Further details would be needed to assess the efficiency, quality, and practical applications of this method. The use of Stable Diffusion suggests a focus on semantic understanding and reconstruction rather than pixel-level fidelity, which could be advantageous in certain scenarios.
Reference

The summary provides limited information. Further investigation into the specific techniques and performance metrics is needed.

Compressing Images with Stable Diffusion

Published: Sep 1, 2022 03:21
1 min read
Hacker News

Analysis

The article discusses using Stable Diffusion, a generative AI model, for image compression. This suggests a novel approach to image storage and potentially improved efficiency compared to traditional methods. The use of AI for compression is an interesting development.
Reference

Further analysis would require examining the specific techniques used, the compression ratios achieved, and the impact on image quality. The article likely explores these aspects.