- Spectral Alignment as Predictor of Loss Explosion in Neural Network Training Loss explosions in training deep neural networks can nullify multi-million dollar training runs. Conventional monitoring metrics like weight and gradient norms are often lagging and ambiguous predictors, as their values vary dramatically across different models and even between layers of the same model, making it difficult to establish a unified standard for detecting impending failure. We introduce Spectral Alignment (SA), a novel, theoretically grounded metric that monitors the distributional alignment between layer inputs and the principal singular vectors of weight matrices. We show that a collapse in the sign diversity of this alignment is a powerful early predictor of representational collapse and training divergence. Empirical results on language models demonstrate that monitoring the SA distribution provides a significantly earlier and clearer warning of loss explosions than traditional scalar metrics. SA's low computational overhead makes it a practical tool for safeguarding model training. 5 authors · Oct 5
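The abstract does not give the exact formula, but a minimal sketch of the idea can be put together: project each layer input onto the top singular vector of that layer's weight matrix and track how balanced the signs of those projections are. The function names, the cosine-style normalization, and the minority-sign measure below are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def spectral_alignment(W, X, eps=1e-12):
    """Sketch of an SA-style statistic: cosine of each input x_i with the
    top right singular vector of W (the input direction W amplifies most)."""
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    v = Vt[0]                                        # principal input-space direction
    return X @ v / (np.linalg.norm(X, axis=1) + eps)

def sign_diversity(sa):
    """Fraction of the minority sign in the alignment values; a drop toward
    zero is the kind of sign-diversity collapse the abstract flags."""
    pos = np.mean(sa > 0)
    return min(pos, 1.0 - pos)

# Usage: monitor per layer during training on a held-out batch of activations.
W = np.random.randn(64, 128)    # layer weight (out_dim, in_dim)
X = np.random.randn(256, 128)   # batch of layer inputs
print(sign_diversity(spectral_alignment(W, X)))
```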
- Why Low-Precision Transformer Training Fails: An Analysis on Flash Attention The pursuit of computational efficiency has driven the adoption of low-precision formats for training transformer models. However, this progress is often hindered by notorious training instabilities. This paper provides the first mechanistic explanation for a long-standing and unresolved failure case in which training with flash attention in low-precision settings leads to catastrophic loss explosions. Our in-depth analysis reveals that the failure is not a random artifact but is caused by two intertwined phenomena: the emergence of similar low-rank representations within the attention mechanism and the compounding effect of biased rounding errors inherent in low-precision arithmetic. We demonstrate how these factors create a vicious cycle of error accumulation that corrupts weight updates, ultimately derailing the training dynamics. To validate our findings, we introduce a minimal modification to flash attention that mitigates the bias in rounding errors. This simple change stabilizes the training process, confirming our analysis and offering a practical solution to this persistent problem. Tsinghua University · Oct 5
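The mechanism the paper identifies is specific to flash attention, but the compounding of biased rounding errors can be shown with a deliberately simple toy: naively accumulating many small values in fp16 stalls once the running sum dwarfs each addend, so the error is systematically one-sided rather than averaging out. The snippet below is only that toy illustration (numbers and setup are assumptions), not the paper's analysis or its fix.

```python
import numpy as np

# Toy illustration of biased rounding error: once the fp16 running sum reaches
# ~32, the spacing between representable values (0.03125) exceeds the addend
# (0.01), so every further addition rounds back down and the sum stalls.
vals = np.full(10_000, 0.01, dtype=np.float16)

acc16 = np.float16(0.0)
for v in vals:                          # naive low-precision accumulation
    acc16 = np.float16(acc16 + v)

acc32 = vals.astype(np.float32).sum()   # higher-precision reference (~100)
print(f"fp16 accumulator: {float(acc16):.2f}   fp32 reference: {acc32:.2f}")
# Keeping the accumulator in higher precision, as mixed-precision kernels
# typically do, removes this one-sided drift.
```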
- Spike No More: Stabilizing the Pre-training of Large Language Models Loss spikes often occur during pre-training of large language models. These spikes degrade the performance of large language models and sometimes ruin the pre-training run. Since pre-training requires a vast computational budget, such spikes should be avoided. To investigate the cause of loss spikes, we focus on gradients of internal layers. Through theoretical analyses, we reveal two causes of exploding gradients and provide requirements to prevent the explosion. In addition, we propose a method that satisfies the requirements by combining an initialization method with a simple modification to embeddings. We conduct various experiments to verify our theoretical analyses empirically. Experimental results indicate that the combination is effective in preventing spikes during pre-training. 4 authors · Dec 28, 2023
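The abstract only says the method combines an initialization scheme with a simple modification to embeddings; the sketch below shows one plausible shape such a combination could take (small-std weight initialization plus rescaling the embedding output by sqrt(d_model)). The class name, the 0.02 standard deviation, and the sqrt(d_model) factor are illustrative assumptions, not the paper's prescription.

```python
import math
import torch
import torch.nn as nn

class ScaledEmbedding(nn.Module):
    """Small-std initialization combined with a sqrt(d_model) rescaling of the
    embedding output (assumed recipe, for illustration only)."""
    def __init__(self, vocab_size, d_model, init_std=0.02):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        nn.init.normal_(self.embed.weight, mean=0.0, std=init_std)
        self.scale = math.sqrt(d_model)   # keeps embedding outputs at a sane scale

    def forward(self, token_ids):
        return self.embed(token_ids) * self.scale

emb = ScaledEmbedding(vocab_size=32000, d_model=512)
print(emb(torch.tensor([[1, 2, 3]])).shape)   # torch.Size([1, 3, 512])
```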
- Extracting SASI signatures from Gravitational Waves of Core-Collapse Supernovae using the Hilbert-Huang Transform Core-collapse supernovae are among the most energetic astrophysical events in the Universe. Despite huge efforts to understand the main ingredients triggering such explosions, we still lack compelling evidence for the precise mechanism driving these phenomena. They are expected to produce gravitational waves due to asymmetric mass motions in the collapsing core, while also emitting neutrinos as a result of interactions in their high-density environment. The combination of these two cosmic messengers provides a unique probe to study the inner engine of these processes and unveil the explosion mechanism. Among the possible detectable signatures, standing accretion shock instabilities (SASI) are particularly relevant in this context, as they establish a direct connection between the gravitational wave emission and the outgoing neutrino flux. In this work, the Hilbert-Huang transform is applied to a selected sample of 3D numerical simulations with the aim of identifying the SASI contribution and extracting its instantaneous frequency. The performance of the method is evaluated in the context of the Einstein Telescope. 3 authors · Oct 20
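As a rough picture of the analysis pipeline the abstract describes, the sketch below runs a Hilbert-Huang-style analysis on a synthetic chirp standing in for a SASI-like component: empirical mode decomposition followed by the Hilbert transform to get an instantaneous frequency track. It assumes the third-party PyEMD package and a toy signal; the actual simulations, detector noise, and mode-selection choices of the paper are not reproduced.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # assumed third-party package for empirical mode decomposition

# Toy signal: a slowly sweeping sinusoid plus noise, standing in for a
# SASI-like gravitational-wave component.
fs = 4096.0
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * (80 * t + 40 * t**2)) + 0.3 * np.random.randn(t.size)

imfs = EMD()(signal)                     # empirical mode decomposition
analytic = hilbert(imfs[0])              # analytic signal of the first IMF
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency [Hz]
print(inst_freq[:5])
```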
- Improving Polyphonic Sound Event Detection on Multichannel Recordings with the Sørensen-Dice Coefficient Loss and Transfer Learning The Sørensen–Dice Coefficient has recently seen rising popularity as a loss function (also known as Dice loss) due to its robustness in tasks where the number of negative samples significantly exceeds that of positive samples, such as semantic segmentation, natural language processing, and sound event detection. Conventional training of polyphonic sound event detection systems with binary cross-entropy loss often results in suboptimal detection performance, as the training is often overwhelmed by updates from negative samples. In this paper, we investigated the effect of the Dice loss, intra- and inter-modal transfer learning, data augmentation, and recording formats on the performance of polyphonic sound event detection systems with multichannel inputs. Our analysis showed that polyphonic sound event detection systems trained with Dice loss consistently outperformed those trained with cross-entropy loss across different training settings and recording formats in terms of F1 score and error rate. We achieved further performance gains via the use of transfer learning and an appropriate combination of different data augmentation techniques. 6 authors · Jul 22, 2021
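For reference, one common soft formulation of the Dice loss for multi-label, frame-level sound event predictions looks like the sketch below; the tensor shapes, the epsilon smoothing, and the per-class averaging are assumptions for illustration and may differ from the variant used in the paper.

```python
import torch

def dice_loss(probs, targets, eps=1e-7):
    """Soft Dice loss for multi-label frame-level predictions.
    probs, targets: (batch, frames, classes) tensors with values in [0, 1]."""
    intersection = (probs * targets).sum(dim=(0, 1))
    denom = probs.sum(dim=(0, 1)) + targets.sum(dim=(0, 1))
    dice = (2 * intersection + eps) / (denom + eps)   # per-class Dice score
    return 1.0 - dice.mean()

probs = torch.sigmoid(torch.randn(8, 100, 10))       # model outputs
targets = torch.randint(0, 2, (8, 100, 10)).float()  # binary event labels
print(dice_loss(probs, targets).item())
```

Because the loss is driven by the overlap between predicted and true positives, classes dominated by negative frames contribute little gradient mass, which is the robustness property the abstract attributes to the Dice loss.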