Focus Period Lund 2026

PhD Student

University of Basel (Switzerland)

Rustem Islamov is a PhD student at the University of Basel working on optimization under the supervision of Prof. Aurelien Lucchi. His research focuses on adaptive algorithms and their interaction with the loss landscape, with applications in machine learning and deep learning. Rustem studies algorithms that are both efficient in practice and provably convergent in theory.

Presenting: On the Role of Batch Size in Stochastic Conditional Gradient Methods 

We study the role of batch size in stochastic conditional gradient methods under the $\mu$-Kurdyka-Łojasiewicz ($\mu$-KL) condition. Focusing on momentum-based stochastic conditional gradient algorithms (e.g., Scion), we derive a new analysis that explicitly captures the interaction between stepsize, batch size, and stochastic noise. Our study reveals a regime-dependent behavior: increasing the batch size initially improves optimization accuracy but, beyond a critical threshold, the benefits saturate and performance can eventually degrade under a fixed token budget. Notably, the theory predicts the magnitude of the optimal stepsize and aligns well with empirical practices observed in large-scale training. Leveraging these insights, we derive principled guidelines for selecting the batch size and stepsize, and propose an adaptive strategy that increases batch size and sequence length during training while preserving convergence guarantees. Experiments on NanoGPT are consistent with the theoretical predictions and illustrate the emergence of the predicted scaling regimes. Overall, our results provide a theoretical framework for understanding batch size scaling in stochastic conditional gradient methods and offer guidance for designing efficient training schedules in large-scale optimization.
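To make the object of study concrete, the following is a minimal, hypothetical sketch of a momentum-based stochastic conditional gradient (Frank-Wolfe-style) step; it is not the Scion algorithm or the analysis from the talk. It assumes an $\ell_2$-ball constraint (whose linear minimization oracle has a simple closed form) and models the batch-size effect by scaling the gradient-noise standard deviation as $1/\sqrt{\text{batch size}}$; all names and parameter choices are illustrative.

```python
import numpy as np

def momentum_scg(grad_fn, x0, radius, steps, beta=0.9):
    """Hypothetical momentum-based stochastic conditional gradient sketch.

    Feasible set: the l2 ball of the given radius, whose linear
    minimization oracle (LMO) is v = -radius * d / ||d||.
    """
    x = x0.copy()
    d = np.zeros_like(x)
    for t in range(steps):
        g = grad_fn(x)                      # minibatch gradient estimate
        d = beta * d + (1 - beta) * g       # momentum averaging damps noise
        v = -radius * d / (np.linalg.norm(d) + 1e-12)  # LMO over the ball
        gamma = 2.0 / (t + 2)               # classical Frank-Wolfe stepsize
        x = (1 - gamma) * x + gamma * v     # convex-combination update
    return x

# Toy usage: minimize 0.5 * ||x - target||^2 over the unit ball, with
# gradient-noise variance shrinking as 1/batch_size.
rng = np.random.default_rng(0)
target = np.array([0.6, -0.3])              # optimum in the ball's interior
batch_size = 64
noisy_grad = lambda x: (x - target) + rng.normal(
    scale=1.0 / np.sqrt(batch_size), size=2)
x_final = momentum_scg(noisy_grad, np.zeros(2), radius=1.0, steps=500)
```

Since every update is a convex combination of feasible points, the iterates stay inside the ball without any projection, which is the practical appeal of conditional gradient methods; the momentum average of gradients is one common way to control the stochastic noise that the batch-size analysis above quantifies.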