Conference Paper
Year: 2024
Abstract
We examine the continuous-time counterpart of mirror descent, namely mirror flow, on linearly separable classification problems. Such problems are minimised `at infinity' and admit many possible solutions; we study which solution the algorithm prefers depending on the mirror potential. For exponentially tailed losses and under mild assumptions on the potential, we show that the iterates converge in direction towards a $ϕ_\infty$-maximum margin classifier. The function $ϕ_\infty$ is the \textit{horizon function} of the mirror potential and characterises its shape `at infinity'. When the potential is separable, a simple formula allows one to compute this function. We analyse several examples of potentials and provide numerical experiments highlighting our results.
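To fix notation, the result described above can be sketched in standard mirror-descent notation. This is a hedged reconstruction, not taken verbatim from the paper: the symbols $\phi$ (mirror potential), $L$ (empirical loss), and the data $(x_i, y_i)$ are assumptions made for illustration.

```latex
% Mirror flow on the empirical loss L with mirror potential \phi:
\[
  \frac{\mathrm{d}}{\mathrm{d}t}\,\nabla \phi(\beta_t) \;=\; -\nabla L(\beta_t),
  \qquad
  L(\beta) \;=\; \sum_{i=1}^{n} \ell\big(y_i \langle x_i, \beta\rangle\big),
\]
% where \ell is an exponentially tailed loss (e.g. the exponential or
% logistic loss) and the data (x_i, y_i) are linearly separable.
% Directional convergence towards a \phi_\infty-maximum margin classifier:
\[
  \frac{\beta_t}{\lVert \beta_t\rVert}
  \;\longrightarrow\;
  \frac{\beta^\star}{\lVert \beta^\star\rVert},
  \qquad
  \beta^\star \in \operatorname*{arg\,max}_{\phi_\infty(\beta) \,\le\, 1}\;
  \min_{1 \le i \le n}\; y_i \langle x_i, \beta\rangle,
\]
% with \phi_\infty the horizon function capturing the shape of \phi
% `at infinity'.
```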
Origin: Files produced by the author(s)
Contributor: Radu-Alexandru Dragomir
https://hal.science/hal-04807077
Submitted on: Wednesday, 27 November 2024, 13:45:52
Last modified on: Thursday, 5 December 2024, 16:10:33
Dates and versions
- HAL Id: hal-04807077, version 1
- arXiv: 2406.12763
Cite
Scott Pesme, Radu-Alexandru Dragomir, Nicolas Flammarion. Implicit Bias of Mirror Flow on Separable Data. NeurIPS 2024: The 38th Annual Conference on Neural Information Processing Systems, Dec 2024, Vancouver, Canada. ⟨hal-04807077⟩