
Conference paper, Year: 2024
Implicit Bias of Mirror Flow on Separable Data
1 EPFL - Ecole Polytechnique Fédérale de Lausanne (CH-1015 Lausanne, Switzerland)
2 S2A - Signal, Statistique et Apprentissage (Télécom Paris, 19 Place Marguerite Perey, 91120 Palaiseau, France)
3 IDS - Département Images, Données, Signal (46, rue Barrault, 75013 Paris; 15 Place Marguerite Perey, 91120 Palaiseau (since Oct 2019), France)

Abstract

We examine the continuous-time counterpart of mirror descent, namely mirror flow, on linearly separable classification problems. Such problems are minimised `at infinity' and have many possible solutions; we study which solution is preferred by the algorithm depending on the mirror potential. For exponentially tailed losses and under mild assumptions on the potential, we show that the iterates converge in direction towards a $\phi_\infty$-maximum margin classifier. The function $\phi_\infty$ is the \textit{horizon function} of the mirror potential and characterises its shape `at infinity'. When the potential is separable, a simple formula allows one to compute this function. We analyse several examples of potentials and provide numerical experiments highlighting our results.
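As an illustration of the setting the abstract describes (not the paper's own experiments), the sketch below runs discrete mirror descent on the exponential loss over a toy linearly separable dataset, using the separable negative-entropy potential, for which the mirror step is the classical exponentiated-gradient update. The data, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

# Toy linearly separable data with labels folded in: rows are z_i = y_i * x_i.
# All entries are positive so the entropy potential's positivity constraint
# on the iterates is harmless for this illustration.
Z = np.array([[2.0, 1.0],
              [1.5, 2.0],
              [3.0, 0.5]])

def loss(w):
    """Exponential-tailed loss L(w) = sum_i exp(-<w, z_i>)."""
    return np.exp(-Z @ w).sum()

def grad(w):
    """Gradient of L with respect to w."""
    return -(np.exp(-Z @ w)[:, None] * Z).sum(axis=0)

def mirror_descent(w0, eta=0.1, steps=2000):
    """Mirror descent with the separable potential phi(w) = sum_j (w_j log w_j - w_j),
    so grad phi(w) = log w and its inverse is exp: the update is exponentiated gradient."""
    w = w0.copy()
    for _ in range(steps):
        w = np.exp(np.log(w) - eta * grad(w))  # mirror step in the dual, map back
    return w

w = mirror_descent(np.ones(2))
# The loss is only minimised `at infinity': the iterates grow in norm while
# their direction w / ||w|| stabilises; the paper characterises this limit
# direction via the horizon function of the potential.
print("loss:", loss(w), "direction:", w / np.linalg.norm(w))
```

Swapping in another separable potential only changes the `grad phi` / inverse pair in the update; the limit direction is what the paper's horizon-function result describes.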
Main file
Classif_Neurips-2.pdf (1.9 MB) Download the file
Origin: Files produced by the author(s)

Dates and versions

hal-04807077 , version 1 (27-11-2024)
Identifiers

Cite

Scott Pesme, Radu-Alexandru Dragomir, Nicolas Flammarion. Implicit Bias of Mirror Flow on Separable Data. NeurIPS 2024 : The 38th Annual Conference on Neural Information Processing Systems, Dec 2024, Vancouver, Canada. ⟨hal-04807077⟩