User profiles matching "Anastasia Koloskova"

Anastasia Koloskova

Postdoc, Stanford University
Verified email at stanford.edu
Cited 1981 times

Decentralized stochastic optimization and gossip algorithms with compressed communication

A Koloskova, S Stich, M Jaggi - International Conference on …, 2019 - proceedings.mlr.press
We consider decentralized stochastic optimization with the objective function (e.g., data samples
for machine learning tasks) being distributed over n machines that can only communicate …

A unified theory of decentralized SGD with changing topology and local updates

A Koloskova, N Loizou, S Boreiri… - International …, 2020 - proceedings.mlr.press
Decentralized stochastic optimization methods have gained a lot of attention recently,
mainly because of their cheap per iteration cost, data locality, and their communication-efficiency. …

Decentralized deep learning with arbitrary communication compression

A Koloskova, T Lin, SU Stich, M Jaggi - arXiv preprint arXiv:1907.09356, 2019 - arxiv.org
Decentralized training of deep learning models is a key element for enabling data privacy and
on-device learning over networks, as well as for efficient scaling to large compute clusters. …

Revisiting Gradient Clipping: Stochastic bias and tight convergence guarantees

A Koloskova, H Hendrikx… - … Conference on Machine …, 2023 - proceedings.mlr.press
Gradient clipping is a popular modification to standard (stochastic) gradient descent that, at every
iteration, limits the gradient norm to a certain value $c > 0$. It is widely used for example …
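The clipping operation described in this abstract can be sketched in a few lines (a minimal NumPy illustration of norm clipping, not code from the paper; the threshold `c` and the example gradient are placeholders):

```python
import numpy as np

def clip_gradient(grad: np.ndarray, c: float) -> np.ndarray:
    """Rescale grad so that its Euclidean norm is at most c (c > 0)."""
    norm = np.linalg.norm(grad)
    if norm > c:
        return grad * (c / norm)  # shrink onto the ball of radius c
    return grad                   # already within the threshold

g = np.array([3.0, 4.0])          # ||g|| = 5
clipped = clip_gradient(g, c=1.0)  # rescaled to norm 1, direction preserved
```

The key point studied in the paper is that this rescaling is a biased operation on stochastic gradients, which is what makes its convergence analysis nontrivial.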

Consensus control for decentralized deep learning

L Kong, T Lin, A Koloskova… - … on Machine Learning, 2021 - proceedings.mlr.press
Decentralized training of deep learning models enables on-device learning over networks,
as well as efficient scaling to large compute clusters. Experiments in earlier works reveal that, …

A linearly convergent algorithm for decentralized optimization: Sending less bits for free!

D Kovalev, A Koloskova, M Jaggi… - International …, 2021 - proceedings.mlr.press
Decentralized optimization methods enable on-device training of machine learning models
without a central coordinator. In many scenarios communication between devices is energy …

Asynchronous SGD on graphs: a unified framework for asynchronous decentralized and federated optimization

M Even, A Koloskova… - … Conference on Artificial …, 2024 - proceedings.mlr.press
Decentralized and asynchronous communications are two popular techniques to speedup
communication complexity of distributed machine learning, by respectively removing the …

Decentralized gradient tracking with local steps

Y Liu, T Lin, A Koloskova, SU Stich - Optimization Methods and …, 2024 - Taylor & Francis
Anastasia Koloskova is a postdoctoral researcher at EPFL. She was
a PhD student at EPFL in the Laboratory of Optimization and Machine Learning with Prof. …

Data-heterogeneity-aware mixing for decentralized learning

Y Dandi, A Koloskova, M Jaggi, SU Stich - arXiv preprint arXiv:2204.06477, 2022 - arxiv.org
Decentralized learning provides an effective framework to train machine learning models with
data distributed over arbitrary communication graphs. However, most existing approaches …

Efficient greedy coordinate descent for composite problems

SP Karimireddy, A Koloskova… - The 22nd …, 2019 - proceedings.mlr.press
Coordinate descent with random coordinate selection is the current state of the art for many
large scale optimization problems. However, greedy selection of the steepest coordinate on …