Volume 120: Learning for Dynamics and Control, 10-11 June 2020, The Cloud

Editors: Alexandre M. Bayen, Ali Jadbabaie, George Pappas, Pablo A. Parrilo, Benjamin Recht, Claire Tomlin, Melanie Zeilinger

Preface

Alexandre M. Bayen, Ali Jadbabaie, George Pappas, Pablo A. Parrilo, Benjamin Recht, Claire Tomlin, Melanie Zeilinger; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:1-4

Actively Learning Gaussian Process Dynamics

Mona Buisson-Fenet, Friedrich Solowjow, Sebastian Trimpe; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:5-15

Finite Sample System Identification: Optimal Rates and the Role of Regularization

Yue Sun, Samet Oymak, Maryam Fazel; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:16-25

Finite-Time Performance of Distributed Two-Time-Scale Stochastic Approximation

Thinh Doan, Justin Romberg; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:26-36

Virtual Reference Feedback Tuning with data-driven reference model selection

Valentina Breschi, Simone Formentin; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:37-45

Direct Data-Driven Control with Embedded Anti-Windup Compensation

Valentina Breschi, Simone Formentin; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:46-54

Sparse and Low-bias Estimation of High Dimensional Vector Autoregressive Models

Trevor Ruiz, Sharmodeep Bhattacharyya, Mahesh Balasubramanian, Kristofer Bouchard; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:55-64

Robust Online Model Adaptation by Extended Kalman Filter with Exponential Moving Average and Dynamic Multi-Epoch Strategy

Abulikemu Abuduweili, Changliu Liu; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:65-74

Estimating Reachable Sets with Scenario Optimization

Alex Devonport, Murat Arcak; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:75-84

LSTM Neural Networks: Input to State Stability and Probabilistic Safety Verification

Fabio Bonassi, Enrico Terzi, Marcello Farina, Riccardo Scattolini; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:85-94

Bayesian joint state and parameter tracking in autoregressive models

Ismail Senoz, Albert Podusenko, Wouter M. Kouw, Bert de Vries; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:95-104

Learning to Correspond Dynamical Systems

Nam Hee Kim, Zhaoming Xie, Michiel van de Panne; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:105-117

Learning solutions to hybrid control problems using Benders cuts

Sandeep Menta, Joseph Warrington, John Lygeros, Manfred Morari; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:118-126

Feed-forward Neural Networks with Trainable Delay

Xunbi A. Ji, Tamás G. Molnár, Sergei S. Avedisov, Gábor Orosz; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:127-136

Exploiting Model Sparsity in Adaptive MPC: A Compressed Sensing Viewpoint

Monimoy Bujarbaruah, Charlott Vallon; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:137-146

Structured Variational Inference in Partially Observable Unstable Gaussian Process State Space Models

Sebastian Curi, Silvan Melchior, Felix Berkenkamp, Andreas Krause; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:147-157

Regret Bound for Safe Gaussian Process Bandit Optimization

Sanae Amani, Mahnoosh Alizadeh, Christos Thrampoulidis; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:158-159

Smart Forgetting for Safe Online Learning with Gaussian Processes

Jonas Umlauft, Thomas Beckers, Alexandre Capone, Armin Lederer, Sandra Hirche; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:160-169

Linear Antisymmetric Recurrent Neural Networks

Signe Moe, Filippo Remonato, Esten Ingar Grøtli, Jan Tommy Gravdahl; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:170-178

Policy Optimization for $\mathcal{H}_2$ Linear Control with $\mathcal{H}_\infty$ Robustness Guarantee: Implicit Regularization and Global Convergence

Kaiqing Zhang, Bin Hu, Tamer Basar; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:179-190

A Finite-Sample Deviation Bound for Stable Autoregressive Processes

Rodrigo A. González, Cristian R. Rojas; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:191-200

Online Data Poisoning Attacks

Xuezhou Zhang, Xiaojin Zhu, Laurent Lessard; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:201-210

Practical Reinforcement Learning For MPC: Learning from sparse objectives in under an hour on a real robot

Napat Karnchanachari, Miguel de la Iglesia Valls, David Hoeller, Marco Hutter; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:211-224

Learning Constrained Dynamics with Gauss’ Principle adhering Gaussian Processes

Andreas Geist, Sebastian Trimpe; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:225-234

Counterfactual Programming for Optimal Control

Luiz F. O. Chamon, Santiago Paternain, Alejandro Ribeiro; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:235-244

Learning Navigation Costs from Demonstrations with Semantic Observations

Tianyu Wang, Vikas Dhiman, Nikolay Atanasov; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:245-255

Scalable Reinforcement Learning of Localized Policies for Multi-Agent Networked Systems

Guannan Qu, Adam Wierman, Na Li; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:256-266

Black-box continuous-time transfer function estimation with stability guarantees: a kernel-based approach

Mirko Mazzoleni, Matteo Scandella, Simone Formentin, Fabio Previdi; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:267-276

Model-Predictive Control via Cross-Entropy and Gradient-Based Optimization

Homanga Bharadhwaj, Kevin Xie, Florian Shkurti; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:277-286

Learning the Globally Optimal Distributed LQ Regulator

Luca Furieri, Yang Zheng, Maryam Kamgarpour; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:287-297

VarNet: Variational Neural Networks for the Solution of Partial Differential Equations

Reza Khodayi-Mehr, Michael Zavlanos; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:298-307

Tractable Reinforcement Learning of Signal Temporal Logic Objectives

Harish Venkataraman, Derya Aksaray, Peter Seiler; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:308-317

A Spatially and Temporally Attentive Joint Trajectory Prediction Framework for Modeling Vessel Intent

Jasmine Sekhon, Cody Fleming; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:318-327

Structured Mechanical Models for Robot Learning and Control

Jayesh K. Gupta, Kunal Menda, Zachary Manchester, Mykel Kochenderfer; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:328-337

Data-driven Identification of Approximate Passive Linear Models for Nonlinear Systems

S. Sivaranjani, Etika Agarwal, Vijay Gupta; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:338-339

Constraint Management for Batch Processes Using Iterative Learning Control and Reference Governors

Aidan Laracy, Hamid Ossareh; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:340-349

Robust Guarantees for Perception-Based Control

Sarah Dean, Nikolai Matni, Benjamin Recht, Vickie Ye; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:350-360

Learning Convex Optimization Control Policies

Akshay Agrawal, Shane Barratt, Stephen Boyd, Bartolomeo Stellato; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:361-373

Fitting a Linear Control Policy to Demonstrations with a Kalman Constraint

Malayandi Palan, Shane Barratt, Alex McCauley, Dorsa Sadigh, Vikas Sindhwani, Stephen Boyd; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:374-383

Universal Simulation of Stable Dynamical Systems by Recurrent Neural Nets

Joshua Hanson, Maxim Raginsky; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:384-392

Contracting Implicit Recurrent Neural Networks: Stable Models with Improved Trainability

Max Revay, Ian Manchester; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:393-403

On the Robustness of Data-Driven Controllers for Linear Systems

Rajasekhar Anguluri, Abed Alrahman Al Makdah, Vaibhav Katewa, Fabio Pasqualetti; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:404-412

Faster saddle-point optimization for solving large-scale Markov decision processes

Joan Bas Serrano, Gergely Neu; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:413-423

On Simulation and Trajectory Prediction with Gaussian Process Dynamics

Lukas Hewing, Elena Arcari, Lukas P. Fröhlich, Melanie N. Zeilinger; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:424-434

Sample Complexity of Kalman Filtering for Unknown Systems

Anastasios Tsiamis, Nikolai Matni, George Pappas; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:435-444

NeurOpt: Neural network based optimization for building energy management and climate control

Achin Jain, Francesco Smarra, Enrico Reticcioli, Alessandro D’Innocenzo, Manfred Morari; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:445-454

Bayesian model predictive control: Efficient model exploration and regret bounds using posterior sampling

Kim Peter Wabersich, Melanie Zeilinger; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:455-464

Parameter Optimization for Learning-based Control of Control-Affine Systems

Armin Lederer, Alexandre Capone, Sandra Hirche; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:465-475

Riccati updates for online linear quadratic control

Mohammad Akbari, Bahman Gharesifard, Tamás Linder; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:476-485

A Theoretical Analysis of Deep Q-Learning

Jianqing Fan, Zhaoran Wang, Yuchen Xie, Zhuoran Yang; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:486-489

Localized active learning of Gaussian process state space models

Alexandre Capone, Gerrit Noske, Jonas Umlauft, Thomas Beckers, Armin Lederer, Sandra Hirche; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:490-499

Generating Robust Supervision for Learning-Based Visual Navigation Using Hamilton-Jacobi Reachability

Anjian Li, Somil Bansal, Georgios Giovanis, Varun Tolani, Claire Tomlin, Mo Chen; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:500-510

Learning supported Model Predictive Control for Tracking of Periodic References

Janine Matschek, Rolf Findeisen; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:511-520

Data-driven distributionally robust LQR with multiplicative noise

Peter Coppens, Mathijs Schuurmans, Panagiotis Patrinos; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:521-530

Learning the model-free linear quadratic regulator via random search

Hesameddin Mohammadi, Mihailo R. Jovanović, Mahdi Soltanolkotabi; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:531-539

Lambda-Policy Iteration with Randomization for Contractive Models with Infinite Policies: Well-Posedness and Convergence

Yuchao Li, Karl Henrik Johansson, Jonas Mårtensson; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:540-549

Optimistic robust linear quadratic dual control

Jack Umenberger, Thomas B. Schön; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:550-560

Bayesian Learning with Adaptive Load Allocation Strategies

Manxi Wu, Saurabh Amin, Asuman Ozdaglar; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:561-570

Learning-based Stochastic Model Predictive Control with State-Dependent Uncertainty

Angelo Domenico Bonzanini, Ali Mesbah; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:571-580

Stable Reinforcement Learning with Unbounded State Space

Devavrat Shah, Qiaomin Xie, Zhi Xu; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:581-581

Periodic Q-Learning

Donghwan Lee, Niao He; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:582-598

Robust Learning-Based Control via Bootstrapped Multiplicative Noise

Benjamin Gravell, Tyler Summers; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:599-607

Robust Regression for Safe Exploration in Control

Anqi Liu, Guanya Shi, Soon-Jo Chung, Anima Anandkumar, Yisong Yue; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:608-619

Constrained Upper Confidence Reinforcement Learning

Liyuan Zheng, Lillian Ratliff; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:620-629

Euclideanizing Flows: Diffeomorphic Reduction for Learning Stable Dynamical Systems

Muhammad Asif Rana, Anqi Li, Dieter Fox, Byron Boots, Fabio Ramos, Nathan Ratliff; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:630-639

Planning from Images with Deep Latent Gaussian Process Dynamics

Nathanael Bosch, Jan Achterhold, Laura Leal-Taixé, Jörg Stückler; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:640-650

A First Principles Approach for Data-Efficient System Identification of Spring-Rod Systems via Differentiable Physics Engines

Kun Wang, Mridul Aanjaneya, Kostas Bekris; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:651-665

Model-Based Reinforcement Learning with Value-Targeted Regression

Zeyu Jia, Lin Yang, Csaba Szepesvari, Mengdi Wang; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:666-686

Localized Learning of Robust Controllers for Networked Systems with Dynamic Topology

Soojean Han; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:687-696

NeuralExplorer: State Space Exploration of Closed Loop Control Systems Using Neural Networks

Manish Goyal, Parasara Sridhar Duggirala; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:697-697

Toward fusion plasma scenario planning for NSTX-U using machine-learning-accelerated models

Mark Boyer; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:698-707

Learning for Safety-Critical Control with Control Barrier Functions

Andrew Taylor, Andrew Singletary, Yisong Yue, Aaron Ames; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:708-717

Learning Dynamical Systems with Side Information

Amir Ali Ahmadi, Bachir El Khadir; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:718-727

Feynman-Kac Neural Network Architectures for Stochastic Control Using Second-Order FBSDE Theory

Marcus Pereira, Ziyi Wang, Tianrong Chen, Emily Reed, Evangelos Theodorou; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:728-738

Hamilton-Jacobi-Bellman Equations for Q-Learning in Continuous Time

Jeongho Kim, Insoon Yang; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:739-748

Identifying Mechanical Models of Unknown Objects with Differentiable Physics Simulations

Changkyu Song, Abdeslam Boularias; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:749-760

Objective Mismatch in Model-based Reinforcement Learning

Nathan Lambert, Brandon Amos, Omry Yadan, Roberto Calandra; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:761-770

Tools for Data-driven Modeling of Within-Hand Manipulation with Underactuated Adaptive Hands

Avishai Sintov, Andrew Kimmel, Bowen Wen, Abdeslam Boularias, Kostas Bekris; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:771-780

Probabilistic Safety Constraints for Learned High Relative Degree System Dynamics

Mohammad Javad Khojasteh, Vikas Dhiman, Massimo Franceschetti, Nikolay Atanasov; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:781-792

Lyceum: An efficient and scalable ecosystem for robot learning

Colin Summers, Kendall Lowrey, Aravind Rajeswaran, Siddhartha Srinivasa, Emanuel Todorov; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:793-803

Encoding Physical Constraints in Differentiable Newton-Euler Algorithm

Giovanni Sutanto, Austin Wang, Yixin Lin, Mustafa Mukadam, Gaurav Sukhatme, Akshara Rai, Franziska Meier; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:804-813

Distributed Reinforcement Learning for Decentralized Linear Quadratic Control: A Derivative-Free Policy Optimization Approach

Yingying Li, Yujie Tang, Runyu Zhang, Na Li; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:814-814

Learning to Plan via Deep Optimistic Value Exploration

Tim Seyde, Wilko Schwarting, Sertac Karaman, Daniela Rus; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:815-825

L1-GP: L1 Adaptive Control with Bayesian Learning

Aditya Gahlawat, Pan Zhao, Andrew Patterson, Naira Hovakimyan, Evangelos Theodorou; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:826-837

Data-Driven Distributed Predictive Control via Network Optimization

Ahmed Allibhoy, Jorge Cortes; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:838-839

Information Theoretic Model Predictive Q-Learning

Mohak Bhardwaj, Ankur Handa, Dieter Fox, Byron Boots; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:840-850

Learning nonlinear dynamical systems from a single trajectory

Dylan Foster, Tuhin Sarkar, Alexander Rakhlin; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:851-861

A Duality Approach for Regret Minimization in Average-Reward Ergodic Markov Decision Processes

Hao Gong, Mengdi Wang; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:862-883

Robust Deep Learning as Optimal Control: Insights and Convergence Guarantees

Jacob H. Seidman, Mahyar Fazlyab, Victor M. Preciado, George J. Pappas; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:884-893

Dual Stochastic MPC for Systems with Parametric and Structural Uncertainty

Elena Arcari, Lukas Hewing, Max Schlichting, Melanie Zeilinger; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:894-903

Hierarchical Decomposition of Nonlinear Dynamics and Control for System Identification and Policy Distillation

Hany Abdulsamad, Jan Peters; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:904-914

A Kernel Mean Embedding Approach to Reducing Conservativeness in Stochastic Programming and Control

Jia-Jie Zhu, Bernhard Schoelkopf, Moritz Diehl; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:915-923

Efficient Large-Scale Gaussian Process Bandits by Believing only Informative Actions

Amrit Singh Bedi, Dheeraj Peddireddy, Vaneet Aggarwal, Alec Koppel; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:924-934

Plan2Vec: Unsupervised Representation Learning by Latent Plans

Ge Yang, Amy Zhang, Ari Morcos, Joelle Pineau, Pieter Abbeel, Roberto Calandra; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:935-946

Policy Learning of MDPs with Mixed Continuous/Discrete Variables: A Case Study on Model-Free Control of Markovian Jump Systems

Joao Paulo Jansch-Porto, Bin Hu, Geir Dullerud; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:947-957

Improving Robustness via Risk Averse Distributional Reinforcement Learning

Rahul Singh, Qinsheng Zhang, Yongxin Chen; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:958-968

Keyframing the Future: Keyframe Discovery for Visual Prediction and Planning

Karl Pertsch, Oleh Rybkin, Jingyun Yang, Shenghao Zhou, Konstantinos Derpanis, Kostas Daniilidis, Joseph Lim, Andrew Jaegle; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:969-979

Safe non-smooth black-box optimization with application to policy search

Ilnura Usmanova, Andreas Krause, Maryam Kamgarpour; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:980-989

Improving Input-Output Linearizing Controllers for Bipedal Robots via Reinforcement Learning

Fernando Castañeda, Mathias Wulfman, Ayush Agrawal, Tyler Westenbroek, Shankar Sastry, Claire Tomlin, Koushil Sreenath; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:990-999

Uncertain multi-agent MILPs: A data-driven decentralized solution with probabilistic feasibility guarantees

Alessandro Falsone, Federico Molinari, Maria Prandini; Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:1000-1009
