Investigate AD (automatic differentiation) support #4
I have preliminary results from differentiating against the hardware parameters introduced in #101 (and refined in #120). I created a simple toy problem of a box falling for 1 second over a compliant flat terrain. I designed a quick optimization pipeline that simulates two such boxes: a nominal one with default parameters, and a training one with wrong parameters. Once I collect trajectories from both, I compute the RMS error between the two trajectories, get its gradient w.r.t. a set of hardware parameters, and with it update the parameters of the training model.
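A minimal sketch of such a pipeline could look like the following. Everything here is an illustrative assumption, not JAXsim code: `simulate_box`, the spring-damper contact model, the parameter names, and the learning rate are all stand-ins chosen to make the example self-contained.

```python
import jax
import jax.numpy as jnp

# Toy stand-in for the simulator: a 1-DoF box dropping on compliant terrain
# for 1 s, integrated with explicit Euler. `simulate_box` and the parameter
# names are assumptions for illustration, not JAXsim's actual API.
def simulate_box(hw_params, n_steps=1000, dt=1e-3):
    def step(state, _):
        z, v = state
        penetration = jnp.minimum(z, 0.0)  # <= 0 only while in contact
        f = (-hw_params["stiffness"] * penetration
             - hw_params["damping"] * v * (z < 0.0))
        a = -9.81 + f / hw_params["mass"]
        v = v + dt * a
        z = z + dt * v
        return (z, v), z  # carry the state, record the height

    _, trajectory = jax.lax.scan(
        step, (jnp.asarray(1.0), jnp.asarray(0.0)), None, length=n_steps
    )
    return trajectory

nominal = {"mass": 1.0, "stiffness": 1e4, "damping": 1e2}
training = {"mass": 1.5, "stiffness": 5e3, "damping": 5e1}  # deliberately wrong

target = simulate_box(nominal)

def rms_error(params):
    return jnp.sqrt(jnp.mean((simulate_box(params) - target) ** 2))

# Loss and its gradient w.r.t. the hardware parameters, in one jitted call.
loss_and_grad = jax.jit(jax.value_and_grad(rms_error))

lr = 1e-2  # illustrative; convergence depends on parameter scaling
for _ in range(100):
    loss, grads = loss_and_grad(training)
    # Plain gradient descent on the pytree of hardware parameters.
    training = jax.tree_util.tree_map(lambda p, g: p - lr * g, training, grads)
```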
As a first attempt, this seems promising. Of course, when dealing with quantities not belonging to […]

cc @flferretti @CarlottaSartore @traversaro @S-Dafarra @DanielePucci
Super, this is important for many in @ami-iit/alpha-delta-tau.
A good example that compares a derivative computed analytically with the same quantity obtained by AD is:
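The linked example is not reproduced here; as a stand-in, the pattern it refers to looks like this (the kinetic-energy function and its closed-form gradient are illustrative assumptions):

```python
import jax
import jax.numpy as jnp

m = 2.0  # illustrative point mass

# Kinetic energy T(v) = 0.5 * m * v^T v.
def kinetic_energy(v):
    return 0.5 * m * jnp.dot(v, v)

# Analytical derivative: dT/dv = m * v.
def kinetic_energy_grad_analytical(v):
    return m * v

v = jnp.array([0.3, -1.2, 0.7])
grad_ad = jax.grad(kinetic_energy)(v)

# The AD result should match the closed-form expression.
assert jnp.allclose(grad_ad, kinetic_energy_grad_analytical(v))
```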
Another example of using AD is the validation of […]
JAXsim currently focuses only on sampling performance, exploiting `jax.jit` and `jax.vmap`. Being written in JAX, the forward step of the simulation should be differentiable (also considering contact dynamics, since it is continuous), but this has not yet been investigated; a minimal sanity check of that claim is sketched after the resources below.

Interesting resources:
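On the differentiability claim above, a first sanity check could look like the following sketch. Everything here is an assumption for illustration: `forward_step` is a hand-rolled 1-DoF stand-in for the simulator's forward step, not JAXsim code. The point is only that `jax.grad` composes with `jax.jit` and `jax.lax.scan`, so an entire rollout stays differentiable, contacts included.

```python
import jax
import jax.numpy as jnp

# Hypothetical stand-in for the simulator's forward step: one explicit-Euler
# update of a unit-mass box above compliant terrain. The contact force is a
# continuous (piecewise-linear) spring, so reverse-mode AD is well defined
# almost everywhere.
def forward_step(state, dt=1e-3, stiffness=1e4):
    z, v = state
    f = -stiffness * jnp.minimum(z, 0.0)  # upward push only while penetrating
    a = -9.81 + f
    return z + dt * v, v + dt * a

@jax.jit
def final_height(z0, n_steps=500):
    def body(state, _):
        return forward_step(state), None

    (z, _), _ = jax.lax.scan(body, (z0, jnp.asarray(0.0)), None, length=n_steps)
    return z

# Sensitivity of the final height to the initial height, differentiated
# through the whole jitted rollout, contact included.
dz = jax.grad(final_height)(0.5)
```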