The implementation of DOT seems to be based on SGD with momentum. Since vision transformers usually use AdamW as the optimizer, how about adapting DOT to other optimizers such as AdamW or Lamb?
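Not the authors' implementation — just a rough sketch of what such an adaptation might look like, assuming DOT's core idea carries over to AdamW's first moment: keep separate first-moment buffers for the KD and CE gradients, with momentum `beta1 + delta` for KD and `beta1 - delta` for CE, and compute the second moment on the combined gradient. The class name `DOTAdamW` and the `delta` parameter are made up for illustration:

```python
import torch
from torch.optim import Optimizer


class DOTAdamW(Optimizer):
    """Hypothetical sketch: AdamW with DOT-style split first moments.
    The KD gradient uses momentum beta1 + delta, the CE gradient
    beta1 - delta; the second moment is computed on their sum."""

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8,
                 weight_decay=1e-2, delta=0.05):
        defaults = dict(lr=lr, betas=betas, eps=eps,
                        weight_decay=weight_decay, delta=delta)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step_two_grads(self, grads_kd, grads_ce):
        """grads_kd / grads_ce: per-parameter gradient lists from two
        separate backward passes over the KD and CE losses."""
        for group in self.param_groups:
            beta1, beta2 = group["betas"]
            d, lr, eps = group["delta"], group["lr"], group["eps"]
            for p, g_kd, g_ce in zip(group["params"], grads_kd, grads_ce):
                state = self.state[p]
                if len(state) == 0:
                    state["step"] = 0
                    state["m_kd"] = torch.zeros_like(p)
                    state["m_ce"] = torch.zeros_like(p)
                    state["v"] = torch.zeros_like(p)
                state["step"] += 1
                t = state["step"]

                # Decoupled weight decay, as in AdamW.
                p.mul_(1 - lr * group["weight_decay"])

                # Split first moments: larger momentum for KD, smaller for CE.
                state["m_kd"].mul_(beta1 + d).add_(g_kd, alpha=1 - (beta1 + d))
                state["m_ce"].mul_(beta1 - d).add_(g_ce, alpha=1 - (beta1 - d))
                m_hat = (state["m_kd"] / (1 - (beta1 + d) ** t)
                         + state["m_ce"] / (1 - (beta1 - d) ** t))

                # Second moment on the combined gradient.
                g = g_kd + g_ce
                state["v"].mul_(beta2).addcmul_(g, g, value=1 - beta2)
                v_hat = state["v"] / (1 - beta2 ** t)

                p.addcdiv_(m_hat, v_hat.sqrt().add_(eps), value=-lr)
```

A training step would then call `backward` twice (with `retain_graph=True` on the first pass), collect the two gradient lists in parameter order, and pass them to `step_two_grads`. Whether the bias correction and second-moment choice here are the right ones for DOT would need experiments.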
@Zzzzz1 Thank you! Another question: why is DDP not adopted? I noticed that you use DP, but DDP is more efficient. When I use DDP, the two back-propagations in each iteration seem to hinder the loss decrease, especially when using multiple GPUs. Do you have any good ideas?
DOT needs to maintain momentum_kd and momentum_ce at the same time. Maybe that conflicts with how DDP synchronizes gradients and updates parameters.
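One direction that might be worth trying — untested against the DOT codebase, and the function name `backward_two_losses_ddp` is made up here: run both backward passes inside `no_sync()` so DDP's automatic bucketed all-reduce never fires, keep the two gradient sets apart, and all-reduce each set manually. The optimizer then sees synchronized KD and CE gradients separately, so momentum_kd and momentum_ce stay consistent across ranks:

```python
import torch
import torch.distributed as dist


def backward_two_losses_ddp(ddp_model, loss_kd, loss_ce):
    """Sketch: obtain separately synchronized KD and CE gradients under
    DDP. Both backwards run inside no_sync() so DDP's own all-reduce is
    skipped; each gradient set is then averaged across ranks by hand."""
    params = [p for p in ddp_model.parameters() if p.requires_grad]

    with ddp_model.no_sync():
        loss_kd.backward(retain_graph=True)  # graph reused by the CE pass
    grads_kd = [p.grad.detach().clone() if p.grad is not None
                else torch.zeros_like(p) for p in params]
    for p in params:
        p.grad = None

    with ddp_model.no_sync():
        loss_ce.backward()
    grads_ce = [p.grad.detach().clone() if p.grad is not None
                else torch.zeros_like(p) for p in params]
    for p in params:
        p.grad = None

    # Average each gradient set across ranks.
    world = dist.get_world_size()
    for g in grads_kd + grads_ce:
        dist.all_reduce(g)
        g.div_(world)
    return grads_kd, grads_ce
```

The extra all-reduces double the communication per step, so this trades some of DDP's efficiency for correctness of the two momentum buffers.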