Thanks for your nice implementation of MAML. However, I think using state_dict() and load_state_dict() might be much simpler than modifying all the weights by hand (in learner.py's forward). Could I first deepcopy the net's parameters (state_dict()), compute the fast weights with a regular optimizer (instead of list(map(lambda p: p[1] - self.update_lr * p[0], zip(grad, self.net.parameters())))), and then load the original parameters back in order to update the meta-learner? Thanks.
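For reference, the graph-preserving property is exactly what that list(map(...)) line provides. Below is a minimal sketch of the inner step it implements, assuming a Learner whose forward accepts an explicit weight list the way learner.py's does (the function and argument names here are illustrative, not the repo's actual API):

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the inner-loop step, assuming a Learner whose forward
# accepts an explicit weight list as learner.py's does (names illustrative).

def inner_update(net, x_spt, y_spt, update_lr):
    logits = net(x_spt, vars=None)          # forward with the meta-weights
    loss = F.cross_entropy(logits, y_spt)
    grad = torch.autograd.grad(loss, net.parameters(), create_graph=True)
    # Each fast weight is a non-leaf tensor whose history reaches back to
    # the meta-parameters. An optimizer.step() or load_state_dict() would
    # instead mutate leaf tensors in place and sever exactly this link,
    # so the meta-gradient could no longer flow through the inner update.
    fast_weights = [p - update_lr * g for g, p in zip(grad, net.parameters())]
    return fast_weights
```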
I also think it's too cumbersome to redefine the parameters of every layer by hand. Is there a way to drop an arbitrary network (such as a ResNet) into the MAML framework without defining each layer?
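One route that seems to make this possible without rewriting any layers: torch.func.functional_call (PyTorch 2.0; earlier exposed as torch.nn.utils.stateless.functional_call in 1.12) runs any nn.Module with a substituted parameter dict, so an unmodified torchvision ResNet can carry fast weights. The facebookresearch/higher library achieves something similar on older PyTorch versions. A minimal sketch (maml_inner_step and the variable names are illustrative):

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

# Sketch: run an arbitrary nn.Module with substituted fast weights,
# assuming a recent PyTorch with torch.func available.

def maml_inner_step(net, x, y, lr):
    params = dict(net.named_parameters())
    logits = functional_call(net, params, (x,))
    loss = F.cross_entropy(logits, y)
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    # The fast weights stay differentiable functions of the meta-parameters.
    return {name: p - lr * g for (name, p), g in zip(params.items(), grads)}

# Illustrative usage with an off-the-shelf model:
#   net = torchvision.models.resnet18(num_classes=5)
#   fast = maml_inner_step(net, x_spt, y_spt, lr=0.01)
#   qry_logits = functional_call(net, fast, (x_qry,))
```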
I wonder whether anyone has actually gotten this to work; I haven't. It seems that any load operation, or any attempt to backprop through a separate copy of the network, discards the computational graph. See the sketch below.
I have been redefining every layer by hand for deeper networks, so it would really help if this approach worked.
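A small self-contained illustration of that failure mode: load_state_dict() copies values into the module's leaf Parameters with no autograd history, so the differentiable path from the fast weights back to the meta-parameters is dropped.

```python
import torch
import torch.nn as nn

# Demonstrates that load_state_dict() drops the autograd history of the
# tensors it loads, which is why meta-gradients cannot flow through it.

net = nn.Linear(3, 1)
loss = net(torch.randn(4, 3)).sum()
grads = torch.autograd.grad(loss, net.parameters(), create_graph=True)

# Differentiable fast weights: each is a function of the meta-parameters.
fast = {name: p - 0.01 * g
        for (name, p), g in zip(net.named_parameters(), grads)}
print(fast['weight'].grad_fn)   # <SubBackward0>: history is intact

net.load_state_dict(fast)       # in-place copy, no gradient tracking
print(net.weight.grad_fn)       # None: the history did not survive the load
```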