Replies: 1 comment
-
There was a bug in the learn.get_X_preds code that has been fixed. I believe the issue you were facing should also be fixed.
-
Dear tsai team,
We are facing a problem in the explainability section of the XCM model, from this tutorial:
https://colab.research.google.com/github/timeseriesAI/tsai/blob/master/nbs/114_models.XCM.ipynb
We get two different attribution maps when we run the show_gradcam function. Using the same X, we provided y the first time and left it as None the second time, and got two different heatmaps from:

model.show_gradcam(X, y)

y=None: [heatmap]
y=0: [heatmap]
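
To be explicit about what we ran (a minimal sketch; X is one of our samples and 0 is simply the label we happened to use):

# Same input X, two different targets:
model.show_gradcam(X, y=0)     # class given explicitly -> one heatmap
model.show_gradcam(X, y=None)  # no class given -> a different heatmap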
Digging deeper into the function to understand this behavior, we found that get_acts_and_grads uses this line:

preds = model.eval()(x)

which doesn't really correspond to what we get from the get_X_preds() function, and this might be what makes the difference.
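
Our understanding (an assumption on our side) is that learn.get_X_preds goes through fastai's get_preds and therefore applies the final activation of the loss function (softmax for CrossEntropyLossFlat), while the raw forward pass returns un-activated outputs. A minimal sketch of the comparison:

import torch

# Raw forward pass, as used inside get_acts_and_grads:
with torch.no_grad():
    logits = model.eval()(x)   # un-activated model outputs

# Learner-level predictions (assumed to be activated probabilities):
probas, _, decoded = learn.get_X_preds(x, with_decoded=True)

# If the only difference were the activation, softmax-ing the raw
# outputs would recover the probabilities:
print(torch.allclose(torch.softmax(logits, dim=-1), torch.as_tensor(probas)))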
_______________________________________________
For example, calling the get_acts_and_grads function the first time (y is provided, not None), running

preds = model.eval()(x)

gives us an output that is different from the original y! Since y is not None, the backward pass runs on preds[0, y], which equals -2.1717.
When y is not provided (y is None), the backward pass runs on preds.max(dim=-1).values.mean(), which in this case equals 1.9500.
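
In other words, paraphrasing the branching as we read it (a sketch, not the exact tsai source):

preds = model.eval()(x)
if y is not None:
    # backward on the output for the requested class only
    preds[0, y].backward()
else:
    # backward on the mean of the per-sample maximum outputs
    preds.max(dim=-1).values.mean().backward()

Both targets are computed on the raw model output, which is why the values above (-2.1717 and 1.9500) don't look like probabilities at all.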
P.S. After training the model, these two snippets yield different results (y_pred1 doesn't equal y_pred2):

(1)
preds = model.eval()(X)
y_pred1 = preds.max(dim=-1).indices  # .indices, since max() returns (values, indices)

(2)
_, _, y_pred2 = learn.get_X_preds(X, with_decoded=True)

So this might be the bug: these two outputs should not differ, since that seems to be what was assumed when the function was built.
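
A sanity check that may help narrow this down; it relies only on the fact that softmax preserves the argmax, so raw outputs and activated probabilities should decode to the same class (idx_raw and idx_activated are names we introduce here):

import torch

with torch.no_grad():
    logits = model.eval()(X)
idx_raw = logits.argmax(dim=-1)                         # classes from raw outputs

probas, _, _ = learn.get_X_preds(X, with_decoded=True)
idx_activated = torch.as_tensor(probas).argmax(dim=-1)  # classes from probabilities

# Softmax is monotonic, so these should agree. If they don't, our guess
# is that the raw forward pass skips preprocessing (e.g. batch transforms
# such as normalization) that learn.get_X_preds applies via its dataloader.
print((idx_raw == idx_activated).all())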
I would really appreciate your response, because we have been stuck on this bug for a while now.
Many thanks,