Hello.
I am trying to use your SET MLP implementation on data from MNIST, which I load with a function I wrote. After a couple of epochs I run into this error in the operation:
self.pdw[index] = self.momentum * self.pdw[index] - self.learning_rate * dw
Full traceback:
Traceback (most recent call last):
  File "C:\Users\rroman\projects\Py\old\set_mlp (2).py", line 585, in <module>
    metrics = set_mlp.fit(x_train, y_train, x_test, y_test, loss=CrossEntropy, epochs=no_training_epochs, batch_size=batch_size, learning_rate=learning_rate,
  File "C:\Users\rroman\projects\Py\old\set_mlp (2).py", line 309, in fit
    self._back_prop(z, a, masks, y_[k:l])
  File "C:\Users\rroman\projects\Py\old\set_mlp (2).py", line 242, in _back_prop
    self._update_w_b(k, v[0], v[1])
  File "C:\Users\rroman\projects\Py\old\set_mlp (2).py", line 258, in _update_w_b
    self.pdw[index] = self.momentum * self.pdw[index] - self.learning_rate * dw
  File "C:\Users\rroman\AppData\Roaming\Python\Python39\site-packages\scipy\sparse\base.py", line 543, in __rmul__
    return self.__mul__(other)
  File "C:\Users\rroman\AppData\Roaming\Python\Python39\site-packages\scipy\sparse\base.py", line 475, in __mul__
    return self._mul_scalar(other)
  File "C:\Users\rroman\AppData\Roaming\Python\Python39\site-packages\scipy\sparse\data.py", line 124, in _mul_scalar
    return self._with_data(self.data * other)
FloatingPointError: underflow encountered in multiply
I am guessing that a number in one of the arrays keeps getting smaller and smaller until it underflows. Is there somebody around who could give me a hand with preventing this from happening?
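For reference, here is a minimal sketch, not the repository's code, of what I think is the failure mode and one possible workaround. A FloatingPointError is only raised when NumPy is configured to raise on floating-point errors (e.g. via np.seterr), so the sketch assumes that setting; the pdw/dw matrices, their values, and the tiny threshold are made-up stand-ins for self.pdw[index] and dw.

```python
import numpy as np
import scipy.sparse as sp

np.seterr(all="raise")  # assumption: the script raises on FP errors; otherwise underflow is silent

momentum = 0.9
learning_rate = 0.05

# Stand-ins for self.pdw[index] and dw: sparse matrices whose stored values
# have decayed into the subnormal range after many momentum multiplications.
pdw = sp.csr_matrix(np.full((2, 2), 1e-310))
dw = sp.csr_matrix(np.full((2, 2), 1e-310))

try:
    pdw = momentum * pdw - learning_rate * dw  # same expression as in the traceback
except FloatingPointError as err:
    print("raised:", err)  # underflow encountered in multiply

# One possible workaround: flush near-zero magnitudes to exact zeros before the
# update, so multiplying by momentum < 1 can no longer underflow.
tiny = 1e-200  # assumed threshold, far below any useful gradient value
for m in (pdw, dw):
    m.data[np.abs(m.data) < tiny] = 0.0
    m.eliminate_zeros()

pdw = momentum * pdw - learning_rate * dw  # runs without raising
print(pdw.toarray())
```

Either pruning the near-zero entries like this or wrapping the update in np.errstate(under="ignore") (letting the tiny values flush to zero) would avoid the exception; which is preferable probably depends on whether those tiny values still matter for training.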