for epoch in range(epochs):
    # Set the model to train mode (enables training-specific behaviour such as dropout and batch norm updates)
    model_0.train()

    # 2. Forward pass
    y_preds = model_0(x_train)

    # 3. Calculate the loss
    loss = loss_fn(y_preds, y_train)
    print(f"Loss: {loss}")

    # 4. Zero the optimizer's gradients
    optimizer.zero_grad()

    # 5. Perform backpropagation on the loss with respect to the model's parameters
    loss.backward()

    # 6. Step the optimizer (update the parameters)
    optimizer.step()

    # Testing
    model_0.eval()  # switch layers such as dropout/batch norm to evaluation mode (does not disable gradients)
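The testing part of the loop above stops at model_0.eval(). Note that eval() only changes layer behaviour (dropout, batch norm); gradient tracking is disabled separately with torch.inference_mode() (or torch.no_grad()). A minimal sketch of how the testing step is usually completed inside the same epoch loop, assuming hypothetical x_test / y_test tensors that are not defined in the snippet:

    with torch.inference_mode():           # disable gradient tracking during evaluation
        test_preds = model_0(x_test)       # x_test / y_test are assumed here, not defined above
        test_loss = loss_fn(test_preds, y_test)
    print(f"Epoch: {epoch} | Train loss: {loss} | Test loss: {test_loss}")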
I received this error a couple of times. Without more details it is difficult to pinpoint, but when I got it, the problem was usually a conflicting type, often in the way arguments/parameters are passed between functions. By default the objects created may effectively have a type of 'any', which works in most cases but not with functions that expect a specific type. For example, a transform result passed into another tool function may produce a "not callable" object because of a type conflict in one of the parameters.
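To illustrate the kind of type conflict described above, here is a minimal, self-contained sketch (the model, data and loss function are hypothetical, not taken from this issue): integer targets passed to a regression loss that expects floating-point tensors typically trigger a dtype-related RuntimeError, and casting the targets resolves it.

import torch
from torch import nn

# Hypothetical setup used only to demonstrate a dtype conflict
model_0 = nn.Linear(in_features=1, out_features=1)
loss_fn = nn.MSELoss()

x_train = torch.arange(0, 10, dtype=torch.float32).unsqueeze(dim=1)
y_train = torch.arange(0, 10).unsqueeze(dim=1)   # int64 targets conflict with float32 predictions

y_preds = model_0(x_train)                       # float32 predictions

# loss = loss_fn(y_preds, y_train)               # typically fails with a dtype-related RuntimeError
loss = loss_fn(y_preds, y_train.float())         # casting the targets to float32 resolves it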
torch.manual_seed(42)

# An epoch is one loop through the data
epochs = 1

# Training
# 1. Loop through the data
for epoch in range(epochs):
    # Set the model to train mode
    model_0.train()
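The loop relies on names that are not defined in these snippets (model_0, loss_fn, optimizer, x_train, y_train). A minimal sketch of a setup that would make it runnable, assuming a simple linear-regression style problem (the model, data and hyperparameters here are illustrative, not from the issue):

import torch
from torch import nn

# Illustrative data: a simple linear relationship y = 0.7x + 0.3
x_train = torch.arange(0, 1, 0.02).unsqueeze(dim=1)
y_train = 0.7 * x_train + 0.3

# Illustrative model: a single linear layer
model_0 = nn.Linear(in_features=1, out_features=1)

# Loss function and optimizer
loss_fn = nn.L1Loss()
optimizer = torch.optim.SGD(params=model_0.parameters(), lr=0.01)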