
Training continue model produces tensor error when adding a node #13

Open

fathyshalaby opened this issue Sep 2, 2021 · 1 comment

@fathyshalaby
remote@pop-os:~/repos/autofurnish/external_experiments/scene_synthesis/deep_synth$ python continue_train.py --data-dir bedroom --save-dir bedroom --train-size 500 --use-count
Building model...
Converting to CUDA...
Building dataset...
Building data loader...
Building optimizer...
=========================== Epoch 0 ===========================
torch.Size([46, 38]) 108 191
torch.Size([44, 46]) 244 320
torch.Size([40, 41]) 110 413
torch.Size([232, 56]) 376 424
torch.Size([36, 150]) 366 303
torch.Size([191, 157]) 153 307
torch.Size([65, 154]) 339 233
torch.Size([120, 55]) 149 85
torch.Size([201, 178]) 114 378
torch.Size([45, 78]) 383 167
torch.Size([142, 187]) 161 98
torch.Size([75, 32]) 196 408
Traceback (most recent call last):
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/continue_train.py", line 206, in <module>
train()
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/continue_train.py", line 136, in train
for batch_idx, (data, target, existing) in enumerate(train_loader):
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/_utils.py", line 425, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/continue_dataset.py", line 63, in __getitem__
composite.add_node(node)
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/data/rendered.py", line 189, in add_node
to_add[xmin:xmin+xsize,ymin:ymin+ysize] = h
RuntimeError: The expanded size of the tensor (134) must match the existing size (178) at non-singleton dimension 1. Target sizes: [201, 134]. Tensor sizes: [201, 178]
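For context, the size mismatch in the traceback is exactly what happens when a slice assignment target runs past the edge of the grid: the slice is silently truncated at the border, so its shape no longer matches the footprint being pasted. A minimal sketch (using NumPy instead of torch, and assuming a 256x256 grid, a hypothetical stand-in for whatever size `rendered.py` actually uses):

```python
import numpy as np

# Hypothetical 256x256 scene grid; the object footprint h and its
# placement are taken from the numbers in the traceback above.
grid = np.zeros((256, 256))
h = np.ones((201, 178))       # object footprint: [201, 178]
xmin, ymin = 55, 122          # ymin + 178 = 300 > 256: out of bounds

# The slice is clipped at the image border, so the assignment target
# has shape (201, 134) while h has shape (201, 178) -- the same
# 134-vs-178 mismatch reported in the RuntimeError.
target = grid[xmin:xmin + 201, ymin:ymin + 178]
print(target.shape)           # (201, 134)
```

With torch tensors the same truncated slice produces the RuntimeError shown above rather than NumPy's ValueError, but the cause is identical: the object's bounding box does not fit inside the image.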

@fathyshalaby fathyshalaby changed the title Training continue model produces tensor error Training continue model produces tensor error at rendering Sep 2, 2021
@fathyshalaby fathyshalaby changed the title Training continue model produces tensor error at rendering Training continue model produces tensor error when adding a node Sep 2, 2021
@kwang-ether
Collaborator

There's probably an object that is out of bounds in image space. You probably want to check the scene filtering logic and make sure all objects fall within the (256, 256) image (or whatever size you choose) after projection.
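If filtering out such scenes isn't an option, one workaround is to clip the footprint at the image border before the assignment in `add_node`. A minimal sketch, again using NumPy in place of torch; `add_node_clipped` is a hypothetical helper, not part of deep_synth's `rendered.py`:

```python
import numpy as np

def add_node_clipped(grid, h, xmin, ymin):
    """Paste footprint h into grid at (xmin, ymin), clipping any part
    that falls outside the grid instead of raising a size mismatch.
    (Hypothetical helper -- a sketch, not deep_synth's actual code.)"""
    xsize, ysize = h.shape
    # Intersect the footprint's bounding box with the grid bounds.
    x0, y0 = max(xmin, 0), max(ymin, 0)
    x1 = min(xmin + xsize, grid.shape[0])
    y1 = min(ymin + ysize, grid.shape[1])
    if x1 <= x0 or y1 <= y0:
        return grid  # footprint lies entirely outside the image
    # Crop h by the same amount the target slice was clipped.
    grid[x0:x1, y0:y1] = h[x0 - xmin:x1 - xmin, y0 - ymin:y1 - ymin]
    return grid
```

Note that clipping only hides the symptom: an object that extends past the image border will be rendered partially, so filtering such scenes out upstream, as suggested above, is likely the cleaner fix.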
