Errors when trying to run the provided test #20

Open
ArielleZhang opened this issue Mar 22, 2023 · 1 comment

Comments

ArielleZhang commented Mar 22, 2023

I am getting this error:

--------------Options--------------
activation: leakyrelu
add_noise: True
attn_D: False
attn_E: False
attn_G: True
batch_size: 1
checkpoints_dir: ./checkpoints
coarse_or_refine: refine
data_powers: 5
display_env: main
display_id: None
display_port: 8092
display_server: http://localhost
display_single_pane_ncols: 0
display_winsize: 256
down_layers: 4
dropout: 0.0
embed_dim: 512
embed_type: learned
epoch: latest
eval: False
fine_size: 512
fixed_size: 256
gpu_ids: 0
how_many: inf
img_file: ./examples/celeba/img/
img_nc: 3
init_gain: 0.02
init_type: kaiming
isTrain: False
kernel_E: 1
kernel_G: 3
kernel_T: 1
lipip_path: ./model/lpips/vgg.pth
load_size: 512
mask_file: ./examples/celeba/mask/
mask_type: 3
mid_layers: 6
model: tc
nThreads: 8
n_decoders: 0
n_encoders: 12
n_layers_D: 3
n_layers_G: 4
name: celeba
ndf: 32
netD: style
netE: diff
netG: diff
netT: original
ngf: 32
no_flip: False
no_shuffle: True
norm: pixel
nsampling: 1
num_embeds: 1024
num_res_blocks: 2
phase: test
preprocess: scale_shortside
results_dir: ./results
reverse_mask: False
top_k: 10
use_pos_G: False
which_iter: 0
word_size: 16
----------------End----------------
testing images = 13
model [TC] was created
creating web directory ./results\celeba\test_latest
how many is 13
Traceback (most recent call last):
  File "E:\1-EngSci3\1-ECE324\Triple-Dots\TFill\test.py", line 25, in <module>
    for i, data in enumerate(islice(dataset, opt.how_many)):
  File "D:\programs\anaconda3\envs\TFill\lib\site-packages\torch\utils\data\dataloader.py", line 359, in __iter__
    return self._get_iterator()
  File "D:\programs\anaconda3\envs\TFill\lib\site-packages\torch\utils\data\dataloader.py", line 305, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "D:\programs\anaconda3\envs\TFill\lib\site-packages\torch\utils\data\dataloader.py", line 918, in __init__
    w.start()
  File "D:\programs\anaconda3\envs\TFill\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "D:\programs\anaconda3\envs\TFill\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\programs\anaconda3\envs\TFill\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "D:\programs\anaconda3\envs\TFill\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "D:\programs\anaconda3\envs\TFill\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'get_transform.<locals>.<lambda>'
(TFill) PS E:\1-EngSci3\1-ECE324\Triple-Dots\TFill> Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\programs\anaconda3\envs\TFill\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "D:\programs\anaconda3\envs\TFill\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
@XinghuaHuang02

You should change one of the parameters. In .\options\base_options.py, line 34, set the default number of data-loading threads to 0:
parser.add_argument('--nThreads', type=int, default=0, help='# threads for loading data')
Then the test runs.
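
For context: on Windows, PyTorch's DataLoader starts its worker processes with the "spawn" method, which has to pickle the dataset, including its transforms. The traceback shows that get_transform builds part of the pipeline with a local lambda, and Python cannot pickle a lambda defined inside a function, hence "Can't pickle local object 'get_transform.<locals>.<lambda>'". Setting nThreads to 0 (by editing the default as above, or by passing --nThreads 0 on the command line) keeps loading in the main process, so nothing needs to be pickled.

If you would rather keep multi-worker loading, a minimal sketch of the alternative fix is below. It is not the exact TFill code: the function resize_to_multiple and its arguments are illustrative stand-ins for whatever the lambda in get_transform actually does; the point is only that a module-level function (or a functools.partial over one) is picklable where a local lambda is not.

import pickle
from functools import partial

from PIL import Image
import torchvision.transforms as transforms


def resize_to_multiple(img, base=4, method=Image.BICUBIC):
    # Module-level function: picklable, unlike a lambda created inside get_transform.
    # Illustrative resize that rounds the image size down to a multiple of `base`.
    w, h = img.size
    new_w, new_h = max(base, (w // base) * base), max(base, (h // base) * base)
    if (new_w, new_h) == (w, h):
        return img
    return img.resize((new_w, new_h), method)


# Bind the extra arguments with functools.partial instead of closing over them in a lambda.
transform = transforms.Compose([
    transforms.Lambda(partial(resize_to_multiple, base=4)),
    transforms.ToTensor(),
])

# This succeeds, so spawned DataLoader workers can serialize the dataset on Windows;
# the same call on transforms.Lambda(lambda img: ...) raises the pickling error above.
pickle.dumps(transform)

With a change of this kind in the dataset/transform code, --nThreads can stay at 8 and the test should no longer crash when the workers start.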
