
Efficiency issue #12

Open
LLCF opened this issue May 4, 2018 · 4 comments

Comments

LLCF commented May 4, 2018

    x_batch, y_batch = next(train_generator)
    feed_dict = {img: x_batch,
                 label: y_batch}

This approach is very slow.
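Pulling each batch from a Python generator and feeding it via `feed_dict` blocks the GPU while Python prepares the next batch, which matches the 0% GPU utilization reported below. A minimal pure-Python sketch (not TensorFlow; `train_generator` here is a stand-in for the real one) of the usual fix, overlapping batch preparation with consumption via a background prefetch thread:

```python
import queue
import threading

def train_generator():
    # Hypothetical stand-in for the real data generator.
    for i in range(8):
        yield ([i] * 4, [i % 2])  # (x_batch, y_batch)

def prefetch(gen, maxsize=4):
    """Run `gen` in a background thread so batch preparation
    overlaps with training instead of blocking it."""
    q = queue.Queue(maxsize=maxsize)
    sentinel = object()

    def worker():
        for item in gen:
            q.put(item)
        q.put(sentinel)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        item = q.get()
        if item is sentinel:
            return
        yield item

batches = list(prefetch(train_generator()))
print(len(batches))  # 8
```

In real TensorFlow code the same idea is provided by the input-pipeline APIs (e.g. `tf.data` with prefetching) rather than a hand-rolled thread; this sketch only shows why decoupling data preparation from the training step removes the stall.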


abin24 commented May 7, 2018

It is slow because of a memory leak caused by this line:
global_step.assign(it).eval()
Each call creates a new assign op in the graph, so the graph grows every iteration. The line can simply be deleted.
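In TF1, calling `.assign(...)` inside the training loop builds a new op in the graph on every iteration, and graph ops are never freed. A toy pure-Python model of the leak and its fix (the `Graph` class here is hypothetical, not the TensorFlow API):

```python
class Graph:
    """Toy stand-in for a TF1 graph: ops are appended and never freed."""
    def __init__(self):
        self.ops = []

    def add_op(self, name):
        self.ops.append(name)

# Leaky pattern: a new op is built inside the loop,
# analogous to calling global_step.assign(it).eval() each step.
leaky = Graph()
for it in range(1000):
    leaky.add_op(f"Assign_{it}")
print(len(leaky.ops))  # 1000

# Fixed pattern: build the op once before the loop and reuse it
# (in TF1: create one assign op fed by a placeholder, then sess.run it).
fixed = Graph()
fixed.add_op("Assign")
for it in range(1000):
    pass  # reuse the existing op; graph size stays constant
print(len(fixed.ops))  # 1
```

The slowdown compounds because every `sess.run` has to work with an ever-larger graph, which is consistent with the report below that training degrades over 1–2 hours.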


LLCF commented May 8, 2018

I don't think so. Even when I comment out "global_step.assign(it).eval()", GPU utilization is 0% most of the time. I am sure the GPU is waiting for data.


abin24 commented May 8, 2018

Ok, I don't know what happened. I just added the line sess.graph.finalize() after the line sess.run(init_op) and commented out "global_step.assign(it).eval()". Training is 45× faster than the original version. BTW, the original version gets about 23× slower after roughly 1–2 hours.
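`sess.graph.finalize()` makes the graph read-only, so any accidental op creation inside the training loop raises immediately instead of silently growing the graph and slowing training. A toy pure-Python illustration of that fail-fast behavior (the `Graph` class is hypothetical, not the real TF API):

```python
class Graph:
    """Toy model of a finalizable graph."""
    def __init__(self):
        self.ops = []
        self._finalized = False

    def add_op(self, name):
        if self._finalized:
            raise RuntimeError("Graph is finalized and cannot be modified.")
        self.ops.append(name)

    def finalize(self):
        self._finalized = True

g = Graph()
g.add_op("init_op")
g.finalize()  # analogous to sess.graph.finalize() after sess.run(init_op)
try:
    g.add_op("Assign_0")  # the leaky in-loop assign would fail fast here
except RuntimeError as e:
    print(e)  # Graph is finalized and cannot be modified.
```

This is why finalizing the graph right after initialization is a cheap safety net: it turns a gradual, hard-to-diagnose slowdown into an immediate error at the offending line.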

duyanfang123 commented

I want to ask: where are the training and test data? I need them urgently.
