
Sampling: incremental printing of generated text #71

Open
gwern opened this issue Apr 23, 2016 · 2 comments


gwern commented Apr 23, 2016

While using sample.lua to generate text to check how quality changes between checkpoints, it would be nice if each character were printed as it is generated (as char-rnn does), so you don't have to wait for the full sample to be created before you see any output. The wait is particularly painful when you need to generate large amounts of text: for my metadata training, I need 10-20k characters to see it sample from several different authors, and it takes something like an hour to do that much.

sample.lua calls into LanguageModel.lua, and inside its sample function, I think this could be done by adding a print call on the next_char variable?
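A minimal sketch of the idea in plain Lua (no torch; `sample_one` is a made-up stand-in for whatever produces the next character in the real sampling loop): `io.write` plus `io.flush` emits each character as soon as it is produced, instead of waiting for the full sample.

```lua
-- Minimal sketch, plain Lua (no torch): emit each character as soon as
-- it is sampled rather than accumulating the whole sample first.
-- `sample_one` is a dummy stand-in for the model's sampling step.
function sample_one(i)
  return string.char(96 + i)  -- dummy: returns 'a', 'b', 'c', ...
end

for i = 1, 5 do
  local next_char = sample_one(i)
  io.write(next_char)  -- print without a trailing newline
  io.flush()           -- push the character out immediately, even into a pipe
end
io.write("\n")
```

In sample.lua itself the same two calls would go inside the sampling loop, right after next_char is produced (where exactly depends on the sampling code in the checkpointed model).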

@AlekzNet

Have a look at this PR: #8

@danindiana

I agree that showing the output as it is generated is desirable in some cases and a user-friendly aspect of char-rnn. Even more troubling, however, is that even when I pipe the output through something like | tee to write a file, torch-rnn doesn't show the output at all. Maybe this is a bug related to #158?
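The tee symptom is consistent with stdio buffering (an assumption, not confirmed in this thread): when stdout is a pipe rather than a terminal, C stdio, which Lua sits on top of, switches to full buffering, so nothing appears until the buffer fills or the process exits. A sketch of what the fix could look like on the Lua side:

```lua
-- Sketch: when stdout is a pipe (e.g. `th sample.lua ... | tee out.txt`),
-- stdio fully buffers it, so output only appears when the buffer fills
-- or the process exits. Disabling buffering makes it show up immediately.
io.stdout:setvbuf("no")  -- or "line" to flush at each newline

for i = 1, 10 do
  io.write(".")  -- visible immediately even through `| tee`
end
io.write("\n")
```

file:setvbuf is standard Lua ("no", "line", or "full"); alternatively, on the shell side a wrapper like GNU coreutils' stdbuf can force line buffering without touching the code.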
