Added a '-seed' option to sample.lua to allow for an identical rerun.… #219

Open · wants to merge 2 commits into master
1 change: 1 addition & 0 deletions doc/flags.md
@@ -57,3 +57,4 @@ The sampling script `sample.lua` accepts the following command-line flags:
- `-gpu`: The ID of the GPU to use (zero-indexed). Default is 0. Set this to -1 to run in CPU-only mode.
- `-gpu_backend`: The GPU backend to use; either `cuda` or `opencl`. Default is `cuda`.
- `-verbose`: By default just the sampled text is printed to the console. Set this to 1 to also print some diagnostic information.
- `-seed`: Default is 0. Set this to a non-zero value to seed the Torch RNG with that value; omit the flag or set it to zero to seed the RNG randomly. Seeding the RNG makes the output of sample.lua reproducible across runs given the same parameters and checkpoint. Honours the `-verbose` flag: when verbose is enabled, the seed in use is printed.
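The reproducibility this flag provides can be sketched outside Torch. Below is the same seed-or-random pattern in Python rather than Lua, with a toy `sample` function standing in for the real sampler (both the function and its output alphabet are hypothetical, for illustration only):

```python
import random

def sample(seed=0, length=5):
    """Toy stand-in for sample.lua: always seed the RNG, and report
    the seed actually used so any run can be reproduced later."""
    if seed == 0:
        seed = random.randrange(1, 2**31)  # 0 means "pick a seed at random"
    random.seed(seed)
    text = ''.join(random.choice('abcdef') for _ in range(length))
    return seed, text

# An unseeded run still reports a usable seed...
seed, first = sample()
# ...so rerunning with that seed reproduces the exact output.
_, second = sample(seed)
assert first == second

# An explicit seed behaves the same way across runs.
assert sample(seed=42) == sample(seed=42)
```

This mirrors the design discussed in the review below: because a seed is always set and reported, even a run you did not explicitly seed can be replayed.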
11 changes: 11 additions & 0 deletions sample.lua
@@ -13,12 +13,23 @@ cmd:option('-temperature', 1)
cmd:option('-gpu', 0)
cmd:option('-gpu_backend', 'cuda')
cmd:option('-verbose', 0)
cmd:option('-seed', 0)
local opt = cmd:parse(arg)


local checkpoint = torch.load(opt.checkpoint)
local model = checkpoint.model

if opt.seed == 0 then
opt.seed = torch.random()
end
torch.manualSeed(opt.seed)
Collaborator:

Might be simpler to force a manual seed only if the -seed argument is set:

if <flag set> then
  torch.manualSeed(...)
end

Author:

I agree that this code looks odd. The reason I did it this way is so that if you see some output you would like to see again, and you had the verbose flag set to report the seed, you can re-run with that seed included - even if you didn't originally set the seed flag. So the code always sets a seed: your provided value if you set one, and a random number if you didn't (or specified zero!). I thought this was clever ;-)

Collaborator:

Ah okay that makes sense.


local msg
msg = string.format('Random number seed: %d', opt.seed)
if opt.verbose == 1 then print(msg) end


local msg
if opt.gpu >= 0 and opt.gpu_backend == 'cuda' then
require 'cutorch'