
What are the differences between MOE and SigOpt? #450

Open
alexanderhupfer opened this issue Jan 3, 2016 · 5 comments

Comments

@alexanderhupfer

I recently came across SigOpt and used their experiment design module. Some further research then brought me here. I'm just getting started with MOE, but can I basically get the same functionality from MOE as from SigOpt's experiment design module? I would very much prefer to use MOE for academic work.

@sc932
Contributor

sc932 commented Jan 4, 2016

SigOpt and MOE differ in a few key ways (caveat, co-founder of SigOpt and co-dev of MOE):

  • The types of parameters supported (SigOpt can do integer and categorical in addition to continuous).
  • Administration: SigOpt takes care of everything for you (from GP hyperparameter selection to hosting the results). This also comes with a considerable speedup compared to running it locally.
  • Support: there is a full team at SigOpt to help with any issues.
  • Methods: SigOpt continues to push the cutting edge of research in this space and gets better every day, while MOE and several of the other popular repos in this space have been stagnant for some time.

SigOpt has a free academic plan at sigopt.com/edu if you are using it for academic work too.

@alexanderhupfer
Author

Are there any references to be cited?

@suntzu86
Contributor

  • For sigopt, I assume you'd just cite their website but I'll let them make the call.
  • For MOE, citing this repo seems appropriate.
  • For both, I know there's a JSS archive paper in the works, but I don't have an ETA.

Also, @sonium0, responding to your previous comment:
(Note: I'm a MOE co-dev and sigopt co-founder too, although I left sigopt in mid 2015.)

I'd say one of the biggest value-adds for sigopt is automation.

MOE (or rather, Gaussian Processes, aka GPs), like almost any ML algorithm, has hyperparameters (e.g., GP length scales, variance) that need to be properly tuned for it to function correctly. This is tricky business. When building MOE, we only had real data from Yelp to work with, so baking in a bunch of assumptions about hyperparameter behavior seemed overly narrow. Sigopt, on the other hand, sees examples from many fields and goes to great lengths to automatically figure stuff like this out for you.
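To make the hyperparameter point concrete, here's a toy sketch in plain numpy (not MOE's actual code — the kernel, data, and grid are all made up for illustration) showing how the GP length scale changes the log marginal likelihood, the quantity usually maximized when tuning GP hyperparameters:

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale, variance):
    """Squared-exponential (RBF) covariance between two sets of 1-D points."""
    sq_dist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dist / length_scale ** 2)

def log_marginal_likelihood(x, y, length_scale, variance, noise=1e-6):
    """Log p(y | x, hyperparameters) for a zero-mean GP with jitter `noise`."""
    K = rbf_kernel(x, x, length_scale, variance) + noise * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha                      # data-fit term
            - np.sum(np.log(np.diag(L)))          # complexity penalty
            - 0.5 * len(x) * np.log(2 * np.pi))

# Toy data: a smooth function observed at a few points.
x = np.linspace(0, 5, 8)
y = np.sin(x)

# Grid-search the length scale; a badly chosen one tanks the likelihood.
scales = [0.05, 0.5, 1.0, 5.0]
lmls = {s: log_marginal_likelihood(x, y, s, variance=1.0) for s in scales}
best = max(lmls, key=lmls.get)
```

A too-small length scale treats the data as noise, a too-large one can't bend to fit it; in a real system you'd optimize this (and the variance and noise) rather than grid-search, which is exactly the step sigopt automates.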

Similarly, MOE, like almost any open source tool, has tunable parameters (e.g., optimizer step size, number of Monte Carlo points, etc.) that substantially affect performance. Here we did try to pick some reasonable defaults, as they are less application-dependent, but it still isn't perfect. Here again, sigopt makes it so that you don't have to think about it.
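As a rough illustration of what the "number of Monte Carlo points" knob trades off (a standalone sketch, not MOE's implementation), expected improvement at a candidate point can be estimated by sampling the GP posterior, and more samples buy a tighter estimate at more cost:

```python
import numpy as np
from math import erf, exp, pi, sqrt

rng = np.random.default_rng(0)

def expected_improvement_mc(mean, std, best_so_far, n_samples):
    """Monte Carlo estimate of expected improvement at one candidate point.

    Samples the GP posterior N(mean, std^2) and averages the improvement
    over the best value observed so far (minimization convention).
    """
    samples = rng.normal(mean, std, size=n_samples)
    return np.maximum(best_so_far - samples, 0.0).mean()

def expected_improvement_exact(mean, std, best_so_far):
    """Closed form for comparison: EI = (f* - mu) * Phi(z) + std * phi(z)."""
    z = (best_so_far - mean) / std
    phi = exp(-0.5 * z * z) / sqrt(2 * pi)   # standard normal pdf
    Phi = 0.5 * (1 + erf(z / sqrt(2)))       # standard normal cdf
    return (best_so_far - mean) * Phi + std * phi

# More Monte Carlo points -> a better estimate, but more compute per step.
exact = expected_improvement_exact(mean=0.0, std=1.0, best_so_far=0.5)
rough = expected_improvement_mc(0.0, 1.0, 0.5, n_samples=100)
tight = expected_improvement_mc(0.0, 1.0, 0.5, n_samples=100_000)
```

In one dimension the closed form makes Monte Carlo unnecessary, but for the joint acquisition over several points at once (the q-EI case MOE handles) there's no closed form, which is why the sample count becomes a real tunable.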

On the flip side, sigopt tries to make reasonable/automatic choices that will work well for all users. If you understand GPs well and understand your system well (aka you are an expert and not just a user), you can probably find parameters/hyperparameters that give you even better results. But this may not be worth your time/energy.

Another thing I'd point out on methods: MOE is basically just GPs (we also have multi-armed bandits but that's pretty simple). Longer term, sigopt could have a whole slew of optimization techniques to apply to customers' problems. GPs are powerful and general, but they are certainly not the best tool in every situation.
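For a sense of how simple the bandit side is relative to GPs, here's a sketch of epsilon-greedy, one of the standard multi-armed bandit strategies (illustrative only — the arm means, noise, and schedule are invented, and MOE's bandit module may differ):

```python
import random

def epsilon_greedy(arm_means, n_pulls, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: with probability epsilon pull a random arm
    (explore); otherwise pull the arm with the best observed average
    reward (exploit). Returns how often each arm was pulled."""
    rng = random.Random(seed)
    counts = [0] * len(arm_means)
    totals = [0.0] * len(arm_means)
    for _ in range(n_pulls):
        if rng.random() < epsilon or 0 in counts:
            arm = rng.randrange(len(arm_means))   # explore (or untried arms)
        else:
            arm = max(range(len(arm_means)),
                      key=lambda a: totals[a] / counts[a])
        reward = arm_means[arm] + rng.gauss(0, 0.1)  # noisy observed reward
        counts[arm] += 1
        totals[arm] += reward
    return counts

# The best arm (mean 0.9) should soak up most of the pulls.
counts = epsilon_greedy([0.2, 0.5, 0.9], n_pulls=2000)
```

The whole method fits in a dozen lines with no model of the objective at all, which is the contrast being drawn: GPs earn their complexity when evaluations are expensive and structure in the search space is worth exploiting.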

At the core, I'd say:

  • If you want to learn about GPs and learn about MOE (or if that's core to your research), this repo could be interesting/useful/valuable to you.
  • If you just want to know the best parameters for your system and don't care about optimization, then give sigopt a whirl.

And lastly, as for support, I think for MOE there's mostly just me. I try to get back reasonably quickly on questions, although as noted above, new feature development will not be so quick.

@Palzer

Palzer commented May 11, 2016

Hi,

I checked out your academic plan (I'm a student), and the only issue is that it seems limited to 3 experiments. I am working on my Masters and would need more than that. What's my best option?

@suntzu86
Contributor

btw this is the paper I mentioned earlier: http://arxiv.org/abs/1602.05149

And you may want to reach out to sigopt directly for questions about the limit on the number of experiments.
