What are the differences between MOE and SigOpt? #450
SigOpt and MOE differ in a few key ways (caveat: I'm a co-founder of SigOpt and a co-developer of MOE).
SigOpt also has a free academic plan at sigopt.com/edu if you are using it for academic work.
Are there any references to be cited?
Also, @sonium0, responding to your previous comment: I'd say one of the biggest value-adds for SigOpt is automation. MOE (or rather, Gaussian processes, aka GPs), like almost any ML algorithm, has hyperparameters (e.g., GP length scales and variance) that need to be properly tuned for it to function correctly. This is tricky business. When building MOE, we only had real data from Yelp to work with, so baking in a bunch of assumptions about hyperparameter behavior seemed overly narrow. SigOpt, on the other hand, sees examples from many fields and goes to great lengths to figure this out for you automatically.

Similarly, MOE, like almost any open-source tool, has tunable parameters (e.g., optimizer step size, number of Monte Carlo points) that substantially affect performance. Here we did try to pick some reasonable defaults, since they are less application-dependent, but it still isn't perfect. Here again, SigOpt makes it so that you don't have to think about it. On the flip side, SigOpt tries to make reasonable, automatic choices that will work well for all users; if you understand GPs well and understand your system well (i.e., you are an expert and not just a user), you can probably find parameters and hyperparameters that give you even better results. But this may not be worth your time and energy.

Another thing I'd point out on methods: MOE is basically just GPs (we also have multi-armed bandits, but that's pretty simple). Longer term, SigOpt could have a whole slew of optimization techniques to apply to customers' problems. GPs are powerful and general, but they are certainly not the best tool in every situation. At the core, I'd say: MOE gives you the raw GP machinery to run and tune yourself, while SigOpt packages that machinery as an automated service.
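To make the hyperparameter point concrete, here is a minimal sketch of fitting GP hyperparameters (length scale and signal variance) by maximizing the log marginal likelihood. It uses scikit-learn as a stand-in for MOE's GP internals; the toy data, kernel bounds, and noise level are all illustrative assumptions:

```python
# Sketch: GP hyperparameter fitting via marginal likelihood maximization.
# scikit-learn stands in for MOE's GP internals; all data/settings are toy.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy noisy observations of a 1-D function.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 5.0, size=(20, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(20)

# ConstantKernel ~ signal variance, RBF length_scale ~ GP length scale:
# exactly the hyperparameters discussed above.
kernel = ConstantKernel(1.0, (1e-3, 1e3)) * RBF(1.0, (1e-2, 1e2))

# fit() maximizes the log marginal likelihood over the kernel
# hyperparameters; restarts reduce the risk of a bad local optimum.
gp = GaussianProcessRegressor(kernel=kernel, alpha=0.01,
                              n_restarts_optimizer=10)
gp.fit(X, y)

print("fitted kernel:", gp.kernel_)
print("log marginal likelihood:", gp.log_marginal_likelihood_value_)
```

If the fitted values land in a bad local optimum, or the bounds are chosen poorly, the GP's predictions (and hence its suggestions) degrade, which is exactly the tuning burden described above.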
And lastly, as for support: for MOE there's mostly just me. I try to get back reasonably quickly on questions, although, as noted above, new feature development will not be so quick.
Hi, I checked out your academic plan (I'm a student), and the only issue is that it seems limited to 3 experiments. I am working on my Master's and would need more than that. What's my best option?
By the way, this is the paper I mentioned earlier: http://arxiv.org/abs/1602.05149. Also, you may want to reach out to SigOpt directly for questions about the limit on the number of experiments.
I recently came across SigOpt and used their experiment design module. Some further research then brought me here. I'm just getting started with MOE, but can I basically get the same functionality from MOE as from SigOpt's experiment design module? I would very much prefer to use MOE for academic work.
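For reference, the basic suggest-evaluate-update loop is available through MOE's easy interface. Below is a sketch adapted from the example in MOE's README; it assumes a MOE server is running locally with default settings, and `objective` is a hypothetical stand-in for a real experiment:

```python
# Sketch of a sequential optimization loop with MOE's easy interface,
# adapted from the MOE README example (assumes a local MOE server).
import math
import random

from moe.easy_interface.experiment import Experiment
from moe.easy_interface.simple_endpoint import gp_next_points
from moe.optimal_learning.python.data_containers import SamplePoint

def objective(x):
    """Hypothetical noisy 2-D function to minimize; replace with a real experiment."""
    return math.sin(x[0]) * math.cos(x[1]) + random.uniform(-0.02, 0.02)

# Search domain: x[0] in [0, 2], x[1] in [0, 4].
exp = Experiment([[0, 2], [0, 4]])

# Bootstrap with one observed point: (coordinates, value, noise variance).
exp.historical_data.append_sample_points(
    [SamplePoint([0, 0], objective([0, 0]), 0.05)])

for _ in range(10):
    # Ask MOE for the point with the highest expected improvement.
    next_point = gp_next_points(exp)[0]
    # Evaluate it and feed the result back to inform the GP.
    exp.historical_data.append_sample_points(
        [SamplePoint(next_point, objective(next_point), 0.05)])
```

Whether this matches SigOpt's experiment design module feature for feature is a question for the SigOpt folks, but the core loop itself is the same.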