Can Talos Work with Unsupervised Learning on LSTM/Autoencoder Model #533
Comments
My model is unsupervised, so I do not have the "y" dataset. Does Talos only work for supervised models?
I am also wondering this. @krwiegold, did you ever find a way to make this work?
@alexcwsmith I could never get it to work, unfortunately. I had to give up on Talos.
Let's have a look into this. Some of the higher-priority items, like full support for multi-input models and distributed experiments, have now been completed, so I think this could very well be next. It's a very interesting problem, given there is no ground truth to optimize for.
@krwiegold @alexcwsmith can you help by sharing one or two complete code examples in Google Colab where such a model runs without Talos? Also, have you had any thoughts on possible ways to implement this support in Talos?
Thanks @mikkokotila. The simplest example, I think, is the VAE example from PyTorch here: https://github.com/pytorch/examples/tree/main/vae. As for possible ways to implement Talos with a VAE, simply running a scan to find parameters that minimize the loss, or the KL divergence, would be a great start.
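The core idea suggested above, scanning for the parameter combination that minimizes an unsupervised objective, can be sketched without any deep-learning framework: treat the input as its own target and rank parameter combinations by reconstruction loss. This is a minimal illustrative sketch, not Talos code; the autoencoder is replaced by a trivial quantize-and-restore function, and all names here (`reconstruction_loss`, `unsupervised_scan`) are made up for the example.

```python
from itertools import product

def reconstruction_loss(data, params):
    # Stand-in for training an autoencoder: "compress" the input by snapping
    # each value to a grid whose coarseness is a hyperparameter, then measure
    # MSE between the input and its reconstruction. No labels are needed.
    step = params['step']
    recon = [round(v / step) * step for v in data]
    return sum((a - b) ** 2 for a, b in zip(data, recon)) / len(data)

def unsupervised_scan(data, param_grid):
    # Grid-search analogue of a Scan with x=data, y=data: evaluate every
    # parameter combination and keep the one with the lowest objective.
    keys = sorted(param_grid)
    best = None
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        loss = reconstruction_loss(data, params)
        if best is None or loss < best[0]:
            best = (loss, params)
    return best

data = [0.12, 0.48, 0.91, 0.33]
print(unsupervised_scan(data, {'step': [0.5, 0.1, 0.01]}))
```

Swapping the toy objective for a real model's validation loss (or KL divergence, for a VAE) gives the same search structure, which is why passing the input as both `x` and `y` is a natural way to express this in a supervised-style API.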
Hi, I am trying to use Talos to optimize the hyperparameters of an unsupervised LSTM/Autoencoder model. The model works without Talos. Since I do not have y data (no known labels / dependent variables), I created my model as shown below. The input data is called "scaled_data".
```python
# set parameters for Talos
p = {'optimizer': ['Nadam', 'Adam', 'sgd'],
     'losses': ['binary_crossentropy', 'mse'],
     'activation': ['relu', 'elu']}

# create autoencoder model
def create_model(X_input, y_input, params):
    autoencoder = Sequential()
    autoencoder.add(LSTM(12, input_shape=(scaled_data.shape[1], scaled_data.shape[2]),
                         activation=params['activation'], return_sequences=True,
                         kernel_regularizer=tf.keras.regularizers.l2(0.01)))
    autoencoder.add(LSTM(4, activation=params['activation']))
    autoencoder.add(RepeatVector(scaled_data.shape[1]))
    autoencoder.add(LSTM(4, activation=params['activation'], return_sequences=True))
    autoencoder.add(LSTM(12, activation=params['activation'], return_sequences=True))
    autoencoder.add(TimeDistributed(Dense(scaled_data.shape[2])))
    autoencoder.compile(optimizer=params['optimizer'], loss=params['losses'],
                        metrics=['acc'])

scan_object = talos.Scan(x=scaled_data, y=scaled_data, params=p,
                         model=create_model, experiment_name='LSTM')
```
My error says: TypeError: create_model() takes 3 positional arguments but 5 were given.
How am I passing five arguments? Any ideas how to fix this issue? I looked through the documentation and other questions, but don't see anything covering an unsupervised model. Thank you!