Troubleshooting
This means that subprocess32, a dependency used by matplotlib, is not building, which is generally associated with Python 2. The way to overcome this issue is to first install an older version of matplotlib (this is ok):
pip install matplotlib==1.5.3
This might happen on some older systems with Python 2. Overcome it by first installing an older numpy version:
pip install numpy==1.14.5
This error comes as a result of listing optimizers as string values in the params dictionary instead of the actual optimizer objects, which lr_normalizer requires.
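A minimal sketch of the intended setup, assuming the lr_normalizer import path talos.model.normalizers (it may differ between Talos versions) and binary crossentropy as an illustrative loss:

from keras.optimizers import Adam, Nadam
from talos.model.normalizers import lr_normalizer  # import path may differ by Talos version

# actual optimizer classes, not the strings 'adam' / 'nadam'
p = {'optimizer': [Adam, Nadam],
     'lr': [0.5, 1, 2]}

# inside the model function, the class is instantiated with a learning rate
# scaled by lr_normalizer for that particular optimizer:
# model.compile(optimizer=params['optimizer'](lr=lr_normalizer(params['lr'], params['optimizer'])),
#               loss='binary_crossentropy')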
When not using lr_normalizer get ValueError: Could not interpret optimizer identifier: <class 'keras.optimizers.Adam'>
This is the reverse of the above; when lr_normalizer is not used, string values for optimizers should be used in the params dictionary.
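Conversely, a minimal sketch without lr_normalizer (the parameter values are illustrative):

# strings work when lr_normalizer is not involved; Keras resolves them
# to default-configured optimizer instances
p = {'optimizer': ['adam', 'nadam']}

# inside the model function:
# model.compile(optimizer=params['optimizer'], loss='binary_crossentropy')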
When using a string value for activation and get TypeError: unsupported operand type(s) for +: 'int' and 'numpy.str_'
For example, if the activations are 'relu' and 'elu', this can be resolved simply by:
from keras.activations import relu, elu
Then instead of using a string value in the params dictionary, use the actual object (e.g. relu).
NOTE: it is very common to get this wrong, and the error messages do not indicate that anything is wrong with activation. In fact the error messages will point at everything else as you try to hopelessly troubleshoot this. This is an important one to remember, friends!
Read above. There may be other varieties of this as well, but the solution is always simple: use the actual activation objects (as opposed to strings) in the params dictionary.
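A minimal sketch of a params dictionary that avoids this, assuming Dense layers (the layer size is arbitrary):

from keras.activations import relu, elu

# actual activation objects, not the strings 'relu' / 'elu'
p = {'activation': [relu, elu]}

# inside the model function:
# model.add(Dense(12, activation=params['activation']))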
The parameter for the first layer's neuron count needs to be called 'first_neuron'.
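For example (a sketch; the values are arbitrary):

p = {'first_neuron': [16, 32, 64]}

# inside the model function, the first layer reads its size from the params dictionary:
# model.add(Dense(params['first_neuron'], input_dim=x_train.shape[1], activation='relu'))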
Whenever hidden_layers is applied in the model, the hidden_layers and dropout parameters need to be included in the params dictionary.
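A sketch of a params dictionary that satisfies this, assuming the hidden_layers helper lives in talos.model.layers (the import path and the exact keys it reads may differ between Talos versions):

from keras.activations import relu
from talos.model.layers import hidden_layers  # import path may differ by Talos version

# 'hidden_layers' and 'dropout' must both be present whenever hidden_layers() is used;
# depending on the version it may also read keys such as 'first_neuron' and 'activation'
p = {'first_neuron': [32],
     'hidden_layers': [0, 1, 2],
     'dropout': [0.0, 0.25],
     'activation': [relu]}

# inside the model function, after the first layer:
# hidden_layers(model, params, 1)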
This happens when the input model has:
return model, out
You fix this by using the right order for the objects:
return out, model
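Putting it together, a minimal model-function skeleton (the names, layers, and loss are illustrative, not Talos requirements):

from keras.models import Sequential
from keras.layers import Dense

def sketch_model(x_train, y_train, x_val, y_val, params):
    model = Sequential()
    model.add(Dense(params['first_neuron'], input_dim=x_train.shape[1], activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy')
    # 'out' is the History object returned by fit()
    out = model.fit(x_train, y_train, validation_data=(x_val, y_val), verbose=0)
    # the history object comes first, then the model
    return out, model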