RuntimeError: Cannot clone object - Cross Validation

Hi Team,



I have completed the Neural Networks course and am trying to explore the downloadables on my Windows machine. I have Anaconda set up and everything looks good. I can successfully execute LSTM- Price Prediction-Upload.ipynb in my Jupyter notebook. But when I try to execute Cross Validation in Keras-Upload.ipynb, I'm facing the error below. Please help me resolve this, and kindly let me know the code changes needed to fix it.

 

from sklearn.model_selection import GridSearchCV

# model is the KerasClassifier wrapper defined in the previous cell
neurons_params = [225, 150, 175]
act_1_params = ['tanh', 'sigmoid', 'relu']
dropout_ratio_params = [0.18, 0.30, 0.23]
param_grid = dict(neurons=neurons_params, act_1=act_1_params, dropout_ratio=dropout_ratio_params)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1, verbose=2)

grid_result = grid.fit(X_train, y_train)

I'm getting the error in the lines above. Below is the error message.
 
RuntimeError: Cannot clone object <keras.wrappers.scikit_learn.KerasClassifier object at 0x000000F2204952E8>, as the constructor either does not set or modifies parameter class_weight

 

Hi Arun,



We have re-tested the code on our end and could not replicate this error. Could you please share the IPython notebook and any other files related to this particular error with us? We will debug the problem and get back to you.

I haven't made any modifications to the notebook provided in the Download section. The only difference I can find is that I have used Python 3.6, because I faced a lot of issues setting up Keras with the TensorFlow backend for Python 2.7.

Can you try removing class_weights as an input variable of KerasClassifier in the previous cell and then try again?
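For reference, a minimal sketch of what that change might look like (the exact cell in the notebook may differ; create_model and class_weight are assumed names here):

from keras.wrappers.scikit_learn import KerasClassifier

# Before (assumed): class_weight passed into the wrapper's constructor,
# which sklearn's clone() cannot reproduce and so raises the RuntimeError
# model = KerasClassifier(build_fn=create_model, epochs=20, batch_size=32,
#                         verbose=2, class_weight=class_weight)

# After: drop class_weight from the constructor
model = KerasClassifier(build_fn=create_model, epochs=20, batch_size=32, verbose=2)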

I tried removing class_weights and it started running. To reduce the data, I modified the starting date to 01/01/2018. I'm using a Google Cloud Windows 2012 server with 3.7 GB of RAM. Please let me know how I can solve the error below.

529 # Delete the underlying status object from memory otherwise it stays alive
530 # as there is a reference to status from this from the traceback due to
ResourceExhaustedError: OOM when allocating tensor with shape[600,750] and type float on /job:l

The error seems to be cloud-specific, and the ResourceExhaustedError suggests that the RAM is insufficient for the task. In such cases the model usually starts training but gets stuck after some time. You can try reducing the number of neurons in each layer of the model and run it again.
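For example, shrinking the layer sizes in the grid (the values below are only illustrative, not tuned) would look like:

# Smaller layer sizes mean smaller weight matrices, which lowers peak
# memory usage during training
neurons_params = [50, 75, 100]
param_grid = dict(neurons=neurons_params, act_1=act_1_params, dropout_ratio=dropout_ratio_params)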

Thanks for your answer. I have set it up on my Mac laptop, was able to proceed, and it ran successfully. I used the code you provided and did not make any modifications. When I ran "grid_result = grid.fit(X_train, y_train)", it took around 20 to 30 minutes and I can see only the output below. In the next step, I generated a file "best_params_.sav" using pickle.
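For context, the step that writes that file is typically something like the following sketch (the filename matches what is mentioned above; the exact code in the notebook may differ):

import pickle

# Persist the best hyperparameters found by the grid search so they can
# be reloaded later without re-running the search
with open("best_params_.sav", "wb") as f:
    pickle.dump(grid_result.best_params_, f)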

Epoch 19/20 - 0s - loss: nan - acc: 0.0000e+00 - val_loss: nan - val_acc: 0.0000e+00
Epoch 20/20 - 0s - loss: nan - acc: 0.0000e+00 - val_loss: nan - val_acc: 0.0000e+00
[CV] … neurons=175, dropout_ratio=0.23, act_1=relu, total= 29.0s

filepath = "CV_weights-best.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=2, save_best_only=True, mode='auto')
training = best_model.fit(X_train, y_train, epochs=200, batch_size=32, verbose=2, validation_split=0.2, callbacks=[checkpoint], class_weight=class_weight)

Finally, I got the output below. The file "CV_weights-best.hdf5" is not generated, so I could not run the next step.

Epoch 00199: val_loss did not improve from inf
Epoch 200/200 - 3s - loss: nan - acc: 0.0000e+00 - val_loss: nan - val_acc: 0.0000e+00
Epoch 00200: val_loss did not improve from inf

Something is wrong somewhere. Now I'm using Python 2.7. Please help.
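As a side note on why the file never appears: with save_best_only=True, ModelCheckpoint writes the file only when the monitored value improves, and a nan val_loss never compares as an improvement. A small sketch of the comparison it effectively performs:

import numpy as np

# With monitor='val_loss', the checkpoint keeps the file only when the new
# value is less than the best seen so far; nan < inf is False, so
# "CV_weights-best.hdf5" is never written while the loss stays nan
print(np.less(np.nan, np.inf))  # False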

Sorry for splitting this up; the forum didn't allow me to add it in a single comment. Also, please enable uploading screenshots in this forum, which would help.

Hi Arun, the error seems to suggest that the algorithm is not able to learn. Is it possible that the target label is regression data while the last layer in the model is a sigmoid or some other classification output?
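A quick, rough check (assuming y_train is the array passed to fit) is to look at the values the labels actually take:

import numpy as np

# A small set of discrete values (e.g. [0, 1] or [-1, 1]) indicates a
# classification target; many continuous values indicate regression data,
# which a sigmoid/softmax output layer is not suited for
print(np.unique(y_train))
print(y_train.min(), y_train.max())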

Thanks for the suggestion. I'm using the script provided in the Download section, "Cross-Validation-in-Keras-Upload.ipynb", without any modification. I'm kind of lost here. If you can help me out with what code changes need to be made on my side, that would be a great help. I'm using the same SBI data provided.

Arun, we are looking into your query and will get back to you on it. Thanks.

Thank you so much… Looking forward to your response. If I can make it work, I can enhance it in my own way.

Hi Arun, the runtime error for the class_weights can be fixed using the code below:

from sklearn.pipeline import Pipeline

neurons_params = [225, 150, 175]
act_1_params = ['tanh', 'sigmoid', 'relu']
dropout_ratio_params = [0.18, 0.30, 0.23]
pipeline = Pipeline(steps=[('clf', model)])
param_grid = dict(clf__neurons=neurons_params, clf__act_1=act_1_params, clf__dropout_ratio=dropout_ratio_params)
grid = GridSearchCV(estimator=pipeline, param_grid=param_grid, n_jobs=1, verbose=2)

grid_result = grid.fit(X_train, y_train, clf__class_weight=class_weight)

We have updated the same in the course downloadables now. You can try running the new files. If you still face an issue in training the model, please let us know. We will connect with you and try to solve it.

Hi Arun, if you still continue to get similar results where the model is not learning anything, then please check whether you are using an activation function like relu, whose output is never negative (its range is [0, ∞)), while your target dataset (y) contains labels ranging from -1 to 1.
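A rough sketch of two possible fixes, assuming the labels really do take values in the range -1 to 1 (the exact layers in the notebook may differ):

import numpy as np

# Option 1: remap labels from {-1, 1} to {0, 1} so they match an output
# activation that cannot go negative (e.g. sigmoid)
y_train_01 = np.where(y_train == -1, 0, y_train)

# Option 2 (sketch): keep the labels as they are and use an output layer
# whose range matches them, e.g. Dense(1, activation='tanh')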

Hi Varun, thanks for your prompt response. I have just downloaded the latest code from Quantra and again executed "Cross Validation in Keras-Upload". I faced the same kind of issue. I never modified any code in the downloaded file; I used it as-is. I checked and found that it is using relu, and I suppose that could be the issue. Can you please let me know what changes I can make to fix it?

act_1_params = ['tanh', 'sigmoid', 'relu']

Hi Arun, could you please post your contact number, as our team would like to connect with you and solve this issue? Thank you.