GPU Computing vs. CPU Computing of LSTM

I was trying to use an NVIDIA RTX A4000 for training my LSTM and experienced the following…

1. Warning:

lstm_1 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.

I changed my activation function from relu to tanh and added

recurrent_activation='sigmoid'

and it started working.

My question is: is there any possibility that I can stick with my old activation functions and still leverage the faster computing capabilities of my GPU and the cuDNN kernel? Please do help.
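
For reference, here is a minimal sketch of the change described above, assuming TensorFlow/Keras (the layer size and input shape are placeholders):

```python
import tensorflow as tf

# Original configuration: 'relu' makes the layer ineligible for the fused
# cuDNN kernel, so Keras falls back to the generic GPU kernel and warns.
lstm_relu = tf.keras.layers.LSTM(64, activation="relu")

# Adjusted configuration: tanh + sigmoid satisfy the cuDNN activation criteria.
lstm_cudnn = tf.keras.layers.LSTM(
    64,
    activation="tanh",
    recurrent_activation="sigmoid",
)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(30, 8)),  # (timesteps, features), placeholder values
    lstm_cudnn,
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```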

Hello Vinu,



Great to know you are trying to use your GPU.



As Varun explained to you in another post, there is no way to run the NVIDIA cuDNN kernel with the relu activation function.



The cuDNN LSTM kernel runs only with the tanh activation function.
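
For completeness, the TensorFlow/Keras documentation lists a few more conditions besides tanh; a short sketch of a fully cuDNN-eligible layer (the layer size is a placeholder):

```python
import tensorflow as tf

# Per the tf.keras.layers.LSTM documentation, the fused cuDNN kernel is used
# only when all of the following hold; relu fails the first condition, so no
# configuration keeps relu and still gets the cuDNN path.
lstm_cudnn_eligible = tf.keras.layers.LSTM(
    64,
    activation="tanh",               # must be tanh
    recurrent_activation="sigmoid",  # must be sigmoid
    recurrent_dropout=0.0,           # must be 0
    unroll=False,                    # layer must not be unrolled
    use_bias=True,                   # bias must be enabled
)
# In addition, inputs must not be masked, or the mask must be strictly
# right-padded, for the cuDNN kernel to be selected.
```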



I hope this helps,



José Carlos

Thanks, José. It answers my question.

Thanks to you, Vinu.



Regards,



José Carlos