I was trying to use an Nvidia RTX A4000 for training my LSTM and ran into the following:
1. Warning:
lstm_1 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
I changed my activation function from relu to tanh, added
recurrent_activation='sigmoid'
and it started working.
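For context, this is roughly the change I made (the layer size is just a placeholder, not my actual model):

```python
import tensorflow as tf

# Old layer: activation='relu' does not meet the cuDNN criteria,
# so TF falls back to the generic GPU kernel and prints the warning above.
# lstm_1 = tf.keras.layers.LSTM(128, activation='relu')

# New layer: activation='tanh' + recurrent_activation='sigmoid'
# (the Keras defaults) satisfy the cuDNN requirements.
lstm_1 = tf.keras.layers.LSTM(128,
                              activation='tanh',
                              recurrent_activation='sigmoid')
```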
My question: is there any possibility that I can stick with my old activation functions (relu) and still leverage the faster computation of my GPU and the cuDNN kernel? Please do help.