Course Name: Deep Reinforcement Learning in Trading, Section No: 22, Unit No: 7, Unit type: Document
Hi, I am running the code locally and the model takes about 5 minutes for a single trade. The strange thing is that it is not generating any workload on my CPU or GPU. If this is a computationally heavy model, shouldn't the Python app be using a high percentage of my CPU or GPU? Task Manager is showing 0%. Test_mode is set to "False".
Hi Fernando,
This is an interesting observation! It is possible that the computation is running under a process with a different name, so the CPU and GPU load does not show up under the notebook's process while it runs. However, let us check and analyse this and get back to you.
Thanks,
Akshay
There is no load at all on the CPU or GPU… It's going to take about 3 days to train the model sequentially… I'm using an Intel(R) Core™ i7-7700HQ with 16 GB RAM, and the GPU is a GeForce GTX 1060.
Att,
Fernando
Hi Fernando,

The above graph shows the CPU utilisation while running the Jupyter Notebook. As you can see from the graph, the CPU utilisation increases significantly when we run the notebook.
You can check the same and plot the CPU utilisation using the following lines of code:
import psutil
import time
import matplotlib.pyplot as plt

# Function to get CPU utilization percentage
def get_cpu_utilization():
    return psutil.cpu_percent(interval=1)

# Initialize lists to store data
timestamps = []
cpu_utilization = []

# Number of data points to collect
num_points = 100

# Main loop: record a timestamp and a CPU utilization sample each iteration
for _ in range(num_points):
    timestamp = time.time()
    utilization = get_cpu_utilization()
    timestamps.append(timestamp)
    cpu_utilization.append(utilization)
    time.sleep(1)

# Plot the data
plt.plot(timestamps, cpu_utilization)
plt.xlabel('Time')
plt.ylabel('CPU Utilization (%)')
plt.title('CPU Utilization Over Time')
plt.show()
Hope this helps!
Thanks,
Akshay
Hi Akshay,
It was indeed running under a different process name, as you said.
What machine specs do I need to run this faster? Or is it possible to run it on the GPU?
Thank you for your help.
Att,
Fernando
Hi Fernando,
I am running the notebook on a system configuration similar to yours, and it takes roughly 15-20 seconds per trade on average. It is possible that other system processes with a higher priority than the notebook process are consuming the memory and CPU in your case.
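To check which processes are actually consuming the CPU, you can list the busiest ones with psutil (the same library used in the plotting snippet above). This is a minimal sketch; the one-second measurement window and the "top 5" cutoff are arbitrary choices:

```python
import time
import psutil

# The first cpu_percent() call per process primes its counters;
# subsequent calls report usage measured since that first call.
for p in psutil.process_iter():
    try:
        p.cpu_percent(interval=None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

time.sleep(1)  # measurement window

usage = []
for p in psutil.process_iter(attrs=["name"]):
    try:
        usage.append((p.cpu_percent(interval=None), p.info["name"]))
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

# Show the five busiest processes over the last second.
top = sorted(usage, key=lambda t: t[0], reverse=True)[:5]
for cpu, name in top:
    print(f"{cpu:6.1f}%  {name}")
```

If the training really is running, one of these entries (often a `python` or `jupyter` process under a different name) should show significant CPU time even when Task Manager's summary view looks idle.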
Hope this helps!
Thanks,
Akshay
Hi Akshay,
I found out that model.predict() has a memory-leak issue: it keeps increasing RAM usage until the kernel crashes. To fix that, I changed model.predict() to model().numpy(), and the processing speed increased a lot, to about 5 seconds per trade. Also, append() has been deprecated for DataFrames, so trade_logs = trade_logs.append(tl) needs to be replaced by trade_logs = pd.concat([trade_logs, pd.DataFrame(tl)], ignore_index=True).
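For reference, the pandas part of the change looks like this. This is a minimal sketch: the column names and row values are made up for illustration, and the real `tl` structure in the notebook may differ (note that wrapping a single dict as `pd.DataFrame([tl])` is needed, whereas a list of dicts can be passed directly):

```python
import pandas as pd

# Hypothetical existing trade log with one row.
trade_logs = pd.DataFrame([{"timestamp": "2024-01-01 09:30", "price": 100.0, "position": 0}])

# Hypothetical new trade-log entry.
tl = {"timestamp": "2024-01-01 09:35", "price": 100.5, "position": 1}

# Removed in pandas 2.0:
# trade_logs = trade_logs.append(tl, ignore_index=True)

# Replacement: wrap the new row in a one-row DataFrame and concatenate.
trade_logs = pd.concat([trade_logs, pd.DataFrame([tl])], ignore_index=True)

print(trade_logs)
```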
Thanks for your help.
Att,
Fernando
Hi Fernando,
Glad that you found the source of the issue!
Thanks,
Akshay