I tried running the provided code after changing the dataset. The code ran without issue in test mode, but when I tried it on the full dataset, it ran for more than 3 hours and then returned the following error. Thanks in advance.
This is my data.shape = (553609, 5)
This is my model summary:
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_6 (Dense) (None, 150) 22650
dense_7 (Dense) (None, 300) 45300
dense_8 (Dense) (None, 3) 903
=================================================================
Total params: 68,853
Trainable params: 68,853
Non-trainable params: 0
_________________________________________________________________
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_9 (Dense) (None, 150) 22650
dense_10 (Dense) (None, 300) 45300
dense_11 (Dense) (None, 3) 903
=================================================================
Total params: 68,853
Trainable params: 68,853
Non-trainable params: 0
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-22-3b57ee0af746> in <cell line: 6>()
      4     False
        in rl_config
      5 """
----> 6 run(bars5m, rl_config)

3 frames
<ipython-input-19-2571f8ac1e47> in run(bars5m, rl_config)
     91
     92     """---Creating a new Q-Table---"""
---> 93     inputs, targets = exp_replay.process(
     94         q_network, r_network, batch_size=rl_config['BATCH_SIZE'])
     95     env.pnl_sum = sum(pnls)

<ipython-input-18-659c4b5a5503> in process(self, modelQ, modelR, batch_size)
     74
     75     """---Calculate the reward at time t+1 for action at time t---"""
---> 76     Q_sa = np.max(modelQ.predict(state_tp1, verbose=0)[0])
     77
     78     if game_over:

/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
     68     # To get the full stack trace, call:
     69     # tf.debugging.disable_traceback_filtering()
---> 70     raise e.with_traceback(filtered_tb) from None
     71     finally:
     72         del filtered_tb

/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     50   try:
     51     ctx.ensure_initialized()
---> 52     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
     53                                         inputs, attrs, num_outputs)
     54   except core._NotOkStatusException as e:

InvalidArgumentError: Graph execution error:

Detected at node 'sequential/dense/Relu' defined at (most recent call last):
    File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
      return _run_code(code, main_globals, None,
    File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
      exec(code, run_globals)
    File "/usr/local/lib/python3.10/dist-packages/ipykernel_launcher.py", line 16, in <module>
      app.launch_new_instance()
    File "/usr/local/lib/python3.10/dist-packages/traitlets/config/application.py", line 992, in launch_instance
      app.start()
    File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelapp.py", line 619, in start
      self.io_loop.start()
    File "/usr/local/lib/python3.10/dist-packages/tornado/platform/asyncio.py", line 195, in start
      self.asyncio_loop.run_forever()
    File "/usr/lib/python3.10/asyncio/base_events.py", line 600, in run_forever
      self._run_once()
    File "/usr/lib/python3.10/asyncio/base_events.py", line 1896, in _run_once
      handle._run()
    File "/usr/lib/python3.10/asyncio/events.py", line 80, in _run
      self._context.run(self._callback, *self._args)
    File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 685, in <lambda>
      lambda f: self._run_callback(functools.partial(callback, future))
    File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 738, in _run_callback
      ret = callback()
    File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 825, in inner
      self.ctx_run(self.run)
    File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 786, in run
      yielded = self.gen.send(value)
    File "/usr/local/lib/
Hello Ramon,
It is difficult to pinpoint the source of the error right now, but let me get back to you. In the meantime, can you give some details about the dataset you are using?
Hello Rekhit, and thanks for taking a look at this,
My dataset is XAUUSD historical data from 2006 to 2023 (I also tried a subset from 2016 to 2023, but got the same error), intraday 1-minute CSV data resampled to 5 minutes.
Here is a link: https://drive.google.com/file/d/1S4AL8RVc8_LctvKKcbJA6SgYTtD-831P/view?usp=sharing
There is no missing data. While debugging, I added some printouts to see the time at which the error happened and double-checked the data; the data is correct. I also changed the starting point and set batch_size=32, but the error persists. Here is the log (I am running it locally):
(quantra_py) PS C:\Users\rgero\Downloads\ORO-20230723T224909Z-001> nvidia-smi
Mon Jul 24 09:28:00 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 536.25                 Driver Version: 536.25       CUDA Version: 12.2      |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                      TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf     Pwr:Usage/Cap      |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA RTX 4000 Ada Gene...   WDDM | 00000000:01:00.0 Off |                  Off |
| N/A   45C    P8          10W / 119W     | 10045MiB / 12282MiB  |      5%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|========================================================================================|
|    0   N/A  N/A     20040      C   ...naconda3\envs\quantra_py\python.exe        N/A  |
+---------------------------------------------------------------------------------------+

(quantra_py) PS C:\Users\rgero\Downloads\ORO-20230723T224909Z-001> nvidia-smi
Mon Jul 24 10:54:29 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 536.25                 Driver Version: 536.25       CUDA Version: 12.2      |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                      TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf     Pwr:Usage/Cap      |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA RTX 4000 Ada Gene...   WDDM | 00000000:01:00.0 Off |                  Off |
| N/A   47C    P8          11W / 120W     | 10045MiB / 12282MiB  |      4%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|========================================================================================|
|    0   N/A  N/A     20040      C   ...naconda3\envs\quantra_py\python.exe        N/A  |
+---------------------------------------------------------------------------------------+

(quantra_py) PS C:\Users\rgero\Downloads\ORO-20230723T224909
State shape for time step 2017-03-15 23:15:00: (150,)
--- Assembling State for time step: 2017-03-15 23:15:00 ---
--- Finished Assembling State ---
State shape for time step 2017-03-15 23:20:00: (150,)
--- Assembling State for time step: 2017-03-15 23:20:00 ---
--- Finished Assembling State ---
State shape for time step 2017-03-15 23:25:00: (150,)
--- Assembling State for time step: 2017-03-15 23:25:00 ---
--- Finished Assembling State ---
State shape for time step 2017-03-15 23:30:00: (150,)
--- Assembling State for time step: 2017-03-15 23:30:00 ---
--- Finished Assembling State ---
State shape for time step 2017-03-15 23:35:00: (150,)
--- Assembling State for time step: 2017-03-15 23:35:00 ---
--- Finished Assembling State ---
State shape for time step 2017-03-15 23:40:00: (150,)
--- Assembling State for time step: 2017-03-15 23:40:00 ---
--- Finished Assembling State ---
State shape for time step 2017-03-15 23:45:00: (150,)
--- Assembling State for time step: 2017-03-15 23:45:00 ---
--- Finished Assembling State ---
State shape for time step 2017-03-15 23:50:00: (150,)
--- Assembling State for time step: 2017-03-15 23:50:00 ---
--- Finished Assembling State ---
State shape for time step 2017-03-15 23:55:00: (150,)
--- Assembling State for time step: 2017-03-15 23:55:00 ---
--- Finished Assembling State ---
State shape for time step 2017-03-27 00:00:00: (150,)
--- Assembling State for time step: 2017-03-27 00:00:00 ---
--- Finished Assembling State ---
State shape for time step 2017-03-27 00:05:00: (114,)
----------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[45], line 6
      1 """
      2 Run the RL model on the price data
      3 Note: To run in a local machine, please change the TEST_MODE to
      4     False
        in rl_config
      5 """
----> 6 run(bars5m, rl_co
Hi Ramon,
Thanks for sharing the data. We are looking into it and will get back to you at the earliest.
Hi Ramon,
It seems that the input data you've introduced might not be fully compatible with the network architecture provided in the capstone model solution. Adjustments to the rl_config parameters and the experience replay function might be necessary to accommodate the new input data.
We are actively working on debugging the code with the provided data. Your patience is greatly appreciated; we are treating this as a priority.
Thanks
Hey Ramon,
The error occurs because the input given to the neural network is not dimensionally compatible with the input layer. This can happen because of missing or incomplete elements in the dataset. You can try out the following to resolve the error.
- Use try-except statements in the run function. This will cause the code to simply skip over the instances where the error occurs. You can use this as a solution if the instances of the error are relatively few.
- The input dimension required is 150. You can add an if-else check before providing input to the neural network so that it only passes the input along if its dimension is 150 (see the sketch after this list).
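As a rough sketch of the second point (the helper name safe_max_q is ours; modelQ and state_tp1 are the names used in the experience replay process method shown in your traceback, and 150 is the input dimension from your model summary):

import numpy as np

EXPECTED_STATE_DIM = 150  # input dimension of the Q-network, per the model summary

def safe_max_q(model, state, expected_dim=EXPECTED_STATE_DIM):
    """Return the max Q-value for a state, or None if the state is incomplete."""
    state = np.asarray(state)
    if state.shape[-1] != expected_dim:
        # Incomplete state, e.g. only 114 elements after a gap in the price data
        return None
    return np.max(model.predict(state.reshape(1, -1), verbose=0)[0])

Inside the process method, Q_sa = np.max(modelQ.predict(state_tp1, verbose=0)[0]) could then become Q_sa = safe_max_q(modelQ, state_tp1), with that sample skipped whenever the return value is None. For the first point, the call to exp_replay.process(...) inside the run function can similarly be wrapped in a try-except block that logs the failing time step and moves on.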
You can also investigate the root cause of the error in the data by adding print statements in the assemble state step at the instances where the size of the input is not 150, and then make the necessary changes.
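For example, a quick check (assuming bars5m is the 5-minute resampled DataFrame with a DatetimeIndex, as in the run(bars5m, rl_config) call) is to look for gaps larger than one bar in the index, since your log shows the short state appearing right after the jump from 2017-03-15 23:55 to 2017-03-27 00:00:

import pandas as pd

# bars5m: the 5-minute resampled price DataFrame passed to run()
gaps = bars5m.index.to_series().diff()
large_gaps = gaps[gaps > pd.Timedelta(minutes=5)]

# Each entry marks the first bar after a gap; states assembled shortly after
# these timestamps can end up with fewer than 150 elements
print(large_gaps)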
You can revisit the assemble states section in case of any doubt here: https://quantra.quantinsti.com/startCourseDetails?cid=166&section_no=11&unit_no=8&course_type=paid&unit_type=Notebook
Hope this helps!