The environment you have created is quite complex to tweak for use.
First, I need the environment code for the same, but using OpenAI Gym and Stable Baselines.
Hi Himanshu,
In the course, we have used a package-independent approach to develop the RL code. This gives you a lot of flexibility in implementing the RL model as per your needs; by using a Python package for the RL implementation, you would lose some of that flexibility.
But I do understand that you are looking for help with implementing this using OpenAI Gym, so I believe the links below might be helpful for exploring it (a minimal environment sketch follows the links).
1. Leveraging OpenAI Gym and the AnyTrading Environment for Trading - https://www.section.io/engineering-education/leveraging-openai-gym-and-the-anytrading-environment-for-trading/
2. RL with OpenAI gym: https://towardsdatascience.com/reinforcement-learning-with-openai-d445c2c687d2
3. Get started with OpenAI Gym: https://blog.paperspace.com/getting-started-with-openai-gym/
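If it helps, here is a rough, hypothetical sketch of what a minimal custom trading environment could look like with the classic Gym API (reset returns the observation; step returns observation, reward, done, info). The class name, the window-of-returns observation, and the buy/hold/sell reward logic are illustrative assumptions, not the course's environment:

```python
import numpy as np
import gym
from gym import spaces

class SimpleTradingEnv(gym.Env):
    """Hypothetical, minimal trading environment following the classic Gym API."""

    def __init__(self, prices, window=10):
        super().__init__()
        self.prices = np.asarray(prices, dtype=np.float32)
        self.window = window
        # Actions: 0 = hold, 1 = go long, 2 = go flat
        self.action_space = spaces.Discrete(3)
        # Observation: the last `window` percentage returns
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(window,), dtype=np.float32
        )

    def _get_obs(self):
        # Percentage returns over the lookback window ending at the current step
        segment = self.prices[self.t - self.window : self.t + 1]
        return np.diff(segment) / segment[:-1]

    def reset(self):
        self.t = self.window
        self.position = 0  # 0 = flat, 1 = long
        return self._get_obs()

    def step(self, action):
        if action == 1:
            self.position = 1
        elif action == 2:
            self.position = 0
        self.t += 1
        # Reward: next bar's return if long, otherwise zero
        price_return = (self.prices[self.t] - self.prices[self.t - 1]) / self.prices[self.t - 1]
        reward = self.position * price_return
        done = self.t >= len(self.prices) - 1
        return self._get_obs(), float(reward), done, {}
```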
You can also refer to Stable Baselines, a set of RL algorithm implementations based on OpenAI Baselines:
- Stable Baselines docs: https://stable-baselines.readthedocs.io/en/master/
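Assuming the hypothetical SimpleTradingEnv sketched above, training it with the (TensorFlow-based) Stable Baselines package from those docs could look roughly like this; the toy price series and hyperparameters are placeholders, not recommendations:

```python
import numpy as np
from stable_baselines import PPO2
from stable_baselines.common.vec_env import DummyVecEnv

# Toy price series purely for illustration
prices = 100.0 * np.exp(np.cumsum(0.001 * np.random.randn(1000)))

# Stable Baselines expects a vectorized environment
env = DummyVecEnv([lambda: SimpleTradingEnv(prices, window=10)])

model = PPO2("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)

# Quick rollout with the trained policy
obs = env.reset()
for _ in range(50):
    action, _states = model.predict(obs)
    obs, reward, done, info = env.step(action)
```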
You can also look into 'TradingGym', which was built along the lines of the OpenAI Gym framework:
- Trading Gym - https://github.com/cove9988/TradingGym
Thank you