Instead of using brute force to get the best variables, is there a better way?

If I have a method with arguments that need to be optimised, I find that nested loops can exponentially slow down my overall performance. I'm wondering if there are any techniques I can use instead of brute force to get the best variables.

Hi Jane,

You can try 'Random Search', which is a simple optimisation technique you can use to find the best set of input variables for a method.



One advantage of random search is that it is relatively simple to implement and can be applied to a wide variety of optimisation problems. It is also often able to explore the search space more efficiently than grid search, which is useful in high-dimensional problems where the number of possible combinations of input variables is very large.



A downside of random search is that the results may not be as accurate as those from exhaustive methods; however, you can save a lot of time and computational power.
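For illustration, here is a minimal sketch of random search in plain Python. The objective function `evaluate_strategy` and the parameter ranges are hypothetical placeholders; substitute your own method and bounds.

```python
import random

def evaluate_strategy(fast, slow):
    # Hypothetical objective: replace with your own scoring logic
    # (e.g. backtest return, Sharpe ratio, validation accuracy).
    return -((fast - 20) ** 2 + (slow - 100) ** 2)

best_score, best_params = float("-inf"), None

# Sample 100 random combinations instead of looping over every pair
for _ in range(100):
    fast = random.randint(1, 50)
    slow = random.randint(51, 200)
    score = evaluate_strategy(fast, slow)
    if score > best_score:
        best_score, best_params = score, (fast, slow)

print("Best parameters:", best_params, "score:", best_score)
```

The number of samples is the knob you turn: more draws cost more compute but get you closer to the result an exhaustive loop would find.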



It's also worth noting that random search can be combined with other optimisation techniques, such as Bayesian optimisation, which can lead to further improvement.



I hope this was helpful.

Do you have any courses focused on optimisation of parameters?

Hi Jane,

You can check out the course on Trading Alphas: Mining, Optimisation, and System Design on Quantra.

Is reinforcement learning a good option here? I know this question is broad and a little vague, but in terms of techniques that can optimise a strategy, is deep reinforcement learning close?

Hi Jane,

Reinforcement Learning (RL) can be applied, but it depends on the specific use case and how the problem statement is designed. Clarity on the problem and specific use case often reduces the need for brute force methods.



For instance, finding moving average crossover combinations for investing doesn't require looping over every parameter from 1 to 1000; since the focus is on investing, you might only consider the common windows of 50, 100, 150, and 200. Random search, by contrast, samples values at random within a given range, as in the sketch below.
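As a rough illustration, here is a sketch comparing a small, domain-informed grid with random sampling over the full range. The `backtest` function is a hypothetical placeholder for your real scoring logic.

```python
import itertools
import random

def backtest(fast, slow):
    # Hypothetical placeholder: return a score for this crossover pair,
    # e.g. the annualised return of the backtested strategy.
    return -abs(fast - 50) - abs(slow - 200)

# Domain-informed grid: only the common investing windows (a handful of runs)
windows = [50, 100, 150, 200]
pairs = [(f, s) for f, s in itertools.product(windows, windows) if f < s]
grid_best = max(pairs, key=lambda p: backtest(*p))

# Random search: the same number of draws from the full 1..1000 range
random_pairs = []
for _ in range(len(pairs)):
    fast = random.randint(1, 500)
    slow = random.randint(fast + 1, 1000)
    random_pairs.append((fast, slow))
random_best = max(random_pairs, key=lambda p: backtest(*p))

print("Grid best:", grid_best, "| Random best:", random_best)
```

The point is that a clear problem statement shrinks the search space so much that neither brute force nor a complex learner is needed.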



In conclusion, I'd say RL can be used, but the choice should be based entirely on the precise use case, and it's recommended to have a clear understanding of the problem statement rather than relying on complex methods.



I hope this helped.

What methods/models can be used to approach a grid search using AI (especially supervised learning)?

Hi Jane,



Several AI methods/models can be used to approach a grid search using supervised learning:

  1. Random Forest: Set up a range of values for hyperparameters such as the number of trees and the maximum depth of each tree, then evaluate every combination, ideally with cross-validation rather than on the training data alone.
  2. Support Vector Machines (SVM): Tune hyperparameters such as the regularisation parameter C, the kernel type, and the kernel coefficient gamma in the same way.
  3. Neural Networks: Tune hyperparameters such as the number of hidden layers, the learning rate, and the activation functions.
  4. Gradient Boosting Machines (GBM): Tune hyperparameters such as the learning rate, the number of trees, and the maximum depth of each tree.
  5. K-Nearest Neighbours (k-NN): Tune hyperparameters such as the number of nearest neighbours to consider and the distance metric to use.
However, the choice of which method/model to use depends on the specific problem at hand and the characteristics of the data.
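For a concrete example, here is a minimal sketch of a grid search over Random Forest hyperparameters using scikit-learn's GridSearchCV. The toy dataset and the parameter grid are illustrative placeholders, not recommendations.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Toy dataset standing in for your real features and labels
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

# Example hyperparameter grid: every combination gets evaluated
param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [3, 5, None],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    cv=5,                # 5-fold cross-validation for each combination
    scoring="accuracy",
)
search.fit(X, y)

print("Best parameters:", search.best_params_)
print("Best CV accuracy:", search.best_score_)
```

The same pattern works for the other models in the list; only the estimator and the parameter grid change.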

Hope this helps!

Thanks,
Akshay

What do you think of Bayesian optimisation? Does Blueshift support any libraries that can do it?


Hi Jane,

Yes, Bayesian optimisation can be used as an alternative to grid search for hyperparameter tuning in supervised learning models.

Compared to grid search, Bayesian optimisation is generally more efficient and effective at finding the optimal hyperparameters, especially when the search space is high-dimensional or complex. It works by building a probabilistic surrogate model of the objective function and using that model to choose the most promising hyperparameters to evaluate next, so it typically needs far fewer evaluations than an exhaustive grid.
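For illustration outside of Blueshift, here is a minimal sketch using the open-source scikit-optimize library (my choice of library here is an assumption; any Bayesian optimisation package would do). The objective and bounds are hypothetical placeholders.

```python
from skopt import gp_minimize  # assumes: pip install scikit-optimize

def objective(params):
    # Hypothetical objective: gp_minimize minimises, so return the
    # negative of whatever score you want to maximise.
    fast, slow = params
    return (fast - 20) ** 2 + (slow - 100) ** 2

result = gp_minimize(
    objective,
    dimensions=[(1, 50), (51, 200)],  # integer bounds for each parameter
    n_calls=30,                       # total number of evaluations
    random_state=42,
)

print("Best parameters:", result.x, "best value:", result.fun)
```

Note how few calls it needs compared with the thousands a full grid over the same ranges would take.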

Thanks,
Akshay

What library do you offer to do this on Blueshift?

Hi Jane,



Unfortunately, we do not have a library for this on Blueshift.



Thanks,

Akshay