Course Name: Options Volatility Trading: Concepts and Strategies, Section No: 16, Unit No: 27, Unit type: Document
Is there any way to assess this model? It would be helpful to describe how to perform n-step ahead forecasting using this model or tweak its parameters, and to define some useful statistical metrics to evaluate the performance (comparing y_test vs. the forecasted variable).
Hello Jorge,
Certainly! Here's a guide on how to perform n-step ahead forecasting with a GARCH(1,1) model and some useful statistical metrics for evaluating the results:
N-Step Ahead Forecasting:
- Start by estimating the GARCH(1,1) model on historical data, i.e. fitting the conditional variance equation to the past returns of the financial instrument under consideration.
- Once the model is estimated, you can forecast n steps ahead by iterating the variance equation. The one-step forecast is σ²(t+1) = ω + α·ε²(t) + β·σ²(t); for every later step the expected squared shock equals the forecast variance, so E[σ²(t+k)] = ω + (α + β)·E[σ²(t+k−1)]. A code sketch follows this list.
- Note, however, that the further into the future you forecast, the more uncertainty accumulates: small errors at each step compound, and (provided α + β < 1) the forecast converges to the unconditional variance ω / (1 − α − β) as the horizon grows.
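Below is a minimal sketch of this recursion in Python. It assumes the third-party arch package is installed; the synthetic train_returns series is a hypothetical stand-in for your own return data.

```python
import numpy as np
import pandas as pd
from arch import arch_model

# Hypothetical stand-in for real data: a series of daily returns (in %)
rng = np.random.default_rng(0)
train_returns = pd.Series(rng.standard_normal(1000))

# Fit GARCH(1,1) on the training sample
model = arch_model(train_returns, vol="Garch", p=1, q=1)
res = model.fit(disp="off")

# n-step ahead variance forecasts from the end of the training sample
n = 10
fc = res.forecast(horizon=n)
variance_path = fc.variance.iloc[-1].to_numpy()  # sigma^2 for t+1 ... t+n
vol_path = np.sqrt(variance_path)                # forecast volatility

# The same path by hand, iterating the recursion described above:
# E[sigma^2(t+k)] = omega + (alpha + beta) * E[sigma^2(t+k-1)]
omega = res.params["omega"]
alpha = res.params["alpha[1]"]
beta = res.params["beta[1]"]
manual = [variance_path[0]]  # one-step forecast as the starting point
for _ in range(n - 1):
    manual.append(omega + (alpha + beta) * manual[-1])
print(np.allclose(manual, variance_path))  # True: both paths agree
```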
Useful Statistical Metrics for Evaluation:
- After obtaining the n-step ahead forecasts, compare them with the actual observed values (y_test). Common statistical metrics for evaluating model performance include the following (a sketch computing them follows this list):
- Mean Squared Error (MSE): It measures the average squared difference between the predicted and observed values.
- Root Mean Squared Error (RMSE): It is the square root of the MSE and provides a measure of the model's prediction error in the original units of the data.
- Mean Absolute Error (MAE): It calculates the average absolute difference between the predicted and observed values.
- R-squared: It indicates the proportion of the variance in the observed values that is explained by the forecasts; values closer to 1 indicate a better fit.
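Here is a minimal sketch computing these four metrics with NumPy; y_test and y_pred are hypothetical placeholders for your observed and forecast values.

```python
import numpy as np

# Placeholder arrays; in practice y_test holds observed values (e.g. a
# realized-variance proxy) and y_pred the aligned n-step ahead forecasts
y_test = np.array([1.2, 0.9, 1.5, 1.1])
y_pred = np.array([1.0, 1.0, 1.3, 1.2])

errors = y_test - y_pred
mse = np.mean(errors ** 2)              # Mean Squared Error
rmse = np.sqrt(mse)                     # Root MSE, in the data's units
mae = np.mean(np.abs(errors))           # Mean Absolute Error
ss_res = np.sum(errors ** 2)
ss_tot = np.sum((y_test - y_test.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot              # R-squared

print(f"MSE={mse:.4f}  RMSE={rmse:.4f}  MAE={mae:.4f}  R2={r2:.4f}")
```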
Remember to split your data chronologically into training and testing sets, using the earlier portion for model estimation and the later portion for evaluating its forecasts, as sketched below. This helps ensure that your evaluation metrics provide a realistic assessment of the model's predictive power on unseen data.
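For instance, a chronological split might look like this; returns is again a hypothetical stand-in for your own series, and squared test-set returns serve as a common (noisy) proxy for the unobservable true volatility.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
returns = pd.Series(rng.standard_normal(1250))  # placeholder return series

# Time series must be split in order, never shuffled, so the test set
# stays strictly in the "future" relative to the training set
split = int(len(returns) * 0.8)
train_returns = returns.iloc[:split]  # used to estimate the GARCH(1,1)
test_returns = returns.iloc[split:]   # held back for evaluation

# True volatility is unobservable, so squared returns are a common proxy
# for y_test when scoring variance forecasts
y_test = (test_returns ** 2).to_numpy()
```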
Feel free to reach out if you have further questions or need more clarification!