DYNAMIC LOOKBACK (MOMENTUM TRADING STRATEGIES COURSE)

Hello!!  

I have been doing the MOMENTUM TRADING STRATEGIES COURSE.

There is a section in the course called Correlation Analysis, whose purpose is to obtain an optimal lookback period in order to determine lookback returns and then a trading signal.

There is an article that suggests how to calculate this lookback dynamically. Here it is:

https://seekingalpha.com/article/4043600-testing-dynamic-lookback-period-in-simple-momentum-trading-model 

It is shared in the course. 

I am a little confused about how to calculate this in Python.

I have done it, but I am not sure; I have tried two different approaches.

I would like to be 100% sure.



This is the paragraph in the article that describes how: 



In the proposed trading system, SPY is held long if it is trading with positive momentum and cash is held if SPY's momentum is negative. Momentum is measured in one of two ways. First, a fixed lookback of 252 days is used, which corresponds generally to one year. Momentum is simply the current price divided by the price 252 days ago. Second, a dynamic lookback is used which is based on the 100-day volatility of SPY, which we'll call σ (σ is the standard deviation of the last 100 days of daily returns; 100 days was chosen arbitrarily). A volatility factor "VF" is then calculated by dividing the current σ by the 100-day simple average of σ. The lookback period is then VF times 252. If the current σ is 75% of the 100-day SMA of σ, then the lookback would be 75% * 252, or 189 days. The maximum lookback period was arbitrarily chosen at 500 days. 
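
As I read it, the rule boils down to the following (my own restatement, not a quote from the article):

σ = standard deviation of the last 100 daily returns
VF = σ / (100-day simple moving average of σ)
lookback = VF * 252, capped at a maximum of 500 days
momentum = current price / price 'lookback' days ago (SPY is held long while momentum is positive, cash otherwise)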

  

 

Thank you for your help!!

Hello Alvaro, 

Appreciate your effort in trying to write code for the content in the additional reading documents. I assume you need help calculating momentum in Python. Can you list the code steps of both of your approaches so that I can share my thoughts on the code logic?



Thank you

HELLO!

Yes,



First approach:

data['Std'] = data['returns'].rolling(window=100).std()   # standard deviation last 100 days 

data['100-day Average σ'] = data['Std'].rolling(window=100).mean()  

data['VF'] = data['Std'] / data['100-day Average σ']     # VOLATILITY FACTOR



Second approach:

# everything here is inside a for loop
_2016 = 1096   # just the index in the dataset
hold = 15      # hold days

for i in range(_2016, len(data) - hold):
    volatility_100_days = data.iloc[i - 101:i].returns.std()
    average_100_days_std = data.iloc[i - 101:i].returns.expanding().std().mean()
    volatility_factor = volatility_100_days / average_100_days_std
    lookback = int(volatility_factor * 252)



Third approach:

# everything here is inside a for loop
_2016 = 1096   # just the index in the dataset
hold = 15      # hold days
sigma = []

for i in range(_2016, len(data) - hold):
    volatility_100_days = data.iloc[i - 100:i].returns.std()
    for s in range(0, 100):
        sigma.append(data.iloc[(i - 199) + s:(i - 99) + s].returns.std())
    average_100_days_std = sum(sigma) / len(sigma)
    volatility_factor = volatility_100_days / average_100_days_std
    lookback = int(volatility_factor * 252)


Here I try to calculate the average of the standard deviation over the last 100 days, but the first value in the list of 100-day standard deviations (sigma) is the standard deviation of the previous 100-day window (the one starting 200 days back), and so on…

This is my actual doubt: should I take the 100-day average standard deviation using ALL of the standard deviation values in the list, each one computed with its own 100-day lookback (as in the third approach), or, as in the second approach, should I only consider the last 100 days of returns and use expanding().std().mean()?

Hello Alvaro, 

To calculate the volatility factor, you divide the rolling 100-day standard deviation of returns (i.e. σ) by the 100-day rolling mean of σ.



So, the first approach would be apt: rolling(window=100).std() gives σ, and applying rolling(window=100).mean() to that column averages the last 100 σ values, which matches the article's 100-day SMA of σ. The expanding().std().mean() in the second approach averages expanding-window standard deviations of only the last 100 returns, which is not the same quantity.
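
For reference, here is a minimal sketch that carries the first approach through to the dynamic lookback and the momentum signal. The toy price series, the column names and the helper function are illustrative assumptions (not the course code); plug in the course dataset and adapt the names as needed.

import numpy as np
import pandas as pd

# Toy price series, purely for illustration -- replace with the SPY close prices from the course data
np.random.seed(0)
close = 100 * np.exp(np.cumsum(np.random.normal(0.0003, 0.01, 1500)))
data = pd.DataFrame({'Close': close})
data['returns'] = data['Close'].pct_change()

# sigma: rolling 100-day standard deviation of daily returns
data['Std'] = data['returns'].rolling(window=100).std()

# 100-day simple moving average of sigma
data['Avg_Std_100'] = data['Std'].rolling(window=100).mean()

# volatility factor VF = sigma / SMA_100(sigma)
data['VF'] = data['Std'] / data['Avg_Std_100']

# dynamic lookback = VF * 252, capped at 500 days as in the article
data['Lookback'] = (data['VF'] * 252).round().clip(upper=500)

# momentum with the dynamic lookback: current price / price 'Lookback' days ago
def dynamic_momentum(i):
    lb = data['Lookback'].iloc[i]
    if np.isnan(lb) or i - int(lb) < 0:
        return np.nan
    return data['Close'].iloc[i] / data['Close'].iloc[i - int(lb)]

data['Momentum'] = [dynamic_momentum(i) for i in range(len(data))]

# long SPY when the price ratio is above 1 (one reading of "positive momentum"), otherwise cash
data['Signal'] = np.where(data['Momentum'] > 1, 1, 0)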



Hope this helps!