Over the last few days I have found a series of posts where people suggest that, before training a model, the inputs/predictors should be normalized in order to increase model accuracy and reduce computational resources and training time.
Is this right?
As an example, does it make any sense to normalize a technical indicator such as an EMA or a linear regression?
Feature scaling using standardization or normalization (they mean different things: standardization rescales to zero mean and unit variance, normalization typically rescales to a fixed range such as [0, 1]) is no guarantee of a better model, but it is usually required for models that are not scale invariant, i.e. models whose estimates change when you multiply one or more features by a constant.

An example of a scale-invariant model (scaling does not matter) is ordinary least squares: the coefficients rescale to compensate and the fitted values stay the same. You may still want to scale features in this case for 1) better interpretability or 2) numerical stability (especially if one feature is several orders of magnitude larger than the others), although with modern numerical libraries the second point may not be very important these days. Another class of models that is usually scale invariant is decision trees (and the family built on them, i.e. random forests and probably xgboost), since they split on the ordering of values rather than their magnitude.

For models that are not scale invariant, you need feature scaling. The reason is that the error surface (a plot of how the objective function changes as the feature weights change) changes with scale. When all features have a comparable range, the error surface is roughly "spherical" (no ridges or artificial local minima caused by one feature's wildly large scale), which speeds up convergence and improves the chances of hitting the actual global minimum (if one exists). Any method that performs a numerical search (e.g. gradient descent, or minimization subject to constraints, like lasso or ridge), that depends on distances between points (e.g. clustering, SVM), or that uses the singular value decomposition (PCA, some image processing, etc.) falls in this category. Feature scaling is recommended for this class of models.
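Here is a minimal sketch (assuming scikit-learn and synthetic data, not your indicators) that contrasts the two scaling methods on a scale-sensitive model, an RBF-kernel SVM, where one inflated feature dominates the distance computation unless it is scaled:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.svm import SVC

# Synthetic data; blow up one feature's scale to mimic, say, a raw price
# series sitting next to a bounded indicator.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X[:, 0] *= 1e4  # one feature several orders of magnitude larger

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [
    ("no scaling", SVC()),
    ("standardization (zero mean, unit variance)",
     make_pipeline(StandardScaler(), SVC())),
    ("normalization (min-max to [0, 1])",
     make_pipeline(MinMaxScaler(), SVC())),
]:
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {model.score(X_te, y_te):.3f}")
```

Note the use of a pipeline: the scaler is fit on the training split only, so no information from the test set leaks into the preprocessing.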
Does feature scaling ever harm? Sometimes it may, especially if you apply scaling where it is not required, say in a random forest, and the raw values form a natural decision boundary (a fixed, interpretable threshold). By scaling you then lose information; this is especially true for normalization as opposed to standardization, since the min and max are data dependent.
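To see why scaling buys you nothing for tree ensembles, here is a small sketch (again scikit-learn, synthetic data) checking that rescaling a feature by a positive constant leaves a random forest's predictions unchanged, because trees only use the ordering of values when choosing split points:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=4, random_state=1)
X_scaled = X.copy()
X_scaled[:, 0] *= 2.0 ** 20  # power of two, so the rescaling is exact in floats

rf = RandomForestClassifier(random_state=1).fit(X, y)
rf_scaled = RandomForestClassifier(random_state=1).fit(X_scaled, y)

# Same ordering of values -> same splits -> same predictions (expected: True).
print(np.array_equal(rf.predict(X), rf_scaled.predict(X_scaled)))
```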
Now that you have some insight, I suggest googling this topic further if you want to learn more about it.