Consider the regression Y(t) = a0 + a1 Y(t-1) + ... + ap Y(t-p) + b1 X(t-1) + ... + bp X(t-p) + e(t).
Let (X -g-> Y) denote that the time series X(t) Granger-causes the series Y(t).
The R package `lmtest` has a function grangertest() for testing (X -g-> Y). It tests the Granger non-causality null hypothesis H0: b1 = b2 = ... = bp = 0, i.e., that the coefficients on all lagged X terms are zero.
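As a minimal illustration, one can use the ChickEgg data shipped with `lmtest`; the lag order of 3 below is an arbitrary choice for the sketch:

    library(lmtest)
    data(ChickEgg)
    # Test H0: egg does not Granger-cause chicken, using 3 lags of each series.
    # The output reports the F statistic and p-value for the restriction
    # b1 = b2 = b3 = 0 on the lagged egg terms.
    grangertest(chicken ~ egg, order = 3, data = ChickEgg)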
This is a standard procedure in econometrics textbooks, and it relies on linear regression and the F-test. The F-test is exact only if the underlying distribution of the regression errors e(t) is Normal. Normality is a strong assumption, and it is easily relaxed by using the bootstrap. generalCorr::bootGcRsq relaxes the Normality assumption and uses kernel regressions, which provide far better fits (higher R-squares).
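A sketch of how one might call it on two simulated series follows; the positional (y, x) argument order is an assumption here, so consult ?bootGcRsq for the exact argument names, lag order, and number of bootstrap replications:

    library(generalCorr)
    set.seed(99)
    x <- rnorm(60)
    y <- 0.6 * c(0, x[-60]) + rnorm(60)  # y built to depend on lagged x (illustrative data)
    # Bootstrap comparison of kernel-regression R-squares in the two
    # causal directions; call signature assumed, see ?bootGcRsq
    out <- bootGcRsq(y, x)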
generalCorr::causeSummary(mtx) is a powerful tool for assessing concurrent causality, which Granger causality does not cover.
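For instance, one might assess the causal directions among a few mtcars variables. Passing the variables as columns of a matrix follows my reading of the package vignette and is an assumption; see ?causeSummary for the exact conventions:

    library(generalCorr)
    # Columns of mtx are the variables whose pairwise causal directions we want
    mtx <- cbind(mpg = mtcars$mpg, hp = mtcars$hp, wt = mtcars$wt)
    # Summarizes which variable of each pair is identified as the "cause"
    causeSummary(mtx)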
Measures of dependence in statistics are symmetric. Why should they be?
Dependence relations in nature or data are almost never symmetric. (a) An infant depends on its mother for survival, but the mother's survival does not equally depend on the infant. (b) New York's rainfall depends on its latitude, but the latitude does not depend on New York's rainfall at all.
As a measure of dependence, the 100+ year old Pearson correlation coefficient can miserably underestimate dependence. For example, if x = 1:10 and y = sin(x), then y depends perfectly on x, so a good measure of dependence should be 1. Instead, the Pearson correlation coefficient of -0.17 underestimates it by 83%.
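This number is easy to verify in base R:

    x <- 1:10
    y <- sin(x)             # y is an exact (nonlinear) function of x
    cor(x, y)               # Pearson correlation is only about -0.17
    plot(x, y, type = "b")  # yet the deterministic sine pattern is obvious to the eye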
The gmcmtx0(mtx) function in the `generalCorr` package provides a non-symmetric matrix of generalized correlation coefficients that measures dependence correctly, and depMeas(x, y) gives a correct measure of dependence for a single pair of variables.
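A brief sketch on the sine example above; the exact layout of the output is best checked against ?gmcmtx0 and ?depMeas:

    library(generalCorr)
    x <- 1:10
    y <- sin(x)
    # Asymmetric matrix of generalized correlations r*(i|j):
    # the (i, j) entry measures how well column j explains column i
    gmcmtx0(cbind(x, y))
    # Single-number dependence measure for the pair
    depMeas(x, y)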