Comparing research over time is one of the most compelling ways of understanding the POR environment. Although tracking research carries risks at the interpretation stage (as I discuss here), there is real power in seeing change over time, because the potential biases of a single, one-off survey are mitigated.

When the same survey is repeated with the same sample and other design components held constant, an observed statistical change from T1 to T2 must reflect something that changed in the environment.

Tracking research has, however, one fundamental flaw in practice: it is only a powerful decision-making tool when it shows a clear, actionable trend in one direction (especially after accounting for sampling error).
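One way to make "accounting for sampling error" concrete: before reading a wave-over-wave movement as a real trend, check whether it exceeds what chance alone would produce. A minimal sketch using a two-proportion z-test (the function name is mine, and it assumes simple random samples in each wave, which real tracking designs often complicate with weighting):

```python
from math import sqrt
from statistics import NormalDist

def change_is_significant(p1, n1, p2, n2, alpha=0.05):
    """Two-proportion z-test: is the T1 -> T2 shift larger than sampling error?

    p1, p2: observed proportions at each wave (e.g. 0.50 for 50%)
    n1, n2: sample sizes at each wave
    Returns True if the change is statistically significant at level alpha.
    """
    # Pool the two waves to estimate the standard error under the
    # null hypothesis that nothing actually changed
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_value < alpha

# A 2-point shift on n=1000 per wave sits within sampling error:
print(change_is_significant(0.50, 1000, 0.52, 1000))  # False
# A 7-point shift on the same sample sizes does not:
print(change_is_significant(0.50, 1000, 0.57, 1000))  # True
```

The practical point is that with typical wave sizes of around a thousand respondents, shifts of a couple of points are indistinguishable from noise, which is exactly why a single wave-over-wave wobble is not yet a trend.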

All too often, there is either too much change or too little for the client. I have conducted studies where both have occurred, and both scenarios are problematic. If nothing changes, there is an inevitable sense of complacency ("we are fine") or dejection that nothing has been accomplished (when much is perceived to have been done to move the metric). When there is too much change, it is hard to link any of it to a successful strategy.

This is no doubt one of the reasons why there seems to be less pure tracking research out there: unless the trend tells a compelling story about organizational success or failure, these are the easiest studies to justify stopping.