In nearly every discipline within the behavioral, health, and educational sciences, longitudinal data have become requisite for establishing temporal precedence and for distinguishing inter-individual differences in intra-individual change. Whereas traditional longitudinal designs often obtained repeated assessments at monthly or even yearly intervals, recent advances in mobile technology allow for the collection of multiple assessments within a single day. These so-called intensive longitudinal designs (ILDs) are becoming increasingly prevalent in empirical studies of human development and behavior. However, as with any advance in design and assessment, a multitude of complexities arise when fitting statistical models to large numbers of repeated assessments, often taken on smaller numbers of individuals. For example, it is not uncommon for an ILD to obtain six daily assessments over a 14-day period from 75 individuals.

Key among the complexities that must be addressed is missing data. Standard methods for handling missing data are not always well suited to large numbers of repeated assessments, and guidance for practitioners is sparse. A recent paper in the journal Structural Equation Modeling by Linying Ji and colleagues addresses this very issue. The paper is motivated by an actual ILD in which a large number of assessments were obtained from parents to assess the emotional states and behaviors arising from conflicts between parents and children. In the original application, missing data were pervasive, yet no well-developed methods were available to address the issue. The authors describe several modern methods for handling missing data in ILDs, conduct a computer simulation to evaluate these methods under known conditions, and re-analyze the empirical data to demonstrate the new techniques.
They conclude with recommendations for handling missing data in ILDs and provide R code to help in this endeavor.
An equivalent model can be thought of as a re-parameterization of the original model. In other words, it is simply a different way of "packaging" the same information in the data, and no equivalent model can be distinguished from another on the basis of fit alone. If you were to fit a series of equivalent models to the same sample data, you would obtain exactly the same chi-square test statistic, RMSEA, CFI, TLI, and every other omnibus measure of fit. It is often best to treat this as a limitation of any given study and to present one or a small number of equivalent models to the reader, so that these too might be considered plausible representations of the data.
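The indistinguishability of equivalent models can be illustrated numerically. The sketch below is my own illustration (not from the paper), using NumPy and a classic pair of equivalent path models: the mediation chain X → M → Y and its reversal Y → M → X. For a recursive path model the ML path estimates are ordinary regression slopes, and the model-implied covariance matrix reproduces every sample moment except the covariance of the two endpoint variables, which the chain constrains. Both orderings imply exactly the same covariance matrix, hence the same ML fit function, chi-square, RMSEA, CFI, and TLI. All variable names and parameter values here are hypothetical.

```python
import numpy as np

# Simulate data from a mediation chain X -> M -> Y (illustrative values).
rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(size=n)
y = 0.5 * m + rng.normal(size=n)
S = np.cov(np.column_stack([x, m, y]), rowvar=False, bias=True)  # ML covariance


def implied_cov(S, order):
    """ML-implied covariance for the recursive chain order[0] -> order[1] -> order[2].

    For a recursive path model the ML path estimates are OLS slopes, so the
    implied covariance reproduces every sample moment except the covariance of
    the two endpoint variables, which the chain constrains.
    """
    a, b, c = order
    sigma = S.copy()
    sigma[a, c] = sigma[c, a] = S[a, b] * S[b, c] / S[b, b]
    return sigma


def ml_discrepancy(S, sigma):
    """ML fit function F_ML; the model chi-square is (n - 1) * F_ML."""
    p = S.shape[0]
    return (np.log(np.linalg.det(sigma)) + np.trace(S @ np.linalg.inv(sigma))
            - np.log(np.linalg.det(S)) - p)


sig_fwd = implied_cov(S, (0, 1, 2))  # X -> M -> Y
sig_rev = implied_cov(S, (2, 1, 0))  # Y -> M -> X: an equivalent model

# The two orderings imply identical covariance matrices ...
print(np.allclose(sig_fwd, sig_rev))  # True
# ... and therefore identical fit functions (and chi-square, RMSEA, CFI, TLI).
print(np.isclose(ml_discrepancy(S, sig_fwd), ml_discrepancy(S, sig_rev)))  # True
```

Note that the two models tell very different causal stories even though nothing in the fit statistics can favor one over the other; only theory or design (e.g., temporal precedence) can adjudicate between them.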