
MB1T

A cross-lab, developmental collaboration investigating the test-retest reliability of infant-directed speech preference

More information can be found at the project website: https://manybabies.github.io/MB1T/

Paper

Schreiner, M. S., Zettersten, M., Bergmann, C., Frank, M. C., Fritzsche, T., Gonzalez-Gomez, N., ... Lippold, M. (2024). Limited evidence of test-retest reliability in infant-directed speech preference in a large pre-registered infant sample. Developmental Science, 27, e13551. https://doi.org/10.1111/desc.13551

Abstract

Test-retest reliability --- establishing that measurements remain consistent across multiple testing sessions --- is critical to measuring, understanding, and predicting individual differences in infant language development. However, previous attempts to establish measurement reliability in infant speech perception tasks are limited, and reliability of frequently-used infant measures is largely unknown. The current study investigated the test-retest reliability of infants' preference for infant-directed speech (hereafter, IDS) over adult-directed speech (hereafter, ADS) in a large sample (N=158) in the context of the ManyBabies1 collaborative research project (hereafter, MB1; Frank et al., 2017; ManyBabies Consortium, 2020). Labs of the original MB1 study were asked to bring in participating infants for a second appointment retesting infants on their IDS preference. This approach allows us to estimate test-retest reliability across three different methods used to investigate preferential listening in infancy: the head-turn preference procedure, central fixation, and eye-tracking. Overall, we find no consistent evidence of test-retest reliability in measures of infants' speech preference (overall r = .09, 95% CI [-.06,.25]). While increasing the number of trials that infants needed to contribute for inclusion in the analysis revealed a numeric growth in test-retest reliability, it also considerably reduced the study's effective sample size. Therefore, future research on infant development should take into account that not all experimental measures may be appropriate for assessing individual differences between infants.

Folders

  • data: contains all the raw and processed data for the project, including processing scripts
  • paper: contains all code for reproducing the analyses and paper

ManyBabies1 test-retest analysis code

Analyzing the MB1T data involves two steps:

  1. Merging all data sets to create the df_all file (data/processing_scripts/merge_data.R)

  2. Preprocessing and analyses (paper/Retest_current_draft.Rmd)
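Assuming a local R installation with the project's dependencies (e.g., the tidyverse and rmarkdown packages) available, the two steps above can be run from the repository root roughly as follows; these exact commands are a sketch, not taken from the repository's own documentation:

```shell
# Step 1 (sketch): merge the per-lab data sets into the df_all file
Rscript data/processing_scripts/merge_data.R

# Step 2 (sketch): run preprocessing and analyses by rendering the paper draft
Rscript -e 'rmarkdown::render("paper/Retest_current_draft.Rmd")'
```

Rendering the .Rmd reproduces both the analyses and the paper text in one pass, which is why the repository keeps them in a single file.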

Metadata

The list of participating labs can be found here: https://docs.google.com/spreadsheets/d/1jDvb0xL1U6YbXrpPZ1UyfyQ7yYK9aXo002UaArqy35U/edit#gid=0

Link to OSF project

Additional study materials and information can be found at the project's OSF page: https://osf.io/zeqka/

Preregistration

The project was preregistered. You can find the preregistration here: https://osf.io/v5f8t
