Anyone who has ever worked with children who are struggling to learn – whether as a parent or a teacher – knows that diagnostic labels can only go so far in helping individuals. Receiving a diagnosis is an important landmark moment for children and families, but is it enough information to guide those trying to support them?
To find the best way to help, parents and teachers must understand the underlying cause of a child’s difficulties, which can vary widely even among children with the same diagnosis. Now researchers have a new toolkit for understanding these variations: machine learning.
In an innovative new study, cognitive neuroscientists fed an algorithm data from 550 struggling learners. The algorithm identified four clusters of difficulty, such as problems with working memory or with processing the sounds in words, and these clusters cut across traditional diagnostic boundaries.
“We wanted to rethink how we study children’s developmental disorders and children who struggle at school,” says Duncan Astle of the University of Cambridge, lead author of the study in Developmental Science. “In particular, we wanted to take advantage of analytical tools that are rarely used within our field – in this case, machine learning.”
Rather than selecting children for the study according to diagnostic criteria or cut-offs on a standard scale, Astle and his colleagues asked professionals working in children’s services to refer children who had come to their attention because they were struggling. They then fed a wealth of cognitive testing data from each child into the algorithm, including measures of listening skills, spatial reasoning, problem solving, vocabulary, and memory.
The researchers found that while the clusters aligned with parental and educational reports of the children’s difficulties, they did not correspond to the children’s previous diagnoses. The groupings also mirrored patterns seen in fMRI scans from 184 of the children, suggesting that the algorithm identified differences that partly reflect underlying biology.
CNS spoke with Astle about the study and its implications for moving beyond diagnostic labels.
CNS: How did you become personally interested in this research area? Why is it important to you?
Astle: I spend a lot of time with teachers, hearing their experiences of children who struggle and their frustration with not knowing how best to support them. In the long run, this is why I think this kind of work is important.
CNS: Can you explain how your use of machine learning for this study differed from past such uses in the field?
Astle: There are very few uses of machine learning in developmental disorders, or indeed in mental health. Those that do exist have used supervised machine learning – that is, the algorithm tries to learn about predefined categories within the data. Here we wanted to use an unsupervised algorithm. That is, we wanted to make as few assumptions as possible about what the algorithm would learn.
The algorithm was fed data from 550 struggling learners. Each child was defined by seven values, each corresponding to a different cognitive process that has previously been implicated in developmental disorders. In each case we used a standardised measure that is widely used within the field and for which a large normative dataset was available.
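To make this concrete, here is a minimal, hypothetical sketch of unsupervised clustering on a 550 × 7 matrix of standardised cognitive scores. The study’s actual algorithm is not specified in this article, so k-means from scikit-learn stands in purely for illustration, and the scores below are randomly generated placeholders rather than the study’s data.

```python
# Hypothetical sketch: unsupervised clustering of 550 children,
# each described by 7 standardised cognitive scores.
# K-means is a stand-in; the study's algorithm is not named here.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder data: 550 children x 7 cognitive measures
# (e.g., listening, spatial reasoning, problem solving, vocabulary, memory).
scores = rng.normal(loc=100, scale=15, size=(550, 7))

# Put all measures on a common scale so no single test dominates.
scaled = StandardScaler().fit_transform(scores)

# Ask the algorithm to find four groupings, without giving it
# any information about the children's diagnoses.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
cluster_labels = kmeans.fit_predict(scaled)

# Each child now has a data-driven cluster label (0-3) that can later be
# compared with diagnoses, school reports, or neuroimaging measures.
print(np.bincount(cluster_labels))
```

The key design choice this illustrates is that the cluster labels are derived only from the cognitive profiles themselves; any relationship to diagnostic categories is examined afterwards rather than built in.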
CNS: What was the major challenge in undertaking this study? Challenges unique to integrating machine learning with cognitive neuroscience?
Astle: A big challenge is recruitment. If you are not going to recruit children according to a specific diagnosis, then who do you study? We were very lucky that many specialist teachers, educational psychologists, professionals in child and adolescent psychiatry, pediatricians, and clinical psychologists referred children to us that they thought were struggling.
In terms of the neuroscience, it took us a little while to get practiced at persuading children to keep still during the MRI protocol. But we got there. Integrating the machine learning with the neuroimaging data was remarkably straightforward.
CNS: Why did you design the study so that the clusters would be based on difficulties rather than on traditional diagnoses?
Astle: We had suspected that the diagnoses might not correspond to children’s cognitive profiles or learning outcomes. But our machine learning and clustering approach was almost entirely data-driven; we wanted the data to tell us about the ways in which children’s profiles could differ. It need not have been the case that the different profiles of cognitive difficulty would cut across any diagnostic boundaries, but that is just how it turned out.
CNS: What do you most want people to understand about this work?
Astle: That we need to move beyond diagnostic labels when we think about why children struggle and how we help them. Interventions are currently structured around diagnoses, and that is probably not the best way of doing it. Our analysis shows that these cognitive profiles cut across traditional diagnostic boundaries.
CNS: What’s next for this line of work?
Astle: We want to see all of these children again. Longitudinal data are key for understanding how these difficulties might manifest and change as children develop. Do specific cognitive difficulties cascade as children grow? Or do some children compensate for their difficulties? And if so, how? These are the kinds of questions we want to address in future.
-Lisa M.P. Munoz