Predicting to Protect: Can AI Help Us Identify Children at Risk of Abuse?

Andrew Morley
May 29, 2024

Sadly, reviews into deaths as a consequence of child abuse all too often conclude that there was information available to suggest that the child was at risk.

The challenge is that this information is often fragmented across different agencies and different systems, making it difficult to connect the dots necessary to prompt an intervention.

Artificial Intelligence could revolutionize this by connecting that data, applying it against a taxonomy of risk factors to identify cases for intervention, and in doing so saving lives.


Imagine being able to intervene before a child suffers significant harm. That’s the potential of Artificial Intelligence (AI) in child protection.

Building on a Strong Foundation

For many years, doctors and social workers have relied on standardized screening tools.

One example is the Pediatrician’s Child Abuse Screening Tool (PedHIT). This tool asks caregivers a series of structured questions about a child’s health, behavior, and injuries.

Whilst studies have shown that screening tools can be effective in increasing the detection of abuse, there is a challenge in developing a tool that is comprehensive for all forms of abuse, and sufficiently concise to meet the needs of busy professionals.

They also often rely on self-report and observation, so they can miss reports made to other agencies: reports that raise no concern when looked at in isolation, but that provide a case for intervention when combined with what is already known.

There is also an issue of validity: any tool has to be dynamic, capturing changes in risk factors and identifying local patterns. Those who abuse, like many with criminal intent, constantly change their behaviours to avoid detection.

The Potential of AI

AI has the potential to address these challenges. The ability to run an algorithm across multiple data sets to identify patterns against established risk factors for child abuse could revolutionise the detection of those at risk.

This is not entirely new territory; a number of technology companies are developing tools for this purpose. While welcome, some of these companies build black-box algorithms to protect proprietary content, which can undermine trust and make evaluation difficult.

Some state agencies have looked to address this by developing their own tools. Take the Allegheny County Family Screening Tool (AFST), developed by the Allegheny County Department of Human Services (DHS).

The AFST analyzes information from a centralized repository, which can include things like prior child welfare involvement, domestic violence calls, and mental health records. This data analysis is the true power of the AFST. It allows the tool to identify patterns that might be missed by traditional methods.

While large-scale, long-term studies are ongoing, evidence suggests the AFST shows promise for improving child protection efforts. An independent evaluation by Stanford University found that the AFST increased the accurate identification of children who needed further intervention services, without increasing the workload on investigators. Researchers also documented reductions in racial disparities in case openings, suggesting fairer outcomes.

Opportunities

Identification: The superpower of AI is the opportunity to analyse huge datasets. Imagine being able to identify a child with a history of school absences who comes from a household with a history of domestic violence calls. This pattern, which is all too common in child abuse cases, could be recognised automatically, flagging the child for further investigation by social workers.
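As a minimal illustration of the pattern described above, the sketch below groups hypothetical records from different agencies by child and flags any child whose combined factors match a known risk pattern. The record structure, factor names, and risk taxonomy are all invented for the example; a real system would draw on separate agency databases and require careful record matching and governance.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """A hypothetical, simplified cross-agency record."""
    child_id: str
    factor: str  # e.g. "school_absence", "domestic_violence_call"

# Toy risk taxonomy: combinations of co-occurring factors that,
# taken together, warrant review by a social worker.
RISK_PATTERNS = [
    {"school_absence", "domestic_violence_call"},
]

def flag_for_review(records):
    """Group records by child and flag any child whose combined
    factors, across agencies, match a known risk pattern."""
    factors_by_child = {}
    for r in records:
        factors_by_child.setdefault(r.child_id, set()).add(r.factor)
    return [
        child for child, factors in factors_by_child.items()
        if any(pattern <= factors for pattern in RISK_PATTERNS)
    ]

records = [
    Record("child-1", "school_absence"),          # education system
    Record("child-1", "domestic_violence_call"),  # police system
    Record("child-2", "school_absence"),          # no co-occurring factor
]
print(flag_for_review(records))  # flags only child-1
```

The point is not the trivial matching logic but the join: neither record about child-1 raises a concern on its own, and only combining them surfaces the pattern.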

Fairness: Standardized tools can be susceptible to human bias. AI algorithms, when developed and implemented responsibly, can potentially reduce bias in the screening process, leading to fairer outcomes for all children. This is important when you consider the impact of an allegation of child abuse, and the powers the state has to intervene. We have to get these calls right.

Dynamism: AI also provides the possibility of staying ahead of the curve in detection. Outcomes from earlier interventions can help the AI learn from evolving trends and patterns, constantly improving its accuracy in detection. This can also provide real-time learning to professionals, informing their understanding of risk and of when to intervene. These patterns might be global or localized to a geographical area.

Challenges

There are still hurdles to overcome before AI can reach its full potential in child protection.

Data Standardization: AI thrives on well-organized information. Collaboration between government agencies, social service organizations, and healthcare providers is essential to ensure consistent data collection and reporting. Ideally, we should be working towards global standards of classification.
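To make the standardization point concrete, the sketch below maps agency-specific codes onto a shared classification so that records from different systems can be compared. The agency names, code values, and categories are illustrative, not a real standard.

```python
# Hypothetical mapping from (agency, local code) to a shared taxonomy.
LOCAL_TO_SHARED = {
    ("police", "DV-CALLOUT"): "domestic_violence_call",
    ("education", "UNAUTH-ABSENCE"): "school_absence",
    ("health", "NAI-QUERY"): "suspected_non_accidental_injury",
}

def normalise(agency, local_code):
    """Translate an agency's local code into the shared taxonomy.
    Returns None for unmapped codes: a gap to surface, not to guess."""
    return LOCAL_TO_SHARED.get((agency, local_code))

print(normalise("police", "DV-CALLOUT"))  # domestic_violence_call
print(normalise("health", "UNKNOWN-CODE"))  # None
```

Agreeing on mappings like this, ideally against a global standard of classification, is the unglamorous collaboration work that any cross-agency AI tool depends on.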

Transparency in AI Systems: Building trust in AI requires transparency. The algorithms used in child protection systems should be understandable and open to scrutiny by experts. This helps to mitigate bias and ensure the technology is used fairly.

Data Privacy: Trust in the system also depends on data protection. The information involved is sensitive, and in many cases having an inference drawn simply from having a record on such a system could be reputationally damaging. Being identified as high risk does not mean that abuse is occurring.

Culturally Sensitive Taxonomy: Any AI system will need a taxonomy of risk factors to help it identify what it should be looking for. This can be a challenge when the evidence base is heavily informed by North American, European and Australasian experience. More needs to be done to develop evidence bases that account for different cultural contexts and do not flag innocent behaviours based on cultural norms.

Conclusion

Protecting children is a community effort, and AI can be a powerful tool in our arsenal. However, this does not remove or minimize the importance of human judgment in this complex and nuanced area.

Professionals will always need to make the call. The power to remove a child from their home is one of the most intrusive interventions the state can make, and human decision-making will always be central to it.

However, professionals are all too often disadvantaged by the paucity of information available to them, and the time it takes to comb through data sets to identify risk. AI potentially provides the answer to that part of the child protection value chain. It could be a powerful tool in helping identify and intervene in cases of risk. This can be, quite literally, life saving.

To achieve this, we need to develop a global evidence base of risk, standardize data, and build algorithms and systems that are fair and safe.
