Policy Guidance on AI for Children

Draft for consultation | Recommendations for building AI policies and systems that uphold child rights


Highlights

Artificial intelligence (AI) is about so much more than self-driving cars and intelligent assistants on your phone. AI systems are increasingly being used by governments and the private sector to, for example, improve the provision of education, healthcare and welfare services.

While AI is a force for innovation, it also poses risks to children and their rights, including their privacy, safety and security. Yet most AI policies, strategies and guidelines make only cursory mention of children. To help fill this gap, UNICEF has partnered with the Government of Finland to explore approaches to protecting and upholding child rights in an evolving AI world.

As part of our Artificial Intelligence for Children Policy project, UNICEF has developed this guidance to promote children's rights in government and private sector AI policies and practices, and to raise awareness of how AI systems can uphold or undermine these rights.

The policy guidance explores AI and AI systems, and considers the ways in which they impact children. It draws upon the Convention on the Rights of the Child to present three foundations for AI that upholds the rights of children:

  1. AI policies and systems should aim to protect children
  2. They should provide equitably for children's needs and rights
  3. They should empower children to contribute to the development and use of AI

Building on these foundations, the guidance offers nine requirements for child-centered AI and provides tools to operationalize it.

Feedback and pilot testing

Public consultation

UNICEF is seeking input from stakeholders who are interested in or working in areas related to AI and children’s rights. This includes AI developers and deployers, companies, government agencies, civil society, international organizations, academics, and adult and child citizens. We invite stakeholders to share their views on the draft guidance and submit feedback and comments by October 16, 2020. (See privacy notice.)

To ensure that AI systems remain aligned with the rights and circumstances of children, this guidance should be seen as a starting contribution to child-centered AI. The next version, which will incorporate input from this open consultation, will be released in 2021.

Implementing the guidance and sharing case studies

For the policy guidance to address the many complexities of implementation, it needs to be put to use by policymakers, public organizations and businesses for validation and local adaptation. We therefore invite governments and the business sector to pilot this guidance in their own contexts and openly share their findings on how it was used, what worked and what did not, so that their real experiences can improve the document.

Please use the guiding questions in the last chapter of the policy guidance to document your experiences as a case study, and let us know when you share your findings by emailing ai4children@unicef.org.

Author(s)
Virginia Dignum, Melanie Penagos, Klara Pigmans and Steven Vosloo
Publication date
Languages
English
