
Great Britain’s national mapping service, the Ordnance Survey (OS), has published a set of eight guiding principles which, it says, will ensure it uses artificial intelligence (AI) effectively but responsibly to drive positive change.
Acknowledging that AI is set to change the world and the ways in which business is done, including in the geospatial sector, OS says it has committed itself to a “holistic approach to responsible AI usage” that “encompasses technical, ethical, and societal focus”.
OS says the eight principles or priority areas for responsible AI implementation “aim to promote the development and deployment of AI technologies that benefit society, while minimising potential harm”. They are as follows:
1. Deeds not words. OS says it commits to “adhering to established principles, guidelines, governance processes, regulations, and standards related to responsible AI. And should we consider instances where such frameworks may be lacking, we will collaborate with experts, decision-makers, and stakeholders to develop them.”
2. Good governance. OS says it pledges to “continuously monitor, enhance, and document our AI systems across five critical dimensions”, being explainability, interpretability and reproducibility; accountability; diversity, non-discrimination and fairness; technical robustness and safety; and privacy and security.
3. Be aware of harms. The organisation says its commitment extends to “evaluating the potential harms arising from AI, and we will regularly assess the potential impacts of AI on physical and mental health, opportunities, livelihoods, as well as cultural, civil, and human rights”.
4. Stakeholders. OS says that it recognises and upholds the “rights of all stakeholders affected by our AI systems, including individuals, communities, society, the environment, and particularly vulnerable groups with limited power or voice. We will actively seek insights and understanding from those impacted by our AI initiatives.”
5. Whole pipeline. The organisation says it recognises that its responsibility “spans every stage of the AI lifecycle, across induction, design, development, testing, and deployment. We will ensure that responsible AI considerations are integrated into each and every phase.”
6. Whole stack. OS says it commits to addressing the “broader ecosystem including data, code, algorithms, platforms, infrastructure, and organisational practices. We will prioritise documentation, scrutiny, and governance across the entire AI stack.”
7. Whole system. The organisation says it will “critically examine the purpose and scope of AI systems, emphasising not only the final product but also the underlying processes, parameters, and impacts on stakeholders. Our scrutiny will extend to the broader societal implications of AI deployment.”
8. Responsibility first. And the final principle is to “prioritise responsibility over technical capabilities, ensuring that our AI systems align with our capacity to fulfil commitments to responsible AI. This includes setting and adhering to constraints such as time, storage, computation, and documentation budgets.”
OS says its approach will ensure the organisation “is neither left behind, nor rushes into something unexpected,” and that it will assist in “actively integrating machine learning into the development of our products, with a dual focus: enhancing the efficiency of internal processes and improving the discoverability and adoption of our offerings”.