Empowering organisations to put the principles of ethical AI into practice

Explore Digital Catapult's work in AI ethics, including its Ethics Framework and Ethics Committee.

Ethics Framework overview

Digital Catapult’s Ethics Committee has created an Ethics Framework consisting of seven concepts, each accompanied by questions intended to inform how it may be applied in practice.

Below is a preview of the AI Ethics Framework and its related principles.

1. Be clear about the benefits of the product or service

While it is important to consider the risks of new technologies, this should be done in the context of expected benefits. The benefits should be clear and likely, and should outweigh reasonably foreseeable risks. They should be evaluated for different user groups and for any affected non-user groups (especially when these groups have competing values or interests), and with consideration of plausible future trends or changes (such as greater compute capacity, or a solution coming to dominate the market).

2. Know and manage the risks

Safety and potential harm should be considered, both as a consequence of the product’s intended use and of other reasonably foreseeable uses. For example, the possibility of malicious attacks on the technology needs to be thought through. Attacks may target safety, security, data integrity, or other aspects of the system, for example to force a particular decision outcome. As with benefits, risks should be assessed in respect of individuals (not just users), communities, society and the environment, and should consider plausible future trends or changes.

3. Use data responsibly

Compliance with legislation (such as GDPR) is a good starting point for an ethical assessment of data and privacy. However, other considerations arise from data-driven products, such as whether the system will behave appropriately in situations not represented in the training data, or whether the data contain unfair biases; these must be taken into account when assessing the ethical implications of an AI product or service. Data may come in many forms: as datasets, through APIs, or through labour (such as microtasking). The value exchange between the company and those who provide or label the data, directly or otherwise, should be considered for fairness. If data are drawn from public sources (e.g. open data collected by a public body or NGO), the company should consider whether it can contribute back or support the work of ongoing data maintenance, perhaps by providing cleaned or corrected data.

4. Be worthy of trust

For a technology or product to be trusted it needs to be understood, fit for purpose, reliable and competently delivered. Companies should be able to explain the purpose and limitations of their solutions so that users are not misled or confused. There should be processes in place to monitor and evaluate the integrity of the system over time, with clarity over what the quality measures are and how they were chosen. Care must be taken to operate within the company’s areas of competence, and to engage actively with third-party evaluation and questions. Things can go wrong, despite best efforts. Companies should put in place procedures to report, investigate, take responsibility for, and resolve issues. Help should be accessible and timely.

5. Promote diversity, equality and inclusion

We will prioritise companies that can demonstrate that they value and actively seek diversity, equality and inclusion. Companies should consider the impact and utility of their product for individuals, larger groups and society as a whole, including its impact on widening or narrowing inequality, enabling or constraining discrimination, and other political, cultural and environmental factors. Diverse teams that are representative and inclusive are smarter, provide higher returns, and help create products and services that work for a greater number of people in society.

6. Be open and understandable in communications

Companies must be able to communicate clearly the benefits and potential risks of their products and the actions they have taken to deliver benefits and avoid, minimise, or mitigate the risks. They must ensure that processes are in place to address the concerns and complaints of users and other parties, and that these are transparent. The Ethics Committee believes that effective communication, when coupled with a principled approach to ethical considerations, is a competitive advantage, and will lead to progress even when hard moral issues are on the line. Conversely, poor communication, and a lack of attention to the social and ethical environment for doing business, can result in adverse public reactions, direct legal repercussions as well as mounting regulation, and hence increased costs and higher rates of failure.

7. Consider the business model

Integrity and fair dealing should be an integral part of organisational culture. Companies should consider what structures and processes are being employed to drive revenue or other material value to the organisation as certain business models or pricing strategies can result in discrimination. Where possible and appropriate, companies should consider whether part of the product, service or data can be made available to the public.

Any questions about Ethics?

Read the ethics-focused frequently asked questions.

Join the Ethics Committee Steering Board

The Digital Catapult Ethics Committee works closely with companies to analyse how ethics principles can be applied to create sustainable AI-powered products and services with a positive impact on society.

This is an opportunity to be part of an initiative that applies ethics principles “in the wild” and to collaborate with experts from a range of domains.

Sign up to the newsletter

Subscribe to Digital Catapult's newsletter for regular updates on news, insights and opportunities in advanced digital technologies and other relevant programmes.