Framework Overview

The Ethics Framework is a practical tool for individuals and organisations developing AI-enabled products and services who want to build value-aligned technologies with positive effects whilst avoiding negative consequences.

The framework was created by Digital Catapult’s Ethics Committee. It consists of seven concepts, each with corresponding questions intended to inform how these concepts might be applied in practice. The emphasis is on questions rather than high-level principles because questions help illuminate where principles should be considered in practice, and questions do not assume a universal ‘correct’ answer.

In the research phase of this project, a wide range of useful references was identified and used to derive the seven concepts. As a result, the framework is closely aligned with more recent frameworks, including those developed by the High-Level Expert Group on Artificial Intelligence of the European Commission in 2019, the OECD’s Principles on AI and the Beijing AI Principles.

Digital Catapult uses the Ethics Framework in the consultations between the Ethics Advisory Group (a subset of the Ethics Committee) members and Machine Intelligence Garage startups, and it is updated regularly as a result of this feedback loop. The Ethics Framework is licensed under a Creative Commons Attribution 4.0 International licence and we encourage feedback via appliedAIethics@digicatapult.org.uk

How to use the framework

The Framework consists of seven concepts with a corresponding list of questions for each. The questions are intended to illuminate the many contexts in which that ethical concept might be relevant to a business or project. Not all questions will be relevant at all times, and many questions will not have an immediate or clear answer. Consult the framework below at key milestones in a project. The right time to start thinking about the questions is at the earliest stages of company growth. Consider current, near- and mid-future potential effects. The framework will help to characterise ethical opportunities and potential risks, and to be open and clear, both internally and externally, about how these are evaluated and managed.

1. Clear benefits

Be clear about the benefits of the product or service

While it is important to consider the risks of new technologies, this should be done in the context of expected benefits. The benefits should be clear, likely, and outweigh potential, reasonable risks. They should be evaluated for different user groups and for any affected non-user groups (especially when there are competing values or interests between these groups), and with consideration of plausible future trends or changes (such as greater compute capacity, a solution coming to dominate the market, etc).

What are the goals, purposes and intended applications of the product or service?

  • Who or what might benefit from the product/service? Consider all potential groups of beneficiaries, whether individual users, groups or society and environment as a whole.
  • Are those benefits common to the application type, or specific to the technology or implementation choices?
  • How can the products or services be monitored and tested to ensure they meet these goals, purposes and intended applications?
  • How likely are the benefits, and how significant? How can the benefits be assessed?
  • How are these benefits obtained by the various stakeholders?
  • Can the benefits of the product/service be demonstrated?
  • Might these benefits change over time?
  • What is the company’s position on making (parts of) the products/services available on a non-commercial basis, or on sharing AI knowledge which would enable more people to develop useful AI applications?

2. Know and manage the risks

Safety and potential harm should be considered, both in consequence of the product’s intended use, and other reasonably foreseeable uses. For example, the possibility of malicious attacks on the technology needs to be thought through. Attacks may be against safety, security, data integrity, or other aspects of the system, such as to achieve some particular decision outcome. As with benefits, assessment of risks can be in respect of individuals (not just users), communities, society and environment and should consider plausible future trends or changes.

  • Have the risks of other foreseeable uses of the technology, including accidental or malicious misuse, been considered?
  • Who or what might be at risk from the intended and unintended applications of the product or service? Consider all potential groups at risk, whether individual users, groups, society as a whole or the environment.
  • Is there currently a process to classify and assess potential risks associated with the use of the product or service?
  • Are those risks common to the application area or technology, or specific to the technology or implementation choices?
  • How likely are the risks, and how significant? Is there a plan to mitigate and manage the risks?
  • How can the potential risks or perceived risks to users, potentially affected parties, purchasers or commissioners be communicated?
  • How do third-parties or employees report potential vulnerabilities, risks or biases, and what processes are in place to handle these issues and reports?
  • How can it be known whether a bias has been created or reinforced by the system? (One minimal illustrative check is sketched after this list.)
  • As a result of assessing potential risks, are there customers or use cases that the company would choose not to work with?
  • How are these decisions made and documented?
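
One of the questions above asks how it can be known whether a bias has been created or reinforced by the system. A minimal, illustrative starting point is sketched below: it compares a model’s positive-outcome rate across groups defined by a protected attribute and flags large gaps for investigation. The column names (group, predicted) and the 0.8 ratio threshold are assumptions made for illustration, not part of the framework.

```python
# Minimal sketch: compare positive-outcome rates across groups.
# Column names ("group", "predicted") and the 0.8 ratio threshold
# (the "four-fifths" rule of thumb used in some fairness audits)
# are illustrative assumptions, not requirements of the framework.
import pandas as pd

def outcome_rates_by_group(df: pd.DataFrame,
                           group_col: str = "group",
                           outcome_col: str = "predicted") -> pd.Series:
    """Share of positive outcomes per group."""
    return df.groupby(group_col)[outcome_col].mean()

def flag_disparity(rates: pd.Series, threshold: float = 0.8) -> bool:
    """Flag if the lowest group rate falls below `threshold` x the highest."""
    return (rates.min() / rates.max()) < threshold

if __name__ == "__main__":
    data = pd.DataFrame({
        "group":     ["A", "A", "A", "B", "B", "B"],
        "predicted": [1, 1, 0, 1, 0, 0],
    })
    rates = outcome_rates_by_group(data)
    print(rates)
    print("Potential disparity to investigate:", flag_disparity(rates))
```

A check like this is only a first pass: which groups, outcomes and thresholds are appropriate is itself an ethical judgement that the questions in this section are intended to surface.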

3. Use data responsibly

Compliance with legislation (such as the GDPR) is a good starting point for an ethical assessment of data and privacy. However, there are other considerations that arise from data-driven products, such as the aptness of data for use in situations that were not encountered in the training data, or whether data contains unfair biases, that must be taken into account when assessing the ethical implications of an AI product or service.

Data may come in many forms: as datasets, through APIs, through labour (such as microtasking). The value exchange between those who provide the data (or label it), directly or otherwise, and the company, should be considered for fairness. If data are used from public sources (e.g. open data collected by a public body or NGO) the company should consider whether it may contribute back or support the work of ongoing data maintenance, perhaps by providing cleaned or corrected data.

  • How was the data obtained and was consent obtained (if required)? Is the data current?
  • Is the training data appropriate for the intended use?
  • Is the data pseudonymised or de-identified? If not, why not? (A minimal pseudonymisation sketch follows this list.)
  • Is the data usage proportionate to the problem being addressed?
  • Is there sufficient data coverage for all intended use-cases?
  • What are the qualities of the data (for example, is the data coming from a system prone to human error?)
  • Have potential biases in the data been examined, well-understood and documented and is there a plan to mitigate against them?
  • Is there a process for discovering and dealing with inconsistencies or errors in the data?
  • What is the quality of the data analysis? How much uncertainty / error is there?
  • What are the consequences which might arise from errors in analysis and how can they be mitigated?
  • Can it be clearly communicated how the data is being used and how decisions are being made?
  • What systems are in place to ensure data security and integrity?
  • Are there adequate methods in place for timely and auditable data deletion, once data is no longer needed?
  • Can individuals remove themselves from the dataset? Can they also remove themselves from any resulting models?
  • Is there a publicly available privacy policy in place, and to what extent are individuals able to control the use of data about them, even when they are not users of the service or product?
  • Are there adequate mechanisms for data curation in place to ensure external auditing and replicability of results, and, if a risk has manifested itself, attribution of responsibility?
  • Can individuals access data about themselves?
  • Is data being made available for research purposes?
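
As one way of making the pseudonymisation question above concrete, the sketch below replaces a direct identifier with a salted hash before the data is used for analysis. This is a minimal illustration under stated assumptions, not a complete de-identification scheme: the field name email, the salt handling and the choice of SHA-256 are assumptions, and a real deployment would also need to consider re-identification risk from the remaining attributes.

```python
# Minimal pseudonymisation sketch: replace a direct identifier with a
# salted hash before analysis. Field names and salt handling are
# illustrative assumptions; this does not by itself address
# re-identification risk from the remaining quasi-identifiers.
import hashlib
import os

# In practice the salt would be a secret managed outside the codebase.
SALT = os.environ.get("PSEUDONYM_SALT", "change-me").encode()

def pseudonymise(identifier: str) -> str:
    """Return a stable, salted SHA-256 digest of a direct identifier."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()

def pseudonymise_records(records, field="email"):
    """Replace `field` in each record with its pseudonym."""
    return [{**record, field: pseudonymise(record[field])} for record in records]

if __name__ == "__main__":
    raw = [{"email": "alice@example.com", "age": 34},
           {"email": "bob@example.com", "age": 29}]
    print(pseudonymise_records(raw))
```

Whether pseudonymisation of this kind is sufficient, and whether the salt and mapping are governed appropriately, are exactly the sorts of questions the list above is intended to prompt.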

4. Be worthy of trust

For a technology or product to be trusted it needs to be understood, fit for purpose, reliable and competently delivered. Companies should be able to explain the purpose and limitations of their solutions so that users are not misled or confused. There should be processes in place to monitor and evaluate the integrity of the system over time, with clarity over what the quality measures are and how they were chosen. Care must be taken to operate within the company’s areas of competence, and to actively engage with third-party evaluation and questions. Things can go wrong, despite best efforts. Companies should put in place procedures to report, investigate, take responsibility for, and resolve issues. Help should be accessible and timely.

  • Within the company, are there sufficient processes and tools built-in to ensure meaningful transparency, auditability, reliability and suitability of the product output?
  • Have the limitations of the company’s experience in relation to the system being built been acknowledged, and how might these be reflected in the system? What steps are being taken to address these limitations?
  • Is the nature of the product or technology communicated in a way that the intended users, third parties and the general public can access and understand?
  • Are (potential) errors communicated and their impact explained?
  • Does the company actively engage with its employees, purchasers/commissioners, suppliers, users and affected third-parties so that ethical (including safety, privacy and security) concerns can be voiced, discussed, and addressed?
  • Does the company work with researchers where appropriate to explore or question areas of the technology?
  • Is there a process to review and assure the integrity of the AI system over time and take remedial action if it is not operating as intended?
  • If human labour has been involved in data preparation (e.g. image labelling by Mechanical Turk workers), have the workers involved been fairly compensated?
  • If data comes from another source, have the data owner’s rights been preserved (e.g. copyright, attribution) and has permission been obtained?
  • Who is accountable if things go wrong? Are they the right people? Are they equipped with the skills and knowledge they need to take on this responsibility?
  • What quality measures or standards must the product or technology conform to (e.g. academic, peer-review, technical)? What are the reasons for choosing those particular standards, and what does the company propose to do to maintain them?
  • In order to engender trust, are there customers, suppliers or use cases that may not be worked with? How are these decisions made and documented?
  • Does the company have a clear and easy to use system for third party/user or stakeholder concerns to be raised and handled?
  • Is adequate training in the safe and secure use of your product or service provided to all of your operators, customers and/or users?
  • Has it been considered how to embed ethics within the organisation?
  • Has it been considered how to embed integrity and fair dealing in the culture?
  • How would a person raise a concern with the company?
  • To inform your processes and culture, could mentors or innovation hubs be consulted?

5. Diversity, equality and inclusion

We will prioritise companies that can demonstrate that they value and actively seek diversity, equality and inclusion. Companies should consider the impact and utility of their product for individuals, larger groups and society as a whole, including its impact on widening or narrowing inequality, enabling or constraining discrimination, and other political, cultural and environmental factors. Diverse teams that are representative and inclusive are smarter, provide higher returns, and help create products and services that work for a greater number of people in society.

  • Are there processes in place to establish whether the product or service might have a negative impact on the rights and liberties of individuals or groups?

Please consider:
– varied social backgrounds and education levels
– different ages
– different genders and/or sexual orientations
– different nationalities or ethnicities
– different political, religious and cultural backgrounds
– physical or hidden disabilities.

  • What actions can be taken if negative impacts are identified?
  • Social impact can be difficult to demonstrate: have processes been considered that can demonstrate the positive impact the product or service brings?
  • Has the company considered putting in place a diversity and inclusiveness policy in relation to recruitment and retention of staff?
  • Has the company considered how to balance the specific responsibilities of a startup against other factors, such as cost and freedom of choice for users?
  • Are potential biases in the data and processes examined, well understood and documented, and is there a plan to mitigate against them?
  • Where do hiring practices and building culture fit in? For instance, are ethical questions raised at interviews?
  • Are any principles/risk considerations communicated to new hires?
  • Does the company have a diversity and inclusiveness policy in relation to recruitment and retention of staff?

6. Transparent communication

Be open and understandable in communications

Companies must be able to communicate clearly the benefits and potential risks of their products and the actions they have taken to deliver benefits and avoid, minimise, or mitigate the risks. They must ensure that processes are in place to address the concerns and complaints of users and other parties, and that these are transparent. The Ethics Committee believes that effective communication, when coupled with a principled approach to ethical considerations, is a competitive advantage, and will lead to progress even when hard moral issues are on the line. Conversely, poor communication, and a lack of attention to the social and ethical environment for doing business, can result in adverse public reactions, direct legal repercussions as well as mounting regulation, and hence increased costs and higher rates of failure.

  • Does the company communicate clearly, honestly and directly about any potential risks of the product or service being provided?
  • What does it communicate and when? Does the company communicate clearly, honestly and directly about the processes in place to avoid, minimise or mitigate potential risks?
  • Does the company have a clear and easy to use system for third party/user or stakeholder concerns to be raised and handled?
  • Are the company’s policies relating to ethical principles available publicly and to employees?
  • Are the processes to implement and update the policies open and transparent?
  • Does the company disclose matters beyond the product itself, e.g. projects, studies and other activities that the company funds, works on in conjunction with others or is otherwise involved in; the major sources of data and expertise that inform the insights of its AI solutions; and the methods used to train those systems and solutions?
  • Has a communication strategy and process for when something goes wrong been considered?

7. Business model

Integrity and fair dealing should be an integral part of organisational culture. Companies should consider what structures and processes are being employed to drive revenue or other material value to the organisation as certain business models or pricing strategies can result in discrimination. Where possible and appropriate, companies should consider whether part of the product, service or data can be made available to the public.

  • What kind of corporate structure best meets the company’s needs? As well as the traditional company limited by shares, there are a variety of ‘social enterprise’ alternatives, including community interest company, co-operative, B-Corp and company limited by guarantee. Are any of these of interest?
  • Data exchange: are free services being provided in exchange for user data? Are there any ethical implications for this?
  • Do users have a clear idea of how the data will be used, including any future linking/sale of the data?
  • What happens if the company is acquired? For example, what happens to its data and software?
  • Pricing: have differential prices been considered? Are there any ethical considerations regarding the pricing strategy?
  • Are there any vulnerable groups that may be offered lower prices?
  • Data philanthropy: is there data that others (e.g. charities, researchers) could use for public purpose benefits?
  • Is integrity and fair dealing embedded in the organisational culture?
  • Has the environmental impact of development / deployment of the technology been considered?
  • Is environmental impact considered when choosing suppliers? Have less energy-intensive options been considered?

References

AI Code – the House of Lords Artificial Intelligence Committee “AI in the UK: ready, willing and able?” report recommends a cross-sector AI Code be established, which can be adopted nationally, and internationally. The code has five suggested principles.

AI Hippocratic Oath – an article by Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence. He edits the Hippocratic Oath sworn by generations of doctors to suggest an equivalent oath that AI practitioners can take to highlight their ethical commitments.

Asilomar AI Principles – these 23 principles, developed at a conference held by the Future of Life Institute, have been signed by 1274 AI/Robotics researchers and 2541 others (27 June 2018) including many household names in the world of AI and machine learning. Future of Life Institute says, “We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone’s lives in coming years.”

Athena SWAN – a charter that recognises and celebrates good practice towards the advancement of gender equality, established in 2005 and managed by the UK’s Equality Challenge Unit.

Centre for Democracy and Technology – a not-for-profit organisation championing online civil liberties and human rights, driving policy outcomes that keep the internet open, innovative and free. CDT has created a tool to prompt thinking about the various challenges that could arise when designing, building, testing or implementing an algorithm.

DataKind – a not-for-profit organisation that brings together top data scientists with leading social change organisations to collaborate on cutting-edge analytics and advanced algorithms to maximise social impact. Their UK Principles establish what their community should abide by when working on data-for-good projects.

Datasheets for Datasets – a paper written by authors from Microsoft Research, University of Maryland, Cornell University, Georgia Tech and AI Now Institute proposing to document datasets for greater transparency and accountability. They describe how datasheets for datasets will facilitate better communication between dataset creators and users, and encourage the machine learning community to investigate how a dataset was created, what information it contains, what tasks it should and shouldn’t be used for, and whether it raises any ethical or legal concerns.

DCMS – the Data Ethics Framework (published June 2018) sets out seven principles for how data should be used in the public sector in order to “help maximise the value of data whilst also setting the highest standards for transparency and accountability when building or buying new data technology”. The associated Data Ethics Workbook sets out the questions that should be considered against each of the principles.

Doteveryone – (forthcoming) Responsible Technology Product Management Toolkit. “We are currently in the process of developing a number of assessment tools which product teams can work through to help them examine and evaluate how they handle the 3Cs (context, consequences, and contribution) of Responsible Technology in real-time during the development cycle. The form of the assessments range from checklists to step-by-step information mapping to team board games.” Doteveryone is seeking help to road-test the 3C model.

EPSRC Principles of Robotics – 5 rules and 7 principles for regulating robots in the real world. These “highlight the general principles of concern expressed by” a group of experts convened to “discuss robotics, its applications in the real world and the huge amount of promise it offers to benefit society” with the intention that they can “inform designers and users of robots in specific situations”.

Ethical OS Toolkit – released by the Institute for the Future and Omidyar Network, the Ethical OS Toolkit is “a toolkit designed to help technologists envision the potential risks and worst-case scenarios of how their technologies may be used in the future so they can anticipate issues and design and implement ethical solutions from the outset”.

The Future of Computing Academy (part of the Association for Computing Machinery) has proposed that the computer science community change its peer-review process to ensure that reviewers assess claims of impact as well as intellectual rigour. Hence researchers should think about and disclose any possible negative societal consequences of their work in their papers.

Google’s AI Principles – in June 2018, Google published seven principles to guide its work in AI research, product development and business decisions.

Information Accountability Foundation – this global information policy think tank helps frame and advance data protection law and practice through accountability-based information governance, providing tools for establishing legitimacy in big data projects. Noteworthy publications are:

Unified Ethical Frame for Big Data Analysis – theoretical basis for legitimacy.

Big Data Assessment Framework and Worksheet – assessment framework for establishing legitimacy.

It Speaks – a research report produced by King’s College London and ReFig in Canada with the aim of providing solutions to the ethical problem of bias that exists in artificial intelligence language data sets.

Open Data Institute – an independent, non-profit, non-partisan company focused on building an open, trustworthy data ecosystem. The ODI Data Ethics Canvas is a tool designed to help identify potential ethical issues associated with a data project or activity.

Partnership on AI – a multistakeholder organisation that brings together academics, researchers, civil society organisations, companies building and utilising AI technology, and other groups working to better understand AI’s impacts. The Partnership on AI has developed a set of Thematic Pillars that provide guidance on principles for developing AI.

RAEng (Royal Academy of Engineering) – Diversity and Inclusion Progression Framework. This tool helps engineering and science professionals (and soon startups) self-assess and improve their D&I maturity.

Royal Society – the Society’s fundamental purpose is to recognise, promote, and support excellence in science and to encourage the development and use of science for the benefit of humanity.

Royal Society’s Data Management and Use: Governance in the 21st Century provides a comprehensive review on the needs of a 21st century data governance system.

Technology Strategy Board – the “Responsible Innovation Framework for commercialisation of research findings”. This framework was developed for use in assessing synthetic biology applications, but clearly has the potential to inform Responsible Technology more widely.

The Universal Declaration of Human Rights – drafted by representatives with different legal and cultural backgrounds from all regions of the world, the Declaration was proclaimed by the United Nations General Assembly in Paris on 10 December 1948 (General Assembly resolution 217 A) as a common standard of achievement for all peoples and all nations. It sets out, for the first time, fundamental human rights to be universally protected, and it has been translated into over 500 languages.

Ethics FAQs

Below are a few of the key questions about Machine Intelligence Garage and its work.

  • Why is ethical use of AI important?

    The responsible use of algorithms and data is paramount for the sustainable development of machine intelligence applications, as concluded by the recent House of Lords Artificial Intelligence Committee report. However, at present, there is a gap between theory and practice, between the ‘what’ of responsible AI and the ‘how’. There is demand from all sizes of organisation for help on defining and applying ethical standards in practice.

  • What does Machine Intelligence Garage do to support the ethical use of AI?

    Digital Catapult supports a number of activities to promote and ensure the ethical and sustainable use of AI. These activities began in the Machine Intelligence Garage programme.

    Digital Catapult’s Ethics Committee developed an ethics framework, tailored for use by startups, that is used to support the consultations that each company receives from ethics committee members. In addition, cohort startups can access longer-term consultations (called ‘deep dives’) and office hours with ethics committee members. The programme provides participants with a range of workshops and tools that centre on helping startups use data and run operations more ethically and responsibly, whilst embedding good practice within the company and product.

  • What is Digital Catapult’s ethics committee?

    At present, there is a gap between theory and practice, between the ‘what’ of responsible AI and the ‘how’. There is demand from all sizes of organisation for help on defining and applying ethical standards in practice.

    The Digital Catapult Ethics Committee, chaired by Luciano Floridi, Professor of Philosophy and Ethics of Information & Digital Ethics Lab Director at University of Oxford, convenes some of the foremost minds in AI and data ethics to address this need. Launched in 2018, it comprises two elements: the Steering Board, who oversee the development of principles and tools to facilitate responsible AI in practice, and the Advisory Group, who work closely with startups developing their propositions through Digital Catapult’s Machine Intelligence Garage programme.

    The Advisory Group’s collaboration with Machine Intelligence Garage startups ensures that the Committee’s work is tested and grounded in practice.

  • Why was the ethics committee set up?

    The Digital Catapult Ethics Committee was launched in 2018. It acts as an independent body dedicated to realising responsible AI development in the UK.

  • Who is on the ethics committee?

    Chaired by Luciano Floridi, Professor of Philosophy and Ethics of Information & Digital Ethics Lab Director at University of Oxford, the committee convenes some of the foremost minds in AI and data ethics to address the need for responsible and ethical use of AI. It comprises two elements: the Steering Board, who oversee the development of principles and tools to facilitate responsible AI in practice, and the Advisory Group, who work closely with startups developing their propositions through Digital Catapult’s Machine Intelligence Garage programme.

    A list of current committee members is available on the Digital Catapult website.

  • How can businesses get involved with the ethics work for Machine Intelligence Garage?

    AI startup companies can engage with the Ethics Committee by applying to join the Machine Intelligence Garage programme. As a programme participant you will have an ethics consultation within the first 3 months and you can apply to take part in a more comprehensive engagement (a ‘deep dive’).

    Larger organisations can ask to join our Industry Working Group. Please email us at appliedaiethics@digicatapult.org.uk if that is of interest.

    Participation in the programme is also an opportunity to influence the support offered and to ensure that the Committee’s work remains tested and grounded in practice.

  • What is the Ethics Framework?

    The Ethics Framework sets out key ethical considerations and underlying questions that provide a practical approach for startups, individual developers and experts to use in planning for and addressing the ethical challenges faced by their business, technology and ideas. The framework is used to support the consultations that Machine Intelligence Garage cohort startups receive from ethics committee members, but it is also perfectly possible (and encouraged) to use the framework independently.

    The Ethics Framework is licensed under a Creative Commons Attribution 4.0 International licence and we encourage feedback via appliedaiethics@digicatapult.org.uk

  • How is the framework intended to be used?

    The Framework consists of seven concepts with a corresponding list of questions for each. The questions are intended to illuminate the many contexts in which that ethical concept might be relevant to a business or project. Not all questions will be relevant at all times, and many questions will not have an immediate or clear answer.

    Consult the framework around key milestones in a project. The right time to start thinking about the questions is at the earliest stages of company growth. Consider current, near and mid-future potential effects. The framework will help to characterise ethical opportunities and potential risks and to be open and clear both internally and externally about how these are evaluated and managed.

    Users can download a copy of the framework on the ethics framework page.

  • What are the seven principles of the ethics framework?

    1. Be clear about the benefits of the product or service
    2. Know and manage the risks
    3. Use data responsibly
    4. Be worthy of trust
    5. Promote diversity, equality and inclusion
    6. Be open and understandable in communications
    7. Consider the business model

    The full framework and principles are available on the ethics framework page.

  • What other initiatives does Digital Catapult run in AI ethics?

    Digital Catapult operates an AI Ethics programme with a number of activities and outputs.

    This work includes the independent Ethics Committee, the Industry Working Group, an Ethics Hub and published papers on the topic. To find out more about our work in ethics, please visit the digicatapult.org.uk website.

  • How was the ethics framework established?

    Digital Catapult’s Ethics Committee released its first iteration of the Ethics Framework in 2018 after a series of consultations.

    The Ethics Framework is used in the consultations between the Advisory Group (a subset of the Ethics Committee) members and Machine Intelligence Garage startups, and it is updated regularly as a result of this feedback loop.

    The Ethics Framework is licensed under a Creative Commons Attribution 4.0 International licence and we encourage feedback via appliedaiethics@digicatapult.org.uk

  • What is the difference between the ethics steering board and advisory group?

    The steering board oversees the development of principles and tools to facilitate responsible AI in practice. The advisory group contributes to the supervisory role of the steering board while also participating in direct engagements with Machine Intelligence Garage cohorts.

  • What are the key activities of the ethics committee’s advisory group?

    The advisory group contributes to the supervisory role of the steering board; however, its main focus is to operationalise the work of the Ethics Committee through a range of ethics engagements with external organisations. Examples of such engagements include consultations, office hours, and deep dives with startups participating in Digital Catapult’s Machine Intelligence Garage programme.

    The advisory group’s collaboration with cohorts ensures that the committee’s work is tested and grounded in practice.

  • What are the key activities of the ethics committee’s steering board?

    The steering board oversees the development of principles and tools to facilitate responsible AI in practice. Its key activities are to:
    • Establish, maintain and, if needed, update the principles for the Committee’s work
    • Provide mentoring and advice for the operation of the ethics engagements
    • Review the delivery and efficacy of the engagements that are delivered in collaboration with the advisory group
    • Propose or advise on suggested programme improvements, extensions or other initiatives.

The importance of ethics in AI

The responsible use of algorithms and data is paramount for the sustainable development of machine intelligence applications. Find out what Digital Catapult and Machine Intelligence Garage are doing to address this need.

Meet the committee

The Digital Catapult AI Ethics Committee, chaired by Luciano Floridi, Professor of Philosophy and Ethics of Information & Digital Ethics Lab Director at University of Oxford, convenes some of the foremost minds in AI and data ethics to address the need for ethical and responsible use of AI.
