Why is this important?

Digital Catapult’s Machine Intelligence Garage is a programme that provides startups with access to computational power and expertise to accelerate their development.

Artificial Intelligence (AI) as a technology has the potential to create meaningful positive change for society and the environment, and we strongly believe that for this to happen, ethics must become a central consideration for any individual or organisation developing AI. We believe that companies that have considered the ethical implications of the products and services they develop, and that monitor, manage, and communicate effectively about them, will have a competitive advantage over those that do not. To enable this, the Machine Intelligence Garage Ethics Committee has created an Ethical Framework consisting of seven concepts, along with corresponding questions intended to inform how these concepts might be applied in practice.

How to use this Framework?

Given the complexity of both AI as a technology and ethics in the context of AI, we anticipate that many questions will not have an immediate or clear answer. This should not discourage companies from continuing to engage with these challenges. The role of the framework, more than anything, is to help companies characterise the ethical opportunities and potential risks associated with their business and/or technology, and to be open and clear, both internally and externally, about how these are evaluated and managed. We encourage all companies to consider the principles most relevant to them and to share new ones with us, and with one another, to promote best practice across the industry as a whole.

What is the role of the Committee?

We highlight that the role of our Committee at this point in time is not to provide ethical assessment, but rather to support the companies with which we work in embedding ethics effectively into their systems and company culture. The Committee will act as trusted advisors both to us, Digital Catapult, and to the companies on Machine Intelligence Garage. All Machine Intelligence Garage startups will receive tailored support from our Ethics Committee to establish an action plan and timeline to work towards.

How is this different from existing initiatives?

Our ambition is to create a highly practical and applicable framework that communicates in the language, and caters to the needs, of individuals and organisations developing AI-enabled products or services. In the research phase of this project, we found a multitude of useful resources which we used to form the basis of this initiative. We have strived to synthesise and summarise these findings in our framework while also providing a practical, question-list approach, which both ethics experts and the companies can use to establish an action plan.

What’s next?

This is the very beginning of our initiative, and we acknowledge that our current framework might be incomplete and that new considerations will arise over time as thinking around the ethics of AI develops. We will continue updating our framework as we build up experience from working with the companies, as well as with other organisations and individuals working in the fields of ethics and AI. As we progress in our thinking and understanding of this field, we will seek to share our findings with the wider community through roundtables, workshops and case studies, as well as the creation of further practical and applicable tools and resources for organisations and individuals to use.

The Machine Intelligence Garage Ethics Committee, chaired by Luciano Floridi, Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab at the University of Oxford, will convene some of the foremost minds in AI and data ethics to bridge the gap between theory and practice, between the ‘what’ of responsible AI and the ‘how’. To view the full committee click here.

Continue below for the framework.

WE ADVISE COMPANIES TO CONSULT THE FOLLOWING SEVEN POINTS:

The suggested action plan against these principles will be informed by product maturity and adoption; we will therefore suggest that companies consider current, near-future and mid-future potential effects.

1. Be clear about the benefits of your product or service

While it is important to consider the risks of new technologies, this should be done in the context of expected benefits. The benefits should be clear and likely, and should outweigh reasonably foreseeable risks. They should be evaluated for different user groups and for any affected non-user groups (especially when there are competing values or interests between these groups), and with consideration of plausible future trends or changes (such as greater compute capacity, a solution coming to dominate the market, etc.).

  • What are the goals, purposes and intended applications of your product or service?
  • Who or what might benefit from your product/service? Consider all potential groups of beneficiaries, whether individual users, groups, or society and the environment as a whole.
  • Are those benefits common to the application type, or specific to your technology or implementation choices?
  • How will you monitor and test that your products or services meet these goals, purposes and intended applications?
  • How likely are the benefits, and how significant?
  • How are you assessing what the benefits are?
  • How are these benefits obtained by your various stakeholders?
  • Can the benefits of your product/service be demonstrated?
  • Might these benefits change over time?
  • What is your position on making (parts of) your products/services available on a non-commercial basis, or on sharing AI knowledge which would enable more people to develop useful AI applications?

2. Know and manage your risks

Safety and potential harm should be considered, both in consequence of the product’s intended use and of other reasonably foreseeable uses. For example, the possibility of malicious attacks on the technology needs to be thought through. Attacks may be against safety, security, data integrity, or other aspects of the system, such as to achieve some particular decision outcome. As with benefits, assessment of risks can be in respect of individuals (not just users), communities, society and the environment, and should consider plausible future trends or changes.

  • Have you considered what might be the risks of other foreseeable uses of your technology, including accidental or malicious misuse of it?
  • Have you considered all potential groups at risk, whether individual users, groups, or society and the environment as a whole?
  • Do you currently have a process to classify and assess potential risks associated with use of your product or service?
  • Who or what might be at risk from the intended and unintended applications of your product/service?
  • Are those risks common to the application area or technology, or specific to your technology or implementation choices?
  • How likely are the risks, and how significant?
  • Do you have a plan to mitigate and manage the risks?
  • How do you communicate the potential risks or perceived risks to your users, potentially affected parties, purchasers or commissioners?
  • How do third-parties or employees report potential vulnerabilities, risks or biases, and what processes are in place to handle these issues and reports?
  • How do you know if you have created or reinforced bias with your system?
  • As a result of assessing potential risks, are there customers or use cases that you choose not to work with? How are these decisions made and documented?

3. Use data responsibly

Compliance with legislation (such as the GDPR) is a good starting point for an ethical assessment of data and privacy. However, there are other considerations that arise from data-driven products, such as whether the training data adequately represent the situations the product will encounter in use, or whether the data contain unfair biases, that must be taken into account when assessing the ethical implications of an AI product or service.

Data may come in many forms: as datasets, through APIs, or through labour (such as microtasking). The value exchange between those who provide the data (or label it), directly or otherwise, and the company should be considered for fairness. If data are used from public sources (e.g. open data collected by a public body or NGO), the company should consider whether it may contribute back or support the work of ongoing data maintenance, perhaps by providing cleaned or corrected data.

  • How were the data obtained, and was consent obtained (if required)?
  • Are the data current?
  • Are the training data appropriate for the intended use?
  • Are the data pseudo-anonymised or de-identified? If not, why not?
  • Are the data uses proportionate to the problem being addressed?
  • Is there sufficient data coverage for all intended use cases?
  • What is the quality of the data (for example, do the data come from a system prone to human error)?
  • Are potential biases in the data examined, well-understood and documented and is there a plan to mitigate against them?
  • Do you have a process for discovering and dealing with inconsistencies or errors in the data?
  • What is the quality of the data analysis? How much uncertainty / error is there? What are the consequences which might arise from errors in analysis and how can you mitigate these?
  • Can you clearly communicate how data are being used and how decisions are being made?
  • What systems do you have in place to ensure data security and integrity?
  • Are there adequate methods in place for timely and auditable data deletion, once data are no longer needed?
  • Can individuals remove themselves from the dataset? Can they also remove themselves from any resulting models?
  • Is there a publicly available privacy policy in place, and to what extent are individuals able to control the use of data about them, even when they are not users of the service or product?
  • Are there adequate mechanisms for data curation in place to ensure external auditing and replicability of results, and, if a risk has manifested itself, attribution of responsibility?
  • Can individuals access data about themselves?
  • Are you making data available for research purposes?

4. Be worthy of trust

For a technology or product to be trusted, it needs to be understood, fit for purpose, reliable and competently delivered. Companies should be able to explain the purpose and limitations of their solutions so that users are not misled or confused. There should be processes in place to monitor and evaluate the integrity of the system over time, with clarity over what the quality measures are and how they were chosen. Care must be taken to operate within the company’s areas of competence, and to actively engage with third-party evaluation and questions. Things can go wrong, despite best efforts. Companies should put in place procedures to report, investigate, take responsibility for, and resolve issues. Help should be accessible and timely.

  • Within your company, are there sufficient processes and tools in place to ensure meaningful transparency, auditability, reliability and suitability of the product output?
  • Have you acknowledged the limitations of your experience with the system you are building, and how these might be reflected in the system itself? What steps are you taking to address these limitations?
  • Is the nature of the product or technology communicated in a way that the intended users, third parties and the general public can access and understand?
  • Are (potential) errors communicated and their impact explained?
  • Does your company actively engage with its employees, purchasers/commissioners, suppliers, users and affected third-parties so that ethical (including safety, privacy and security) concerns can be voiced, discussed, and addressed?
  • Does your company work with researchers where appropriate to explore or question areas of the technology?
  • Do you have a process to review and assure the integrity of the AI system over time and take remedial action if it is not operating as intended?
  • If human labour has been involved in data preparation (e.g. image labelling by Mechanical Turk workers), have the workers involved been fairly compensated?
  • If data comes from another source, have the data owner’s rights been preserved (e.g. copyright, attribution) and has permission been obtained?
  • Who is accountable if things go wrong? Are they the right people? Are they equipped with the skills and knowledge they need to take on this responsibility?
  • What are the quality standards to which the product/technology must conform (e.g. academic, peer-review, technical)? What are the reasons for choosing these particular standards, and what does the company propose to do to maintain them?
  • In order to engender trust, are there customers, suppliers or use cases that you should choose not to work with? How are these decisions made and documented?
  • Does your company have a clear and easy to use system for third party/user or stakeholder concerns to be raised and handled?
  • Have you considered how to embed ethics within your organisation?
  • Have you considered how to embed integrity and fair dealing in your culture?
  • How would a person raise a concern with your company?
  • To inform your processes and culture, could you approach mentors or consult innovation hubs?

5. Promote diversity, equality and inclusion

We will prioritise companies that can demonstrate that they value and actively seek diversity, equality and inclusion. Companies should consider the impact and utility of their product for individuals, larger groups and society as a whole, including its impact on widening or narrowing inequality, enabling or constraining discrimination, and other political, cultural and environmental factors.

  • Do you have processes in place to establish whether your product or service might have a negative impact on the rights and liberties of individuals or groups? Please consider:

– varied social backgrounds and education levels
– different ages
– different genders and/or sexual orientations
– different nationalities or ethnicities
– different political, religious and cultural backgrounds
– physical or hidden disabilities.

  • What actions can you take if negative impacts are identified?
  • Social impact can be difficult to demonstrate: have you considered processes that would enable you to demonstrate the positive impact your product or service brings?
  • Have you considered putting in place a diversity and inclusiveness policy in relation to recruitment and retention of staff?
  • Have you considered how to balance the specific responsibilities of a startup against other factors such as cost and freedom of choice for users?
  • Are potential biases in the data and processes examined, well-understood and documented, and is there a plan to mitigate against them?
  • Where do hiring practices and building culture fit in? For instance, are ethical questions raised at interviews? Are any principles/risk considerations communicated to new hires?
  • Does your company have a diversity and inclusiveness policy in relation to recruitment and retention of staff?

6. Be open and understandable in communications

Companies must be able to communicate clearly the benefits and potential risks of their products and the actions they have taken to deliver benefits and avoid, minimise, or mitigate the risks. They must ensure that processes are in place to address the concerns and complaints of users and other parties, and that these are transparent. We believe that effective communication, when coupled with a principled approach to ethical considerations, is a competitive advantage, and will lead to progress even when hard moral issues are on the line. Conversely, poor communication, and a lack of attention to the social and ethical environment for doing business, can result in adverse public reactions, direct legal repercussions as well as mounting regulation, and hence increased costs and higher rates of failure.

  • Does your company communicate clearly, honestly and directly about any potential risks of the product or service you are providing?
  • What does it communicate and when?
  • Does your company communicate clearly, honestly and directly about the processes in place to avoid, minimise or mitigate potential risks?
  • Does your company have a clear and easy to use system for third party/user or stakeholder concerns to be raised and handled?
  • Are the company’s policies relating to ethical principles available publicly and to employees? Are the processes to implement and update the policies open and transparent?
  • Does the company disclose matters other than the product, e.g. projects, studies and other activities that the company funds, works in conjunction with or is otherwise involved in; the major sources of data and expertise that inform the insights of its AI solutions; and the methods used to train those systems and solutions?
  • Have you considered a communication strategy and process if something goes wrong?

7. Consider your business model

Integrity and fair dealing should be an integral part of organisational culture. Companies should consider what structures and processes are being employed to drive revenue or other material value to the organisation, as certain business models or pricing strategies can result in discrimination. Where possible and appropriate, companies should consider whether part of the product, service or data can be made available to the public.

  • What kind of corporate structure best meets your needs? As well as the traditional company limited by shares, there are a variety of ‘social enterprise’ alternatives, including community interest companies, co-operatives, B-Corps and companies limited by guarantee. Are any of these of interest?
  • Data exchange: are you providing free services in exchange for user data? Are there any ethical implications of this? Do users have a clear idea of how the data will be used, including any future linking/sale of the data?
  • What happens if the company is acquired? For example, what happens to its data and software?
  • Pricing: have you considered differential pricing? Are there any ethical considerations regarding your pricing strategy? Are there any vulnerable groups to which you would want to offer lower prices?
  • Data philanthropy: do you have data that you could let others (e.g. charities, researchers) use for public purpose benefits?
  • Are integrity and fair dealing embedded in your organisational culture?