Why is this important?

Digital Catapult’s Machine Intelligence Garage is a programme that provides startups with access to computing power and expertise to accelerate their development.

Artificial Intelligence (AI) as a technology has the potential to create meaningful positive change for society and the environment, and we strongly believe that for this to happen, ethics must become a central consideration for any individual or organisation developing AI. We believe that companies that have considered the ethical implications of the products and services they develop, and that monitor, manage, and communicate effectively about them, will have a competitive advantage over those that do not. To enable this, the Machine Intelligence Garage Ethics Committee has created an Ethical Framework consisting of seven concepts, along with corresponding questions intended to inform how these concepts might be applied in practice.

How to use this Framework?

Given the complexity of both AI as a technology and ethics in the context of AI, we anticipate that many questions will not have an immediate or clear answer. This should not discourage companies from continuing to engage with these challenges. The role of the framework, more than anything, is to help companies characterise the ethical opportunities and potential risks associated with their business and/or technology, and to be open and clear, both internally and externally, about how these are evaluated and managed. We encourage all companies to consider the principles most relevant to them and to share new ones, with us and with one another, to promote best practice across the industry as a whole.

What is the role of the Committee?

We highlight that the role of our Committee at this point in time is not to provide ethical assessment, but rather to support the companies with which we work to embed ethics effectively into their systems and company culture. The Committee will act as trusted advisors both to us, Digital Catapult, and to the companies on Machine Intelligence Garage. All Machine Intelligence Garage startups will receive tailored support from our Ethics Committee to establish an action plan and timeline to work towards.

How is this different from existing initiatives?

Our ambition is to create a highly practical and applicable framework that communicates in the language, and caters to the needs, of individuals and organisations developing AI-enabled products or services. In the research phase of this project, we found a multitude of useful resources which we used to form the basis of this initiative. We have strived to synthesise and summarise these findings in our framework while also providing a practical, question-based approach, which both ethics experts and the companies can use to establish an action plan.

What’s next?

This is the very beginning of our initiative and we acknowledge that our current framework might be incomplete and that new considerations will arise over time as thinking around the ethics of AI develops. We will continue updating our framework as we build up experience from working with the companies, as well as with other organisations and individuals in the fields of ethics and AI. As we progress in our thinking and understanding of this field, we will seek to share our findings with the wider community through roundtables, workshops and case studies, as well as the creation of further practical and applicable tools and resources for organisations and individuals to use.

The Machine Intelligence Garage Ethics Committee, chaired by Luciano Floridi, Professor of Philosophy and Ethics of Information & Digital Ethics Lab Director at the University of Oxford, will convene some of the foremost minds in AI and data ethics to bridge the gap between theory and practice, between the ‘what’ of responsible AI and the ‘how’. To view the full committee, click here.

Continue below for the framework.

WE ADVISE COMPANIES TO CONSULT THE FOLLOWING SEVEN POINTS:

The suggested action plan against these principles will be informed by product maturity and adoption, and we will therefore encourage companies to consider current, near-future and mid-future potential effects.

1. Be clear about the benefits of your product or service

While it is important to consider the risks of new technologies, this should be done in the context of expected benefits. The benefits should be clear and likely, and should outweigh reasonable potential risks. They should be evaluated for different user groups and for any affected non-user groups (especially when there are competing values or interests between these groups), and with consideration of plausible future trends or changes (such as greater compute capacity, a solution coming to dominate the market, etc.).

  • What are the goals, purposes and intended applications of your product or service?
  • Who or what might benefit from your product/service?
  • Consider all potential groups of beneficiaries, whether individual users, groups, or society and the environment as a whole.
  • Are those benefits common to the application type, or specific to your technology or implementation choices?
  • How will you monitor and test that your products or services meet these goals, purposes and intended applications?
  • How likely are the benefits, and how significant are they?
  • How are you assessing what the benefits are?
  • How are these benefits obtained by your various stakeholders?
  • Can the benefits of your product/service be demonstrated?
  • Might these benefits change over time?
  • What is your position on making (parts of) your products/services available on a non-commercial basis, or on sharing AI knowledge which would enable more people to develop useful AI applications?
2. Know and manage your risks

Safety and potential harm should be considered, both in consequence of the product’s intended use and of other reasonably foreseeable uses. For example, the possibility of malicious attacks on the technology needs to be thought through. Attacks may be against safety, security, data integrity, or other aspects of the system, such as to achieve some particular decision outcome. As with benefits, assessment of risks can be in respect of individuals (not just users), communities, society and the environment, and should consider plausible future trends or changes.

  • Have you considered what might be the risks of other foreseeable uses of your technology, including accidental or malicious misuse of it?
  • Have you considered all potential groups at risk, whether individual users, groups, or society and the environment as a whole?
  • Do you currently have a process to classify and assess potential risks associated with use of your product or service?
  • Who or what might be at risk from the intended and unintended applications of your product/service? Consider all potential groups at risk, whether individual users, groups, society as a whole or the environment.
  • Are those risks common to the application area or technology, or specific to your technology or implementation choices?
  • How likely are the risks, and how significant?
  • Do you have a plan to mitigate and manage the risks?
  • How do you communicate the potential risks or perceived risks to your users, potentially affected parties, purchasers or commissioners?
  • How do third-parties or employees report potential vulnerabilities, risks or biases, and what processes are in place to handle these issues and reports?
  • How do you know if you have created or reinforced bias with your system?
  • As a result of assessing potential risks, are there customers or use cases that you choose not to work with? How are these decisions made and documented?
3. Use data responsibly

Compliance with legislation (such as the GDPR) is a good starting point for an ethical assessment of data and privacy. However, there are other considerations arising from data-driven products that must be taken into account when assessing the ethical implications of an AI product or service, such as the aptness of data for use in situations not encountered in the training data, or whether the data contain unfair biases.

Data may come in many forms: as datasets, through APIs, or through labour (such as microtasking). The value exchange between those who provide the data (or label it), directly or otherwise, and the company should be considered for fairness. If data are used from public sources (e.g. open data collected by a public body or NGO), the company should consider whether it can contribute back or support the work of ongoing data maintenance, perhaps by providing cleaned or corrected data.

  • How were the data obtained, and was consent obtained (if required)?
  • Are the data current?
  • Are the training data appropriate for the intended use?
  • Are the data pseudonymised or de-identified? If not, why not?
  • Are the data uses proportionate to the problem being addressed?
  • Is there sufficient data coverage for all intended use cases?
  • What are the qualities of the data (for example, do the data come from a system prone to human error)?
  • Are potential biases in the data examined, well understood and documented, and is there a plan to mitigate them? (A minimal illustrative sketch follows this list.)
  • Do you have a process for discovering and dealing with inconsistencies or errors in the data?
  • What is the quality of the data analysis? How much uncertainty / error is there? What are the consequences which might arise from errors in analysis and how can you mitigate these?
  • Can you clearly communicate how data are being used and how decisions are being made?
  • What systems do you have in place to ensure data security and integrity?
  • Are there adequate methods in place for timely and auditable data deletion, once data are no longer needed?
  • Can individuals remove themselves from the dataset? Can they also remove themselves from any resulting models?
  • Is there a publicly available privacy policy in place, and to what extent are individuals able to control the use of data about them, even when they are not users of the service or product?
  • Are there adequate mechanisms for data curation in place to ensure external auditing and replicability of results, and, if a risk has manifested itself, attribution of responsibility?
  • Can individuals access data about themselves?
  • Are you making data available for research purposes?
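
Questions about data coverage and bias can feel abstract, so the sketch below (in Python, using pandas) illustrates one possible starting point: summarising how well different groups are represented in a training set and comparing outcome rates between them. This is illustrative only; the column names ("gender", "ethnicity", "outcome") and the file path are hypothetical placeholders, and a real assessment would use domain-appropriate attributes and more rigorous fairness measures.

```python
# Illustrative sketch only. Column names and the file path are hypothetical
# placeholders, not prescribed by this framework.
import pandas as pd


def representation_report(df: pd.DataFrame, group_cols: list) -> None:
    """Print how well each group is represented in the dataset."""
    for col in group_cols:
        counts = df[col].value_counts(dropna=False)
        shares = (counts / len(df)).round(3)
        print(f"\nRepresentation by '{col}':")
        print(pd.DataFrame({"count": counts, "share": shares}))


def outcome_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Compare the rate of a positive outcome across groups.

    Large gaps between groups may indicate bias inherited from the data and
    warrant further investigation before the data are used for training.
    """
    return df.groupby(group_col)[outcome_col].mean().sort_values()


if __name__ == "__main__":
    data = pd.read_csv("training_data.csv")  # hypothetical dataset
    representation_report(data, ["gender", "ethnicity"])
    print(outcome_rate_by_group(data, "gender", "outcome"))
```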
4. Be worthy of trust

For a technology or product to be trusted it needs to be understood, fit for purpose, reliable and competently delivered. Companies should be able to explain the purpose and limitations of their solutions so that users are not misled or confused. There should be processes in place to monitor and evaluate the integrity of the system over time, with clarity over what the quality measures are and how they were chosen. Care must be taken to operate within the company’s areas of competence, and to actively engage with third-party evaluation and questions. Things can go wrong, despite best efforts. Companies should put in place procedures to report, investigate, take responsibility for, and resolve issues. Help should be accessible and timely.

  • Within your company, are there sufficient processes and tools built-in to ensure meaningful transparency, auditability, reliability and suitability of the product output?
  • Have you acknowledged the limitations of your experience in relation to the system you are building, and how these might be reflected in the system? What steps are you taking to address these limitations?
  • Is the nature of the product or technology communicated in a way that the intended users, third parties and the general public can access and understand?
  • Are (potential) errors communicated and their impact explained?
  • Does your company actively engage with its employees, purchasers/commissioners, suppliers, users and affected third-parties so that ethical (including safety, privacy and security) concerns can be voiced, discussed, and addressed?
  • Does your company work with researchers where appropriate to explore or question areas of the technology?
  • Do you have a process to review and assure the integrity of the AI system over time and take remedial action if it is not operating as intended? (A minimal monitoring sketch follows this list.)
  • If human labour has been involved in data preparation (eg image labelling by Mechanical Turk workers) have the workers involved been fairly compensated?
  • If data comes from another source, have the data owner’s rights been preserved (eg copyright, attribution) and has permission been obtained?
  • Who is accountable if things go wrong? Are they the right people? Are they equipped with the skills and knowledge they need to take on this responsibility?
  • What are the quality standards to which the product or technology must conform (e.g. academic, peer-review, technical), what are the reasons for choosing those particular standards, and what does the company propose to do to maintain them?
  • In order to engender trust, are there customers, suppliers or use cases that you should choose not to work with? How are these decisions made and documented?
  • Does your company have a clear and easy to use system for third party/user or stakeholder concerns to be raised and handled?
  • Have you considered how to embed ethics within your organisation?
  • Have you considered how to embed integrity and fair dealing in your culture?
  • How would a person raise a concern with your company?
  • To inform your processes and culture, could you approach mentors or consult innovation hubs?
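
One of the questions above asks for a process to review the integrity of an AI system over time. As a purely illustrative sketch, assuming a classification-style system for which ground truth eventually becomes available, the Python below tracks live accuracy over a rolling window and flags the system for review when it drifts too far from the accuracy measured at validation time. The baseline, tolerance, window size and escalation route are hypothetical and would need to be set per product.

```python
# Illustrative sketch only: ongoing integrity monitoring for a deployed model.
# The baseline accuracy, tolerance and window size are hypothetical values.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class IntegrityMonitor:
    baseline_accuracy: float            # accuracy measured before deployment
    tolerance: float = 0.05             # acceptable drop before remedial action
    window: int = 500                   # number of recent labelled cases to keep
    recent_correct: list = field(default_factory=list)

    def record(self, prediction, actual) -> None:
        """Record whether a prediction matched the eventual ground truth."""
        self.recent_correct.append(prediction == actual)
        if len(self.recent_correct) > self.window:
            self.recent_correct.pop(0)

    def needs_review(self) -> bool:
        """Flag the system for review if live accuracy has drifted too far."""
        if len(self.recent_correct) < self.window:
            return False  # not enough evidence yet to judge drift
        live_accuracy = mean(self.recent_correct)
        return live_accuracy < self.baseline_accuracy - self.tolerance


# Usage: call record() whenever ground truth becomes available, and route
# needs_review() into whatever remedial or escalation process the company defines.
monitor = IntegrityMonitor(baseline_accuracy=0.92)
```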
5. Promote diversity, equality and inclusion

We will prioritise companies that can demonstrate that they value and actively seek diversity, equality and inclusion. Companies should consider the impact and utility of their product for individuals, larger groups and society as a whole, including its impact on widening or narrowing inequality, enabling or constraining discrimination, and other political, cultural and environmental factors.

  • Do you have processes in place to establish whether your product or service might have a negative impact on the rights and liberties of individuals or groups? Please consider:

– varied social backgrounds and education levels
– different ages
– different genders and/or sexual orientations
– different nationalities or ethnicities
– different political, religious and cultural backgrounds
– physical or hidden disabilities.

  • What actions can you take if negative impacts are identified?
  • Social impact can be difficult to demonstrate: have you considered processes that can enable you to demonstrate the positive impact your product or service brings?
  • Have you considered putting in place a diversity and inclusiveness policy in relation to recruitment and retention of staff?
  • Have you considered how to balance the specific responsibilities of a startup against other factors such as cost and freedom of choice for users?
  • Are potential biases in the data and processes examined, well understood and documented, and is there a plan to mitigate them?
  • Where do hiring practices and building culture fit in? For instance, are ethical questions raised at interviews? Are any principles/risk considerations communicated to new hires?
  • Does your company have a diversity and inclusiveness policy in relation to recruitment and retention of staff?
6. Be open and understandable in communications

Companies must be able to communicate clearly the benefits and potential risks of their products and the actions they have taken to deliver benefits and avoid, minimise, or mitigate the risks. They must ensure that processes are in place to address the concerns and complaints of users and other parties, and that these are transparent. We believe that effective communication, when coupled with a principled approach to ethical considerations, is a competitive advantage, and will lead to progress even when hard moral issues are on the line. Conversely, poor communication, and a lack of attention to the social and ethical environment for doing business, can result in adverse public reactions, direct legal repercussions as well as mounting regulation, and hence increased costs and higher rates of failure.

  • Does your company communicate clearly, honestly and directly about any potential risks of the product or service you are providing?
  • What does it communicate and when?
  • Does your company communicate clearly, honestly and directly about the processes in place to avoid, minimise or mitigate potential risks?
  • Does your company have a clear and easy to use system for third party/user or stakeholder concerns to be raised and handled?
  • Are the company’s policies relating to ethical principles available publicly and to employees? Are the processes to implement and update the policies open and transparent?
  • Does the company disclose matters beyond the product itself, e.g. projects, studies and other activities funded by the company, or with which the company may work in conjunction or otherwise be involved; the major sources of data and expertise that inform the insights of its AI solutions; and the methods used to train those systems and solutions?
  • Have you considered a communication strategy and process if something goes wrong?
7. Consider your business model

Integrity and fair dealing should be an integral part of organisational culture. Companies should consider what structures and processes are being employed to drive revenue or other material value to the organisation, as certain business models or pricing strategies can result in discrimination. Where possible and appropriate, companies should consider whether part of the product, service or data can be made available to the public.

  • What kind of corporate structure best meets your needs? As well as the traditional company limited by shares, there are a variety of ‘social enterprise’ alternatives, including the community interest company, co-operative, B-Corp and company limited by guarantee. Are any of these of interest?
  • Data exchange: are you providing free services in exchange for user data? Are there any ethical implications for this? Do users have a clear idea of how the data will be used, including any future linking/sale of the data?
  • What happens if the company is acquired? For example, what happens to its data and software?
  • Pricing: have you considered differential pricing? Are there any ethical considerations regarding your pricing strategy? Are there any vulnerable groups to which you would want to offer lower prices?
  • Data philanthropy: do you have data that you could let others (e.g. charities, researchers) use for public purpose benefits?
  • Are integrity and fair dealing embedded in your organisational culture?
Resources

In the process of drafting our ethics principles, we have researched and found useful the following publications:

  • Accenture – Fairness Tool. This tool, developed on the back of an Alan Turing Institute Data Study Group, is designed to quickly identify and then help fix problems in algorithms.
  • AI Code – the House of Lords Artificial Intelligence Committee’s “AI in the UK: ready, willing and able?” report recommends that a cross-sector AI Code be established, which can be adopted nationally and internationally. The code has five suggested principles.
  • AI Hippocratic Oath – an article by Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence. He edits the Hippocratic Oath sworn by generations of doctors to suggest an equivalent oath that AI practitioners can take to highlight their ethical commitments.
  • Asilomar AI Principles – these 23 principles, developed at a conference held by the Future of Life Institute, have been signed by 1,274 AI/robotics researchers and 2,541 others (as of 27 June 2018), including many household names in the world of AI and machine learning. The Future of Life Institute says, “We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone’s lives in coming years.”
  • Athena Swan – a charter that recognises and celebrates good practice towards the advancement of gender equality, established and managed by the British Equality Challenge Unit in 2005.
  • Center for Democracy and Technology – a not-for-profit organisation championing online civil liberties and human rights, driving policy outcomes that keep the internet open, innovative, and free. CDT has created a tool to make one think about the various challenges that could arise when designing, building, testing or implementing an algorithm: The tool; Blog piece about the tool.
  • DataKind – a not-for-profit organisation that brings together top data scientists with leading social change organisations to collaborate on cutting-edge analytics and advanced algorithms to maximise social impact. Their UK Principles establish what their community should abide by when working on data-for-good projects.
  • Datasheets for Datasets – a paper written by authors from Microsoft Research, University of Maryland, Cornell University, Georgia Tech and AI Now Institute proposing to document datasets for greater transparency and accountability. They describe how datasheets for datasets will facilitate better communication between dataset creators and users, and encourage the machine learning community to investigate how a dataset was created, what information it contains, what tasks it should and shouldn’t be used for, and whether it raises any ethical or legal concerns.
  • DCMS – the Data Ethics Framework (published June 2018) sets out seven principles for how data should be used in the public sector in order to “help maximise the value of data whilst also setting the highest standards for transparency and accountability when building or buying new data technology”. The associated Data Ethics Workbook sets out the questions that should be considered against each of the principles.
  • Doteveryone – (forthcoming) Responsible Technology Product Management Toolkit. “We are currently in the process of developing a number of assessment tools which product teams can work through to help them examine and evaluate how they handle the 3Cs (context, consequences, and contribution) of Responsible Technology in real time during the development cycle. The form of the assessments range from checklists to step-by-step information mapping to team board games.” Doteveryone is seeking help to road-test the 3C model.
  • EPSRC Principles of Robotics – 5 rules and 7 principles for regulating robots in the real world.  These “highlight the general principles of concern expressed by” a group of experts convened to “discuss robotics, its applications in the real world and the huge amount of promise it offers to benefit society” with the intention that they can “inform designers and users of robots in specific situations”.
  • Ethical OS Toolkit – released by the Institute for the Future and Omidyar Network, the Ethical OS Toolkit is “a toolkit designed to help technologists envision the potential risks and worst-case scenarios of how their technologies may be used in the future so they can anticipate issues and design and implement ethical solutions from the outset.”
  • The Future of Computing Academy (part of the Association for Computing Machinery) has proposed that the computer science community change its peer-review process to ensure that reviewers assess claims of impact as well as intellectual rigour. Hence researchers should think about and disclose any possible negative societal consequences of their work in their papers.
  • Google’s AI Principles – in June 2018, Google published seven principles to guide its work in AI research, product development and business decisions.
  • Information Accountability Foundation – this global information policy think tank helps frame and advance data protection law and practice through accountability-based information governance, providing tools for establishing legitimacy in big data projects.
  • It Speaks – a research report produced by King’s College London and ReFig in Canada with the aim of providing solutions to the ethical problem of bias that exists in artificial intelligence language data sets.
  • Open Data Institute – an independent, non-profit, non-partisan company focused on building an open, trustworthy data ecosystem. The ODI Data Ethics Canvas is a tool designed to help identify potential ethical issues associated with a data project or activity.
  • Partnership on AI – a multistakeholder organisation that brings together academics, researchers, civil society organisations, companies building and utilising AI technology, and other groups working to better understand AI’s impacts. Partnership on AI has developed a set of Thematic Pillars that provide guidance on principles for developing AI.
  • RAEng – Diversity and Inclusion Progression Framework. This tool helps engineering and science professionals (and soon startups) self-assess and improve their D&I maturity.
  • Royal Society – the Society’s fundamental purpose is to recognise, promote, and support excellence in science and to encourage the development and use of science for the benefit of humanity.
  • Technology Strategy Board – the “Responsible Innovation Framework for commercialisation of research findings”. This framework was developed for use in assessing synthetic biology applications, but clearly has the potential to inform Responsible Technology more widely.