Digital Catapult establishes Ethics Committee to help translate responsible AI theory into practice.

The responsible use of algorithms and data is paramount for the sustainable development of machine intelligence applications, as concluded by the recent House of Lords Artificial Intelligence Committee report [1]. At present, however, there is a gap between theory and practice – between the ‘what’ of responsible AI and the ‘how’. Organisations of all sizes are asking for help in defining and applying ethical standards in practice.

The Machine Intelligence Garage Ethics Committee, chaired by Luciano Floridi, Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab at the University of Oxford, will convene some of the foremost minds in AI and data ethics to address this need. It comprises two elements: the Steering Board, which will oversee the development of principles and tools to facilitate responsible AI in practice, and the Advisory Group, which will work closely with startups developing their propositions through Digital Catapult’s Machine Intelligence Garage programme.

While the programme itself provides access to expertise and computational power, the Advisory Group’s collaboration with Machine Intelligence Garage startups will ensure that the Committee’s work is tested and grounded in practice.

Digital Catapult’s Machine Intelligence Garage Ethics Committee has released its first Ethics Framework and invites AI companies to test the framework as a means to integrate ethical practice into the development of artificial intelligence and machine learning technologies.

The Ethics Framework sets out key ethical considerations and underlying questions that provide a practical approach for startups, individual developers and experts to use in planning for and addressing the ethical challenges faced by their business, technology and ideas.

[1] House of Lords Artificial Intelligence Committee, “AI in the UK: ready, willing and able?” (2018)

Key Activities of the Advisory Group:

  • Lead an initial workshop to establish the principles for the Committee’s work
  • Participate in half-day roundtable discussions twice a year to review and update those principles
  • Assess applications from Machine Intelligence Garage companies

Key Activities of the Steering Board:

  • Provide mentoring and advice to support the operation of the Ethics Committee’s Advisory Group
  • Participate in half-day roundtable discussions twice a year to review and update the principles for the Committee’s work

Advisory Group

Hetan Shah, Executive Director at Royal Statistical Society

Hetan is executive director of the Royal Statistical Society – a membership body for over 9,000 statisticians and data scientists – with a vision to put data at the heart of understanding and decision-making. He is visiting professor at the Policy Institute, King’s College London, and is chair of the Friends Provident Foundation, a grant making trust. Hetan is a member of the advisory board of the Office for National Statistics Data Science Campus, and a member of the Big Lottery Fund’s Data and Evidence Advisory Group.

What prompted you to be part of this ethics committee?

I’m involved in a lot of discussions on data ethics but they can be abstract. Real cases are where the rubber hits the road.

Why is responsible AI important?

Data use, algorithms and AI are powerful technologies, and we should use them for public good.


Burkhard Schafer, Professor of computational legal theory at University of Edinburgh

Burkhard Schafer studied Theory of Science, Logic, Theoretical Linguistics, Philosophy and Law at the Universities of Mainz, Munich, Florence and Lancaster. His main field of interest is the interaction between law, science and computer technology, especially computer linguistics: how can law, understood as a system, communicate with systems external to it – be it the law of other countries (comparative law and its methodology) or science (evidence, proof and trial process)? As a co-founder and co-director of the Joseph Bell Centre for Legal Reasoning and Forensic Statistics, he helps to develop new approaches to assist lawyers in evaluating scientific evidence, and computer models which embody these techniques. A special interest is the development of computer systems that help law enforcement agencies to co-operate more efficiently across jurisdictions, assisting them in the interpretation of the legal environment within which evidence in other jurisdictions is collected. This research is linked to his wider interest in comparative law and its methodology, the idea of a “Chomsky turn in comparative law”, and the project of a “computational legal theory”.

What prompted you to be part of this ethics committee?

Ethics is too often seen as a compliance burden, a tick-box exercise, or a set of “you must nots”. Instead, it should be the place to imagine better worlds, and part of a drive for excellence.

Why is responsible AI important?

AI can only realise its full potential if it gains public acceptance and if the consequences of its deployment inform every step of its design and development.


Laura James, Entrepreneur in Residence at Cambridge Computer Lab

Laura James works with emerging technologies in new and growing organisations across sectors, and has been active in the tech responsibility space since 2016, with a focus on practical ways to improve industry practice. Working with businesses and learning about their technologies, challenges and opportunities has always been fascinating to her, and she enjoys supporting early stage and growing organisations. Laura is looking forward to helping the Machine Intelligence Garage startups act responsibly as regards their users, society more broadly, and other stakeholders, as well as exploring the tradeoffs and choices they face.

What prompted you to be part of this ethics committee?

Responsible technology development is critical, so that we get technology that is useful, has benefits definitely outweighing harms, that we can rely on and have confidence in. This means considering the potential impacts given the wider context in which the technology is developed, used and maintained, thinking creatively and with others about risks and changes and planning for unintended consequences which might arise in futures both good and bad. As machine learning rapidly enters more parts of our lives, it is important that we create and operate it thoughtfully, apply it to problems appropriately, and build systems that are trustworthy. We cannot assume that retrospective action could fix problems caused by irresponsible design and engineering, especially not for data systems which can reach huge scale very quickly.



Christine Henry, Product Manager, Real World Insights at IQVIA

Dr Christine Henry is working as a Product Manager at Amnesty International on Amnesty Decoders, an online volunteering platform. She previously worked at IQVIA as a Product Manager on a healthcare platform to explore and analyse patient data. Christine has over eight years of experience in healthcare data analysis, forecasting, and market access, as well as knowledge of machine learning and data science. She holds a PhD in physical chemistry from the Australian National University, and a law degree. Christine is passionate about investigating the ethical and social impacts of new technologies and data. She is a volunteer at DataKind UK, where she works with teams of pro bono data scientists to help charities and nonprofits to use data science techniques to have a greater impact. She led the development of DataKind UK’s ethical principles for data science volunteers and has presented on this work at conferences and meetings.

What prompted you to be part of this ethics committee?

Having worked on DataKind UK’s ethical principles for pro bono data scientists in the non-profit space, I’m very interested in expanding this work by helping to make data and AI ethics practically applicable across a whole range of companies and business areas. I think that the work MI Garage and Digital Catapult are doing around supporting new entrants in AI and machine learning is important for diversity and competition in the space, providing alternatives beyond the established companies. If we in the Ethics Committee can help embed an “Ethical by Design” culture for early-stage companies, this will not only protect users and data subjects but it can also help companies succeed.

Why is responsible AI important?

AI is already impacting our lives – affecting everything from movie recommendations or the ads we see online, to healthcare plans and people’s ability to get benefits or find employment. Responsible AI development and use is needed not for some future superintelligence but for right now. Perhaps it’s useful to consider the alternative: irresponsible AI would mean building things because they can be built, without considering whether they should be or whether there should be limits. It would mean not considering potential biases in AI tools (due to biases in the data used to build them or in the models), putting out products that harm some groups more than others without realising it, let alone attempting to mitigate or improve this. AI without responsibility would also mean irresponsible use of data: not engaging with questions around consent, privacy, secure minimal retention and fair conditions for any data suppliers. Finally, irresponsible AI means ignoring the responsibility to show engagement with these issues – not being willing to produce auditable or interpretable tools, and staying silent on ethics and risks when users (and society) are looking for best practice and open discussion. If we think about it this way (which is not unlike the status quo in some companies and some industries), it becomes clear that engaging in responsible AI is not only good for society and for users of tools, but is likely to create higher quality AI applications and more sustainable companies.


Josh Cowls, Data Ethics Researcher at The Alan Turing Institute

Josh Cowls researches the ethics and politics of data science and AI at the Alan Turing Institute in London. Josh is also a Research Associate at the Oxford Internet Institute, University of Oxford. While at the Turing, Josh has helped launch its Ethics Advisory Group to ensure that the research conducted at the Institute is ethically sound, and has also contributed to the creation of the Ada Lovelace Institute, in partnership with the Nuffield Foundation. Josh’s research interests lie at the intersection of ethics, politics and communication, and his forthcoming PhD will explore the legitimacy of algorithmic decision-making in society.

What prompted you to be part of this ethics committee?

The UK is blessed with an incredible legacy of invention, world-class academic institutions, and an enterprising spirit. We have a terrific opportunity to draw on these advantages and create game-changing ML and AI solutions – but to truly set ourselves apart, innovation needs to be in lockstep with ethics. As the creation of this Ethics Committee shows, Machine Intelligence Garage thinks so too, and I’m delighted to take part in helping to ensure that the benefits of digital technologies reach everyone.

Why is responsible AI important?

If the events of the last couple of years have taught us anything, it’s that innovation and ethics aren’t mutually exclusive – they’re mutually dependent. Having a clear sense of the ethical and social implications of the technology we build is the smartest way to anticipate challenges down the line and create a truly responsible and sustainable business. Given the enormous scope for impact on society that AI represents, it’s more important than ever to ensure that this impact is positive.


Shahar Avin, Research Associate at Centre for the study of Existential Risk (CSER)

Shahar Avin is a postdoctoral researcher at the Centre for the Study of Existential Risk (CSER). He works with CSER researchers and others in the global catastrophic risk community to identify and design risk prevention strategies, through organising workshops, building agent-based models, and by frequently asking naive questions. Prior to CSER, Shahar worked at Google for a year as a mobile/web software engineer. His PhD was in Philosophy of Science, on the allocation of public funds to research projects. His undergraduate degree was in Physics and Philosophy of Science, which followed mandatory service in the IDF. He has also worked at and with several startups over the years.

What prompted you to be part of this ethics committee?

It is increasingly clear that the impacts of AI technologies will be transformative: for society, the economy, and everyday life. While many of the expected impacts are beneficial, there are also associated risks, from poor choices, through to accidents and to malicious use. Through my work at the Centre for the Study of Existential Risk, and through initiatives such as the IEEE’s Ethically Aligned Design, I have researched and discussed such risks, and how to mitigate them, at a fairly abstract, system-focused level. I think it is equally important, however, to bring these concerns to the practitioners at the cutting edge of developing and implementing AI technologies, to help turn a Beneficial AI vision into a series of actionable day-to-day decisions and practices.

Why is responsible AI important?

It is trivial to note that all technologies can have negative effects, both intended and unintended, and that rules, norms and responsible choices can help us gain more of the benefits while minimising the harm. There are, however, several factors that make responsible innovation particularly relevant in the case of AI. The first is the general applicability, the “omni-use nature”, of the technology, which means it is unlikely to be effectively regulated or restricted through existing regulatory mechanisms. The strong openness norms (in terms of publication, code-sharing, etc.) mean the potential for rapid diffusion exists both for beneficial and for malicious uses, requiring rapid responses that practitioners are best able to deliver. Finally, the rapid technological progress means talent is often a bottleneck in creating new AI ventures (or expanding existing ones), leaving individual researchers and engineers in a strong negotiating position, including on topics such as intended use, safety & security considerations, and general ethical concerns. We have seen this play out already in domains such as lethal autonomous weapon systems, and I expect we’ll see more areas where researchers and engineers play a key role in guiding us towards ethical outcomes.


Steering Board

Jo Twist, Chief Executive Officer at UKIE – The Association for UK Interactive Entertainment

Jo Twist is CEO of Ukie, the trade body for UK games and interactive entertainment, working to make the UK the best place in the world to make, sell and play games. She is also Deputy Chair of the British Screen Advisory Council, London Tech Ambassador, Chair of the BAFTA Games Committee, an Ambassador on the Mayor of London’s Cultural Leadership Board, and a Creative Industries Council member. In 2016 she was awarded an OBE for services to the creative industries and won the MCV 30 Women in Games award for Outstanding Contribution. She is a Vice President of SpecialEffect, the games accessibility charity, and the government’s Sector Champion for Disabilities. Previously, Jo was Education Commissioning Editor for Channel 4, where she commissioned the Digital Emmy-winning Battlefront II, free-to-play browser and iOS games, and social media projects. Jo was Multiplatform Commissioner for BBC Entertainment & Switch, BBC Three Multiplatform Channel Editor, and a technology reporter for BBC News. Her doctorate in the late 1990s was an ethnography exploring identity and concepts of difference in place-based and virtual communities.

What prompted you to be part of this ethics committee?

AI and games have gone hand in hand for many years. Games rely on AI and data to shape experiences, characters, environments, as well as identifying different player behaviours. Many of the applications and innovation of AI in games are being adopted by other sectors, which could bring huge benefits to other industries, and – as large scale complex simulated environments – games themselves provide excellent training grounds for AI. The UK’s games industry is recognised globally as one of the best, most innovative and creative, and it has a clear role to play in the development and understanding of AI in society – across all sectors.

Why is responsible AI important?

We have a responsibility to shape technology for public benefit rather than let technology shape us. It is critical that we are driving the application and impact of AI as a society in a positive way as well as educating the public on potential pitfalls and unintended consequences.


Jeni Tennison, CEO at Open Data Institute

Jeni Tennison is the CEO of the Open Data Institute. She gained a PhD in Artificial Intelligence, then worked as an independent consultant specialising in open data publishing and consumption. She was the Technical Architect and Lead Developer for legislation.gov.uk before joining the ODI as Technical Director in 2012, becoming CEO in 2016.

Jeni sits on the UK’s Open Standards Board; the Advisory Board for the Open Contracting Partnership; the Board of Ada, the UK’s National College for Digital Skills; the Co-operative’s Digital Advisory Board; and the Board of the Global Partnership for Sustainable Development Data.


Hayaatun Sillem, CEO at Royal Academy of Engineering

Dr Hayaatun Sillem is the Chief Executive of the Royal Academy of Engineering, which brings together the UK’s leading engineers and technologists for a shared purpose: to promote engineering excellence for the benefit of society. Prior to her appointment as Chief Executive, she held the post of Deputy CEO at the Academy. She previously served as Committee Specialist and later Specialist Adviser to the House of Commons Science & Technology Committee. Hayaatun has extensive leadership experience in UK and international engineering, innovation, and diversity and inclusion activities. She is a trustee of the London Transport Museum and EngineeringUK, and Chair of Judges (designate) for the St Andrews Prize for the Environment. Hayaatun has a Masters in Biochemistry (MBiochem) from the University of Oxford and a PhD in signal transduction from Cancer Research UK/University College London. She is a Fellow of the Institution of Engineering and Technology.

What prompted you to be part of this ethics committee?

Engineering and technology play a fundamental role in shaping the world around us. The Royal Academy of Engineering exists to promote engineering excellence for the benefit of society, and supporting the ethical development and deployment of technology is a key part of that mission.

Why is responsible AI important?

AI offers an incredibly powerful set of tools and techniques which can help create economic opportunity and deliver societal benefits, but we cannot assume that this will happen automatically. A responsible approach to AI will give us the best chance of maximising the benefits to people across all parts of society and of minimising the risk of negative impacts and loss of trust or confidence.


Wendy Hall, Regius Professor of Computer Science at the University of Southampton

Dame Wendy Hall, DBE, FRS, FREng is Regius Professor of Computer Science, Pro Vice-Chancellor (International Engagement) at the University of Southampton, and is the Executive Director of the Web Science Institute. With Sir Tim Berners-Lee and Sir Nigel Shadbolt she co-founded the Web Science Research Initiative in 2006 and is the Managing Director of the Web Science Trust, which has a global mission to support the development of research, education and thought leadership in Web Science. She became a Dame Commander of the British Empire in the 2009 UK New Year’s Honours list, and is a Fellow of the Royal Society. She has previously been President of the ACM, Senior Vice President of the Royal Academy of Engineering, a member of the UK Prime Minister’s Council for Science and Technology, was a founding member of the European Research Council and Chair of the European Commission’s ISTAG 2010-2012, and was a member of the Global Commission on Internet Governance. She is currently a member of the World Economic Forum’s Global Futures Council on the Digital Economy, and is co-Chair of the UK government’s AI Review, which was published in October 2017.

What prompted you to be part of this ethics committee?

The UK has the expertise and skills to lead the way on Responsible AI, but this means a grounding in the real world is required to find out what will work in practice. The Machine Intelligence Garage Ethics Committee is uniquely focused on developing and testing practical tools and processes to help AI developers to decide when AI is responsible and when it isn’t. In the future these could be used as benchmarks for wider use in industry and government.

Why is responsible AI important?

AI technologies can be hugely powerful, and we need to ensure as far as possible that they are used for good, not for harm. It is important to have responsible AI in mind from the research stage through to production.


Sir William Blair, Professor of Financial Law and Ethics at Queen Mary, University of London

Sir William (Bill) Blair is Professor of Financial Law and Ethics at Queen Mary University of London’s Centre for Commercial Law Studies. He is a former senior judge, now an arbitrator at barristers’ Chambers at 3 Verulam Buildings, London, and holds appointments at the EU level and at the Bank of England. His interests lie in the field of fintech, and how AI may help, for example, in addressing the problem of financial exclusion.

What prompted you to be part of this ethics committee?

As is well known, fintech is one of the key areas where AI is making an impact, and this is a trend that will grow. Finance is central to what we do, and as machines begin to take financial decisions which affect the lives of people, we have to recognise the ethical issues that arise along the way, and try to embed high standards from the start.

Why is responsible AI important?

If we look at it from the perspective of finance, an example is the growing problem globally of financial exclusion, where people and even countries can find it hard to gain access to the financial system. Responsible AI can play a part in finding a solution, because it has the potential to take more focused decisions on concerns that are keeping people out, such as money laundering and cybercrime, by making more sense of masses of data than is possible in current practice. Irresponsible AI on the other hand would only make matters worse.


Luciano Floridi, Turing Fellow and Chair of the Data Ethics Group, Professor of Philosophy and Ethics of Information, University of Oxford

Luciano Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford, where he directs the Digital Ethics Lab of the Oxford Internet Institute, and is Professorial Fellow of Exeter College. He is also Turing Fellow and Chair of the Data Ethics Group at the Alan Turing Institute. His areas of expertise include digital ethics, the philosophy of information, and the philosophy of technology. His recent books, all published by Oxford University Press, include The Fourth Revolution – How the infosphere is reshaping human reality (2014), winner of the J. Ong Award; The Ethics of Information (2013); The Philosophy of Information (2011); and The Logic of Information (forthcoming in 2019).

What prompted you to be part of this ethics committee?

I have a keen interest in translating ethical thinking about AI into good governance principles, in a way that can support society and industry.

Why is responsible AI important?

It is important because it can provide great solutions that are environmentally sustainable and socially preferable.
