Shahar Avin, Research Associate at the Centre for the Study of Existential Risk (CSER)
Shahar Avin is a postdoctoral researcher at the Centre for the Study of Existential Risk (CSER). He works with CSER researchers and others in the global catastrophic risk community to identify and design risk prevention strategies, by organising workshops, building agent-based models, and frequently asking naive questions. Prior to CSER, Shahar worked at Google for a year as a mobile/web software engineer. His PhD was in Philosophy of Science, on the allocation of public funds to research projects. His undergraduate degree was in Physics and Philosophy of Science, which followed mandatory service in the IDF. He has also worked at and with several startups over the years.
What prompted you to be part of this ethics committee?
It is increasingly clear that the impacts of AI technologies will be transformative: for society, the economy, and everyday life. While many of the expected impacts are beneficial, there are also associated risks, ranging from poor choices, through accidents, to malicious use. Through my work at the Centre for the Study of Existential Risk, and through initiatives such as the IEEE’s Ethically Aligned Design, I have researched and discussed such risks, and how to mitigate them, at a fairly abstract, system-focused level. I think it is equally important, however, to bring these concerns to the practitioners at the cutting edge of developing and implementing AI technologies, to help turn a Beneficial AI vision into a series of actionable day-to-day decisions and practices.
Why is responsible AI important?
It is trivial to note that all technologies can have negative effects, both intended and unintended, and that rules, norms and responsible choices can help us gain more of the benefits while minimising the harm. There are, however, several factors that make responsible innovation particularly relevant in the case of AI. The first is the general applicability, the “omni-use” nature, of the technology, which means it is unlikely to be effectively regulated or restricted through existing regulatory mechanisms. Second, the strong openness norms (in publication, code-sharing, etc.) mean the potential for rapid diffusion exists both for beneficial and for malicious uses, requiring rapid responses that practitioners are best placed to deliver. Finally, rapid technological progress means talent is often a bottleneck in creating new AI ventures (or expanding existing ones), leaving individual researchers and engineers in a strong negotiating position, including on topics such as intended use, safety and security considerations, and general ethical concerns. We have seen this play out already in domains such as lethal autonomous weapon systems, and I expect we’ll see more areas where researchers and engineers play a key role in guiding us towards ethical outcomes.