Journal
COMPUTER LAW & SECURITY REVIEW
Volume 35, Issue 4, Pages 410-422
Publisher
ELSEVIER ADVANCED TECHNOLOGY
DOI: 10.1016/j.clsr.2019.04.007
Keywords
AI artefacts; AI systems; AI ethics; AI risk assessment; AI risk management; AI Principles
Abstract
The first article in this series examined why the world wants controls over Artificial Intelligence (AI). This second article discusses how an organisation can manage AI responsibly, in order to protect its own interests, but also those of its stakeholders and society as a whole. A limited amount of guidance is provided by ethical analysis. A much more effective approach is to apply adapted forms of the established techniques of risk assessment and risk management. Critically, risk assessment needs to be undertaken not only with the organisation's own interests in focus, but also from the perspectives of other stakeholders. To underpin this new form of business process, a set of Principles for Responsible AI is presented, consolidating proposals put forward by a diverse collection of 30 organisations. (C) 2019 Roger Clarke. Published by Elsevier Ltd. All rights reserved.
Authors
Roger Clarke