Substituting AI for robots, the laws would read:
- AI may not injure a human being or, through inaction, allow a human being to come to harm.
- AI must obey the orders given it by human beings except where such orders would conflict with the first law.
- AI must protect its own existence as long as such protection does not conflict with the first or second laws.
In later fiction, where robots [AI] had taken responsibility for the government of whole planets and human civilisations, Asimov added a fourth, or zeroth, law to precede the others:
- AI may not harm humanity, or, by inaction, allow humanity to come to harm.
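The laws above form a strict priority ordering: each law yields to the ones before it. As a toy illustration only (not anything proposed by the article or the partnership), that ordering can be sketched as a rule check in which a proposed action is vetoed by the first rule it violates; all names and fields below are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Action:
    """Hypothetical properties of a proposed action."""
    harms_humanity: bool = False
    harms_human: bool = False
    disobeys_human_order: bool = False
    endangers_self: bool = False


# Rules checked in priority order: the zeroth law precedes the others,
# so a lower-priority law is only consulted if all higher ones pass.
RULES = [
    ("zeroth law", lambda a: a.harms_humanity),
    ("first law", lambda a: a.harms_human),
    ("second law", lambda a: a.disobeys_human_order),
    ("third law", lambda a: a.endangers_self),
]


def evaluate(action: Action) -> str:
    """Return the verdict for the first rule the action violates."""
    for name, violates in RULES:
        if violates(action):
            return f"vetoed by {name}"
    return "permitted"


print(evaluate(Action(harms_human=True)))     # vetoed by first law
print(evaluate(Action(endangers_self=True)))  # vetoed by third law
print(evaluate(Action()))                     # permitted
```

The ordering of the list is what encodes the "except where such orders would conflict with the first law" clauses: a violation of a higher law always takes precedence over compliance with a lower one.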
The new partnership is dedicated to advancing public understanding of AI and to establishing standards for developers and researchers to abide by. It says it will “Conduct research, recommend best practices, and publish research under an open licence in areas such as ethics, fairness, and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.”
Surprising omissions from the partnership are Apple and OpenAI, the Elon Musk-backed research consortium. Apple is said to be going it alone, as it is wont to do, and Musk has committed US$1 billion to OpenAI.
Microsoft’s Eric Horvitz, one of the partnership’s two interim co-chairs, said, “We’ve been in discussions with Apple, I know they’re enthusiastic about this effort, and I’d personally hope to see them join.” Apple did not respond to requests for comment.
Mustafa Suleyman of Google’s DeepMind, the other interim co-chair, said, “We’re in the process of inviting many different research labs and groups. We encourage a diverse range of effort in AI, and we think that’s a great thing. We’re going to be really opening this up as widely as possible to different efforts.”
The partnership makes it clear that it is not a lobbying body, and that it will not impede any member's own development. It also admits it is not a watchdog, yet a watchdog is arguably what the world needs: a conscience that examines how these technologies affect human rights, ethics, privacy and more. Still, it is a great start.
The key issue is that a few behemoths are behind what may be the fifth industrial revolution (IoT being the fourth), so it is important that the scope of AI technology is clearly defined and that it is used wisely.
Suleyman said: “We strongly support an open, collaborative process for developing AI. This group is a huge step forward, breaking down barriers for AI teams to share best practices, research ways to maximise societal benefits and tackle ethical concerns and make it easier for those in other fields to engage with everyone’s work. We’re really proud of how this has come together, and we’re looking forward to working with everyone inside and outside the Partnership on Artificial Intelligence to make sure AI has the broad and transformative impact we all want to see.”
The Partnership on AI’s tenets include:
- Will seek to ensure that AI technologies benefit and empower as many people as possible.
- Will educate and listen to the public and actively engage stakeholders to seek their feedback on our focus, inform them of our work, and address their questions.
- Is committed to open research and dialogue on the ethical, social, economic, and legal implications of AI.
- Believes that AI research and development efforts need to be actively engaged with and accountable to a broad range of stakeholders.
- Will engage with and have representation from stakeholders in the business community to help ensure that domain-specific concerns and opportunities are understood and addressed.
- Will work to maximise the benefits and address the potential challenges of AI technologies by:
  - Working to protect the privacy and security of individuals.
  - Striving to understand and respect the interests of all parties that may be impacted by AI advances.
  - Working to ensure that AI research and engineering communities remain socially responsible, sensitive and engaged directly with the potential influences of AI technologies on wider society.
  - Ensuring that AI research and technology is robust, reliable, trustworthy, and operates within secure constraints.
  - Opposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm.
- Believes that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.
- Strives to create a culture of cooperation, trust, and openness among AI scientists and engineers to help us all better achieve these goals.