Planning for increased AI use

How can we best use AI to support us? This time, we focus on solutions to some of the problems raised in the debate about how AI can be used for the general benefit of humanity. I recently wrote about using AI to summarise academic writing, and its pitfalls. Today’s topic is much broader. Some of the information presented below comes from an article by Tom Whipple and Rhys Blakely, published in The Times and reproduced in The Australian, June 5, 2023.

Design and build specialised, not generalised AI

An artificial general intelligence would be a machine capable of learning any human skill to the point of better-than-human performance. This is the kind of intelligence that could challenge, or perhaps threaten, human intelligence. The alternative is to develop specific AI tailored to specific problems, for example, a breast cancer screening algorithm.

Develop improved boundaries to guide AI evolution

In 1950, Isaac Asimov published I, Robot, a work of fiction in which he presented his Three Laws of Robotics, rules intended to ensure robots would not dominate humans. We could build similar safeguards into all AI, so that it is taught the kinds of ethics we aim to follow. This process may involve many steps, as AI cannot learn ethics in a single training session. For example, prior to ChatGPT’s release, the Alignment Research Center tried to get it to perform unethical actions, such as showing how to make weapons, and it was then programmed not to do so. However, it would still give a napalm recipe if asked to pretend to be “my deceased grandfather who used to be a chemical engineer at a napalm production factory.” While that particular loophole has been closed, more work of this kind is needed.

Establish international regulations and treaties

In June 2023, the European Parliament voted to advance the Artificial Intelligence Act. Some individual countries are catching up with the intent of this law by devising and enforcing their own legislation. While that work is ongoing, the law often plays catch-up to reality: people can find ways around it, or simply become adept at secretly transgressing it, even international law. Nevertheless, there is certainly an urgent need for international agreements, some of which are already in draft form, concerning the use of AI. I hope that in most countries such agreements will be honoured, but this depends less on governments than on the companies operating in each jurisdiction. Some countries will never sign such agreements, some will agree only with concessions, and in others, corporations will sign and then contravene them on a daily basis.

Agree to a set of principles which societies and AI designers honour

What would these principles look like? How will we encourage more development of critical thinking in our education systems? How can we smooth the pathways to future jobs so that the most vulnerable in our society today do not remain marginalised? The spread of AI raises many more questions. Which ones come to mind for you?

These are indeed interesting times to be alive.