AI and AGI – The Role and Responsibility of Advisers

11th April 2023

When we debate the merits of AI and AGI, it is hard not to think back to HAL, the highly advanced computer in Stanley Kubrick’s film 2001: A Space Odyssey, whose superhuman intelligence enables him to guide a human crew through space and operate their ship.

While HAL initially supports the crew and seems eerily benevolent, the film develops into a frightening psychological and physical thriller played out in a battle for survival between humans and the computer they have created.  

When HAL speaks, his tone is calm and controlled, and he appears both logical and benevolent: “I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.”

While at first glance HAL’s comment seems to be about the importance of achieving one’s full potential, it attaches no weight to an individual’s moral duty or to their responsibility to strive towards a greater good.

As we develop AI, it is important to remain mindful of the remarkable achievements of computers and their potential, while also recognising the need for a moral compass and a guiding hand to be applied both to the technology being created and to the information being disseminated.

One of the key responsibilities of advisers in the private sector may be to provide this guidance and help navigate these areas, especially while the laws around AI are still being developed.

This article explores the current approach of governments, which face the challenge of navigating and controlling the wider use of AI and its development into AGI while balancing the civil liberties and human rights considerations they are seeking to regulate.

As we increasingly work with AI, we as professionals and advisers must remain mindful both of our ability to maximise our potential and of the technology we are using. In particular, we have a duty to ensure that anything we create or utilise is managed in a way that embodies values we support and trust. Otherwise we risk, like HAL’s crew, being threatened by the advanced technology we create, perhaps in ways we cannot yet imagine.