Generative AI, heralded as the highway to Industry 4.0, has landed with a bang, with solutions like ChatGPT, Midjourney and DALL-E becoming public playthings almost overnight. With wall-to-wall coverage surrounding these new tools, professionals across a range of disciplines may be feeling the pressure to ‘act now and act fast’.
For those wondering how to make sense of this technology and what it could mean, here is a calm and balanced recap of what is at stake and what questions business leaders should be asking.
Artificial intelligence (AI) refers to any computer system that can perform tasks that typically require human intelligence, such as visual perception, decision-making, and language translation. These systems learn by processing massive amounts of data and looking for patterns to model their own decision-making. Humans may supervise the AI’s learning process, or it can be done in an automated fashion. This technology is nothing new – the big breakthrough is in how accessible it has become.
AI holds immense potential to make a positive impact on society, whether by automating administrative tasks so we can focus on more meaningful work and pursuits, or by accelerating learning and innovation, with exciting possibilities for how we tackle the biggest challenges facing humanity, from climate change to treating conditions like heart disease and cancer.
However, talk of AI’s capacity to transform every aspect of our lives and work can be alarming, with reports suggesting that it may threaten jobs and the economy. Furthermore, movies and TV shows depicting killer robots may further contribute to the sense of a looming, technologically-induced ‘doomsday’. While there’s no shortage of hyperbole and alarmism in the media, there is no denying the ethical and legal dilemmas that ungoverned and unexamined AI deployment will bring.
Given how widely accessible current generative AI tools are, there is always the possibility of misuse, whether intentional or unintentional, resulting in harm to individuals or groups. For instance, if an organisation uses AI to automate recruitment decisions but the algorithm exhibits bias against specific groups, it could lead to discriminatory practices.
There are also consequences for already fragile democracies. In Venezuela earlier this month, deepfake videos of American news readers promoting government-aligned misinformation circulated widely in the media. Misuse of generative AI systems in this way carries deep repercussions for citizens and governments, enabling the spread of propaganda and misinformation among populations whose access to trustworthy news is already limited by censorship.
Ultimately, the risk-reward impact of AI will depend on how it is regulated and what trade-offs society and businesses are prepared to make, which is why leading AI specialists are urging manufacturers and developers to be accountable for ensuring that their products are deployed in an ethical and responsible manner.
While there is growing demand for legislators and legal experts to create regulations and guidelines that address the potential risks associated with generative AI, the fact remains that, at present, the technology is moving faster than our ability to regulate it effectively. Hence business leaders such as Elon Musk have called for a pause on the development of more powerful AI systems until the consequences are fully considered and shared protocols are established.
As this plays out, any organisation using AI should be cognisant of the possible dangers it poses, such as bias, privacy issues, and unforeseen repercussions, and take steps to mitigate these risks. While it is easy to get swept up in an implementation ‘arms race’, business leaders would do well to explore the reputational and ethical concerns such tools could bring, taking time to ensure they are delivered in a manner that aligns with their company’s values and commitments, whether that’s protecting privacy and livelihoods, supporting fairness and equality, or safeguarding democracy.
Our advice in short: keep calm… but think critically. While the technology is here to stay, the reputational or, indeed, material risks of moving without understanding the consequences could well outweigh the benefits of any first-mover advantage.