As reported in a Washington Post article, law professor Jonathan Turley received a ‘stomach-knotting’ email stating that he had appeared on a list of law professors accused of committing sexual offences.
A lawyer assisting with a research project at the University of California had asked OpenAI’s ChatGPT to produce such a list, and Professor Turley’s name featured on it.
In support, the chatbot cited a Washington Post article detailing Professor Turley’s alleged sexual misconduct against a student during a trip to Alaska in 2018.
The mind-blowing thing is that no such article exists: ChatGPT fabricated it.
In an interview with the real Washington Post, Professor Turley confirmed that he had never taken a class on a trip to Alaska, nor had any student accused him of sexual harassment.
Evidently, generative AI is capable of invention and can develop fake – albeit convincing – sources to corroborate its claims. What are the implications of this?
What would have happened if Professor Turley had been summarily dismissed because of this? In the UK it would potentially expose his employer to an unfair dismissal claim.
From an employment law perspective, such fabrications may have a considerable impact on employees’ livelihoods and reputations. Bear in mind that generative AI is unregulated: it scrapes data from a myriad of sources and is capable of generating near-limitless content.
This should really reinforce the message to employers and HR staff that, in this day and age, not everything is as it seems, and thorough disciplinary investigations must take place before determining the fate of an employee.
However, the significance of this goes beyond employment law; it also touches and concerns media law. What recourse would Professor Turley have had in defamation? How would this affect the journalists claimed to have authored the article? How would the media outlets be affected?
Please do look at my colleague Emma Linch’s article, Frankenstein’s Monster: ChatGPT ‘libels’, which considers what recourse claimants in defamation proceedings may have, given that OpenAI, ChatGPT, and other AI language models are not natural persons against whom a defamation claim can be brought.
As always, if you need help navigating these uncharted territories, SMB are happy to help. Please contact joe.hennessy@smb.london to discuss further.