21 August 2025
I first caught up with Professor Simon Chesterman (David Marshall Professor of Law and Vice Provost (Educational Innovation), National University of Singapore; Founding Dean, NUS College) four years ago after his Keynote speech “WE, THE ROBOTS? Regulating Artificial Intelligence and the Limits of the Law” at another conference2.
After reading Professor Chesterman’s book of the same title (Cambridge University Press, 2021), I posed five questions to him in a 2021 Q&A, covering AI in the space of investigation, litigation, the Courts, and finally the possibility of global AI regulation.
Four years on, it is my pleasure and honour to revisit these issues with Professor Chesterman. In the intervening four years, many things have changed; but some things have not:
- Professor Chesterman will once again give a Keynote address, this time at the APIEx Symposium & Workshop 2025: Technology, The Expert and The Law on 19 November 2025.
- In his book, Professor Chesterman noted that the “years since 2016 in particular saw a proliferation of guides, frameworks, and principles focused on AI3 … [and] Governments have been slow to pass laws governing AI. Several have developed softer norms4”. More such guides, frameworks, principles, and soft norms have emerged since, with some jurisdictions formalising them into law.
- Many of us have been looking at past cases (involving non-Gen AI systems) to consider what pitfalls might arise in future from the use of Gen AI. We previously discussed, in the context of automation5 and the evidentiary value of evidence obtained via AI tools, the long-drawn litigation (since the mid-2000s) surrounding the Horizon IT system used by the UK Post Office Ltd.6 Three significant developments have taken place since: (a) two pieces of legislation were introduced7; (b) three schemes were set up for different groups of victims; and (c) a lengthy statutory inquiry took place, with Volume 1 of the final report published on 8 July 20258. A similar Robodebt scandal arose in Australia, where an automated debt assessment and recovery scheme implemented in 2016 resulted in approximately 470,000 debts being wrongly raised. The Federal Court approved a A$1.872 billion settlement in June 2021, and investigations are still ongoing9.
Jiamin: You mentioned in your book “WE, THE ROBOTS? Regulating Artificial Intelligence and the Limits of the Law” that the guides, frameworks, principles, and soft norms rolled out between 2016 and 2021 revolve around six non-controversial themes – (a) human control; (b) transparency; (c) safety; (d) accountability; (e) non-discrimination; and (f) privacy10. Does your view hold for the newer slew of frameworks, principles, soft norms, and laws that have emerged since?
Prof Chesterman: Yes, my view remains. There is, however, one aspect that has emerged more prominently recently – sustainability. We are becoming more aware of the environmental impact of AI, for instance its immense energy consumption and the e-waste it creates. There is also a wariness of both under-regulation and over-regulation, and a waxing and waning of fears around AI. Several countries seem to be experiencing “buyer’s remorse”, driven by concerns that overly strict regulations could hinder economic growth or drive innovators to other jurisdictions. Singapore has erred on the side of under-regulation, except where there are issues such as national security or political stability (eg, the Protection from Online Falsehoods and Manipulation Act, the Elections (Integrity of Online Advertising) (Amendment) Act).
Jiamin: Does that mean that corporations that are big on ESG may start to avoid AI in future?
Prof Chesterman: I don’t think so. In the medium term, I think companies will approach AI use much as they approach electricity usage – or the way computers are used today. One could certainly refuse to use computers, but that is neither realistic nor practical. I do hope, however, that consumers become more aware of the carbon footprint – using ChatGPT to generate information has something like ten times the carbon footprint of running the same query through a regular search engine like Google.
Jiamin: You have highlighted in the blurb of your upcoming Keynote speech on “The Tragedy of AI Governance” that three factors undermine the prospects for meaningful governance of AI: (a) the shift of power from public to private hands; (b) the balance between regulation and allowing innovation; and (c) the dysfunction of global processes to manage collective action problems. I am most intrigued by the first factor. What makes AI so different from data privacy (which also mainly involves the private sector) that it is so difficult to regulate?
Prof Chesterman: It really comes down to where the technology is going. Gen AI started as machine learning in universities. But if universities had developed something like ChatGPT (something that can produce text and voice resembling legal advice, medical advice, or medical counselling), there would have been many safeguards in place (eg, institutional review boards, ethics panels, and so on) before such technology could be deployed. None of these safeguards apply to the companies developing the technology today. These companies have rolled out LLMs almost like an experiment to see how people would interact with them. As a result, we are not just beta testers of the new technology; we are almost akin to lab rats in a grand experiment that is changing the way people interact with data. The vast majority of these companies are also for-profit companies, which means that profit becomes the dominant consideration. This translates directly into the proposed use cases – many are tailored to help the bottom line. While AI may be used to assist in achieving what humans cannot, many use cases involve replacing human labour, cutting costs, and ensuring that results are achieved faster and more cheaply. The shift of power from public to private hands means that the economic imperative becomes the big driver. Other than profit, these corporations have minimal internal constraints.
Jiamin: Do you see some glimmer of hope in the Council of Europe’s Framework Convention on AI11, which has received endorsement from more than 50 countries thus far?
Prof Chesterman: There is hope, but there are two fundamental structural problems. First, there is no doubt that China is a dominant player in the AI industry. While regional frameworks are important experiments, there is a need to ensure that any regime is truly global. If one of the dominant players is not in the picture, that amounts to accepting at least a bifurcation in regulation. While that is possible, it is definitely not desirable. Second, and this goes back to what we discussed earlier, governments can sit down to discuss, but that is not where the action is. Companies in this space have vast budgets that may be larger than some countries’. At the end of the day, global governance tends towards the lowest common denominator. For instance, if we look at how human rights started, the regime could have taken the form of either wide application or deep commitment. A small number of countries can agree to deep commitments, but the more countries involved, the shallower the commitment becomes. Nevertheless, that may be a worthwhile trade-off if you’re hoping to get global buy-in for a new regime.
Jiamin: One focus of the APIEx Symposium & Workshop 2025: Technology, The Expert and The Law is on how AI is affecting dispute resolution and court proceedings. The Court’s Gen AI Guide12 requires Court users to comply with the Guide and to be prepared to inform the Court whether Gen AI was used in the preparation of Court documents. Law firms may also be considering some form of disclosure of the use of generative AI, both internally and to clients. Given the issues arising from disclosure (or the lack thereof) in university settings, do you think voluntary disclosure is feasible?
Prof Chesterman: There needs to be an understanding that what one does in the university differs from what one does in the workforce. In the university, we read students the riot act in respect of plagiarism; that has been the case for decades. In the workforce, no one wants an originally drafted contract with untested clauses. There is therefore a mismatch – we prioritise original intellectual contributions in universities, but the workforce requires work product that carries low risk. For professionals who are thinking of using gen AI, for instance lawyers or expert witnesses, I tend to offer this analogy: you have to treat it like a very smart intern with a drinking problem. If work is outsourced to this intern, and that work is used or relied on, you have to stand behind it regardless. No serious lawyer would excuse an error by saying the intern made a mistake (or hallucinated). Instead, he/she would have to apologise, try to rectify the issue (if possible), and make sure it does not happen again. Likewise, if an expert witness presents something untrue to the Court, he/she cannot then say, “Gen AI made me do it”. The issue, therefore, is not so much voluntary disclosure of whether gen AI was used; rather, it is that a lawyer or expert witness has to stand behind the work regardless of how it was produced.
Jiamin: In light of “The Tragedy of AI Governance”, what is one piece of advice you would give to expert witnesses?
Prof Chesterman: The rise of gen AI creates tremendous opportunities, but also real risks. AI hallucinating or falsifying information is a real concern, but the larger concern that expert witnesses should worry about in the longer term is the decline of expertise. Expert witnesses need to think about what they did to get to where they are. If corporations start to train fewer people to cut costs, and replace certain functions with gen AI, then in future we will have fewer and fewer experts.
Jiamin: Thank you once again, Prof Chesterman, for your time.
For more, please catch Professor Chesterman’s Keynote Address titled “The Tragedy of AI Governance” on 19 November 2025.
Contributed by:
Leow Jiamin - Deputy Director (Legal Faculty), Singapore Academy of Law
1 See also https://www.justsecurity.org/89432/the-tragedy-of-ai-governance/, an article written by Professor Chesterman for Just Security with the same title.
2 The 4th Asset Recovery Asia Conference on 30 November 2021
3 Professor Chesterman cited “the Partnership on AI’s Tenets (2016), the Future of Life Institute’s Asilomar AI Principles (2017), the Beijing Academy of Artificial Intelligence’s Beijing AI Principles (2019), and the IEEE’s Ethically Aligned Design (2019) … Microsoft’s Responsible AI Principles, IBM’s Principles for Trust and Transparency, and Google’s AI Principles”: “WE, THE ROBOTS? Regulating Artificial Intelligence and the Limits of the Law” (Cambridge University Press, 2021), p 174
4 Professor Chesterman cited “Singapore’s Model AI Governance Framework (2019), Australia’s AI Ethics Principles (2019), China’s AI Governance Principles (2019), and New Zealand’s Algorithm Charter (2020) [and at the intergovernmental level] Charlevoix Common Vision for the Future of Artificial Intelligence (2018) … Ethics Guidelines for Trustworthy AI (2019) … OECD’s Recommendation of the Council on Artificial Intelligence (2019)”: “WE, THE ROBOTS? Regulating Artificial Intelligence and the Limits of the Law” (Cambridge University Press, 2021), p 175
5 See also “WE, THE ROBOTS? Regulating Artificial Intelligence and the Limits of the Law” (Cambridge University Press, 2021), Chapter 2.3.2 on Automated Processing
6 In brief, the Horizon IT system used by the UK Post Office Ltd (“POL”) detected unexplained discrepancies in various POL accounts, which resulted in more than 900 sub-postmasters being prosecuted for theft, false accounting and/or fraud. The UK High Court subsequently found that the Horizon software contained bugs, errors and defects in a “far larger number than ought to have been present in the system if [the Horizon system] were to be considered sufficiently robust such that they were extremely unlikely to be considered the cause of shortfalls in branches.”: Bates v Post Office Ltd (No 6: Horizon Issues) Technical Appendix [2019] EWHC 3408 (QB) at [434]
7 Post Office (Horizon System) Offences Act 2024 and Post Office (Horizon System) Compensation Act 2024
8 https://www.postofficehorizoninquiry.org.uk/
9 https://www.nacc.gov.au/news-and-media/national-anti-corruption-commission-investigate-robodebt-referrals
10 “WE, THE ROBOTS? Regulating Artificial Intelligence and the Limits of the Law” (Cambridge University Press, 2021), pp 175-176
11 https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence. The 7 fundamental principles are: (a) human dignity and individual autonomy; (b) equality and non-discrimination; (c) respect for privacy and personal data protection; (d) transparency and oversight; (e) accountability and responsibility; (f) reliability; (g) safe innovation.
12 Registrar’s Circular No. 1 of 2024: Guide on the Use of Generative Artificial Intelligence Tools by Court Users, https://www.judiciary.gov.sg/docs/default-source/circulars/2024/registrar's_circular_no_1_2024_supreme_court.pdf