Stop using AI for legal work
The Law Society just warned lawyers about AI. I build AI for law firms, and I agree.
Caveat advocatus
On 2 April 2026, the Law Society circulated an advisory on the use of publicly available AI tools.
One of my clients sent it to me the same day. There was, in his words, a “lively debate” among the partners of his firm about whether AI tools were safe to use at all.
That reaction is understandable.
For many lawyers, “AI” still means a public chatbot.
But there is a difference between the product and the technology.
A public chatbot is a product, with its own terms, retention settings, and data handling rules.
The underlying model is the technology, and that same technology can also sit inside a very different enterprise product.
This distinction lies at the heart of the Law Society’s advisory. It warns against using publicly available AI tools that are not designed for business or enterprise use, and spells out the professional risks of using them for legal work.
That warning is timely and should be taken seriously.
Why the concern is real
Large language models today are trained on huge amounts of data, publicly available and otherwise.
The business model matters too.
Better models attract more usage, and that usage generates data that can be fed back to improve the product further. That feedback loop is one reason consumer AI products are often priced so aggressively.
That concern has become harder to ignore because of The New York Times lawsuit against OpenAI (1, 2).
Whatever one thinks about the merits, the allegation that caught public attention was simple: with the right prompting, the model could reproduce substantial snippets of New York Times articles.
That is the nightmare scenario.
Not just that the provider stores your prompt, but that confidential facts, commercially sensitive material, privileged analysis, or your firm’s own advice end up in systems you cannot meaningfully audit.
That would be bad enough for any business. For legal practice, it is much worse.
The Law Society’s advisory therefore says something important in plain terms: if you are using publicly available AI tools, you should not upload privileged, proprietary, confidential, or personal data, and you should be very careful about whether those tools retain inputs or use them for model training.
But for lawyers, the issue is broader than model training alone.
The real question is whether the tool is on consumer or enterprise terms, whether there are contractual and operational safeguards comparable to the protection required under the PDPA, what the retention defaults are, what admin and audit controls exist, and whether those protections apply to the exact tier your staff are actually using.
The real problem is often the wrong product, on the wrong tier, under the wrong terms.
Microsoft Copilot is a good example of how branding can mislead. “Copilot” sounds like one product, but in practice it spans several different privacy regimes: consumer Copilot, Copilot features in home Microsoft 365 plans, and Microsoft 365 Copilot under a work account (1, 2, 3). Those are not the same thing. Some consumer-facing Copilot usage may still involve model training unless the user opts out, conversation history retained for up to 18 months, and automated or human review in some cases (1, 4). Lawyers should not assume that a familiar Microsoft brand automatically means enterprise-grade protections. The exact product and terms matter.
There is also a practical problem.
As I wrote in an earlier piece, generic public AI tools are often not a good fit for legal teams. Clients tell me it is hard enough to get staff to learn prompting well, and those techniques change as models change.
What lawyers should actually check
If your firm is evaluating any AI tool for legal work, these are the key questions (after the list, I sketch one way to turn them into an internal checklist):
Is this a consumer tier or a true business/enterprise tier?
Will prompts, uploads, and outputs be used for model training or service improvement?
What are the retention and deletion defaults?
Are there contractual and operational safeguards comparable to the protection required under the PDPA?
What admin, access-control, audit, and security settings exist, and do they apply to the exact workflow your staff will actually use?
But that does not mean the answer is to avoid AI altogether.
The underlying technology is not the problem. The real issue is whether the software is built and contracted for enterprise use.
What we do differently
At Northbridge Lab, we do not rely on unmanaged consumer AI accounts. We use reputable enterprise AI providers, with contractual commitments that the data we send will not be used to train their models.
We also care about the rest of the picture: retention, provider choice, security posture, and how the system is deployed inside a legal workflow.
I spend a lot of time evaluating providers because model quality changes quickly, and the best practical setup is not static. That is also why our software has meaningful operational cost. Consumer AI is often subsidised; enterprise API access is not.
The goal is simple: get the benefits of the best available models without exposing customer data to the risks that come with unmanaged public tools.
That is also why I care about using reputable providers rather than chasing every new model release. In legal work, trust, contractual protection, and predictable handling of data arguably matter even more than raw model capability.
No tradeoff between compliance and performance
I previously worked on data policy in the Prime Minister’s Office and built software at GovTech, including systems that handled sensitive personal information of Singaporeans.
So this is not something I treat casually. Clients pay us not just to get the job done, but to get it done in a way that is responsible, compliant, and fit for legal work.
If you are curious about our compliance posture, feel free to reach out and I can send you our compliance and assurance note. It maps the product against:
Law Society of Singapore, Guidance Note 3.4.1 “Cloud Computing”
PDPC’s Advisory Guidelines on Key Concepts in the Personal Data Protection Act
Supreme Court’s Guide on the Use of Generative Artificial Intelligence Tools by Court Users
Ministry of Law’s Guide for Using Generative AI in the Legal Sector
MAS Guidelines on Outsourcing (Financial Institutions other than Banks)
MAS Consultation Paper on Proposed Guidelines on Third-Party Risk Management
Just as importantly, the software also has to actually work.
Firms should not have to choose between performance and responsible deployment.

