Are law firms overpaying for ChatGPT wrappers?
MikeOSS claims to have replicated the core features of Harvey and Legora in two weeks. Lawyers should ask what they are really paying for.
The open-source legal AI that went viral
MikeOSS went viral because of a simple claim.
Its creator, Will Chen, is a former Latham & Watkins associate who has branded MikeOSS as “the open source alternative to Harvey and Legora”. He said he had built the core web application of Harvey and Legora in two weeks, and was releasing it as open-source software.
That is a bold claim. Harvey and Legora are among the most talked-about legal AI companies in the world. Harvey was reportedly valued at US$11 billion. Legora has also raised at a multi-billion-dollar valuation and has reportedly crossed US$100 million in annual recurring revenue.
MikeOSS presents itself as an open-source alternative to both. Its visible feature set is familiar: an assistant, document projects, tabular review, and reusable workflows.
Open-source software is software where the source code is publicly available. That means people can inspect it, run it, modify it, and, depending on the licence, build on top of it. MikeOSS is released under the AGPL licence, which allows anyone to use, study, modify, and share the software, but requires those who run a modified version as a network service to make their modified source code available to its users.
Whether MikeOSS is production-ready is a separate question. Either way, it sharpens the questions lawyers should have been asking when evaluating legal AI all along.
The token reseller critique
Chen’s criticism goes further. In a post describing his “token reseller theory”, he argues that Harvey and Legora are “essentially sales organisations that resell tokens”. Tokens are the units of text that AI model providers charge for when software sends information to a model and receives an answer back. In his view, Harvey and Legora “slap on a UI” that makes them look different from ChatGPT, but the core web-app features are essentially a chatbox, project uploads, tabular review, and workflows that are custom prompts.
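To make the token-reseller arithmetic concrete, here is a small sketch. Every number in it is a hypothetical assumption for illustration only, not actual model-provider or legal-tech vendor pricing; the point is only the shape of the calculation.

```python
# Illustrative sketch of the "token reseller" arithmetic.
# All prices and usage figures below are HYPOTHETICAL assumptions,
# not real OpenAI/Anthropic or legal-tech vendor pricing.

INPUT_PRICE_PER_M = 3.00    # hypothetical: US$ per million input tokens
OUTPUT_PRICE_PER_M = 15.00  # hypothetical: US$ per million output tokens

def monthly_token_cost(input_tokens: int, output_tokens: int) -> float:
    """Raw model-provider cost for a month of usage, at the rates above."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Hypothetical heavy user: 20M input tokens, 2M output tokens per month.
raw_cost = monthly_token_cost(20_000_000, 2_000_000)

# Hypothetical enterprise seat price for a legal AI product.
seat_price = 500.00
markup = seat_price / raw_cost

print(f"Raw token cost: US${raw_cost:.2f}/month")   # US$90.00 at these rates
print(f"Seat price:     US${seat_price:.2f}/month")
print(f"Markup:         {markup:.1f}x")
```

Whatever the real figures are, the gap between raw token cost and seat price is the part of the bill that the wrapper and the practice fit have to justify.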
That is a deliberately provocative framing, and Harvey and Legora are probably not losing sleep just yet.
MikeOSS may have reproduced the core visible features of those products, but it does not have the trappings of enterprise software: security reviews, procurement clearance, integrations, support, governance, training, auditability, and all the other things that make software acceptable inside a risk-sensitive law firm.
But the critique is still useful because it sharpens the buyer question: is this product adding real legal workflow value, or is it mostly a polished way to access the same underlying AI models through a legal-themed interface?
Lawyers already know AI can be useful
Many lawyers I speak to are already getting real value from ChatGPT, Claude, and Gemini. They use these tools to work through voluminous documents: finding relevant passages, summarising background facts, comparing documents, and testing their thinking.
And they get all this for tens of dollars a month.
There are a few reasons for that. AI companies are subsidising usage to win market share. They are going after generic white-collar work at enormous scale, which is what they need to justify their valuations. Most importantly, this kind of work is what large language models are inherently good at: reading, summarising, comparing, reorganising, and drafting text.
That does not mean lawyers should be careless. As I have written previously, the exact product, tier, data-retention terms, and model-training terms matter. So does professional judgment: lawyers still have to read the source material, check the answer, and do their own legal research.
With those caveats, generic AI is already useful for a lot of legal work. The real question is what legal tech adds on top.
I think the answer has three layers.
Legal AI has three layers
When lawyers evaluate legal AI, they should separate the product into three layers: model access, generic wrapper, and practice fit.
Model access is the underlying AI intelligence from Claude, GPT, Gemini, or another large AI model. This layer is improving very quickly, and is increasingly available through many different providers. Responsible vendors still have to manage provider terms, data handling, retention, security, reliability, model selection, and the real cost of enterprise-grade usage.
The generic wrapper is the software interface placed around that model: the chatbox, document upload button, project folder, matter workspace, prompt library, reusable workflow button, or table that runs extraction across documents.
Practice fit is how well the product adapts to the actual legal work: the jurisdiction, practice area, document types, firm templates, house style, staff workflow, source-checking, and output format.
This distinction matters because vendors often blend all three together.
The product may look impressive because the model is impressive. It may feel polished because the wrapper is polished. But the question for the firm is whether the product actually fits the way the practice works.
That does not mean the wrapper is worthless. A good interface, good document handling, good citation design, and good collaboration features all matter.
But lawyers should be careful about paying enterprise prices for model access and generic wrapper alone. MikeOSS is a reminder that some of the visible interface layer may be easier to reproduce than buyers assumed.
In my humble opinion, the real value is in fit. That means knowing what has to be extracted, how it should be structured, what has to be checked, what source material must be linked, what exceptions are common, and what the lawyer needs to decide next.
That is much harder to commoditise than a chatbox, because it cannot be replicated by someone in San Francisco or Stockholm.
No chatbox here
A blank chatbox can be useful for exploration. It is less convincing as the centre of a production workflow.
This is partly a change-management problem. Even senior lawyers who are comfortable with technology tell me that prompting AI can be a pain. They do not necessarily want to learn a new technique just to get consistent results. That problem becomes larger when the users are secretaries, paralegals, and junior staff who may be less motivated to experiment with prompting.
The better approach is usually to absorb that complexity into the software. The AI should sit inside the workflow, not become the workflow itself.
The user should see the matter, the documents, the extracted facts, the missing information, the source links, the calculations, and the next decision. The product should narrow the task and make adoption easier, not ask every user to become a prompt engineer.
That is also why I think legal tech companies should be careful about building features that lawyers can already get from ChatGPT or Claude. If it is not sufficiently differentiated, it is probably not worth building.
This has affected how we work with firms at Northbridge Lab. We are looking for partners, not just customers.
We bring a core AI engine, but we do not treat the product as one-size-fits-all. We adapt it to the firm’s domain, workflows, templates, and preferred outputs. The point is not to give the firm another chatbox. It is to make AI fit the way the firm already works.
The real lesson from MikeOSS
MikeOSS does not mean every law firm should operate its own legal AI platform. It also does not mean Harvey or Legora are worthless.
But it should make lawyers less impressed by generic AI wrappers.
If a product mainly gives you another place to upload documents, ask questions, and receive fluent answers, the bar should be high. General AI tools already do a lot of that surprisingly well, and they do it at a price that dedicated legal tech vendors will find hard to match.
The harder work is making AI useful inside the actual practice.
That means understanding the documents, the bottlenecks, the judgment calls, the source-checking, the staff who will use the system, and the format in which the lawyer needs the answer. It also means understanding the firm’s style: how it drafts, how it reviews, how it wants outputs presented, and what its clients expect.
That kind of fit cannot be solved by a generic prompt written for a global market. It has to be worked out with the firm and then embedded into the software.
That is where we are trying to compete at Northbridge Lab.
Not by giving firms another chatbox, but by adapting a core AI engine to specific Singapore legal workflows and firm-specific ways of working. The aim is to make AI feel like part of the work rather than another tool people have to learn.
So if you are evaluating legal AI, ask a more uncomfortable question: could ChatGPT or Claude do this in 18 months?
If the answer is yes, you may just be paying a huge markup for a temporary wrapper sitting directly in the path of OpenAI and Anthropic.
If the answer is no because the product understands your practice, your documents, your templates, your staff, and your way of working, that is different.
That is where legal AI becomes useful software.

