Using AI in M&A without losing judgment

Insights | 10 Mar 2026

Generative AI is now routinely used in complex M&A transactions. While its ability to analyse and generate information at speed is valuable, the real challenge is ensuring AI supports, rather than displaces, professional judgment. For law firms, this means the focus is shifting from simply getting answers to designing systems that know when an answer should not be given at all. 

Why prompting is no longer about getting better answers

Generative AI is good at answering questions at speed and, in many cases, is capable of producing outputs that look polished, structured and commercially reasonable from the first draft.

However, that is not where the real difficulty lies, particularly for law firms advising on complex M&A transactions where decisions are routinely taken under time pressure and with incomplete information.

The challenge is not getting generative AI to answer. It is designing systems that are right when they answer and disciplined enough to stay silent when there is no answer.

AI fluency vs reliability

Large language models are fluent by design and will generally produce articulate, well-structured responses regardless of whether the underlying analysis is sound or complete.

They are capable of drafting, summarising and explaining complex material coherently, even where the inputs are partial or ambiguous, which makes them superficially attractive in diligence, disclosure review and transaction management contexts.

What they do not do, and cannot do, is exercise judgment about when a conclusion is warranted, when further information is required or when silence would be the better response.

They will continue producing an answer unless they are explicitly constrained not to do so, because they are optimised to generate text rather than to assess whether text should be generated at all.

That behaviour is not a flaw in the technology, but a direct consequence of how the models are built and trained.

The risk arises when fluency is mistaken for reliability and when outputs that read well are assumed to be safe to rely on without further scrutiny.

The risk of confident error

In law firms, the most problematic failure mode is rarely simple error, because obvious mistakes are usually detected and corrected in the normal course of review.

The greater risk is confident error, where an answer sounds authoritative, internally consistent and well-reasoned, but is nevertheless based on assumptions that were never tested or gaps that were never properly understood or acknowledged. This is particularly dangerous in due diligence and risk allocation discussions.

This is why advice that focuses on getting ‘better’ or more detailed outputs misses the point, particularly in a legal context where the cost of misplaced confidence can have far-reaching effects.

In many high-stakes situations, a system that knows when to stop, or when to surface uncertainty rather than resolve it, is more valuable than a system that always produces an answer.

Prompting as discipline

Prompting is often described as a creative exercise, but it is better understood as a form of discipline imposed on a system that inherently has none.

Whenever a prompt leaves something unspecified, the model will still resolve it in order to continue generating text, and it will do so probabilistically rather than deliberately.

Such silent resolutions can relate to matters that professionals care deeply about, including authority, risk tolerance, audience and the treatment of uncertainty, all of which are central to M&A work.

Good prompting is less about clever engineering and more about assumption control, by making explicit what the system is allowed to rely on, what it must not infer and how it should behave when the available information is incomplete.

When those boundaries are clearly set, accuracy improves because the system is less likely to overreach and confidence is more closely aligned with the underlying evidence.
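The idea of assumption control can be illustrated with a short sketch. This is not a recommended template, and the `build_review_prompt` helper, the constraint wording and the example documents are all hypothetical; the point is simply that the boundaries described above are stated explicitly rather than left for the model to resolve:

```python
def build_review_prompt(question: str, documents: list[str]) -> str:
    """Assemble a prompt that states the boundaries up front:
    what the model may rely on, what it must not infer, and how
    it should behave when the available information is incomplete."""
    sources = "\n".join(f"- {d}" for d in documents) or "- (none provided)"
    return (
        "You are assisting with an M&A document review.\n"
        "Rely ONLY on the documents listed below; do not draw on outside knowledge.\n"
        "Do NOT infer the content of any document that is not listed.\n"
        "If the listed documents are insufficient to answer, state exactly what is "
        "missing and stop; do not guess.\n"
        f"Documents:\n{sources}\n"
        f"Question: {question}\n"
    )

# Example: a question asked before the disclosure letter has been provided.
prompt = build_review_prompt(
    "Are there any change-of-control clauses in the target's material contracts?",
    ["Share purchase agreement (draft 3)"],
)
```

Under constraints like these, a gap in the data room produces a stated gap in the output, rather than a fluent answer that silently papers over it.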

Why restraint matters in AI

There is a natural tendency to treat a failure to answer as a weakness, or as evidence that the system has failed to perform the task it was given.

In practice, however, the opposite is true, particularly in professional settings where incomplete information, ambiguous drafting and evolving deal structures are common rather than exceptional.

If a conclusion depends on missing documents, undefined terms or unresolved assumptions, the most responsible outcome is often to state the gap clearly rather than to fill it implicitly.

The difficult part is not building systems that can answer questions, because that capability is widely available, but building systems that can recognise when answering would be misleading in the context of a transaction.

Building such systems is where much of the real work in responsible AI deployment now sits.

A shift in prompting

Among more sophisticated users, prompting is already changing.

The emphasis is moving away from generating more content and toward shaping behaviour, particularly in relation to how systems respond to ambiguity, incomplete inputs and conflicting signals that are typical of M&A transactions.

Increasingly, the focus is on how uncertainty is communicated, how assumptions are exposed and how the system behaves when it should slow down or stop rather than push through to a conclusion.

This is not about conservatism, but about aligning AI-assisted workflows with the standards that law firms already apply to human work in M&A.

The bottom line on AI in M&A

Generative AI is a powerful tool, but it is indifferent to risk and has no inherent sense of consequence.

The task for lawyers, particularly in M&A, is not to make these systems more fluent, because they already are, but to design and deploy them in a way that preserves judgment, deal discipline and accountability.

Prompting, done properly, is one of the ways that discipline is imposed, not by encouraging the system to do and say more, but by ensuring it answers only when it should.

Hall & Wilcox acknowledges the Traditional Custodians of the land, sea and waters on which we work, live and engage. We pay our respects to Elders past, present and emerging.