July 2nd, 2025

AI - A(nthropomorphisation) I(nsights)

Content series about the biggest trends shaping private finance

Rewiring M&A: The Real Promise of AI and the Power of Iteration

Getting AI to deliver real results takes more than just good intentions. It requires thoughtful iteration and commitment. This is not about quick wins at hackathons but about steady, deliberate progress over time.

Yet, we think this iterative journey is both exciting and necessary, and many of the 300 participants in Bain’s recent Global M&A survey share that view. AI is poised to fundamentally reshape the M&A process.

M&A has always been one of the most fascinating corners of finance. Now, imagine how much more compelling it becomes when you can eliminate swathes of menial tasks and reduce the chaos by an order of magnitude.

But this shift isn't just about AI. It's about technology more broadly. That's why we're deliberate in how we communicate with clients about what AI is and what it is not, why purpose-built AI for M&A actually matters, and why the path to better deal outcomes involves not just an initial leap forward but a collaborative, iterative process that unlocks further marginal gains over time.

Language Matters: The Trouble With Talking About AI

It’s also why we handle the anthropomorphisation of AI with care.

Don't get us wrong, we’re strong advocates for using AI to solve targeted business problems (it’s quite literally our job). But there’s a lot that’s broken in how people talk about it.

Take, for example, Apple’s recent research showing that so-called "reasoning models" don’t actually reason; they pattern match. Anyone who’s used them will almost certainly have suspected this anecdotally, so it’s nice that one of the big labs has put in the time and effort to show it rigorously.

And honestly, our biggest issue isn’t that these models don’t reason. We believe we’re far from that. It’s that we call it “reasoning” at all. Running a model in a while loop is not reasoning. Not even close.

The same sloppy language runs through many other aspects of AI: "hallucination" (= getting things wrong), "agents" (= fuzzy if statements), even "artificial intelligence" itself (= random word generators).
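To make the "agents = fuzzy if statements" point concrete, here is a minimal sketch of what many "agentic" systems amount to under the hood: a model called repeatedly inside a loop, with plain branching on its text output. Every name here (fake_model, run_agent, the CALL/FINAL protocol) is a hypothetical stand-in for illustration, not any real framework's API.

```python
def fake_model(prompt: str) -> str:
    """Stand-in for an LLM call: first asks for a 'tool', then answers."""
    if "result: 4" in prompt:
        return "FINAL: 2 + 2 = 4"
    return "CALL: add 2 2"

def run_agent(task: str, max_steps: int = 5) -> str:
    """The 'agent': a model in a loop, with if-statements on its output."""
    prompt = task
    for _ in range(max_steps):
        reply = fake_model(prompt)
        if reply.startswith("FINAL:"):      # model claims it is done
            return reply.removeprefix("FINAL:").strip()
        if reply.startswith("CALL: add"):   # branch on model text = "fuzzy if"
            _, _, a, b = reply.split()
            prompt = f"{task}\nresult: {int(a) + int(b)}"
    return "gave up"

print(run_agent("What is 2 + 2?"))  # → 2 + 2 = 4
```

Strip away the branding and the control flow is an ordinary loop plus string matching; the "intelligence" sits entirely inside the model call, which is exactly why the labels deserve scrutiny.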

We think a lot of this stems from earlier Machine Learning publications (a decade ago, if we’re being generous), when the big conferences and journals seemed hooked on results that looked good on the surface but, with just a little digging, turned out to be mostly the product of a bigger computer.

In essence, the AI industry has misdirection at its heart. And that breaks down precisely when clarity is actually needed.