Problems.vc

After the Platforms: The Agent Substrate
Part II of IV

Five Costumes, One Problem

A week in the life of a founder with agents in the loop on every counterparty interaction. The friction is one problem in five costumes: operational facts no one can verify — and agents have made it worse, not better.

Here is a week in the life of a founder we will call Alice. She runs a company of forty people. It is profitable enough to be interesting to banks, growing enough to be interesting to investors, organized enough to be interesting to acquirers, and old enough that its operational history spans four different SaaS vendors and about a dozen counterparties she has to answer to in any given quarter.

Alice has an agent. It is a good one. It handles her email, her scheduling, her research, her drafting, her analysis. She uses it constantly and she would not go back. Her counterparties have agents too, because by now anyone serious does. Alice does not think of her week as a data problem. She thinks of it as her job. But every item on her calendar, once you look at it closely, is the same problem wearing a different costume, and her agent cannot fix it. In several places it is making it worse.

Monday: the data room

Alice is raising a bridge round. Her agent assembled the data room over the weekend. It pulled revenue by cohort from one system, customer-acquisition cost from another, retention from a third, and reconciled the three. It did in an afternoon what would have taken a human analyst a week. On Monday morning the firm leading the round asks for a different cut of the same data, and the agent produces that cut before the call is over.

The speed is real. What has not changed is what happens next. The output is a PDF. The firm has to trust that Alice's agent pulled correctly, joined correctly, interpreted the field definitions correctly, and did not hallucinate any of the summary prose that frames the tables. There is no way for the firm to pull the underlying facts and check for themselves, because the underlying facts live inside SaaS vendors that do not talk to the firm. So the firm's own analysis agent reads the PDF, pattern-matches it against the firm's portfolio of similar companies, and produces a recommendation. The firm is now trusting two agents in series, neither of which has access to the source, neither of which can prove its work, both of which Alice and the firm have commercial reasons to believe and no technical reason to verify.

Trust is still substituted for verification. The substitution happens faster now. The volume of unverified assertion moving between parties has gone up, not down. The data room is not real infrastructure. It is the same recreation from scratch, generated now by software instead of by Alice, and it is no more verifiable than it was before.

Tuesday: the bank

On Tuesday Alice is applying for a working-capital line. Her agent assembles the package in minutes. Tax returns, bank statements, P&Ls, balance sheets, AR aging, a cap table, customer concentration tables. All pulled from the relevant systems, formatted, annotated, submitted.

The bank's underwriting agent reads it and flags a dozen discrepancies between the numbers the bank has from its own ledger and the numbers in Alice's accounting system. The discrepancies are not fraud. They are timing. Alice's agent has an explanation ready and sends it. The bank's agent cannot verify the explanation, because verifying would require reading into Alice's accounting system directly, and the bank's agent has no authenticated path in. So it escalates. A human underwriter looks at the file, reads the exchange between the two agents, and is right back where a human underwriter has always been: trusting artifacts without verification. The only thing that has changed is that the artifacts now include a transcript of two agents arguing about facts neither of them can prove, which a human underwriter who has been burned by confident-sounding agent output is, if anything, less inclined to believe.

If the line comes through, it will come through slowly. Many lines like this one do not come through at all, because the package was not persuasive enough to a reviewer who had no way to look past it.

Wednesday: the acquirer

On Wednesday Alice takes a call with a strategic that has been circling for months. They want to start light-touch diligence. Everyone on the call agrees it is light-touch. Then the diligence request list arrives, and it is not fifty items; it is five hundred. The acquirer's diligence agent generated them overnight. Most are reasonable. Some are strange. A few are subtly contradictory. All of them have to be answered.

Alice's agent answers them. Overnight, against all five hundred, with supporting exhibits pulled from every system the company runs. The acquirer's agent reads the answers and flags seventy-three inconsistencies between documents, between timestamps, between field definitions that mean subtly different things in different systems. Alice's agent answers the seventy-three. The acquirer's agent flags a fresh set. Neither side's humans can keep up with the thread. A senior associate at the acquirer, we will call him Bob, tries to referee, but after two days of reading agent transcripts he admits he no longer has a coherent picture of the company. The deal slows down. Not because the information is not there. Because there is too much information, produced too fast, with no substrate underneath it that any participant can use to verify what is actually true.

The deal, if it closes, will close at a price that reflects the acquirer's uncertainty about what they are buying. That uncertainty is higher, not lower, than it would have been in a pre-agent world, because Bob has seen two hundred pages of answers to questions his own agent generated and he does not know which of those answers he can rely on. The bid-ask spread is still the price of illegibility. The agents have made the illegibility denser, not thinner.

Thursday: the audit

On Thursday the auditors are on site. They bring an audit agent that samples ten times more transactions than a human auditor would have sampled, because it can. Each sample requires supporting evidence. Alice's agent produces the evidence, cross-referenced across systems, formatted for the audit agent's inspection. The audit agent processes it, flags exceptions, requests clarification. Alice's agent clarifies.

At no point in this exchange does either agent access a source of truth that neither side can manipulate. Alice's agent is reading from Alice's systems, producing artifacts. The audit agent is reading the artifacts, producing workpapers. The final letter asserts reasonableness based on a chain of agent-produced artifacts that no human on either side has the time to trace. The fee goes up, because the audit is more thorough in scope. The verification is not more thorough in substance. It is more thorough in the performance of verification, which is not the same thing.

Next year it will happen again, from scratch. Nothing in this year's exercise produces a residue that next year's auditor can build on, because the residue lives in a format and a trust model that do not outlive the engagement.

Friday: the CRM

On Friday Alice is dealing with the smallest frustration of the week, though over time it may be the largest. She has decided to move off her current CRM. Her migration agent promises to handle the schema mapping. It is a good agent. It does a better job than a human consultant would.

It still cannot make the migration lossless. The old vendor's schema and the new vendor's schema do not agree on what some things mean. The email threads, the call notes, the activity log, the attachments: all of it has to be exported from one vendor's format and loaded into another vendor's format, and no amount of agent intelligence can recover the semantics the old vendor encoded in custom fields the new vendor does not have. The agent is a better bulldozer. The walls between the two vendors are still walls. The data was never Alice's to move freely; it was the vendor's to release under the vendor's terms.

The migration will still consume a technical person for months, because the agent's decisions have to be reviewed and the reviews find exceptions the agent cannot resolve on its own. The vendors price their products accordingly, because the cost of leaving is still high, agents or no agents.

The pattern

Five scenes, one week, one founder, one agent on her side, a matching agent on every counterparty's side. The scenes sit in different chapters of business life. A fundraise is not an audit. A loan is not a CRM switch. An M&A diligence is not a bank underwriting. They look different, they involve different counterparties, they unfold over different timescales. And yet if you strip the costumes away, each one is the same interaction, and the agents are not fixing it. In several cases they are making it harder.

Alice has operational facts. Her counterparty needs to know something about those facts. The facts live in systems her counterparty cannot read. Alice's systems cannot hand the facts over in a form the counterparty can independently verify. So Alice, or her agent on her behalf, stands in the gap. The agent exports, reformats, annotates, explains, reconciles, vouches. The counterparty's agent reads, flags, re-asks, escalates. Every counterparty interaction in her week is a verification problem being resolved by two agents talking to each other across a data landscape neither of them can authenticate against.

The agent did not fix this because the problem was never about speed. It was about proof.

The tax nobody names, and why agents made it worse

Here is the part worth sitting with. The friction is not that the verification takes time. The friction is that at the end of the verification, the output is still not verifiable. And now the unverifiable output is being produced faster than any human on the receiving end can review, by software that everyone knows can hallucinate.

When a human was producing the artifacts, at least there was a human's reputation in the loop. The investor trusted Alice's PDFs because Alice had signed her name to them. The bank trusted Alice's package because Alice's signature was on the loan application. The chain of trust ran through a human who could be held accountable. With agents in the middle, the chain is weaker. Alice is still on the hook for what her agent produced, but the counterparty knows the agent may have hallucinated, misinterpreted a field, pulled stale data, or summarized something in a way that subtly changes what it means. The counterparty's own agent knows this too, and is less inclined to take anything at face value.

So the premium on faith goes up, not down. The verification problem compounds. Every party in the transaction is simultaneously relying on agents to keep up and distrusting agents because they know the agents can be wrong. The result is more artifacts, produced faster, trusted less. This is the agent-era version of the same tax that existed before. It is now larger, because the asymmetry between artifact production and verification has widened.

Alice has not built infrastructure for her data to live together in a form counterparties can verify. She could not build it alone if she wanted to. Her counterparties are not going to accept her home-grown attestation scheme, and no standard exists for her to adopt. She could not fix this problem by trying harder. Nobody can fix it from the demand side. It has to be fixed underneath the systems, not over them.

Why the problem has survived this long

It is not for lack of integrations. There are thousands of integrations. iPaaS is a real industry. Middleware vendors are publicly traded. Every serious SaaS company has an API. The problem is not that the data cannot be moved. The problem is that when the data is moved, nothing about the move is verifiable. A counterparty receiving a data export has no way to confirm that the export matches what the source system actually holds, or that the source system holds what it claims to hold, or that the fields mean what they seem to mean, or that the history is complete rather than filtered.

Nor do agents change this. Agents are good at operating on information that has already been structured and attested. They are bad at producing attestation where none exists, because producing trustworthy attestation requires cryptographic primitives anchored to source systems, and agents are not cryptographic primitives. They are consumers of them. Where the cryptographic substrate is missing, agents can do what humans did, faster, but they cannot do what humans could not do, which is prove things to an outside party without that outside party's cooperation.

There is no shared shape for what a claim about a company is. There is no standard format for how a claim is attested. There is no protocol by which one company's data can be presented to another company, or to another company's agent, in a form that both parties can trust at a glance. What exists instead is a patchwork of formats, conventions, portals, spreadsheets, and one-off integrations, each of which assumes that either humans or agents at each end will paper over the lack of a standard. They do, because they have to. But papering over is not the same as solving, and the paper gets thicker in the agent era.

What becomes possible

Imagine, for a moment, that switching a CRM happened with the click of a button, because the customer data was never really trapped inside the CRM in the first place. The CRM was always just a view over the customer's own records, which the customer held directly and granted the vendor access to under a scoped, revocable grant. The switch is a revocation of one grant and the issuance of another. No migration. No schema mapping. No agent bulldozer. Just a new view onto the same canonical history.
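The revoke-and-reissue move can be made concrete with a small sketch. Everything here is hypothetical — the `RecordStore`, `Grant`, and method names are illustrative, not a real API — but it shows the shape: the customer holds the canonical records, vendors hold scoped grants, and switching vendors is two grant operations rather than a migration.

```python
from dataclasses import dataclass, field
import secrets

@dataclass
class Grant:
    """A scoped, revocable read grant over the customer's own records."""
    grantee: str            # hypothetical vendor identifier
    scope: frozenset        # record types the grantee may read
    token: str = field(default_factory=lambda: secrets.token_hex(16))
    revoked: bool = False

class RecordStore:
    """The customer's canonical records; vendors hold grants, not copies."""
    def __init__(self):
        self.records = {}   # record_type -> list of records
        self.grants = {}    # token -> Grant

    def issue(self, grantee, scope):
        g = Grant(grantee=grantee, scope=frozenset(scope))
        self.grants[g.token] = g
        return g.token

    def revoke(self, token):
        self.grants[token].revoked = True

    def read(self, token, record_type):
        g = self.grants.get(token)
        if g is None or g.revoked or record_type not in g.scope:
            raise PermissionError("no active grant for this scope")
        return self.records.get(record_type, [])

# Switching CRMs is a revocation plus a new issuance; no migration,
# no schema mapping — both vendors were only ever views over the same store.
store = RecordStore()
store.records["contacts"] = [{"name": "Acme Corp"}]
old_crm = store.issue("old_crm", {"contacts", "activity_log"})
new_crm = store.issue("new_crm", {"contacts", "activity_log"})
store.revoke(old_crm)
assert store.read(new_crm, "contacts") == [{"name": "Acme Corp"}]
```

The sketch deliberately omits everything hard — authentication, schema semantics, audit trails — but the structural point survives the simplification: lock-in lives in the copy, and a grant model never makes one.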

Now apply the same move to the rest of Alice's week. The investor on Monday receives a granted read over a specific slice of Alice's real operating data, attested by Alice's systems, timestamped, cryptographically verifiable against the source without Alice in the loop. The investor's analysis agent reads the real facts under the grant, not a PDF summary of the facts. It cannot hallucinate the underlying numbers, because it is looking at the underlying numbers. Its recommendation is grounded in claims that cite their own provenance.

The loan application on Tuesday is a capability the bank's underwriting agent exercises against Alice's accounting system directly, under Alice's control, reading signed claims that the agent can verify cryptographically. No discrepancy discussion. The bank's ledger and Alice's ledger agree because they are reading from the same attested source.

The acquirer on Wednesday runs diligence against scoped, revocable grants that expire when the deal closes or dies. The five hundred questions collapse, because most of them were agents generating placeholders for information the acquirer's agent could read directly under the right grant. The inconsistencies collapse too, because the underlying claims carry their own provenance and there is nothing to reconcile. Bob can read his agent's analysis with confidence, because the analysis cites attestations the other side cannot fake.

The auditor on Thursday samples transactions from the canonical source, each sample carrying its own cryptographic attestation, and produces a letter that cites the attestations rather than reconstructing the picture from workpapers nobody can audit. Next year's auditor builds on this year's attestations, because attestations outlive engagements.

Each of these is the same structural move. The data stays with the entity that produced it. Counterparties, and their agents, receive verifiable, scoped, revocable grants instead of PDFs. Every claim carries its own provenance. Nothing has to be trusted on faith. The agent era becomes what it was supposed to be instead of what it currently is. A productivity multiplier, rather than a verification disaster.
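A claim that "carries its own provenance" can also be sketched in a few lines. This is a toy, with one loud caveat: it signs with an HMAC and a shared key because that is what the standard library offers; a real substrate would use asymmetric signatures (Ed25519 or similar) so that any counterparty can verify without holding a secret. The field names in the claim are invented for illustration.

```python
import hashlib, hmac, json

# Stand-in signing key held by the source system. Production would use
# asymmetric keys so verifiers need no secret; HMAC is stdlib-only here.
SOURCE_KEY = b"source-system-signing-key"

def attest(claim: dict) -> dict:
    """Wrap an operational fact in a claim that carries its own provenance."""
    body = json.dumps(claim, sort_keys=True).encode()
    return {
        "claim": claim,
        "digest": hashlib.sha256(body).hexdigest(),
        "signature": hmac.new(SOURCE_KEY, body, hashlib.sha256).hexdigest(),
    }

def verify(attested: dict) -> bool:
    """Check a claim against its signature without trusting the presenter."""
    body = json.dumps(attested["claim"], sort_keys=True).encode()
    expected = hmac.new(SOURCE_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(attested["signature"], expected)

# An operational fact, attested at the source (illustrative field names).
fact = {"metric": "net_revenue", "period": "2025-Q3", "value": 412_000,
        "source": "accounting_system", "as_of": "2025-10-01T00:00:00Z"}
signed = attest(fact)
assert verify(signed)               # the counterparty's agent checks the proof
signed["claim"]["value"] = 999_000  # tampering anywhere breaks the signature
assert not verify(signed)
```

The point of the toy is the asymmetry it removes: the presenter can no longer change what the fact says, and the verifier no longer has to take the presenter's word for it.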

The hours come back. The uncertainty discount shrinks. The vendor lock-in dissolves. The audit fee compresses. The loan decision moves from slow and unreliable to fast and verifiable. The deal closes at a price that reflects what the company is, rather than what Bob could legibly see through the fog of diligence. The tax goes away.

One problem

All of this is downstream of one observation. The friction in Alice's week is not five problems. It is one problem wearing five costumes, and if you were to follow her for a full year it would be one problem wearing many more. Hiring references. Vendor onboarding. Customer KYB. Insurance renewal. Regulatory filings. SOC 2 prep. Vendor risk questionnaires. Every counterparty interaction is the same missing primitive, and the cost is everywhere, and the agent era is making it more expensive, not less.

The next piece in this series takes the observation seriously and asks the design question it implies. If this is one problem, what would one solution actually have to look like?

Not five problems. One.