AI · Controllers · April 2026 · 9 min read

What the AI Controller Actually Looks Like in 2027

Most of the conversation about AI in finance lands in one of two wrong places. Here's the honest middle.

By Nico Rivera · Practitioner perspective

Every CFO is being asked some version of the same question right now: what does AI mean for our finance function? The conversation that follows tends to land in one of two places, and neither is useful.

The first place is breathless optimism. Closes that run themselves. Reconciliations that match without human intervention. Variance commentary written by an agent at midnight and ready for the CFO at 7am. The pitch decks are confident, the demos are clean, and the financial press is happy to amplify all of it.

The second place is defensive dismissal. AI doesn't understand context. AI hallucinates. AI can't sign a 10-K. The audit committee will never let it touch the close. Every controller who has watched a previous tech cycle over-promise and under-deliver — and there have been several — has reasonable cause for skepticism.

Both positions are wrong because both are too coarse. The honest read is more specific, and getting it right matters more than the loud version of either argument.

The function divides cleanly. The conversation doesn't.

If you actually break the controller function down by activity, the AI question stops being a single question. It becomes a dozen smaller questions, and most of them have clear answers.

Some of what controllers do is highly automatable. Transaction-level matching, exception flagging, first-pass variance commentary, standard report generation, account-level flux explanations against prior period — these are pattern-recognition tasks at scale, with clean inputs, well-defined outputs, and fast feedback loops. AI handles them well. For teams that have implemented this carefully, the time savings are real.

Some of what controllers do is not automatable, and won't be in 2027. Judgment calls on accrual methodology when the underlying contract is ambiguous. Reserve adequacy in a portfolio whose performance is mid-shift. Disclosure tone on a topic the auditors are flagging. Whether a particular GL account makes sense to keep, kill, or split. These tasks involve weighing factors that aren't fully expressed in any data the model has access to — institutional history, auditor preferences, materiality judgments, and the kind of context that exists as collective knowledge rather than written documentation.

And some of it is in the middle — automatable in part, but with a judgment layer that has to remain human. Reconciliation can be largely automated, but the question of which exceptions matter and which are noise is still judgment. Variance commentary can be drafted by an agent, but the question of what to flag for the CFO is judgment.
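That middle bucket can be sketched in a few lines. The following is a toy illustration, not any vendor's product or the author's actual tooling: the names, the matching rule, and the materiality threshold are all made up for the example. The point it shows is structural — exact matches clear automatically, while unmatched items are split into a human-review queue and a noise log based on a materiality cutoff that a human has to set and own.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Txn:
    ref: str
    amount: float

def reconcile(ledger, bank, materiality=500.0):
    """Auto-match by reference; route unmatched items by size.

    Exact matches clear automatically (the automatable layer).
    Unmatched items at or above the materiality threshold go to a
    human queue (the judgment layer); the rest is logged as noise.
    """
    bank_by_ref = {t.ref: t for t in bank}
    cleared, review, noise = [], [], []
    for t in ledger:
        match = bank_by_ref.get(t.ref)
        if match and abs(match.amount - t.amount) < 0.01:
            cleared.append(t)            # machine clears it
        elif abs(t.amount) >= materiality:
            review.append(t)             # judgment call stays human
        else:
            noise.append(t)              # immaterial: logged, not escalated
    return cleared, review, noise
```

Even in this trivial form, the interesting questions — where the threshold sits, whether reference-plus-amount is the right match key, what happens to the noise log — are exactly the judgment layer the essay is describing. The code automates the matching; it does not decide what matters.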

The controller who understands which bucket each task lives in will thrive. The one who treats them all the same — in either direction — won't.

The skill shift is not from accounting to coding. It's from executing workflows to designing and supervising them.

What changes in 2027 is the time allocation, not the responsibility.

The most common mistake in AI-in-finance commentary is conflating "automation" with "abdication." A controller who lets an agent draft the close memo without reviewing it is not automating; they're failing. The accountability didn't move. The signature still says human.

What does change is where the controller spends their time. The 2027 controller spends materially less time on tasks the agents are doing well, and materially more time designing, supervising, and reviewing the workflows those agents run.

This is a significant change in time allocation. It is not, however, a change in what the controller is responsible for. The auditor still asks the same questions. The disclosure committee still wants the same documentation. The audit committee still wants someone to attest. AI doesn't change any of that — it just changes the route by which the controller gets there.

The new skills are not the obvious ones.

Most commentary on the future controller imagines someone who codes. That's almost entirely wrong. The 2027 controller does not need to write Python any more than the 2010 controller needed to write SQL. They needed to know what SQL could do — what it was good at, what it failed at, when to trust it, and how to talk to someone who could actually write it. The same is true for AI now.

The actual skills that matter are the same kind: knowing what the agents do well and where they fail, knowing when to trust an output and when to check it, and knowing how to direct the people who actually build and maintain the tooling.

The risks are real, and they're not the obvious ones.

Most of the risk discourse around AI in finance focuses on hallucinations and accuracy. Those matter, but they're table stakes. A more sophisticated read on the actual risks:

Deskilling. If a team uses AI to do reconciliations for two years, and then a junior analyst joins, that analyst never learns to do reconciliations. The institutional muscle for the task atrophies. When the agent fails — and it will — there's no one in the building who can step in. This is a slow-motion problem and most teams are underweighting it.

Confident-but-wrong outputs. The most dangerous failure mode of AI in finance is not "the model gave a bad answer." It's "the model gave a confident, articulate, well-formatted bad answer that no one caught because it looked right." This requires a controls posture that doesn't exist in most finance teams yet.

Control gap during transition. Old controls assume human-in-loop at every step. New controls assume agents can operate autonomously with supervisory review. The middle period — where some workflows are migrated and some aren't — is when the most material control gaps appear, and where the audit findings will land.

Talent flight. Junior finance talent is reading this market correctly. They want to work somewhere their skills will compound. A team that uses AI thoughtfully — augmenting humans and giving people time for the harder judgment work — will retain and develop talent. A team that uses AI to flatten headcount will hollow out, and fast.

What the 2027 controller actually looks like.

Strip away the noise and the picture is fairly clear. The 2027 controller designs and supervises the workflows the agents run, owns the judgment calls the agents cannot make, and remains the person who attests to the result.

This is not a smaller job. It is, in important ways, a bigger one — with more leverage, more scope, and more strategic visibility.

The controllers who treat AI as either savior or threat will spend the next two years arguing about the wrong things. By the time they look up, the controllers who got it right — the ones who learned the tooling the way their predecessors learned spreadsheets in 1990 or ERP systems in 2005 — will already be running things.

· · ·

If this read resonates, the longer-form treatment of agentic finance — frameworks, failure mode taxonomies, and control design — lives at theagenticcontroller.com.

About the Author
Nico Rivera
Nico is a senior finance controller writing on the application of AI to the controller function, payments accounting, and the evolving finance role. He is the author of three reference handbooks at paymentscontroller.com, financepmp.com, and theagenticcontroller.com. Deeper coverage of AI in finance lives at the last of the three.

All views on this site are personal and do not represent any employer or affiliated organization. Content is for informational and educational purposes only — not professional accounting, legal, or tax advice.