Levercon.Brief
Issue 02 -- 24 Apr 2026 -- by Cara Davies

Decks and memos, in minutes.

Claude Design ships, and the economics of fund-brand work shift. Plus GPT-5.5, Workspace Agents, and GPT Images 2.0 -- three flagship-tier OpenAI releases in seventy-two hours.

Hi folks,

Three flagship-tier AI releases landed in 72 hours this week. The one worth five minutes of your attention first is Claude Design, because it is the first release a fund could use on a Friday afternoon to rebuild an LP deck before Monday.

The one thing

Claude Design shipped, and it changes the economics of fund-brand work. (Anthropic)

  • Link a site, document, or codebase, type what you want, and Claude returns a polished deck, one-pager, landing page, or prototype.
  • The first draft isn't perfect. The real gain is the iteration cycle: fifteen versions in the time it used to take to build one.
  • This is the first release where a non-designer on your team can credibly produce on-brand LP-facing material.

What this means in plain terms

  • LP decks, fund overviews, quarterly updates, IM visuals: first-pass draft any team member can produce, not just the designer or the agency.
  • Once the model is linked to your brand assets and past IMs, output stays on-brand without hand-holding.
  • Value shift: an analyst can start Friday with a messy Word IM and have a reviewable deck by Monday morning.

In the mix

  • GPT-5.5 -- new OpenAI flagship. (Simon Willison)
    • Same price point as the model it replaces.
    • Why it matters: if your fund has built anything on GPT-5, the upgrade is a config change, not a rebuild.
  • OpenAI Workspace Agents -- Codex-powered agents for team accounts. (9to5Mac)
    • Agents run against shared team files inside the enterprise account, not just a single user's personal workspace.
    • Why it matters: "the investment team gets a shared analyst" becomes plausible at the team layer, not just per-seat.
  • GPT Images 2.0 -- first image model that "thinks" before it draws. (The New Stack)
    • Near-photo realism and readable text inside images.
    • Why it matters for DD: synthetic media just got harder to spot. If diligence relies on photos or visual verification of assets, build in a second source.
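The "config change, not a rebuild" point on GPT-5.5 is worth making concrete. A minimal sketch of the idea, with illustrative model names and a hypothetical `upgrade` helper (this is the shape of the change, not confirmed OpenAI API usage):

```python
# Sketch: a model upgrade as a one-field config swap.
# Model names below are illustrative assumptions, not confirmed identifiers.

CONFIG = {
    "model": "gpt-5",      # the flagship your tooling was built on
    "temperature": 0.2,
    "max_tokens": 2048,
}

def upgrade(config: dict, new_model: str) -> dict:
    """Return a copy of the config pointing at the new model; nothing else changes."""
    return {**config, "model": new_model}

new_config = upgrade(CONFIG, "gpt-5.5")
print(new_config["model"])  # -> gpt-5.5
```

Everything downstream (prompts, pipelines, review steps) stays as it was, which is why a same-price drop-in replacement matters more to a fund than a headline benchmark.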

From my week

  • Long-running agents are the quieter story this week.
    • I had Claude Opus 4.7 run unattended for an hour on a work task: set it up, walked away, came back to finished work.
    • Same pattern translated a side-project app into Korean (around twenty minutes of agent time, quality genuinely good) and Cantonese.
    • Finance analogues: overnight covenant-breach scans, monthly portfolio-company roll-ups, first-pass IM reads on a new deal.
    • The job has shifted. It is now framing the task well enough that the agent lands the plane without you, not watching it fly.
  • Discovery signal of the week.
    • An Aus GP, on rolling AI out inside his team: "We're exactly no coders, no development experience. If they don't understand it, they don't touch it."
    • Same GP, same call: "It's much faster at picking up mistakes than we are. Even in modelling, it's awesome at just picking up little inefficiencies."
    • Translation: "AI catches what you missed" sells inside a fund. "AI does your analyst's job" doesn't. Adoption is the blocker, not capability.

Stat of the week

OpenAI release cadence
3 flagship-tier releases in 72 hours

GPT-5.5, Workspace Agents, and Images 2.0 all between Tuesday and Thursday. The release cadence is now shorter than most funds' quarterly review cycle. (Simon Willison, 9to5Mac, The New Stack)

Cheers,

Cara
Levercon (previously Fund OS) is what we are calling this. I am talking to fund managers and their analysts about how they actually use AI in their work. Forward this if useful. Reply if there is something you want me to dig into next week.