Applied AI 2025 Fall

I attended Applied AI Conference at the University of St. Thomas on Nov 3, 2025.

Quite a lot has changed since the first Applied AI conference on Nov 4, 2022, which was just 3 weeks before ChatGPT was released!

The interest and enthusiasm around AI feels like the wave of mobile interest that started in 2008 and is still running.

Justin Grammens

  • largest Applied AI conference ever

It's not just what you learn, it's who you meet and how you grow together.

The people here around you share your curiosity and drive. Find them, learn from them, and build together!

Dr. Manjeet Rege: Implementation Is the Differentiator

  • <https://www.manjeetrege.com>

Rege’s opening keynote focused on why leadership, not tooling, is the current bottleneck.

  • 78% of orgs report using AI, yet only 10% see bottom-line gains (McKinsey, 2024). High performers obsess over implementation discipline.
  • Training compute is doubling every five months and the cost to query has fallen 280×. Execution and data practices separate the successes from the failures.
    • strategic implication - the playing field is leveling on technology, so implementation is the differentiator
  • Employees are already experimenting — 90% in his numbers, with 21% heavy users. Only 33% of companies have real governance, so incidents are inevitable.
    • need to confirm that 90% number

He discussed an 18-month roadmap:

  1. 6 months building skills, guardrails, and data access
  2. 6 months of serious pilots
  3. 6 months integrating the wins until 10% of revenue ties back to AI.

Caroline Swift: Creatives Are Operators Now

Caroline presented her idea that “creatives will run the next wave”. She noted AI is driving most US productivity growth, but it only helps when someone with taste directs it.

AI is good at statistics and context compression, not novel ideas or breaking news. The people winning right now pair clear communication with curiosity and agency.

  • Treat prompt engineering as writing - clear, unambiguous instructions.
  • Use AI for reviewing transcripts, extracting brand signals, and building impossible visuals.
  • The human touch is back to being a differentiator; ambient AI is coming, but audiences still crave authenticity.

The reminder that “AI slop” will show up wherever we are lazy was a gut-check!

Stephen Kaufman: Versioning Agents Like Products

Stephen’s session made the case that AI deployments need the same discipline as traditional software engineering.

But traditional semantic versioning misses the mark, because agent behavior depends on model choice, prompt scaffolding, tool availability, and coordination patterns. That context must be captured and versioned!

  • Track agents side by side against a baseline to catch behavioral drift.
  • Shadow deployments and rollback paths are non-negotiable once agents hit production.
  • Invest in lightweight agent registries—even a disciplined SharePoint/Confluence list beats tribal knowledge.

“Agent versioning is a must-have, not a nice-to-have.”
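To make that concrete for myself, here is a rough sketch of what a versioned agent manifest could look like. This is my own illustration in plain Python, not anything Stephen showed: each release records the model, prompt scaffolding, and tool set next to the version string, so behavioral drift can be traced back to a concrete change.

```python
# Hypothetical agent-version manifest: every field that can change agent
# behavior (model, prompt scaffolding, tools) sits next to the version string.
from dataclasses import dataclass, field, asdict
from datetime import date


@dataclass(frozen=True)
class AgentVersion:
    version: str                 # product-style version, e.g. "2.3.0"
    model: str                   # underlying LLM the agent is pinned to
    prompt_id: str               # id/hash of the prompt scaffolding in use
    tools: tuple[str, ...]       # tools the agent is allowed to call
    released: str = field(default_factory=lambda: str(date.today()))


# A disciplined dict already beats tribal knowledge; a SharePoint/Confluence
# list or a real registry service can play the same role.
REGISTRY: dict[str, AgentVersion] = {}


def register(agent_name: str, version: AgentVersion) -> None:
    REGISTRY[f"{agent_name}@{version.version}"] = version


def diff(a: AgentVersion, b: AgentVersion) -> dict[str, tuple]:
    """Show which behavior-relevant fields changed between two releases."""
    da, db = asdict(a), asdict(b)
    return {k: (da[k], db[k]) for k in da if da[k] != db[k]}


if __name__ == "__main__":
    v1 = AgentVersion("1.0.0", "gpt-4o-mini", "prompt-abc123", ("search", "sql"))
    v2 = AgentVersion("1.1.0", "gpt-4o", "prompt-abc123", ("search", "sql", "email"))
    register("billing-agent", v1)
    register("billing-agent", v2)
    print(diff(v1, v2))  # shows exactly which behavior-relevant fields moved
```

A diff like this is also the natural trigger for rerunning the baseline evals from the side-by-side tracking he described.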

Piers Concord: AI-Assisted Engineering Without the Vibes Spiral

Piers presented a spectrum of tools, from “vibe coding” platforms like Lovable to opinionated IDE integrations.

The guardrails were straightforward:

  • Start with plan mode for anything substantial; force the model to explain the approach.
  • Seed prompts with the exact files you care about and review every diff with the same skepticism you would apply to a teammate.
  • Use parallel agents for exploration, but consolidate changes manually.

His .NET security example (an LLM pinpointing a long-standing impersonation bug) reinforced that these tools amplify engineers jumping into large systems by helping them navigate unfamiliar code quickly.
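Here is a rough sketch of the “plan first, seed exact files” guardrail as I would try it; the `complete()` call is a hypothetical stand-in for whatever model or IDE integration you actually use.

```python
# Hypothetical plan-mode helper: seed the prompt with the exact files you
# care about and force the model to explain its approach before any diff.
from pathlib import Path


def build_plan_prompt(task: str, files: list[str]) -> str:
    sections = []
    for path in files:
        sections.append(f"--- {path} ---\n{Path(path).read_text()}")
    context = "\n\n".join(sections)
    return (
        "You are assisting on an existing codebase.\n"
        f"Task: {task}\n\n"
        f"Relevant files:\n{context}\n\n"
        "Before proposing any code changes, explain your plan step by step "
        "and list which files you would touch. Do not write code yet."
    )


# Usage sketch, where `complete` is a stand-in for your actual model call:
# plan = complete(build_plan_prompt("Fix the impersonation bug in auth",
#                                   ["src/auth/impersonation.py"]))
# Review the plan, then ask for diffs and inspect them like a teammate's PR.
```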

Adam Terlson: Building Production-Grade Agents

Adam framed agentic systems with a “triforce”: outcomes, enablement, and execution, all wrapped with assurances.

  • Treat specs as living contracts that evolve with the agent. Pair them with a “golden path” platform so teams can move fast without rewriting the basics.
  • Borrow from Temporal and saga patterns to nail orchestration. LLMs handle ambiguity; state machines keep you honest.

His Best Buy self-checkout case study showed how orchestration services already solve the problems we now expect agents to handle. Temporal’s durability guarantees are exactly what I need for our long-running automations.
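To ground the orchestration point for myself, here is a rough saga-style sketch in plain Python. It is my own illustration of “state machines keep you honest”, not Temporal’s SDK or Adam’s code: the LLM can handle ambiguity inside a step, but the orchestrator owns the allowed transitions and the rollback path.

```python
# Hypothetical saga-style orchestrator: explicit steps with compensations,
# so a failure part-way through rolls back in reverse order.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Step:
    name: str
    action: Callable[[dict], None]        # moves the order forward
    compensate: Callable[[dict], None]    # undoes the step on failure


def run_saga(steps: list[Step], state: dict) -> bool:
    done: list[Step] = []
    for step in steps:
        try:
            step.action(state)
            done.append(step)
        except Exception:
            for prior in reversed(done):  # compensate in reverse order
                prior.compensate(state)
            return False
    return True


# Self-checkout-flavored example: an LLM might resolve an ambiguous item name
# or draft messaging, but the saga owns payment and inventory consistency.
steps = [
    Step("reserve_inventory", lambda s: s.update(reserved=True),
         lambda s: s.update(reserved=False)),
    Step("charge_card", lambda s: s.update(charged=True),
         lambda s: s.update(charged=False)),
    Step("issue_receipt", lambda s: s.update(receipt=True),
         lambda s: s.update(receipt=False)),
]

state: dict = {}
print(run_saga(steps, state), state)
```

Temporal adds the durability on top of this shape: each step survives process restarts, which is what makes it appealing for long-running automations.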

Scott Bromander: Structured Outputs as the Hidden Superpower

Scott’s talk was a reminder that we can treat LLMs like APIs, not chatbots. With Structured Data Output (SDO), you get structured responses that you can validate and process.

  • Keep schemas shallow and still validate! Correct shape does not mean correct facts.
  • SDOs will not save you from security issues, but they let you build repeatable flows without training custom models.
  • His HeyZoo demo showed the concept: playful experience, fully driven by structured responses.
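Here is a rough sketch of the SDO flow as I would try it, assuming Pydantic v2 for the schema and a hard-coded JSON string standing in for the model’s response; validating the shape still does not validate the facts.

```python
# Hypothetical structured-output flow: a shallow schema, a JSON response
# from the model, and validation before anything downstream touches it.
from pydantic import BaseModel, ValidationError


class ProductBlurb(BaseModel):
    title: str
    category: str
    price_usd: float
    tags: list[str]


# Stand-in for the LLM's response; in practice, request JSON via your
# provider's structured-output / JSON mode rather than free text.
raw = '{"title": "Trail Mug", "category": "Drinkware", "price_usd": 14.0, "tags": ["camping", "gift"]}'

try:
    blurb = ProductBlurb.model_validate_json(raw)
    # Correct shape != correct facts: still check the claims that matter.
    assert blurb.price_usd > 0
    print(blurb.model_dump())
except ValidationError as err:
    print("Reject and retry the generation:", err)
```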

Graham Wood: Automating Product Data Entry

Graham walked through turning photos into merchant-ready listings. The tech stack is vision models plus language models, but the hard part is taxonomy accuracy and regression testing.

  • Shopify and eBay taxonomies were the starting point
  • Being a one-person AI company is viable when the tooling is mature—his Smartsheet integration was built in a weekend because a customer needed it.
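Here is a rough sketch of the regression-testing idea as I imagine it; `classify()` is a hypothetical stand-in for the real vision + language pipeline, and the golden set is made up.

```python
# Hypothetical taxonomy regression test: golden image -> category pairs,
# rerun on every model/prompt change, flag the build if accuracy drops.
GOLDEN = [
    ("mug_photo_01.jpg", "Home & Garden > Kitchen & Dining > Drinkware"),
    ("tent_photo_02.jpg", "Sporting Goods > Outdoor Recreation > Camping"),
    ("sneaker_photo_03.jpg", "Apparel & Accessories > Shoes"),
]

THRESHOLD = 0.95  # minimum acceptable taxonomy accuracy


def classify(image_path: str) -> str:
    """Stand-in for the real vision+LLM pipeline; replace with the actual call."""
    fake_predictions = {
        "mug_photo_01.jpg": "Home & Garden > Kitchen & Dining > Drinkware",
        "tent_photo_02.jpg": "Sporting Goods > Outdoor Recreation > Camping",
        "sneaker_photo_03.jpg": "Apparel & Accessories > Handbags",  # a miss
    }
    return fake_predictions[image_path]


def taxonomy_accuracy(cases) -> float:
    correct = sum(1 for path, expected in cases if classify(path) == expected)
    return correct / len(cases)


if __name__ == "__main__":
    score = taxonomy_accuracy(GOLDEN)
    status = "OK" if score >= THRESHOLD else "REGRESSION"
    print(f"taxonomy accuracy: {score:.2%} [{status}]")
```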