
Applied AI 2025 Fall

I attended the Applied AI Conference at the University of St. Thomas on Nov 3, 2025.

Quite a lot has changed since the first Applied AI conference on Nov 4, 2022, three weeks before ChatGPT was released! It feels like the wave of mobile enthusiasm that began in 2008.

Justin Grammens

  • Largest Applied AI conference ever

It's not just what you learn; it's who you meet and how you grow together.

The people here around you share your curiosity and drive. Find them, learn from them, and build together!

Dr. Manjeet Rege: Implementation Is the Differentiator

  • <https://www.manjeetrege.com>

Rege’s opening keynote argued that leadership, not tooling, is the current bottleneck.

  • 78% of orgs report using AI, yet only 10% see bottom-line gains (McKinsey, 2024). High performers obsess over implementation discipline.
  • Training compute is doubling every five months and the cost to query has fallen 280×. Execution and data practices separate the successes from the failures.
    • Strategic implication: with the technology playing field leveling, implementation is the differentiator.
  • Employees are already experimenting—90% in his numbers, with 21% heavy users. Only 33% of companies have real governance, so incidents are inevitable.
    • need to confirm that 90% number

He discussed an 18-month roadmap:

  1. Six months building skills, guardrails, and data access.
  2. Six months of serious pilots.
  3. Six months integrating the wins until 10% of revenue ties back to AI.

Caroline Swift: Creatives Are Operators Now

Caroline argued that “creatives will run the next wave”. She noted AI is driving most US productivity growth, but it only helps when someone with taste directs it.

Her refrain: AI is good at statistics and context compression, not novel ideas or breaking news. The people winning right now pair clear communication with curiosity and agency.

  • Treat prompt engineering as writing: clear, unambiguous instructions.
  • Use AI for reviewing transcripts, extracting brand signals, and building impossible visuals.
  • The human touch is a differentiator again; ambient AI is coming, but audiences still crave authenticity.

The reminder that “AI slop” will show up wherever we are lazy was a gut-check.

Stephen Kaufman: Versioning Agents Like Products

Stephen’s session argued that we need to treat AI agent deployments the way we treat application development and deployment.

But traditional semantic versioning misses the mark: agent behavior depends on model choice, prompt scaffolding, tool availability, and coordination patterns, so that full context must be captured and versioned.

  • Track agents side by side against a baseline to catch behavioral drift.
  • Shadow deployments and rollback paths are non-negotiable once agents hit production.
  • Invest in lightweight agent registries—even a disciplined SharePoint/Confluence list beats tribal knowledge.
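The registry idea above can be sketched as a version record whose identity hashes everything that shapes agent behavior. This is a minimal sketch under my own assumptions, not Stephen's implementation; the `AgentVersion` fields and example values are hypothetical:

```python
from dataclasses import dataclass
import hashlib
import json


@dataclass(frozen=True)
class AgentVersion:
    """One entry in a lightweight agent registry."""
    name: str
    model: str        # identifier of the underlying LLM
    prompt_sha: str   # hash of the system prompt / scaffolding
    tools: tuple      # tool names the agent may call
    notes: str = ""

    def fingerprint(self) -> str:
        """Stable hash over every behavior-shaping input, so a model swap,
        prompt change, or tool change produces a new version."""
        payload = json.dumps(
            {"model": self.model, "prompt": self.prompt_sha,
             "tools": sorted(self.tools)},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()[:12]


# A model upgrade alone yields a different fingerprint, flagging
# that the "same" agent may now behave differently.
v1 = AgentVersion("support-bot", "model-2024-08", "a1b2c3", ("search", "crm_lookup"))
v2 = AgentVersion("support-bot", "model-2024-11", "a1b2c3", ("search", "crm_lookup"))
assert v1.fingerprint() != v2.fingerprint()
```

Comparing fingerprints between a baseline and a candidate is also a cheap first gate for the side-by-side drift tracking mentioned above.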

“Agent versioning is a must-have, not a nice-to-have.”

Piers Concord: AI-Assisted Engineering Without the Vibes Spiral

Piers covered the spectrum from “vibe coding” tools like Lovable to opinionated IDE integrations. The guardrails were straightforward:

  • Start with plan mode for anything substantial; force the model to explain the approach.
  • Seed prompts with the exact files you care about and review every diff with the same skepticism you would apply to a teammate.
  • Use parallel agents for exploration, but consolidate changes manually.

His .NET security example (an LLM pinpointing a long-standing impersonation bug) reinforced that these tools amplify intent; they do not replace judgment. I left with a better mental model for when to invite an agent into the loop.

Adam Terlson: Building Production-Grade Agents

Adam framed agentic systems with a “triforce”: outcomes, enablement, and execution, all wrapped with assurances. Two ideas stuck:

  • Treat specs as living contracts that evolve with the agent. Pair them with a “golden path” platform so teams can move fast without rewriting the basics.
  • Borrow from Temporal and saga patterns to nail orchestration. LLMs handle ambiguity; state machines keep you honest.

His Best Buy self-checkout case study showed how orchestration services already solve the problems we now expect agents to handle. Temporal’s durability guarantees are exactly what I need for our long-running automations.
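The saga pattern Adam pointed to fits in a few lines: run steps forward, and if any step fails, run the compensations for the completed steps in reverse. This is a minimal sketch of the pattern, not Temporal's API, and the checkout step names are hypothetical:

```python
def run_saga(steps):
    """steps: list of (name, action, compensate) callables.
    Saga pattern: execute actions in order; on failure, run the
    compensations for already-completed steps in reverse order."""
    done = []
    try:
        for name, action, compensate in steps:
            action()
            done.append((name, compensate))
        return "committed"
    except Exception:
        for _name, compensate in reversed(done):
            compensate()  # undo work that already succeeded
        return "rolled_back"


log = []

def charge_card():
    raise RuntimeError("card declined")  # simulate a mid-saga failure

steps = [
    ("reserve_item", lambda: log.append("reserved"), lambda: log.append("released")),
    ("charge_card", charge_card, lambda: log.append("refunded")),
]
assert run_saga(steps) == "rolled_back"
assert log == ["reserved", "released"]  # the reservation was compensated
```

What Temporal layers on top of this shape is durability: each step's result is persisted, so a crash mid-saga resumes where it left off instead of restarting, which is what makes it attractive for long-running automations.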

Scott Bromander: Structured Outputs as the Hidden Superpower

Scott’s talk was a reminder to treat LLMs like APIs, not chatbots. Structured Data Output (SDO) bridges the gap between model responses and product workflows.

  • Keep schemas shallow and validate everything—correct shape does not mean correct facts.
  • SDOs will not save you from security issues, but they let you build repeatable flows without training custom models.
  • His HeyZoo demo made the concept tangible: playful experience, fully driven by structured responses.

I am revisiting a couple of internal tools that still rely on free-form responses; enforcing schemas would make monitoring much easier.
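Enforcing a schema can start as a shallow shape check between the model response and the product flow. A minimal stdlib-only sketch; the field names are hypothetical, and, as Scott noted, a correct shape does not guarantee correct facts:

```python
import json

# Hypothetical shallow schema for a product-tagging flow:
# field name -> required Python type.
SCHEMA = {"title": str, "tags": list, "confidence": float}


def parse_llm_output(raw: str) -> dict:
    """Parse and shape-check a model response before it enters the workflow."""
    data = json.loads(raw)  # fails fast on malformed JSON
    for key, typ in SCHEMA.items():
        if not isinstance(data.get(key), typ):
            raise ValueError(f"bad or missing field: {key}")
    return data


good = '{"title": "Red Mug", "tags": ["kitchen"], "confidence": 0.92}'
assert parse_llm_output(good)["title"] == "Red Mug"
```

Rejecting malformed responses at this boundary is also what makes monitoring easy: every failure is a logged `ValueError` with a field name, not a silent downstream breakage.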

Graham Wood: Automating Product Data Entry

Graham walked through the grind of turning photos into merchant-ready listings. The technical stack is straightforward—vision models plus language models—but the hard part is taxonomy accuracy and regression testing.

  • Shopify and eBay taxonomies were the backbone; 99% accuracy is still elusive because categories drift.
  • Custom workflows per retailer matter more than model novelty.
  • Being a one-person AI company is viable when the tooling is mature—his Smartsheet integration was built in a weekend because a customer needed it.

His honesty about pricing, competition, and staffing the “last mile” was refreshing. Any team automating catalog work should budget time for taxonomy maintenance.

Tools I Flagged for Follow-Up

This post is licensed under CC BY 4.0 by the author.