Conference Overview

The conference underscored the tension between the pressure to adopt AI and the difficulty of proving impact. While most organizations are experimenting, few achieve durable gains. Success requires embedding AI across workflows and evolving culture, fluency, and measurement.

A standout theme was career growth: AI adoption is not only about productivity, but also about retention, engagement, and professional development. Framing AI as a growth opportunity makes adoption stickier and improves recruiting.


Overarching Themes

  • AI in the product must deliver real customer value: Checking an “AI box” is meaningless. Duolingo’s success came not from sprinkling AI across the product, but from using it to unlock entirely new learning categories. We should hold ourselves to the same bar: AI initiatives should tie directly to customer outcomes like fewer escalations, faster troubleshooting, better admin efficiency, and a better end-user UX.
  • Engineering work is changing: Engineers are shifting from “code craftspeople” to orchestrators of workflows and AI agents. This raises the importance of system design, architecture, evaluation design, and discovery skills, not just implementation.
  • AI productivity hype: The METR study (2025) found that developers using AI were 19% slower on coding tasks than controls, due to context switching and “waiting for the model”, despite reporting that they felt faster. AI impact measurement is still immature, but it is now a focal point.
  • AI beyond code: Multiple sessions emphasized that the leverage isn’t just faster coding. Gains come in upstream artifacts: problem-framing analysis, PRDs, ATDD tests, and documentation.
    • Well-defined requirements as a bottleneck: Multiple sessions flagged incomplete requirements as a core cause of rework and waste. As an engineer and engineering leader, I know this to be true from experience.
  • AI culture and fluency: A recurring theme across sessions was that AI success depends more on culture and fluency than on tooling. Success won’t come from one-off training or passive exposure. It will require structured practice, visible leadership endorsement, and safe spaces for experimentation.
    • Shared beliefs: Teams need a clear answer to why they’re using AI (e.g. to free capacity for innovation, to accelerate safe iteration, to improve user experience). Without this, efforts fragment and stall.
    • Fluency: Individuals and teams only gain confidence through hands-on, repeated use in real workflows. Training alone doesn’t build fluency; projects, retros, and reflection do. In one example, Amplitude’s AI Week claims to have achieved ~40% productivity gains during a concentrated experiment by committing org-wide, aligning on shared beliefs, and focusing on fluency through practice. They did not, however, share the metrics used to quantify this.
  • Growth and retention: Growth ≠ promotions. Learning opportunities (stretch assignments, exposure, education) drive retention. AI literacy can become a growth lever when framed as structured stretch assignments.
  • Speed is a strategic lever, but it must have guardrails: Moving quickly is a competitive advantage, but design intentional “brakes” to avoid reckless execution. Speed is relative to the system’s constraints and capacity; go at the optimum speed the system can handle to avoid bottlenecks.
  • SDLC Metrics: Measurement frameworks like QuADS (Quality, Accuracy, Delivery, Satisfaction) emphasize connecting SDLC metrics to business outcomes. The aim is to avoid metric silos that over-weight one category; like Newton’s Third Law, pushing hard on one dimension produces an equal and opposite reaction elsewhere. Too much focus on speed metrics erodes quality; too much focus on quality metrics erodes velocity. (A minimal sketch of this balancing idea follows this list.)
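
To make that balancing idea concrete, here is a minimal Python sketch of pairing speed and quality signals so a gain in one can’t silently hide a regression in another. The metric names, thresholds, and mapping to the QuADS categories are illustrative assumptions on my part, not part of the framework itself.

```python
# Illustrative sketch only: metric names and the QuADS mapping are assumptions.
from dataclasses import dataclass

@dataclass
class SdlcSnapshot:
    lead_time_days: float    # Delivery: idea-to-production time
    escaped_defects: int     # Quality: bugs found after release
    eval_pass_rate: float    # Accuracy: share of AI evals passing (0..1)
    dev_satisfaction: float  # Satisfaction: survey score (0..1)

def balanced_check(before: SdlcSnapshot, after: SdlcSnapshot) -> list[str]:
    """Flag 'wins' in one dimension that come at the expense of another."""
    warnings = []
    if after.lead_time_days < before.lead_time_days and after.escaped_defects > before.escaped_defects:
        warnings.append("Delivery improved but Quality regressed: speed may be masking rework.")
    if after.escaped_defects < before.escaped_defects and after.lead_time_days > before.lead_time_days:
        warnings.append("Quality improved but Delivery slowed: guardrails may be too heavy.")
    if after.eval_pass_rate < before.eval_pass_rate:
        warnings.append("Accuracy regressed: investigate model or prompt changes.")
    if after.dev_satisfaction < before.dev_satisfaction:
        warnings.append("Satisfaction dropped: adoption may be forced, not fluent.")
    return warnings

# Example: lead time improves, but escaped defects double -> the trade-off is flagged.
q1 = SdlcSnapshot(lead_time_days=9.0, escaped_defects=4, eval_pass_rate=0.92, dev_satisfaction=0.74)
q2 = SdlcSnapshot(lead_time_days=6.5, escaped_defects=9, eval_pass_rate=0.92, dev_satisfaction=0.74)
for warning in balanced_check(q1, q2):
    print(warning)
```

The point is structural: a dashboard that only reports lead time would call the second quarter a win; pairing the dimensions surfaces the opposite reaction.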

Biggest Insights

  1. Believing the hype is costly: Experiment, find what works for your organization and systemize it.
  2. A lot of AI’s leverage is upstream: PRDs, user stories and journeys, test cases, mockups — not just in coding.
  3. Perception ≠ productivity: Incorporate outcome metrics along with surveys and usage metrics for holistic insights.
  4. Tailored adoption: AI learning should be adapted for different personas.
  5. AI = growth: Literacy should be framed as career development.

Risks

  • AI brittleness: Because AI models are probabilistic, model updates can break the experience in our product. Evals can catch this before rollout (a minimal sketch follows this list).
  • Over-indexing on speed: Without guardrails, it leads to bottlenecks and rework.
  • AI vendor volatility: Partnerships may fail; contracts should remain short-term.
  • Metrics overload: Metrics without actionability are noise.
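
On the brittleness risk above, here is a minimal sketch of what a regression eval could look like. `call_model`, the cases, and the threshold are all hypothetical placeholders, not any specific vendor’s API; the idea is to pin expected behavior with fixed cases and gate rollouts on the pass rate.

```python
# Minimal regression-eval sketch. All names here are hypothetical:
# `call_model` stands in for whatever model client the product uses,
# and CASES/PASS_THRESHOLD are illustrative.

CASES = [
    # (prompt, substring the response must contain)
    ("Reset a user's MFA token", "admin console"),
    ("Summarize the escalation in one sentence", "escalation"),
]

PASS_THRESHOLD = 0.9  # fail the suite if fewer than 90% of cases pass

def call_model(prompt: str) -> str:
    # Placeholder: swap in the real model client. Returns a canned string
    # here so the sketch runs end to end.
    return "Use the admin console to reset the token for this escalation."

def run_evals() -> float:
    passed = sum(
        1 for prompt, expected in CASES
        if expected.lower() in call_model(prompt).lower()
    )
    return passed / len(CASES)

if __name__ == "__main__":
    rate = run_evals()
    print(f"Eval pass rate: {rate:.0%}")
    # Gate the rollout: a model update that drops below threshold fails here.
    assert rate >= PASS_THRESHOLD, "Model update regressed; block the rollout."
```

In practice the canned client would be replaced by the product’s real model call, and the suite would run in CI on every model or prompt change.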

Growth Beyond the Ladder (Session by Jill Wetzler)

  • 70/20/10 Framework: The 3 E’s of Growth

    • 70% Experience (stretch assignments, rotations, projects)
    • 20% Exposure (mentorship, shadowing, cross-functional work)
    • 10% Education (courses, workshops, conferences)
  • Stretch Assignments as a Growth Engine
    Engineers grow most when asked to solve ambiguous, impactful problems. AI adoption (using AI for PRDs, ATDD, prototyping, and monitoring) is a perfect example of a high-value stretch assignment. Some examples:

    • Skills: Picking up a new skill (e.g. context engineering)
    • Complexity: Working on things with greater ambiguity / less definition
    • Business Impact: Being given explicit responsibility for business goals without defined solutions

  • Retention Impact

    • 94% of employees would stay longer if the company invested in their career.
    • 70% say learning opportunities increase their connection to the company.

Takeaway: Frame AI literacy not as compliance but as career growth, supporting both retention and recruiting.


The Bottom-Up Adoption Journey: How AI is Reshaping Our Roles, Org Workflows and Culture (Session by Randall Tombaugh)

Engineering Personas

  • Craftsman: motivated by mastery and rigor. Gains from AI in code quality, testing, and design.
  • Explorer: motivated by curiosity. Gains from AI in prototyping, discovery, and research synthesis.
  • Sprinter: motivated by momentum. Gains from AI in automation, PRD drafting, and iteration.
[Photo: Engineering Leadership Conference stage with the craftsperson slide. Caption: “Engineers do indeed need three hands sometimes.”]

Each persona reacts differently to AI: craftspeople resist delegation, explorers risk burnout, sprinters disengage if slowed.

Takeaway: AI growth programs must be persona-aware, not one-size-fits-all.


My Final Take

AI is promising and disruptive, but it’s not magic. We’re still early; measuring impact is immature, though now in the limelight. We get to learn from others’ early failures and successes.

We should treat it like any other strategic investment: build organizational fluency, incur debt when testing, repay debt before it compounds, and measure value rigorously against operational efficiency, customer outcomes and ROI. Over time, our growth will come not from chasing hype, but from developing the maturity to use AI where it matters.