PM × AI
Product Management in the Age of AI

In an agentic world, PM doesn't get replaced; it evolves

Agents enable product managers to expand scope, move through cycles faster, and explore opportunities that were previously out of reach, but we still have to solve the right problems.

Scope: Product Management
Focus: PM × AI Usage
Audience: PM & R&D Leadership
00 — Context

Where we are today

Our Feature Development Life Cycle follows five sequential stages with formal handoffs between Product Management, UX Design, and Engineering. Each transition is a gate — and every gate is a potential delay, context loss, or misalignment.

FDLC — Current Stages
01
Explore
Problem discovery, market research, customer interviews, opportunity sizing
PM — leads · UX — research
02
Define
Requirements, specs, acceptance criteria, scope negotiation, technical feasibility
PM — leads · Eng — feasibility
03
Design
Wireframes, prototypes, interaction design, design system alignment, usability testing
UX — leads · PM — review
04
Implement
Sprint planning, development, code review, QA, integration testing
Eng — leads · UX — validation
05
Release
Deployment, feature flags, GTM coordination, documentation, customer comms
Eng — leads · PM — validation
Product Management
UX Design
Engineering
Important Context

We're not waterfall — but the stages are still highly structured

We've adopted agile practices within each stage, but the FDLC itself remains a structured, gated progression. This is intentional — it mitigates risk, reduces waste, and creates clear accountability. The question isn't whether to remove structure. It's how AI can operate within and across these stages to compress cycle times, reduce context loss, and free PMs to focus on higher-value judgment work.

01 — Problem Spaces

Where PMs feel the friction — even without AI

Before we talk about what AI can do, we need to be honest about where the PM role already has structural friction. These are the pain points that eat time, drain energy, and keep PMs from their highest-value work.

Where PMs often spend their time
  • Chasing status updates across teams and tools
  • Reformatting the same information for different audiences
  • Sitting in implementation standups as a "just in case"
  • Manually triaging and deduplicating backlogs
  • Writing boilerplate spec sections from scratch every time
  • Acting as a human router between Design and Engineering
  • Producing slide decks and readbacks instead of thinking
Where PMs should be spending their time
  • Deep customer discovery and problem validation
  • Strategic framing — which bets to make and why
  • Cross-product and platform-level opportunity identification
  • Sharpening acceptance criteria and definition of done
  • Building conviction with stakeholders on direction
  • Evaluating trade-offs with real data, not gut feel
  • Coaching and developing junior PMs
Structural Friction Points
🧭

Context Fragmentation

Product context lives across Aha, Slack, Docs, Miro, email. There's no single view of what's been decided, why, and what's still open — for humans or agents.

Data Architecture
🔁

Repetitive Artifact Production

PMs spend disproportionate time producing artifacts — specs, decks, updates, roadmap views — that follow predictable patterns but still require manual assembly every time.

Time Drain
🚦

Implementation Support Gravity

PMs get pulled into implementation details — standups, bug triage, scope clarifications — because the original intent degrades through handoffs. They become reactive instead of proactive.

Role Drift
🔍

Signal Buried in Noise

Customer feedback, support tickets, competitive intel, and usage data all exist — but synthesizing them into actionable insight is manual, slow, and often skipped under delivery pressure.

Discovery Gap
📊

Measurement Gaps

PM productivity is hard to measure meaningfully. The visible outputs — specs, tickets, meetings attended — don't correlate well with the actual value: quality of decisions and outcomes achieved.

Metrics & KPIs
🤝

Coordination Overhead

Cross-team dependencies, stakeholder alignment, and release coordination consume PM bandwidth. Much of this is routing and tracking work, not judgment work.

Process Cost
02 — Enhancement Map

How AI amplifies what PMs already do

The opportunity isn't to replace PM judgment — it's to compress the time between "I need to understand this" and "I have enough context to decide."

Discovery & Research
Today (Manual): Manually reviewing support tickets, customer calls, and competitive intel. Days to synthesize patterns.
Enhanced (AI-Assisted): AI surfaces signal across channels — clusters themes from Salesforce cases, call transcripts, and Ideas. PM focuses on which patterns matter, not finding them.
Feature Writing
Today (Manual): Blank page → PM writes from scratch. Context gathered from memory, past specs, Slack threads.
Enhanced (AI-Assisted): AI generates structured first drafts from intent + context pack. PM shapes intent, challenges assumptions, sharpens acceptance criteria.
Prototyping
Today (Manual): PM writes requirements → waits for UX wireframes → reviews → iterates. Multi-day cycle per round.
Enhanced (AI-Assisted): PM describes intent → AI generates working code prototype in minutes. Figma becomes a refinement tool, not the starting point.
Stakeholder Alignment
Today (Manual): PM manually creates roadmap slides, status updates, and executive summaries. Repetitive reformatting of the same data.
Enhanced (AI-Assisted): AI generates audience-specific views from a single source of truth. PM spends time on narrative and strategic framing, not reformatting.
Backlog Management
Today (Manual): Manual triage. Items go stale. Duplicate requests hide in different formats across Aha and ADO.
Enhanced (AI-Assisted): AI deduplicates, clusters, and surfaces decay. PM makes prioritization decisions on a clean, current backlog.
Cross-team Coordination
Today (Manual): Meetings, Slack threads, email chains to track dependencies. PM as human router.
Enhanced (AI-Assisted): Agent monitors dependencies, flags blockers, generates status. PM intervenes on exceptions, not routine updates.
Deep Dive — Discovery & Research

AI as a signal filter, not a signal generator

Discovery has always been the PM function with the highest leverage and the least time allocated to it. AI changes the economics fundamentally:

Continuous Signal Monitoring

AI watches support tickets, NPS responses, Salesforce cases, and community forums in real time — surfacing emerging patterns before they become escalations.

Example Agent Prompt: Review all Tier 1 and Tier 2 support tickets from the last 14 days across [product area]. Cluster them by theme, identify any emerging patterns that weren't present in the prior 14-day window, and flag any theme that has grown by more than 20% in volume. Exclude known issues already on the backlog. Output a summary with theme name, ticket count, trend direction, and 2–3 representative ticket excerpts per theme.
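The thresholding in that prompt is mechanical enough to sketch. Below is a minimal, illustrative Python version of the "new theme or >20% growth" rule, assuming theme counts per window already exist; the theme names, and the upstream clustering step itself, are hypothetical, not part of any real ticketing integration.

```python
# Illustrative sketch: given theme counts for two consecutive 14-day
# windows, surface new themes and any theme that grew by more than 20%.
def flag_emerging_themes(prior: dict[str, int], current: dict[str, int],
                         growth_threshold: float = 0.20) -> list[dict]:
    """Return themes that are new or grew past the threshold, largest first."""
    flagged = []
    for theme, count in current.items():
        before = prior.get(theme, 0)
        if before == 0:
            # Theme absent from the prior window: always worth surfacing.
            flagged.append({"theme": theme, "count": count, "trend": "new"})
        elif (count - before) / before > growth_threshold:
            flagged.append({"theme": theme, "count": count, "trend": "rising"})
    return sorted(flagged, key=lambda t: t["count"], reverse=True)

# Hypothetical counts for two windows.
prior = {"export timeouts": 12, "login errors": 30, "slow dashboards": 10}
current = {"export timeouts": 18, "login errors": 31, "slow dashboards": 11,
           "webhook failures": 7}
for item in flag_emerging_themes(prior, current):
    print(item["theme"], "-", item["trend"], f"({item['count']} tickets)")
```

The point of the sketch: the agent's judgment lives in the clustering and excerpt selection; the trend rule itself is a few lines a PM can audit and tune.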

Competitive Intelligence at Scale

Track competitor releases, pricing changes, and positioning shifts across dozens of players. AI compresses what used to be a quarterly research project into a living feed.

Example Agent Prompt: Monitor the changelogs, blogs, and press releases of [Competitor A, B, C] for the past 30 days. Identify new feature launches, pricing changes, or positioning shifts. For each, assess: does this affect our competitive position in [market segment]? Rate impact as high/medium/low and suggest whether any warrant a response in our roadmap. Format as a brief per competitor.

Interview & Call Synthesis

Auto-transcribe and theme customer calls. Surface contradictions between what customers say and what usage data shows. PM reads the insight, not the transcript.

Example Agent Prompt: Here are transcripts from 8 customer discovery calls conducted this sprint on [feature area]. Extract: (1) the top 5 recurring pain points with direct quotes, (2) any requests that contradict each other across customers, (3) any gap between what customers say they want and what their actual usage data in [analytics tool] suggests they need. Highlight the 2–3 strongest signals for product direction.

Adjacent Opportunity Detection

AI can cross-reference feature requests, usage patterns, and market trends to identify opportunities PMs wouldn't have time to spot manually across a large product surface area.

Example Agent Prompt: Analyze our Aha Ideas backlog alongside the last quarter's usage data for [product area]. Identify clusters of feature requests that, taken individually, seem small but together suggest an unmet workflow or use case we haven't explicitly targeted. Cross-reference with market trend data from [source]. Recommend the top 3 adjacent opportunities worth exploring, with estimated customer reach and effort level.
The Key Shift

From producing artifacts to shaping decisions

The PM role moves up the value chain. AI handles the scaffolding — first drafts, data synthesis, status tracking, reformatting. PMs focus on judgment: framing problems, making trade-offs, building conviction, and aligning humans around outcomes.

03 — Structuring the Work

A framework for PM work in an agentic FDLC

The goal isn't to bolt AI onto the existing process. It's to redesign PM workflows around three operating layers where human judgment and AI capability intersect.

01
PM Owns

Strategic Intent Layer

PMs own the "why" and "what." This layer is irreducibly human — it requires market judgment, customer empathy, and business context that AI can inform but not replace.

  • Outcome definition
  • Bet framing & prioritization
  • Roadmap sequencing
  • Stakeholder alignment
  • Customer problem framing
Agent Prompts — Strategic Intent

Bet Framing & Sizing

Prompt: I'm evaluating whether to invest in [capability]. Pull together: (1) the relevant customer requests from our Ideas backlog with vote counts, (2) competitive coverage of this capability across [competitors], (3) estimated TAM impact based on our current segment mix. Frame this as a bet with a clear thesis, supporting evidence, key risks, and a recommended confidence level.

Roadmap Narrative Generation

Prompt: Here are the features we've committed to for next quarter across [product areas]. Generate a roadmap narrative for our executive stakeholders that ties each initiative back to a strategic objective. Group them by theme rather than team. Highlight dependencies and call out the 2–3 items with the highest customer impact. Tone: confident, concise, forward-looking.
Intent feeds definition
02
Collaborate

Spec & Definition Layer

The collaborative zone. AI generates, PMs refine. The spec becomes the executable contract between human intent and machine execution.

  • AI-assisted spec drafting
  • Context pack assembly
  • Code-first prototyping
  • Acceptance criteria sharpening
  • Cross-discipline alignment
Agent Prompts — Spec & Definition

Spec First Draft from Intent

Prompt: I need a feature spec for [capability]. Here's my intent: [2–3 sentence problem + desired outcome]. Use our spec template. Generate sections for: purpose, users & context, experience overview, components, states (default, empty, loading, error, success), interactions, and acceptance criteria. Flag any sections where you need more input from me before this is review-ready. Reference our design system at [location].

Acceptance Criteria Stress Test

Prompt: Here's the spec for [feature]. Review the acceptance criteria section. For each criterion: (1) is it testable as written? (2) are there edge cases not covered? (3) are there implicit assumptions that should be made explicit? Suggest revised acceptance criteria that an engineer could implement against without needing to ask clarifying questions. Be specific about boundary conditions.
Specs drive execution
03
AI Leads

Execution & Feedback Layer

AI handles the routine; PMs handle the exceptions. Monitoring, status, and coordination shift from PM-as-router to agent-managed with PM oversight.

  • Agent-managed status tracking
  • Automated backlog hygiene
  • AI-generated updates
  • Exception-based PM intervention
  • Continuous feedback loops
Agent Prompts — Execution & Feedback

Backlog Health Audit

Prompt: Audit the backlog for [product area]. Identify: (1) items that haven't been updated in 90+ days, (2) duplicate or near-duplicate requests, (3) items with no acceptance criteria or unclear scope, (4) items that reference features or APIs that have since changed. For each category, recommend: close, merge, update, or escalate. Group the results so I can act on them in a single triage session.
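Two of these checks, staleness and near-duplicate titles, are simple enough to sketch with the standard library. This is an illustrative sketch only: the field names ("title", "updated") are assumptions rather than a real Aha or ADO schema, and a production agent would likely use semantic similarity instead of raw string matching.

```python
from datetime import date, timedelta
from difflib import SequenceMatcher
from itertools import combinations

STALE_AFTER = timedelta(days=90)

def stale_items(backlog: list[dict], today: date) -> list[dict]:
    """Items not updated in 90+ days."""
    return [i for i in backlog if today - i["updated"] >= STALE_AFTER]

def near_duplicates(backlog: list[dict], threshold: float = 0.85) -> list[tuple]:
    """Pairs of items whose titles are suspiciously similar."""
    pairs = []
    for a, b in combinations(backlog, 2):
        ratio = SequenceMatcher(None, a["title"].lower(),
                                b["title"].lower()).ratio()
        if ratio >= threshold:
            pairs.append((a["title"], b["title"], round(ratio, 2)))
    return pairs

# Hypothetical backlog rows.
backlog = [
    {"title": "Add CSV export to reports", "updated": date(2025, 1, 5)},
    {"title": "Add CSV export for reports", "updated": date(2025, 6, 1)},
    {"title": "Dark mode support", "updated": date(2025, 5, 20)},
]
print(stale_items(backlog, today=date(2025, 6, 30)))
print(near_duplicates(backlog))
```

Either way, the output is what matters: a pre-grouped list the PM can burn down in one triage session instead of rediscovering the same duplicates by hand.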

Stakeholder Status Generation

Prompt: Generate this week's status update for [product area] using data from our project tracker. Format for two audiences: (1) Engineering leadership — focus on velocity, blockers, and technical risks. (2) Executive stakeholders — focus on milestone progress against committed dates, customer impact, and decisions needed. Keep each version under 200 words. Flag anything that's off-track with a clear "needs attention" marker.
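The underlying pattern here is "one source of truth, many views": the same status record rendered with different emphasis per audience. A minimal sketch, with an entirely hypothetical record structure, makes the idea concrete:

```python
# Illustrative sketch: render one status record for two audiences.
# The field names ("velocity", "blockers", ...) are assumptions, not a
# real tracker schema.
def render_status(status: dict, audience: str) -> str:
    lines = [f"Status — {status['area']} (week of {status['week']})"]
    if audience == "engineering":
        lines.append(f"Velocity: {status['velocity']} pts")
        lines.append("Blockers: " + ("; ".join(status["blockers"]) or "none"))
    elif audience == "executive":
        lines.append(f"Milestone: {status['milestone']} — {status['milestone_state']}")
        lines.append("Decisions needed: " + ("; ".join(status["decisions"]) or "none"))
    if status["off_track"]:
        lines.append("NEEDS ATTENTION")
    return "\n".join(lines)

status = {"area": "Reporting", "week": "2025-06-23", "velocity": 34,
          "blockers": ["API rate limits"], "milestone": "GA",
          "milestone_state": "on track", "decisions": [], "off_track": False}
print(render_status(status, "engineering"))
print(render_status(status, "executive"))
```

The design choice worth noting: both views are projections of the same record, so they can never drift out of sync the way hand-maintained decks and emails do.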
The Feedback Loop

Execution insights feed back into strategy

The bottom layer isn't a dead end. Agent-managed feedback loops — usage data, delivery metrics, customer signals — flow back up into the Strategic Intent layer, informing the next cycle of bets and prioritization. AI compresses this loop from quarterly reviews to continuous signal.

Beyond the FDLC

PM work doesn't stop at the feature lifecycle

A significant portion of a PM's week is spent on work that sits outside any individual feature's FDLC. These tasks are equally ripe for AI enhancement.

📣

Stakeholder Communication

Executive updates, board prep, cross-functional readbacks, and GTM coordination. AI generates audience-tailored views from a single narrative source.

Example — Quarterly Business Review Prep

Scenario: A PM needs to prepare QBR materials for three audiences — engineering leadership (delivery focus), executive team (outcome focus), and sales (customer impact focus).

  • PM writes a single narrative brief: key outcomes, risks, and next quarter priorities
  • AI generates three audience-specific decks from the brief — different depth, different emphasis, same underlying data
  • PM reviews and refines each in minutes instead of building three separate presentations from scratch
  • Time saved: hours per review cycle, redirected into preparing for the actual conversation
🧑‍🏫

Team Development & Coaching

1:1 prep, career conversations, skill assessments. AI surfaces patterns across a team's work to inform coaching — what's working, where the gaps are, who's ready for more.

Example — 1:1 Preparation for a Senior PM

Scenario: A PM lead manages six PMs and runs weekly 1:1s. Prep currently takes 20–30 minutes per person — reviewing Aha, Slack, past notes.

  • AI summarizes each PM's week: features moved, blockers raised, stakeholder interactions, spec quality trends
  • Highlights patterns over time — e.g., "This PM's specs consistently lack edge case coverage" or "Shipping velocity is up but customer feedback scores are flat"
  • Suggests coaching prompts based on the patterns, aligned to their growth plan
  • PM lead walks into every 1:1 with context and intent — not scrambling for what happened this week
📈

Product Analytics & Reporting

Usage analysis, feature adoption tracking, funnel reviews. AI automates the data pull and initial analysis, so PMs arrive at the insight faster.

Example — Post-Launch Feature Adoption Review

Scenario: A feature launched two weeks ago. PM needs to assess adoption, identify friction, and decide whether to invest in iteration or move on.

  • AI pulls adoption data across cohorts — activation rate, time-to-value, drop-off points, support ticket correlation
  • Cross-references usage patterns with the original success criteria defined in the spec
  • Generates a gap analysis: "Activation is 62% of target. Drop-off concentrates at step 3. Users who complete onboarding retain at 2× the baseline."
  • PM gets a decision-ready brief instead of spending a day in dashboards assembling the same picture manually
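The gap-analysis arithmetic in that example is simple to make explicit. A minimal sketch, with illustrative numbers and hypothetical metric names, comparing observed adoption against the success criteria defined in the spec:

```python
# Illustrative sketch: compare observed metrics against spec targets and
# emit one finding per metric. All numbers and names are hypothetical.
def gap_analysis(observed: dict, targets: dict) -> list[str]:
    findings = []
    for metric, target in targets.items():
        actual = observed[metric]
        if actual >= target:
            verdict = "on target"
        else:
            verdict = f"{round(100 * actual / target)}% of target"
        findings.append(f"{metric}: {actual} vs target {target} ({verdict})")
    return findings

observed = {"activation_rate": 0.31, "day30_retention": 0.44}
targets = {"activation_rate": 0.50, "day30_retention": 0.40}
for line in gap_analysis(observed, targets):
    print(line)
```

This only works if the spec recorded explicit success criteria in the first place, which is one more reason the Spec & Definition layer treats acceptance criteria as a first-class artifact.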
🗺️

Strategic Planning & Bets

FY planning, portfolio reviews, market landscape analysis. AI synthesizes competitive data, trend research, and internal performance into structured strategy inputs.

Example — FY27 Bets Exercise Preparation

Scenario: PM leadership needs to prepare a bets exercise to shape the FY27 roadmap across multiple product areas and align with engineering leads.

  • AI aggregates inputs: last cycle's bet outcomes, customer retention drivers, competitive moves, sales win/loss themes, and usage trend data
  • Generates a structured brief per product area — what's working, what's stalling, where the market is moving
  • Surfaces cross-product opportunities that no individual PM would see from their area alone
  • PM leadership arrives at the strategy session with pre-work distributed, questions pre-seeded, and time spent on discussion — not readbacks
💬

Customer & Partner Engagement

Call prep, follow-up synthesis, feature request tracking across accounts. AI pre-briefs PMs before calls and auto-captures commitments and themes afterward.

Example — Enterprise Customer Advisory Board Prep

Scenario: A PM is meeting with a strategic customer. They need to understand the account's full history — feature requests, support issues, usage trends, and prior commitments — across multiple product areas.

  • AI compiles a customer brief from Salesforce, Aha Ideas, support history, and previous call notes
  • Highlights open commitments, unresolved requests, and usage patterns that suggest unspoken needs
  • After the call, AI transcribes, extracts action items, and routes them to the right teams and backlogs
  • PM focuses on the relationship and the conversation — not the 45 minutes of prep and 30 minutes of follow-up notes
The Shift — Before & After
PM Activity | Today | With AI Enhancement
Feature authoring | Write from scratch every time, gathering context from memory and Slack | AI drafts from intent + structured context; PM shapes, challenges, sharpens
Prototyping | Requires UX handoff and multi-day Figma cycles | Code prototypes generated directly from specs in minutes
Status & updates | Manually crafted per audience — same data, different formats | One source of truth generates audience-specific views automatically
Backlog management | Manual triage, stale items, hidden duplicates | Agent-managed hygiene; PM focuses on prioritization decisions
Cross-team coordination | PM acts as human router — meetings, Slack, email chains | Dependency agents flag exceptions; PM intervenes only when needed
Discovery & research | Manual synthesis of tickets, calls, and competitive intel — days per cycle | AI monitors signals continuously; PM evaluates patterns, not raw data
Quality measurement | Measured by output volume — specs written, tickets closed | Measured by decision quality — bet accuracy, outcomes achieved
04 — Adoption Roadmap

Phased rollout, not big-bang transformation

Each phase delivers standalone value while building toward the full vision. Start with the highest-friction, lowest-risk opportunities.

Phase 1 — Foundation ● We are here
Standardize & Baseline
We're currently in this phase — establishing the foundational decisions that everything else depends on. Before we can accelerate with AI, we need alignment on a set of open structural questions:
Open Question: Where does the spec live? We need a single canonical home for feature specifications that both humans and agents can read, write, and reference — Aha, ADO, a structured doc store, or something else.
Open Question: What agentic tooling do we standardize on? Teams are experimenting independently today. We need a deliberate decision on which AI tools become the supported stack across PM, UX, and Eng — rather than accumulating incompatible point solutions.
Open Question: Do our current feature management tools (Aha, ADO) still fit the new model? As agents become participants in the FDLC, the integration and data structure requirements change. We should validate that our toolchain supports the operating model we're moving toward — or identify where it doesn't.
Phase 2 — Pilot
Validate with a Small Set of Teams
With Phase 1 decisions in hand, we'll work with a deliberately small group of volunteer teams — likely 2–3 — to validate the frameworks in practice before scaling. The goal isn't just to move fast; it's to learn what actually works in our context and our ways of working. Teams will pilot AI-assisted spec drafting, discovery signal monitoring, and code-first prototyping, then feed findings back into the framework before broader rollout.
Phase 3 — Scale
Expand & Measure
Roll successful patterns across all PM teams. Deploy agent-managed coordination for cross-team dependencies and stakeholder reporting. Introduce PM effectiveness metrics that reflect the new operating model.
Phase 4 — Transform
Redefine the Operating Model
PM, Design, and Eng converge around shared spec artifacts. The FDLC compresses significantly for standard features. PMs operate primarily at the strategic and judgment layers, with AI handling production, coordination, and continuous signal monitoring.
05 — Operating Principles

Guardrails for the transition

These aren't aspirational values — they're decision filters. When in doubt about how to integrate AI into PM workflows, run it through these.

🧠

Judgment Over Output

Never measure a PM by what AI could have generated. Measure by the quality of decisions: problem framing, trade-off navigation, and outcome definition.

📐

Structure Before Speed

AI on top of chaos creates faster chaos. Standardize context, spec formats, and quality gates before accelerating with agents.

🔍

Human Review Is Not Optional

Every AI-generated artifact that enters the FDLC pipeline must have a named human owner who has reviewed and accepted accountability for it.

🧪

Pilot First, Scale Second

Test with willing teams. Measure impact. Adjust. Then scale. Resist the pressure to roll out AI tooling to everyone simultaneously without guardrails.

🔗

Reduce Tools, Strengthen Seams

Fewer tools with better integration beats more tools with brittle sync. Every handoff is a risk. Consolidate where the cost of switching is lower than the cost of maintaining.

The Opportunity

We're not adding AI to product management — we're rebuilding how product work gets done.

Teams that choose to adapt to new paradigms don't just ship faster. They attract better talent, make sharper bets, and compound their advantage every cycle. Will the change be disruptive? Yes. Is that the point? Also yes. We are on the edge of a genuinely new way of working, and now is the time to lean in.