AI PMs face challenges that go beyond standard product management. They must bridge technical complexity, business impact, and ethical responsibility, often in environments where outcomes are uncertain. Frameworks provide structure for decision-making, alignment, and storytelling.
Here are four of the most important frameworks for AI Product Managers: D-R-E-A-M, S-O-L-V-E, STAR, and SMART.
D-R-E-A-M (Define, Research, Execute, Analyze, Measure)
The D-R-E-A-M framework provides a structured approach for taking an AI initiative from concept to evaluation. It is especially valuable for scoping ambiguous AI projects, where success criteria may not be immediately apparent.
- Define
- Clarify the problem, target audience, and objectives.
- Avoid framing the problem as “We need AI” and instead define the business challenge.
- Example: At LinkedIn, rather than saying “We need a new AI recommendation engine,” the problem was defined as “How do we help members grow their networks more effectively?” This led to “People You May Know.”
- Research
- Gather data, market intelligence, and stakeholder inputs.
- Validate whether data is available, sufficient, and ethical for solving the problem.
- Example: Spotify analyzed listening behavior and music metadata to research how to personalize playlists beyond genres, ultimately leading to the development of Discover Weekly.
- Execute
- Build MVPs, run pilots, and launch initial experiments.
- Execution should focus on speed of learning, not perfection.
- Example: Duolingo executed a pilot for AI grammar corrections with a limited group of learners, testing usability before rolling out globally.
- Analyze
- Evaluate model performance (precision, recall, F1, AUC) and business impact (retention, churn reduction, engagement).
- Example: Google Ads continuously analyzes whether AI-powered Smart Bidding improves advertiser ROI compared to manual bidding.
- Measure
- Track ongoing KPIs to ensure models remain relevant and effective.
- Measurement is not a one-time event; it requires monitoring data drift, fairness, and business results over time.
- Example: Tesla measures disengagements per mile driven as a key metric to monitor Autopilot safety in real-world conditions.
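The model-quality metrics named in the Analyze step (precision, recall, F1, AUC) can be computed directly. The sketch below uses plain Python with hypothetical churn-model outputs; the labels, predictions, and scores are illustrative, not real product data.

```python
def confusion_counts(y_true, y_pred):
    """Count true positives, false positives, and false negatives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, fn

def precision_recall_f1(y_true, y_pred):
    tp, fp, fn = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def auc_score(y_true, y_score):
    """Rank-based (Mann-Whitney) AUC; ignores score ties for brevity."""
    ranked = sorted(zip(y_score, y_true))
    pos = sum(y_true)
    neg = len(y_true) - pos
    rank_sum = sum(i + 1 for i, (_, t) in enumerate(ranked) if t == 1)
    return (rank_sum - pos * (pos + 1) / 2) / (pos * neg)

# Hypothetical churn-model outputs (illustrative only):
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred  = [1, 0, 1, 0, 0, 1, 1, 0]
y_score = [0.9, 0.2, 0.8, 0.4, 0.3, 0.7, 0.85, 0.1]
```

In practice a PM would read these numbers from an evaluation dashboard rather than compute them by hand, but knowing what each metric counts makes the Analyze conversation with data scientists concrete.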
PM Application: Use D-R-E-A-M when kicking off AI projects or evaluating product proposals. It forces alignment between problem definition, data feasibility, execution speed, and long-term monitoring.
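The long-term monitoring that D-R-E-A-M ends with often relies on a drift statistic such as the Population Stability Index (PSI). This is a minimal sketch for a single numeric feature; the 0.1/0.25 thresholds are a common rule of thumb, not a universal standard, and the edge handling is simplified.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a
    current sample of one numeric feature.
    Rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 drifted."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]

    def frac(sample, i):
        left, right = edges[i], edges[i + 1]
        inside = sum(1 for x in sample
                     if left <= x < right or (i == bins - 1 and x == right))
        return max(inside / len(sample), 1e-6)  # clamp to avoid log(0)

    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))

# Hypothetical feature distributions: baseline vs. a shifted current sample.
baseline = [i / 100 for i in range(100)]
current = [0.5 + i / 200 for i in range(100)]
```

A scheduled job comparing each model input against its training-time distribution is one simple way to operationalize the Measure step.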
S-O-L-V-E (Situation, Opportunity, Leverage, Vision, Execution)
The S-O-L-V-E framework helps AI PMs communicate complex ideas to executives, clients, or cross-functional teams. AI projects are often misunderstood or overhyped; this framework provides a narrative arc that connects strategy to outcomes.
- Situation
- Describe the context or problem in a way that stakeholders understand.
- Example: “Churn rates are rising in our subscription app despite launching new features.”
- Opportunity
- Show how AI creates a unique advantage or path forward.
- Example: “AI-driven personalization can tailor experiences for each user, reducing churn and increasing lifetime value.”
- Leverage
- Highlight assets the organization already has (data, user base, platform integrations).
- Example: “We already capture behavioral data from 10 million daily active users, which can fuel personalization models.”
- Vision
- Paint a clear picture of the future if the AI initiative succeeds.
- Example: “Imagine every user opens the app and sees a personalized dashboard of recommendations designed for them—driving habit and loyalty.”
- Execution
- Outline the steps, timelines, and resources required to deliver.
- Example: “Phase 1 will launch personalized recommendations for the top 20% of users; Phase 2 will expand globally with continuous retraining pipelines.”
Real Example: Netflix's original recommendation system, "Cinematch," maps naturally onto S-O-L-V-E: the situation was user churn, the opportunity was personalization, the leverage was viewing data, the vision was a customized experience, and execution included building predictive models validated through regular A/B testing.
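The A/B testing mentioned in the execution phase is typically judged with a standard two-proportion z-test. The sketch below uses only the Python standard library; the conversion counts are hypothetical.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of two experiment arms."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 1,000 users per arm,
# 200 conversions in control (A) vs. 250 in the variant (B).
z, p = two_proportion_ztest(200, 1000, 250, 1000)
```

Here the variant's lift is statistically significant at the conventional 5% level, which is the kind of evidence an execution plan like Netflix's would gather before a wider rollout.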
PM Application: Use S-O-L-V-E in presentations, product roadmaps, or executive briefings. It aligns technical AI ideas with business strategy in a way that decision-makers can understand.
STAR (Situation, Task, Action, Result)
The STAR framework is often used in interviews, but it is equally powerful for structuring case studies, post-mortems, or communicating AI success stories internally.
- Situation
- Describe the context and challenge.
- Example: “Our fraud detection model was missing emerging fraud patterns, leading to $10M in losses annually.”
- Task
- Define what needs to be done.
- Example: “We needed to improve recall without frustrating legitimate customers.”
- Action
- Explain what the team did.
- Example: “We deployed a new anomaly detection model, added human-in-the-loop verification, and integrated results into the transaction system.”
- Result
- Share outcomes with measurable impact.
- Example: “Fraud losses decreased by 60%, while false positives dropped 20%, saving the company $8M annually.”
Real Example: At Grammarly, PMs can frame product improvements in STAR: the situation was user frustration with incorrect grammar suggestions, the task was to improve contextual accuracy, the action was fine-tuning models with new datasets, and the result was higher user satisfaction scores and increased premium subscriptions.
PM Application: Use STAR to structure retrospectives, success stories, and investor updates. It ensures clarity and impact by always tying actions back to measurable outcomes.
SMART (Specific, Measurable, Achievable, Relevant, Time-bound)
The SMART framework is a staple for setting effective goals, but it is especially crucial in AI projects, where scope can quickly expand. SMART goals keep teams accountable and aligned.
- Specific
- Goals must be precise, not vague.
- Example: “Improve Day-7 retention by 10% among new users through AI-driven personalization.”
- Measurable
- Define how success will be tracked with metrics.
- Example: “Measure retention using cohort analysis in the analytics dashboard.”
- Achievable
- Goals should be ambitious yet realistic, considering available resources and constraints.
- Example: “Launch MVP to 20% of users before scaling globally.”
- Relevant
- Ensure goals align with strategic objectives.
- Example: “Retention aligns with company OKRs for user growth and reduced churn.”
- Time-bound
- Set clear deadlines.
- Example: “Achieve target retention increase within Q2.”
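The cohort analysis behind the Measurable example can be sketched as a small function. User IDs and dates below are hypothetical; a real pipeline would read signup and activity events from an analytics store.

```python
from datetime import date, timedelta

def day_n_retention(signups, activity, n=7):
    """Fraction of a cohort active exactly n days after signup.
    signups: {user_id: signup_date}
    activity: set of (user_id, active_date) events."""
    if not signups:
        return 0.0
    retained = sum(
        1 for user, signed in signups.items()
        if (user, signed + timedelta(days=n)) in activity
    )
    return retained / len(signups)

# Hypothetical three-user cohort:
signups = {"u1": date(2024, 1, 1),
           "u2": date(2024, 1, 1),
           "u3": date(2024, 1, 2)}
activity = {("u1", date(2024, 1, 8)),   # active on day 7 -> retained
            ("u2", date(2024, 1, 5)),   # active on day 4 -> not day-7
            ("u3", date(2024, 1, 9))}   # active on day 7 -> retained
```

Tracking this number per weekly signup cohort gives exactly the "Improve Day-7 retention by 10%" metric a SMART goal demands.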
Real Examples:
- Duolingo set SMART goals for its AI-powered “hard mode” lessons: improve learning outcomes (measurable), roll out to 5% of users (achievable), tie to engagement OKRs (relevant), and deliver within one quarter (time-bound).
- Microsoft Teams set SMART goals for its AI transcription service: launch in English first (specific), achieve 90% accuracy (measurable), build within 6 months (time-bound), and ensure it supports remote work strategy (relevant).
PM Application: Use SMART when setting OKRs, project milestones, or evaluation criteria for AI launches. It disciplines teams to focus on measurable, time-limited outcomes rather than vague ambitions.
Key Takeaway
Frameworks are essential tools for AI PMs:
- D-R-E-A-M structures the lifecycle of AI initiatives from definition to measurement.
- S-O-L-V-E helps craft compelling narratives that align stakeholders.
- STAR provides clarity when documenting or communicating outcomes.
- SMART keeps goals grounded, measurable, and aligned with business strategy.
Together, these frameworks help AI PMs manage uncertainty, communicate complexity, and deliver impact.