Data Science & Analytics Topics
Statistical analysis, data analytics, big data technologies, and data visualization. Covers statistical methods, exploratory analysis, and data storytelling.
Data Analysis and Insight Generation
Ability to convert raw data into clear, evidence-based business insights and prioritized recommendations. Candidates should demonstrate end-to-end analytical thinking, including data cleaning and validation, exploratory analysis, summary statistics, distributions, aggregations, pivot tables, time-series and trend analysis, segmentation and cohort analysis, anomaly detection, and interpretation of relationships between metrics. This topic covers hypothesis generation and validation, basic statistical testing, controlled experiments and split testing, sensitivity and robustness checks, and sense-checking results against domain knowledge. It emphasizes connecting metrics to business outcomes, defining success criteria and measurement plans, synthesizing quantitative and qualitative evidence, and prioritizing recommendations based on impact, feasibility, risk, and dependencies. Practical communication skills are assessed, including charting, dashboards, crafting concise narratives, and tailoring findings to non-technical and technical stakeholders, along with documenting next steps, experiments, and how outcomes will be measured.
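One of the techniques listed above, anomaly detection in a metric time series, can be sketched with a trailing-window z-score check. This is a minimal illustration in plain Python; the `daily_orders` data and the window/threshold values are hypothetical, and production use would typically account for seasonality.

```python
from statistics import mean, stdev

def zscore_anomalies(series, window=7, threshold=3.0):
    """Flag indices whose value deviates from the trailing-window mean
    by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(series)):
        ref = series[i - window:i]           # trailing reference window
        mu, sigma = mean(ref), stdev(ref)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical daily order counts with one obvious spike at index 10
daily_orders = [100, 102, 98, 101, 99, 103, 97, 100, 102, 99, 250, 101, 98]
print(zscore_anomalies(daily_orders))  # -> [10]
```

A trailing window (rather than the whole series) keeps the baseline local, so gradual trends are not flagged while sudden jumps are.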
Data and Business Outcomes
This topic focuses on converting data analysis and insights into actionable business decisions and measurable outcomes. Candidates should demonstrate the ability to translate trends into business implications, choose appropriate key performance indicators, design and interpret experiments, perform cohort or funnel analysis, reason about causality and data quality, and build dashboards or reports that inform stakeholders. Emphasis should be on storytelling with data, framing recommendations in terms of business levers such as revenue, retention, acquisition cost, and operational efficiency, and explaining instrumentation and measurement approaches that make impact measurable.
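Funnel analysis, mentioned above, reduces to computing step-to-step and overall conversion rates between ordered stages. A minimal sketch with hypothetical stage counts:

```python
# Hypothetical funnel stage counts for one week of sessions
funnel = [
    ("visited",        20000),
    ("viewed_product",  8000),
    ("added_to_cart",   2000),
    ("purchased",        500),
]

def funnel_report(stages):
    """Compute step-to-step and overall conversion rates for each stage."""
    top = stages[0][1]
    prev = top
    report = []
    for name, count in stages:
        report.append({
            "stage": name,
            "count": count,
            "step_conversion": count / prev,    # vs previous stage
            "overall_conversion": count / top,  # vs funnel entry
        })
        prev = count
    return report

for row in funnel_report(funnel):
    print(f"{row['stage']:<16}{row['count']:>7}  "
          f"step {row['step_conversion']:.1%}  overall {row['overall_conversion']:.1%}")
```

The step conversion pinpoints which stage leaks the most users; the overall conversion ties the funnel back to the top-line business outcome.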
Data Driven Recommendations and Impact
Covers the end-to-end practice of using quantitative and qualitative evidence to identify opportunities, form actionable recommendations, and measure business impact. Topics include problem framing, identifying and instrumenting relevant metrics and key performance indicators, measurement design and diagnostics, experiment design such as A/B tests and pilots, and basic causal inference considerations, including distinguishing correlation from causation and handling limited or noisy data. Candidates should be able to translate analysis into clear recommendations by quantifying expected impacts and costs, stating key assumptions, presenting trade-offs between alternatives, defining success criteria and timelines, and proposing decision rules and go/no-go criteria. This also covers risk identification and mitigation plans, prioritization frameworks that weigh impact, effort, and strategic alignment, building dashboards and visualizations to surface signals across HR, sales, operations, and product, communicating concise executive-level recommendations with data-backed rationale, and designing follow-up monitoring to measure adoption and downstream outcomes and iterate on the solution.
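The A/B testing mentioned above is commonly evaluated with a two-proportion z-test on conversion rates. A minimal stdlib sketch, with hypothetical conversion counts:

```python
from math import sqrt, erf

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates,
    using the pooled proportion for the standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 200/4000 control vs 260/4000 variant conversions
z, p = two_proportion_ztest(conv_a=200, n_a=4000, conv_b=260, n_b=4000)
print(f"lift: {260/4000 - 200/4000:.2%}, z = {z:.2f}, p = {p:.4f}")
```

A statistically significant lift is still only half the story: candidates are expected to pair the p-value with the effect size, cost, and a go/no-go decision rule agreed before the experiment runs.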
SQL for Data Analysis
Using SQL as a tool for data analysis and reporting. Focuses on writing queries to extract metrics, perform aggregations, join disparate data sources, use subqueries and window functions for trends and rankings, and prepare data for dashboards and reports. Includes best practices for reproducible analytical queries, handling time series and date arithmetic, basic query optimization considerations for analytical workloads, and when to use SQL versus built-in reporting tools in analytics platforms.
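The join-plus-aggregation pattern described above can be demonstrated end to end with Python's stdlib `sqlite3`. The schema and rows are hypothetical; the query computes monthly revenue per region using date truncation, a join, and a GROUP BY:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id INTEGER, customer_id INTEGER,
                     order_date TEXT, amount REAL);
CREATE TABLE customers (customer_id INTEGER, region TEXT);
INSERT INTO customers VALUES (1, 'EMEA'), (2, 'AMER'), (3, 'EMEA');
INSERT INTO orders VALUES
  (101, 1, '2024-01-05', 120.0),
  (102, 2, '2024-01-07',  80.0),
  (103, 1, '2024-02-02', 200.0),
  (104, 3, '2024-02-15',  50.0);
""")

# Monthly revenue per region: date truncation, a join, and an aggregation
rows = conn.execute("""
    SELECT strftime('%Y-%m', o.order_date) AS month,
           c.region,
           SUM(o.amount)                   AS revenue
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    GROUP BY month, c.region
    ORDER BY month, c.region
""").fetchall()

for month, region, revenue in rows:
    print(month, region, revenue)
```

The same shape of query ports to most analytical warehouses; only the date-truncation function (`strftime` here, `DATE_TRUNC` in many warehouses) tends to differ.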
Analysis to Recommendation and Decision Framing
Ability to move from analysis to a concise, justified recommendation and a pragmatic plan for decision and implementation. Candidates should lead with a clear recommendation or conditional decision, support it with evidence and trade-offs, quantify expected business impact, estimate effort and time horizon, and state assumptions and limitations. The skill set includes proposing prioritized action plans and alternative options, anticipating objections, defining monitoring and rollback strategies, translating technical remediation or risk into business terms and measurable success metrics, and tailoring recommendations to stakeholder needs and constraints.
Data Storytelling and Insight Communication
Skills for converting quantitative and qualitative analysis into a clear, persuasive narrative that guides stakeholders from findings to action. This includes leading with the headline insight, defining the business question, selecting the most relevant metrics and visual evidence, and structuring a concise story that explains what happened, why it happened, and what the recommended next steps are. Candidates should demonstrate tailoring of language and technical depth for diverse audiences, from engineers to product managers to executives, summarizing trade-offs and uncertainty in plain language, distinguishing correlation from causation, proposing follow-up experiments or investigations, and producing concise executive summaries and status reports with an appropriate cadence. Interviewers evaluate the ability to persuade and align cross-functional partners, answer questions about data validity and methodology, synthesize qualitative signals with quantitative results, and adapt presentation format and level of detail to the decision maker.
Business Impact Measurement and Metrics
Selecting, measuring, and interpreting the business metrics and outcomes that demonstrate value and guide decisions. Topics include high-level performance indicators such as revenue decompositions, lifetime value, churn and retention, average revenue per user, unit economics, and cost per transaction, as well as operational indicators like throughput, quality, and system reliability. Candidates should be able to choose leading versus lagging indicators for a given question, map operational KPIs to business outcomes, build hypotheses about drivers, recommend measurement changes, and define evaluation windows. Measurement and attribution techniques covered include establishing baselines, experimental and quasi-experimental designs such as A/B tests, control groups, difference-in-differences and regression adjustments, sample size reasoning, and approaches to isolate confounding factors. Also included are quick back-of-the-envelope estimation techniques for order-of-magnitude impact, converting technical metrics into business consequences, building dashboards and health metrics to monitor programs, communicating numeric results with confidence bounds, and turning measurement into clear stakeholder-facing narratives and recommendations.
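The difference-in-differences design named above is, at its simplest, arithmetic: the pre/post change in the treated group minus the pre/post change in a comparison group. A minimal sketch with hypothetical retention numbers:

```python
# Hypothetical weekly retention rates (percentage points) before and after
# a feature launch, for a treated market and an untreated comparison market
treated_before, treated_after = 62.0, 68.5
control_before, control_after = 60.0, 61.5

# Difference-in-differences: treated change minus control change,
# which nets out the market-wide trend the control group also experienced
naive_lift = treated_after - treated_before
did = naive_lift - (control_after - control_before)

print(f"Naive pre/post lift:       {naive_lift:+.1f} pts")
print(f"DiD estimate (net of trend): {did:+.1f} pts")
```

Here the naive pre/post comparison overstates the effect (+6.5 pts) because retention was rising everywhere; subtracting the control group's +1.5 pts leaves a +5.0 pt estimate attributable to the launch, under the usual parallel-trends assumption.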
Metrics Selection and Dashboard Storytelling
Focuses on selecting metrics and designing dashboards and reports that directly support stakeholder decision making. Candidates should be able to identify distinct audiences and the specific decisions each audience must make, choose actionable metrics rather than vanity metrics, and balance leading indicators with lagging indicators as well as strategic metrics with operational metrics. This topic covers defining key performance indicators and targets and justifying each metric by the decision it enables, setting data freshness requirements and update cadence, and ensuring instrumentation and data quality to make metrics reliable. It includes dashboard architecture and visual narrative design, such as layering from high-level summaries to detailed drill-downs, tailoring views for executives, managers, and operational teams, selecting appropriate visualizations and annotations to guide interpretation, and enabling root cause analysis. Reporting practices are covered, including formatting, distribution channels, and alerting. Governance and metric definition topics include creating a single source of truth, assigning ownership, documenting definitions, and change control. Candidates must also recognize metric interactions and common pitfalls that can make metrics misleading, such as aggregation bias, sampling issues, correlation versus causation, and perverse incentives, and propose mitigations. Interview questions typically ask candidates to design metric sets and dashboards for hypothetical scenarios, explain why metrics were chosen based on the decisions they support, and describe cadence, distribution, drill-down, and governance approaches.
Design and Product Analytics
Using quantitative metrics to inform product and design decisions. Covers key user engagement metrics such as conversion rates, task completion, retention, and feature adoption, and how to instrument and interpret these signals using analytics platforms and product dashboards. Explains how quantitative data complements qualitative research, how to identify design problems from metrics, design experiments and metrics for validation, and how to translate findings into design priorities and success criteria.
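Retention and feature adoption, two of the engagement metrics named above, can be computed directly from an instrumented event log. A minimal sketch; the event names, dates, and cohort definition (week-1 retention for a single signup day) are hypothetical:

```python
from datetime import date

# Hypothetical event log: (user_id, event, day)
events = [
    (1, "signup",       date(2024, 3, 1)),
    (2, "signup",       date(2024, 3, 1)),
    (3, "signup",       date(2024, 3, 1)),
    (1, "open_app",     date(2024, 3, 8)),
    (2, "open_app",     date(2024, 3, 8)),
    (1, "used_feature", date(2024, 3, 8)),
]

cohort_day = date(2024, 3, 1)
signups = {u for u, e, _ in events if e == "signup"}
week1_active = {u for u, e, d in events
                if e == "open_app" and (d - cohort_day).days == 7}
feature_users = {u for u, e, _ in events if e == "used_feature"}

# Restrict both metrics to the signup cohort
retention = len(week1_active & signups) / len(signups)
adoption = len(feature_users & signups) / len(signups)
print(f"Week-1 retention: {retention:.0%}, feature adoption: {adoption:.0%}")
```

Intersecting with the signup cohort matters: it keeps the denominator fixed, so the metric answers "of the users who signed up that day, how many came back?" rather than mixing cohorts.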