Explanation: Core MMM Methodology
AMMM treats a Bayesian MMM as a workflow discipline, not only a model specification. A model run is considered reliable when it satisfies explicit diagnostic gates and produces traceable artefacts at each stage.
For the full stage map, see Workflow Stages.
Methodological Position
AMMM V2 combines three principles:
- Generative transparency: the model equation and priors are explicit.
- Stage-gated diagnostics: model outputs are consumed only after diagnostics are reviewed.
- Decision-linked reporting: decomposition and optimisation are interpreted with uncertainty, not point estimates alone.
This gives strong computational and statistical discipline, but it does not by itself prove causal validity. See the causal caveat below.
V2 Architecture Context
The standard execution path is the V2 driver (MMMBaseDriverV2) orchestrating a stage-based run:
- 00_run_metadata/
- 10_pre_diagnostics/
- 20_model_fit/
- 30_model_assessment/
- 40_decomposition/
- 50_diagnostics/
- 60_response_curves/
- 70_optimisation/
- 80_interpretation/
The stage structure is not cosmetic. It encodes model lifecycle ordering and downstream validity.
Core Generative Structure
At a high level:
$$ y_t = \mu_t + \varepsilon_t $$
$$ \mu_t = \alpha + \sum_{m=1}^{M} \beta_m\,\mathrm{Sat}\!\left(\mathrm{Adstock}(x_{m,t};\alpha_m);\lambda_m\right) + \sum_{k=1}^{K} \gamma_k z_{k,t} $$
Where:
- $y_t$ is the KPI (for example revenue).
- $x_{m,t}$ is channel spend/exposure.
- $z_{k,t}$ are control variables (for example macro, competitor, events, seasonal controls).
- $\beta_m$, $\alpha_m$, and $\lambda_m$ are channel-effect, adstock, and saturation parameters.
- $\varepsilon_t$ captures residual variation under the selected likelihood.
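As a concrete sketch, the mean function above can be written in plain NumPy, assuming a geometric adstock and an exponential saturation form (the precise functional forms AMMM uses may differ):

```python
import numpy as np

def geometric_adstock(x, alpha):
    """Carry-over: each period retains a fraction `alpha` (in (0, 1))
    of the previous period's adstocked value."""
    out = np.empty_like(x, dtype=float)
    carry = 0.0
    for t, spend in enumerate(x):
        carry = spend + alpha * carry
        out[t] = carry
    return out

def saturation(x, lam):
    """Diminishing returns: concave transform with rate lam > 0."""
    return 1.0 - np.exp(-lam * x)

def mu(alpha0, X_media, betas, adstock_alphas, lams, Z=None, gammas=None):
    """Deterministic mean mu_t from the model equation above:
    intercept + transformed media effects + linear controls."""
    total = np.full(X_media.shape[0], alpha0, dtype=float)
    for m in range(X_media.shape[1]):
        transformed = saturation(
            geometric_adstock(X_media[:, m], adstock_alphas[m]), lams[m]
        )
        total += betas[m] * transformed
    if Z is not None:
        total += Z @ np.asarray(gammas)
    return total
```

In a fitted model these parameters are posterior draws rather than fixed numbers; the point of the sketch is the composition order: adstock first, then saturation, then the linear effect.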
Prior Design and custom_priors
AMMM supports explicit prior configuration in YAML via custom_priors. In practice this is where domain constraints are encoded, for example:
- positivity for channel effects (`beta_channel` with HalfNormal-type priors),
- bounded memory dynamics for adstock (`alpha` often on $(0,1)$),
- positive saturation rates (`lam` with positive-support priors).
The key point is not any single prior family. The key point is whether the joint prior predictive implications are plausible on the outcome scale; this is checked explicitly in pre-diagnostics (see Prior Predictive Checking).
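A hypothetical sketch of what such a configuration could look like; the key names mirror the parameter names above, but the exact custom_priors schema is defined by AMMM and should be taken from its configuration reference:

```yaml
# Illustrative only -- distribution names and nesting are assumptions.
custom_priors:
  beta_channel:
    dist: HalfNormal     # positivity for channel effects
    sigma: 2.0
  alpha:
    dist: Beta           # adstock memory bounded on (0, 1)
    alpha: 1.0
    beta: 3.0
  lam:
    dist: Gamma          # positive saturation rate
    alpha: 3.0
    beta: 1.0
```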
Diagnostic Gate Framework
AMMM’s V2 diagnostics are designed to support gate decisions. Core outputs and thresholds are:
| Gate | Diagnostic | Primary artefact | Pass / warning / fail guidance |
|---|---|---|---|
| g1 | Prior predictive plausibility | 10_pre_diagnostics/prior_predictive_summary.csv | plausibility_ratio >= 0.5 passes practical plausibility screening; below 0.5 triggers concern. |
| g2 | Convergence: R-hat | 50_diagnostics/convergence_report.json | Pass at $\max \hat{R} \le 1.01$, warning in $(1.01, 1.05]$, fail above 1.05. |
| g3 | Convergence: ESS | 50_diagnostics/convergence_report.json | Practical target is ESS (bulk and tail) $\ge 100 \times$ chains. |
| g4 | Convergence: divergences | 50_diagnostics/convergence_report.json | Operationally AMMM treats non-zero divergences as convergence failure; warning/fail bands are 1+ and 10+ respectively. |
| g5 | Calibration (PIT + coverage) | 50_diagnostics/calibration_report.json | well_calibrated is the machine-readable calibration outcome; diagnose over-/under-confidence and bias. |
| g6 | PSIS-LOO reliability (Pareto k) | 50_diagnostics/pareto_k_summary.json | Good: $k \le 0.5$, marginal: $(0.5, 0.7]$, problematic: $> 0.7$. |
| g7 | Energy geometry review | 50_diagnostics/energy_diagnostic.png | Energy behaviour is visually reviewed; BFMI guidance is typically > 0.3 (warning below, fail near 0.2). |
Notes:
- `converged` in `convergence_report.json` is the key machine-readable convergence gate.
- `well_calibrated` in `calibration_report.json` is the key machine-readable calibration gate.
- `ok` in `pareto_k_summary.json` indicates whether any problematic high-k observations were found.
For detailed calibration interpretation, see Calibration Diagnostics. For LOO and Pareto context, see LOO-CV & Model Checking.
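The convergence gates (g2–g4) can be expressed as a small decision function. This is a sketch of the threshold logic from the table, not AMMM's implementation; the JSON field names (`max_rhat`, `min_ess_bulk`, `min_ess_tail`, `divergences`) are assumptions and should be checked against the actual artefact schema:

```python
import json

# Thresholds taken from the gate table above (g2-g4).
RHAT_PASS, RHAT_WARN = 1.01, 1.05
ESS_PER_CHAIN = 100
DIVERGENCE_WARN, DIVERGENCE_FAIL = 1, 10

def convergence_gate(report_path, n_chains):
    """Map a convergence_report.json-style artefact to pass/warn/fail."""
    with open(report_path) as f:
        r = json.load(f)
    if r["max_rhat"] > RHAT_WARN or r["divergences"] >= DIVERGENCE_FAIL:
        return "fail"
    if (r["max_rhat"] > RHAT_PASS
            or r["min_ess_bulk"] < ESS_PER_CHAIN * n_chains
            or r["min_ess_tail"] < ESS_PER_CHAIN * n_chains
            or r["divergences"] >= DIVERGENCE_WARN):
        return "warn"
    return "pass"
```

Note the asymmetry: any non-zero divergence count already downgrades the run to a warning, reflecting the operational stance that divergences signal geometry problems even in small numbers.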
diagnostics_gating Policy
diagnostics_gating accepts `strict`, `warn`, or `off`.
- `strict`: the pipeline halts on convergence failure (`converged = false`).
- `warn`: the pipeline continues, but diagnostics are surfaced as warnings.
- `off`: gating behaviour is disabled, but diagnostic artefacts are still generated where possible.
In the current V2 workflow implementation, strict halting is enforced directly for convergence only; other diagnostics are surfaced for governance and interpretation control.
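The policy semantics reduce to a simple dispatch. A minimal sketch, assuming a boolean `converged` flag and a caller-owned warnings list (function and parameter names are illustrative, not AMMM's API):

```python
def apply_gating(policy, converged, warnings_out):
    """Sketch of the diagnostics_gating semantics described above.

    policy: one of 'strict', 'warn', 'off'.
    converged: machine-readable convergence gate from the report.
    warnings_out: list collecting surfaced warnings for governance.
    """
    if policy == "strict" and not converged:
        # Halt the pipeline: downstream stages must not consume this run.
        raise RuntimeError("diagnostics_gating=strict: convergence failure")
    if policy == "warn" and not converged:
        # Continue, but make the failure visible in run outputs.
        warnings_out.append("convergence failure (continuing under 'warn')")
    # 'off': no halting and no warning; artefacts are still written upstream.
    return converged
```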
Baseline Specification and Leakage Trade-off
A central AMMM modelling choice is how to specify baseline dynamics:
- Exogenous baseline: engineer seasonal/trend controls directly from time index and external signals.
- Assisted baseline (Prophet): fit components from the target series and inject them as controls.
Assisted mode can improve numerical stability and reduce multicollinearity pressure, but `prophet.trend: true` introduces target-derived information into the regressors and can attenuate media coefficients. AMMM treats this as a documented, explicit trade-off rather than a hidden default.
See Baseline Trade-off for protocol and reporting language expectations.
From Inference to Decisions
Downstream steps use posterior-informed summaries to drive:
- decomposition and contribution narratives,
- response-curve estimation,
- constrained budget optimisation,
- agentic interpretation and governance outputs.
These outputs are only as trustworthy as upstream diagnostic adequacy.
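To make the "uncertainty, not point estimates" principle concrete, response curves can be evaluated per posterior draw and then summarised. This sketch reuses the exponential saturation form from the earlier example (an assumption; AMMM's fitted transforms may differ):

```python
import numpy as np

def response_curve(spend_grid, beta, lam):
    """Expected incremental KPI over a grid of spend levels."""
    return beta * (1.0 - np.exp(-lam * spend_grid))

def marginal_response(spend, beta, lam):
    """Derivative of the curve: marginal return at a spend level,
    the quantity constrained budget optimisation equalises across channels."""
    return beta * lam * np.exp(-lam * spend)

def curve_quantiles(spend_grid, beta_draws, lam_draws, qs=(0.05, 0.5, 0.95)):
    """Propagate posterior uncertainty: one curve per draw, then quantiles,
    rather than a single curve from point estimates."""
    curves = np.array([response_curve(spend_grid, b, l)
                       for b, l in zip(beta_draws, lam_draws)])
    return np.quantile(curves, qs, axis=0)
```

Reporting the quantile band rather than the median curve alone is what links the optimisation stage back to the diagnostic gates: a wide band at relevant spend levels is itself a decision input.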
Causal Caveat (Required)
Passing diagnostics means the model is computationally stable and predictively adequate under its assumptions. It does not guarantee:
- correct causal identification,
- immunity to omitted variable bias,
- robustness to structural regime breaks.
AMMM should therefore be interpreted as disciplined Bayesian decision support under explicit assumptions, not automatic causal truth extraction.