Quickstart Guide

This guide walks through a first AMMM run using the current V2 pipeline.

Step 1: Install AMMM
# Clone the repository
git clone https://github.com/tandpds/ammm.git
cd ammm
# Create and activate a fresh virtual environment
python -m venv venv_ammm
source venv_ammm/bin/activate # Windows: venv_ammm\Scripts\activate
# Upgrade build tooling (recommended)
python -m pip install --upgrade pip setuptools wheel
# Install AMMM in editable mode
pip install -e .

Windows note: Conda is often easier when installing Prophet and compiled dependencies. See Installation.

Step 2: Run the pipeline
python runme.py

By default, runme.py resolves files in this order:

  1. data-config/ (first priority)
  2. examples/default/
  3. fallback demo files in examples/demo/

It expects exactly one config file (.yml/.yaml) and one data file (.csv/.xlsx) in the selected folder, plus an optional holidays file (holidays.csv or holidays.xlsx).
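The resolution order described above can be sketched as follows. This is an illustration, not the actual runme.py code; the folder priorities and extension rules are taken from this guide, everything else is an assumption:

```python
# Sketch of runme.py's input resolution (illustrative only): search each
# candidate folder in priority order and use the first one that contains
# exactly one config file and one data file.
from pathlib import Path

SEARCH_ORDER = ["data-config", "examples/default", "examples/demo"]
CONFIG_EXTS = {".yml", ".yaml"}
DATA_EXTS = {".csv", ".xlsx"}

def resolve_inputs(root="."):
    for folder in SEARCH_ORDER:
        base = Path(root) / folder
        if not base.is_dir():
            continue
        configs = [p for p in base.iterdir() if p.suffix in CONFIG_EXTS]
        # Exclude the optional holidays file from the main data match.
        data = [p for p in base.iterdir()
                if p.suffix in DATA_EXTS and p.stem != "holidays"]
        holidays = [p for p in base.iterdir()
                    if p.stem == "holidays" and p.suffix in DATA_EXTS]
        if len(configs) == 1 and len(data) == 1:
            return configs[0], data[0], holidays[0] if holidays else None
    raise FileNotFoundError("no folder with exactly one config and one data file")
```

Note that a folder with two config files (or two data files) is skipped rather than guessed at, which matches the "exactly one of each" expectation.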

Step 3: Examine the stage-based output structure

After the run, inspect the results root (default: results/, or your --results-dir path). V2 outputs are stage-based:

results/
├── 00_run_metadata/ # Config, provenance
├── 10_pre_diagnostics/ # Data checks, prior predictive
├── 20_model_fit/ # Posterior draws, traces, forest plots
├── 30_model_assessment/ # PPC fit, R², RMSE
├── 40_decomposition/ # Channel contributions, ROAS
├── 50_diagnostics/ # Convergence, calibration, Pareto-k
├── 60_response_curves/ # Saturation curves
├── 70_optimisation/ # Budget allocation, scenarios
└── 80_interpretation/ # Reports (if LLM enabled)

Key files to inspect first:

  • 30_model_assessment/model_fit_predictions.png (overall fit)
  • 20_model_fit/model_trace.png (chain behaviour)
  • 20_model_fit/prior_posterior_comparison.png (how data updated priors)
  • 60_response_curves/response_curves.png (channel saturation curves)
  • 50_diagnostics/convergence_report.json (machine-readable convergence result)
  • 50_diagnostics/calibration_pit_histogram.png (calibration shape)
  • 10_pre_diagnostics/prior_predictive_check.png (prior plausibility)

Step 3.5: Review diagnostics before using business outputs

Before acting on decomposition or optimisation outputs, check 50_diagnostics/:

  • convergence_report.json:
    • read converged (true/false)
  • calibration_report.json:
    • read well_calibrated and diagnosis

The V2 workflow also writes rank_trace.png, energy_diagnostic.png, pareto_k_summary.json, pair_plot.png, and residuals_vs_{channel}.png into 50_diagnostics/.

Practical rule: treat 40_decomposition/, 60_response_curves/, 70_optimisation/, and 80_interpretation/ as decision-ready only when upstream diagnostics are acceptable.

Gate policy is controlled by diagnostics_gating: strict | warn | off in config. Under strict, convergence failure halts the pipeline.
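The gating rule can be mirrored in a small post-run check. This is a hypothetical helper, not AMMM code; the file names and the converged / well_calibrated keys come from this guide, and the policy argument mirrors the diagnostics_gating values:

```python
# Hypothetical post-run gate: read the machine-readable diagnostics and
# decide whether the business-facing stages (40/60/70/80) should be
# treated as decision-ready.
import json
from pathlib import Path

def diagnostics_gate(results_dir="results", policy="strict"):
    diag = Path(results_dir) / "50_diagnostics"
    converged = json.loads(
        (diag / "convergence_report.json").read_text()).get("converged", False)
    calibrated = json.loads(
        (diag / "calibration_report.json").read_text()).get("well_calibrated", False)
    if policy == "strict" and not converged:
        raise RuntimeError("convergence failed (diagnostics_gating: strict)")
    if policy == "warn" and not (converged and calibrated):
        print("WARNING: diagnostics not clean; treat downstream outputs with caution")
    return converged and calibrated
```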

Recommended next reading:

  1. Data Preparation
  2. Configuration Guide
  3. Methodology
  4. Workflow Stages
  5. Interpreting Results
To run the pipeline programmatically instead of via runme.py:

from driver import MMMBaseDriverV2

config_file = "path/to/your/config.yml"
data_file = "path/to/your/data.csv"
holidays_file = "path/to/holidays.csv"  # optional
output_folder = "your_results"

driver = MMMBaseDriverV2(
    config_filename=config_file,
    input_filename=data_file,
    holidays_filename=holidays_file,
    results_filename=output_folder,
)
driver.main()

runme.py itself uses the same import pattern (from driver import MMMBaseDriverV2) internally.

Scenario planning (the default mode):

  • Produces scenario comparisons across budget shifts.
  • Main artefacts in 70_optimisation/:
    • budget_scenario_results.csv
    • scenario_budget_allocation.png
    • scenario_total_contribution.png
    • scenario_roi_comparison.png

Direct (single-period) optimisation: use --no-scenarios.

  • Main artefacts in 70_optimisation/:
    • optimization_results.csv
    • budget_optimisation.png

Multi-period optimisation: use --multiperiod --multiperiod-weeks N.

  • Main artefacts in 70_optimisation/:
    • multiperiod_optimization_results.csv
    • multi-period budget and contribution figures
Flag           Type  Default  Description
--data         str   None     Path to input data CSV/Excel.
--config       str   None     Path to configuration YAML.
--holidays     str   None     Path to holidays CSV/Excel.
--results-dir  str   None     Output directory override.
Flag                 Type  Default                      Description
--no-scenarios       flag  scenarios enabled            Disable scenario planning.
--scenarios          str   -20,-15,-10,-5,0,5,10,15,20  Comma-separated budget percentages.
--multiperiod        flag  disabled                     Enable multi-period optimisation.
--multiperiod-weeks  int   13                           Planning horizon in periods.
--no-seasonality     flag  seasonality enabled          Disable seasonality multipliers in multi-period mode.
--use-adstock        flag  disabled                     Enable adstock-aware optimisation (not compatible with --multiperiod).
Flag            Type    Default   Description
--fast          flag    disabled  Fast mode (draws=1000, tune=500, chains=2).
--jax           flag    disabled  Use JAX/NumPyro sampler mode.
--gpu           flag    disabled  Request GPU for JAX (JAX_PLATFORMS=cuda).
--chain-method  choice  None      JAX chain method: vectorized, parallel, sequential.
Flag             Type   Default  Description
--draws          int    None     Override posterior draws.
--tune           int    None     Override tuning steps.
--chains         int    None     Override chain count.
--target-accept  float  None     Override NUTS target acceptance rate.
Flag                  Type   Default   Description
--ramp-abs            float  None      Uniform absolute ramp limit.
--ramp-pct            float  None      Uniform percentage ramp limit.
--ramp-config         str    None      Path to JSON ramp constraints.
--strict-ramp-config  flag   disabled  Fail on invalid/unknown ramp config entries.
--ramp-eps            float  1e-6      Safe denominator for percentage ramp maths.
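The --ramp-pct / --ramp-eps pair can be read as: the percentage change is computed against max(previous spend, eps), so a zero previous spend never divides by zero. A sketch of that check, under the assumption that this is the formula used (for intuition only):

```python
# Sketch: percentage ramp check with a safe denominator (--ramp-eps).
def within_pct_ramp(prev_spend, new_spend, ramp_pct, eps=1e-6):
    change_pct = abs(new_spend - prev_spend) / max(abs(prev_spend), eps) * 100
    return change_pct <= ramp_pct
```

With eps in the denominator, any move away from a zero-spend channel registers as an enormous percentage change, so percentage ramps effectively freeze zero-spend channels; use an absolute ramp (--ramp-abs) if they should be allowed to start spending.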
Flag                    Type   Default  Description
--round-increment       float  1000.0   Round budget allocations to nearest increment.
--seasonality-clip-min  float  0.5      Minimum seasonality multiplier.
--seasonality-clip-max  float  1.5      Maximum seasonality multiplier.
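These last flags amount to simple post-processing of the optimiser's output. A sketch, assuming plain nearest-increment rounding and min/max clipping (the defaults shown match the flags above):

```python
# Sketch: post-process an allocation per --round-increment, and clip a
# seasonality multiplier per --seasonality-clip-min/--seasonality-clip-max.
def round_allocation(amount, increment=1000.0):
    return round(amount / increment) * increment

def clip_seasonality(multiplier, lo=0.5, hi=1.5):
    return min(max(multiplier, lo), hi)
```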
Example commands:

# Default scenario planning
python runme.py
# Scenario planning with custom percentages
python runme.py --scenarios "-10,-5,0,5,10"
# Single-period optimisation
python runme.py --no-scenarios
# Multi-period optimisation (13 periods)
python runme.py --multiperiod --multiperiod-weeks 13
# Custom input and output paths
python runme.py \
  --data data-config/demo_data.csv \
  --config data-config/demo_config.yml \
  --holidays data-config/holidays.csv \
  --results-dir results_custom

A convenience wrapper is available for Linux/macOS:

  • run_pipeline_linux.sh

It passes through to runme.py and supports the same core run-mode flags.

Do:

  • inspect 50_diagnostics/ before using business outputs,
  • compare multiple model specifications,
  • keep model assumptions documented.

Avoid:

  • using optimisation outputs from non-converged runs,
  • skipping pre-diagnostics and prior predictive checks,
  • treating good fit as proof of causal validity.
Next steps:

  1. Data Preparation
  2. Configuration Guide
  3. Optimisation Guide
  4. Troubleshooting