# Quickstart Guide
This guide walks through a first AMMM run using the current V2 pipeline.
## Step 1: Set up your environment

```bash
# Clone the repository
git clone https://github.com/tandpds/ammm.git
cd ammm

# Create and activate a fresh virtual environment
python -m venv venv_ammm
source venv_ammm/bin/activate  # Windows: venv_ammm\Scripts\activate

# Upgrade build tooling (recommended)
python -m pip install --upgrade pip setuptools wheel

# Install AMMM in editable mode
pip install -e .
```

**Windows note:** Conda is often easier when installing Prophet and compiled dependencies. See Installation.
## Step 2: Run the demo pipeline

```bash
python runme.py
```

By default, `runme.py` resolves files in this order:

1. `data-config/` (first priority)
2. `examples/default/`
3. fallback demo files in `examples/demo/`

It expects one config file (`.yml`/`.yaml`) and one data file (`.csv`/`.xlsx`) in the selected folder, plus an optional holidays file (`holidays.csv` or `holidays.xlsx`).
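As a mental model, the resolution order above can be sketched in Python. This is an illustrative re-implementation, not the actual `runme.py` code; the helper name `resolve_inputs` and its tie-breaking (first file in sorted order) are assumptions:

```python
from pathlib import Path

# Search order described above (first folder with both files wins)
SEARCH_ORDER = ["data-config", "examples/default", "examples/demo"]
CONFIG_EXTS = {".yml", ".yaml"}
DATA_EXTS = {".csv", ".xlsx"}

def resolve_inputs(root="."):
    """Return (config, data, holidays) paths from the first folder that
    contains both a config file and a data file; holidays is optional."""
    for folder in SEARCH_ORDER:
        base = Path(root) / folder
        if not base.is_dir():
            continue
        files = sorted(base.iterdir())
        configs = [p for p in files if p.suffix in CONFIG_EXTS]
        data = [p for p in files
                if p.suffix in DATA_EXTS and not p.name.startswith("holidays")]
        holidays = next((p for p in files
                         if p.name in ("holidays.csv", "holidays.xlsx")), None)
        if configs and data:
            return configs[0], data[0], holidays
    raise FileNotFoundError("no folder with a config and a data file found")
```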
## Step 3: Examine the stage-based output structure

After the run, inspect the results root (default: `results/`, or your `--results-dir` path). V2 outputs are stage-based:

```
results/
├── 00_run_metadata/      # Config, provenance
├── 10_pre_diagnostics/   # Data checks, prior predictive
├── 20_model_fit/         # Posterior draws, traces, forest plots
├── 30_model_assessment/  # PPC fit, R², RMSE
├── 40_decomposition/     # Channel contributions, ROAS
├── 50_diagnostics/       # Convergence, calibration, Pareto-k
├── 60_response_curves/   # Saturation curves
├── 70_optimisation/      # Budget allocation, scenarios
└── 80_interpretation/    # Reports (if LLM enabled)
```

Key files to inspect first:

- `30_model_assessment/model_fit_predictions.png` (overall fit)
- `20_model_fit/model_trace.png` (chain behaviour)
- `20_model_fit/prior_posterior_comparison.png` (how data updated priors)
- `60_response_curves/response_curves.png` (channel saturation curves)
- `50_diagnostics/convergence_report.json` (machine-readable convergence result)
- `50_diagnostics/calibration_pit_histogram.png` (calibration shape)
- `10_pre_diagnostics/prior_predictive_check.png` (prior plausibility)
## Step 3.5: Review diagnostics before using business outputs

Before acting on decomposition or optimisation outputs, check `50_diagnostics/`:

- `convergence_report.json`: read `converged` (true/false)
- `calibration_report.json`: read `well_calibrated` and `diagnosis`

The V2 workflow also writes `rank_trace.png`, `energy_diagnostic.png`, `pareto_k_summary.json`, `pair_plot.png`, and `residuals_vs_{channel}.png` into `50_diagnostics/`.

Practical rule: treat `40_decomposition/`, `60_response_curves/`, `70_optimisation/`, and `80_interpretation/` as decision-ready only when upstream diagnostics are acceptable.
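For automated gating in your own scripts, both reports can be read directly. A minimal sketch: the field names come from the reports described above, but the helper `diagnostics_ok` is ours, not part of AMMM:

```python
import json
from pathlib import Path

def diagnostics_ok(results_dir="results"):
    """True only if both diagnostic reports pass: `converged` in
    convergence_report.json and `well_calibrated` in calibration_report.json."""
    diag = Path(results_dir) / "50_diagnostics"
    convergence = json.loads((diag / "convergence_report.json").read_text())
    calibration = json.loads((diag / "calibration_report.json").read_text())
    return bool(convergence.get("converged")) and bool(calibration.get("well_calibrated"))
```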
Gate policy is controlled by `diagnostics_gating: strict | warn | off` in config. Under `strict`, convergence failure halts the pipeline.
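In the YAML config this is a single key. The key name and values come from above; its placement at the top level of the file is an assumption:

```yaml
# One of: strict (halt on failure) | warn (continue, flag it) | off
diagnostics_gating: strict
```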
## Step 4: Follow the core docs in order

Recommended next reading:
## Step 5: Customise for your own data

### Programmatic V2 driver usage

```python
from driver import MMMBaseDriverV2

config_file = "path/to/your/config.yml"
data_file = "path/to/your/data.csv"
holidays_file = "path/to/holidays.csv"  # optional
output_folder = "your_results"

driver = MMMBaseDriverV2(
    config_filename=config_file,
    input_filename=data_file,
    holidays_filename=holidays_file,
    results_filename=output_folder,
)
driver.main()
```

`runme.py` uses the same `from driver import MMMBaseDriverV2` import pattern.
## Run Modes and Main Optimisation Outputs

### Scenario planning mode (default)

- Produces scenario comparisons across budget shifts.
- Main artefacts in `70_optimisation/`:
  - `budget_scenario_results.csv`
  - `scenario_budget_allocation.png`
  - `scenario_total_contribution.png`
  - `scenario_roi_comparison.png`

### Single-period optimisation mode

Use `--no-scenarios` to run direct optimisation.

- Main artefacts in `70_optimisation/`:
  - `optimization_results.csv`
  - `budget_optimisation.png`

### Multi-period optimisation mode

Use `--multiperiod --multiperiod-weeks N`.

- Main artefacts in `70_optimisation/`:
  - `multiperiod_optimization_results.csv`
  - multi-period budget and contribution figures
## runme.py CLI Reference (current)

### Input/output
Section titled “Input/output”| Flag | Type | Default | Description |
|---|---|---|---|
| `--data` | str | None | Path to input data CSV/Excel. |
| `--config` | str | None | Path to configuration YAML. |
| `--holidays` | str | None | Path to holidays CSV/Excel. |
| `--results-dir` | str | None | Output directory override. |
### Run mode

| Flag | Type | Default | Description |
|---|---|---|---|
| `--no-scenarios` | flag | scenarios enabled | Disable scenario planning. |
| `--scenarios` | str | `-20,-15,-10,-5,0,5,10,15,20` | Comma-separated budget percentages. |
| `--multiperiod` | flag | disabled | Enable multi-period optimisation. |
| `--multiperiod-weeks` | int | 13 | Planning horizon in periods. |
| `--no-seasonality` | flag | seasonality enabled | Disable seasonality multipliers in multi-period mode. |
| `--use-adstock` | flag | disabled | Enable adstock-aware optimisation (not compatible with `--multiperiod`). |
### Performance/backend

| Flag | Type | Default | Description |
|---|---|---|---|
| `--fast` | flag | disabled | Fast mode (draws=1000, tune=500, chains=2). |
| `--jax` | flag | disabled | Use JAX/NumPyro sampler mode. |
| `--gpu` | flag | disabled | Request GPU for JAX (`JAX_PLATFORMS=cuda`). |
| `--chain-method` | choice | None | JAX chain method: `vectorized`, `parallel`, `sequential`. |
### Sampling overrides

| Flag | Type | Default | Description |
|---|---|---|---|
| `--draws` | int | None | Override posterior draws. |
| `--tune` | int | None | Override tuning steps. |
| `--chains` | int | None | Override chain count. |
| `--target-accept` | float | None | Override NUTS target acceptance rate. |
### Ramp constraints

| Flag | Type | Default | Description |
|---|---|---|---|
| `--ramp-abs` | float | None | Uniform absolute ramp limit. |
| `--ramp-pct` | float | None | Uniform percentage ramp limit. |
| `--ramp-config` | str | None | Path to JSON ramp constraints. |
| `--strict-ramp-config` | flag | disabled | Fail on invalid/unknown ramp config entries. |
| `--ramp-eps` | float | 1e-6 | Safe denominator for percentage ramp maths. |
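To illustrate why `--ramp-eps` exists: a percentage ramp constraint divides by the previous period's spend, which may be zero. A hedged sketch of one plausible formulation (the actual AMMM formula may differ; `within_pct_ramp` is a hypothetical helper):

```python
def within_pct_ramp(prev_spend, new_spend, ramp_pct, ramp_eps=1e-6):
    """Illustrative percentage ramp check: the relative change versus the
    previous spend must stay within ramp_pct, with ramp_eps guarding the
    denominator when prev_spend is zero."""
    change = abs(new_spend - prev_spend) / max(abs(prev_spend), ramp_eps)
    return change <= ramp_pct / 100.0
```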
### Other controls

| Flag | Type | Default | Description |
|---|---|---|---|
| `--round-increment` | float | 1000.0 | Round budget allocations to nearest increment. |
| `--seasonality-clip-min` | float | 0.5 | Minimum seasonality multiplier. |
| `--seasonality-clip-max` | float | 1.5 | Maximum seasonality multiplier. |
## Quick command examples

```bash
# Default scenario planning
python runme.py

# Scenario planning with custom percentages
python runme.py --scenarios "-10,-5,0,5,10"

# Single-period optimisation
python runme.py --no-scenarios

# Multi-period optimisation (13 periods)
python runme.py --multiperiod --multiperiod-weeks 13

# Custom input and output paths
python runme.py \
  --data data-config/demo_data.csv \
  --config data-config/demo_config.yml \
  --holidays data-config/holidays.csv \
  --results-dir results_custom
```

## Pipeline wrapper
Section titled “Pipeline wrapper”A convenience wrapper is available for Linux/macOS:
run_pipeline_linux.sh
It passes through to runme.py and supports the same core run-mode flags.
## Common pitfalls

Do:

- inspect `50_diagnostics/` before using business outputs,
- compare multiple model specifications,
- keep model assumptions documented.
Avoid:
- using optimisation outputs from non-converged runs,
- skipping pre-diagnostics and prior predictive checks,
- treating good fit as proof of causal validity.