
M/M/1 Queue — README


The canonical M/M/1 queue — Poisson arrivals, exponential service, one server — run across multiple replications to validate Little's Law (L = λW) against closed-form predictions. The reference model for rigor: statistical correctness checks, long-horizon runs, and analytical confirmation that the engine computes the right numbers.

Problem

Classic single-server queue with Poisson arrivals at rate λ = 0.1 /s and exponential service at rate μ = 0.125 /s, so the default utilization is ρ = λ/μ = 0.8 (override it with --rho=<0.1–0.95>; μ stays fixed and λ is derived as ρ·μ).

Theoretical steady-state values for this system are exact:

| Quantity | Formula | Value at ρ = 0.8 |
| --- | --- | --- |
| L (in-system) | ρ / (1 − ρ) | 4.0 |
| W (sojourn) | 1 / (μ − λ) | 40 s |
| Lq (in-queue) | ρ² / (1 − ρ) | 3.2 |
| Wq (wait) | ρ / (μ − λ) | 32 s |
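For a quick cross-check of these numbers, here is a minimal standalone sketch — it uses only the default μ = 0.125 and ρ = 0.8 from the CLI section and is not code from mm1.odin:

```odin
package main

import "core:fmt"

main :: proc() {
	mu     := 0.125             // service rate (1/s), fixed
	rho    := 0.8               // default utilization
	lambda := rho * mu          // arrival rate, λ = ρ·μ = 0.1

	L  := rho / (1 - rho)       // mean number in system  -> 4.0
	W  := 1 / (mu - lambda)     // mean sojourn time (s)  -> 40
	Lq := rho * rho / (1 - rho) // mean number in queue   -> 3.2
	Wq := rho / (mu - lambda)   // mean waiting time (s)  -> 32

	fmt.printf("L=%.1f  W=%.0fs  Lq=%.1f  Wq=%.0fs\n", L, W, Lq, Wq)
}
```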

The model runs 10 replications of 1,000,000 seconds each (~100k customers per run) and reports the observed averages with their Little's-Law cross-check: L should equal λW to within sampling noise. When it doesn't, something is wrong with the model, the RNG, or the engine — and this is the example that catches it.

Model in this directory

Why this shape

Steady-state measurement requires long horizons. The defaults (end_time = 1,000,000, 10 runs) are not arbitrary. At ρ = 0.8 the mean sojourn is 40 s, and the autocorrelation in sojourn times across consecutive customers is high — you need many independent busy periods to get a stable mean. One million seconds at λ = 0.1 is ~100,000 arrivals, giving a standard error on mean sojourn around 1% per run. Ten runs drop the combined SE to under 0.3%, which is tight enough that a genuine engine bug would show as a multi-SE deviation.
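A hedged back-of-envelope behind those figures (my notation, not the model's): in a stationary FCFS M/M/1 the sojourn time is exponentially distributed with rate μ − λ, so its standard deviation equals its mean (40 s at the defaults), and

$$\mathrm{SE}(\bar{W}) \approx \frac{\sigma_W}{\sqrt{n_\mathrm{eff}}}, \qquad \sigma_W = \frac{1}{\mu - \lambda} = 40\ \mathrm{s}.$$

With ~100,000 independent samples the naive relative SE would be $1/\sqrt{100{,}000} \approx 0.3\%$; the autocorrelation between consecutive sojourns shrinks the effective sample size $n_\mathrm{eff}$, which is where the ~1% per-run figure comes from.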

L and Lq by integration, not sampling. Both accumulators update only at state-change moments (arrival, service start, service completion). This is exact — the step function is integrated without error. The older pattern of sampling on a timer (every N seconds) has both bias (misses short excursions) and variance (random sampling adds noise). See modeling guide §9.
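A minimal sketch of that accumulator pattern — the struct and proc names here are illustrative, not the ones in mm1.odin:

```odin
// Illustrative only; mm1.odin's actual accumulators may be shaped differently.
QueueStats :: struct {
	area_L:     f64, // ∫ N(t) dt  — number in system
	area_Lq:    f64, // ∫ Nq(t) dt — number in queue
	last_event: f64, // time of the previous state change
	in_system:  int,
	in_queue:   int,
}

// Call at every arrival, service start, and completion, *before* mutating
// in_system / in_queue. Between events the step function is constant, so
// the integral over that span is exact — no sampling bias or noise.
advance :: proc(s: ^QueueStats, now: f64) {
	dt := now - s.last_event
	s.area_L  += f64(s.in_system) * dt
	s.area_Lq += f64(s.in_queue)  * dt
	s.last_event = now
}
```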

Customer carries svc: db.RV, not the server pointer alone. Each customer is handed a copy of the service-time RV handle at creation (mm1.odin:217), even though it could be read off c.server.svc_stream at service-start time. This is deliberate, and it documents intent: the customer "owns" the right to draw a service time, and the draw happens at service start, not arrival. Swapping to per-customer service distributions (a mixture model, class-dependent service) is then just a matter of changing what the generator assigns; no customer-code changes are needed.
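A self-contained sketch of the ownership idea — ServiceRV and draw stand in for db.RV and the engine's sampling call, and none of these names (beyond the svc field) come from mm1.odin:

```odin
import "core:math"
import "core:math/rand"

// Stand-in for db.RV; illustrative only.
ServiceRV :: struct {
	rate: f64, // exponential service rate μ
}

draw :: proc(rv: ^ServiceRV) -> f64 {
	// inverse-transform exponential variate with mean 1/μ
	return -math.ln(1 - rand.float64()) / rv.rate
}

Customer :: struct {
	arrived: f64,
	svc:     ServiceRV, // copied in at creation — the customer "owns" its draw
}

// The draw happens at service start, from the customer's own handle, so a
// generator that assigns per-class or mixture service RVs needs no changes here.
start_service :: proc(c: ^Customer, now: f64) -> (done_at: f64) {
	return now + draw(&c.svc)
}
```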

Little's Law is computed post-run, not as an assertion. The model reports L (measured from area-under-curve), W (measured from sojourn sum / completions), and λ·W (the Little's-Law prediction of L). The user reads the table and sees whether they agree. We don't assert the equality because with finite sample sizes the match is statistical, not exact — embedding a tolerance in an assert would either be too loose (useless) or too tight (flakes). Instead, the number is surfaced so a regression shows up visibly.
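A sketch of what that post-run arithmetic looks like — variable and proc names are illustrative, not from mm1.odin:

```odin
import "core:fmt"

// Illustrative reporter: takes the raw accumulators and prints the
// measured L next to the Little's-Law prediction λ·W — no assert.
report :: proc(area_L, total_sojourn, end_time: f64, completed: int) {
	L_measured := area_L / end_time              // time-average from the integral
	W_measured := total_sojourn / f64(completed) // mean sojourn per completion
	lambda_eff := f64(completed) / end_time      // observed throughput
	L_little   := lambda_eff * W_measured        // Little's-Law prediction of L

	fmt.printf("L = %.3f   λ·W = %.3f   W = %.2f s\n", L_measured, L_little, W_measured)
}
```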

Bespoke Server struct, not sim.Facility. The example predates the primitives layer being fully settled and was kept in its standalone form for pedagogical clarity — Server is what a reader would write from scratch after reading the engine kernel alone. It happens to have the same shape as sim.Facility plus an embedded RNG, which is itself a useful observation: Facility is what falls out when you write this struct for the third time. A future cleanup would replace Server with a Facility + an external svc_stream field on a Shop struct, matching the other examples.

Alternatives considered

Replace Server with sim.Facility

Would save ~10 lines and unify with the rest of the examples. We haven't yet; see the note above. When this example is next touched, that's the cleanup to make.

JSON output for site ingestion

The --json flag (mm1.odin:33) emits the per-run results as JSON for the qsimhealth.com comparison harness. The console mode is for humans; JSON is for automation. Both run the same model — only the reporter changes.
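The exact fields aren't documented in this README; going only by the four top-level keys named in the CLI section below, the envelope has roughly this shape (illustrative skeleton, not actual output):

```json
{
  "metadata":        { "...": "..." },
  "execution_stats": { "...": "..." },
  "metrics":         { "...": "..." },
  "details":         { "...": "..." }
}
```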

Shorter runs with more replications

Ten runs of 1M seconds and a hundred runs of 100k seconds cost the same total CPU. For Little's-Law checking, longer runs are better because they average over more complete busy cycles — short runs near ρ = 0.8 frequently hit long-queue tails that bias a single run heavily. The model defaults to long-and-fewer for this reason.

Per-customer sojourn records with robust statistics, not total/completed

We divide total_sojourn by completed (mm1.odin:172+). An alternative is to record every sojourn and take the median or trimmed mean, which would be robust to heavy-tail excursions. For Little's Law validation specifically, the arithmetic mean is what matters (Little relates expectations), so the straight divide is correct. For reporting customer experience, percentiles would be more useful — see sim.Histogram in the modeling guide.
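For reference, the identities being validated relate long-run averages,

$$L = \lambda\, W, \qquad L_q = \lambda\, W_q,$$

which is why the straight mean total_sojourn / completed is exactly the estimator the law calls for; a median or trimmed mean would estimate something else.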

Warm-up deletion

Textbook M/M/1 analysis deletes the first K customers to remove transient-phase bias. We don't, because at ρ = 0.8 with 100k customers the transient contribution is below measurement error. At higher utilizations (ρ = 0.95+) warm-up deletion starts to matter, and this model would need a "discard until T" clause in the stats callbacks.
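If that clause were added, it would be a small guard in the accumulators — a hypothetical sketch reusing the QueueStats shape from the integration note above (the warmup parameter does not exist in mm1.odin today):

```odin
// Hypothetical "discard until T" guard; not present in mm1.odin.
advance_with_warmup :: proc(s: ^QueueStats, now: f64, warmup: f64) {
	if now <= warmup {
		s.last_event = now                // keep the clock moving, discard the area
		return
	}
	dt := now - max(s.last_event, warmup) // count only the post-warm-up span
	s.area_L  += f64(s.in_system) * dt
	s.area_Lq += f64(s.in_queue)  * dt
	s.last_event = now
}
```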

What this example teaches

This is the reference for:

- time-averaged L and Lq computed by integrating the step function at state changes, not by timer sampling
- validating measured L against the Little's-Law prediction λ·W across replications
- choosing run length and replication count so steady-state estimates are tight enough to expose engine bugs

CLI

./mm1 [options]

Add --json to emit the uniform envelope (metadata, execution_stats, metrics, details) instead of the default text output.

| Flag | Type | Default | Description |
| --- | --- | --- | --- |
| --end-time | float | 1000000 | Simulation end time in seconds. |
| --runs | int | 10 | Number of replications. |
| --rho | float | 0.8 | Utilization ρ = λ/μ. Clamped to [0.1, 0.95] at parse time. μ stays fixed at 0.125; λ derives as ρ·μ. |
| --json | bool | false | Emit uniform JSON envelope instead of text. |

Example runs:

./mm1                                      # default text run (ρ=0.8)
./mm1 --rho=0.95                           # near-saturation — L and W explode
./mm1 --json                               # uniform envelope
./mm1 --end-time=100000 --runs=20 --json   # custom params, JSON output

Running it

odin run examples/mm1                              # 10 runs, human output
odin run examples/mm1 -- --end-time=100000         # shorter runs
odin run examples/mm1 -- --runs=20                 # more replications
odin run examples/mm1 -- --json                    # machine-readable

See also