

Threshold Routing

Shared queue, two servers: Server A is preferred; Server B is overflow and activates only when the queue exceeds a threshold. This is the reference model for multi-facility sim.dispatch with predicate-based routing, and for the non-GPSS but practically ubiquitous "preferred-plus-overflow" pattern.

Problem

Customers arrive at a single wait list (Exp(10) inter-arrival) and need one server (Exp(8) service). Two servers exist:

- Server A (preferred): takes the next waiter whenever it is free.
- Server B (overflow): eligible only when it is free and the queue length exceeds the threshold.

Expected behavior: under light load Server B is silent and the system behaves as M/M/1 on Server A. Under heavy load, B is enlisted whenever the queue backs up, dropping average wait at the cost of utilization imbalance between A and B.
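The light-load baseline can be checked against the standard M/M/1 formulas. A quick sketch, assuming Exp(10)/Exp(8) denote mean inter-arrival and mean service times (that reading is an assumption; the model source defines the exact parameterization):

```python
# Baseline M/M/1 metrics for Server A alone. Assumes Exp(10)/Exp(8) are
# MEAN inter-arrival and service times, i.e. lambda = 0.1, mu = 0.125.
def mm1_metrics(mean_interarrival: float, mean_service: float) -> dict:
    lam = 1.0 / mean_interarrival    # arrival rate
    mu = 1.0 / mean_service          # service rate
    rho = lam / mu                   # utilization; must be < 1 for stability
    assert rho < 1.0, "unstable: arrivals outpace service"
    L = rho / (1.0 - rho)            # mean number in system
    Lq = rho * rho / (1.0 - rho)     # mean number waiting
    W = 1.0 / (mu - lam)             # mean time in system
    Wq = W - mean_service            # mean wait before service
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

m = mm1_metrics(10.0, 8.0)
# rho = 0.8, so L = 4, Lq = 3.2, W = 40, Wq = 32
```

With utilization 0.8, the single-server wait is substantial (Wq = 32), which is exactly the regime where enlisting B pays off.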

This isn't a named GPSS example — it's closest to TRANSFER BOTH with a state-based guard on the second destination, but the natural puck shape (shared wait list + predicate dispatch) doesn't correspond to a single GPSS block.

Model in this directory

Shop (threshold_routing.odin:52-83) holds both facilities, the shared wait list, the threshold, and a richer stats layer than the other worker-queue examples: time-series objects for queue length, in-system count, total served, and running average wait. These feed a sim.Report for site ingestion.

Why this shape

Predicate-based dispatch, not if/else in code. The routing rule is expressed as two predicates:

// A is eligible whenever it is free.
pred_prefer_a   :: proc(puck, f, ctx) -> bool { return !f.busy }
// B is eligible only when it is free AND the queue has backed up past the threshold.
pred_overflow_b :: proc(puck, f, ctx) -> bool {
    return !f.busy && sim.set_count(&ctx.wait_list) > ctx.threshold
}

sim.dispatch walks the head of the wait list and, for each waiter, tries each target in declaration order until a predicate passes. Adding a third server, or a priority-class filter, or a shift-aware gate, is one more Dispatch_Target entry and one more predicate — no edits to the dispatcher loop. Contrast with tool_crib where priority is baked into the Set's rank function; here the policy lives in the target list.

The dispatcher assigns the facility, not the customer. When sim.dispatch finds a match, it flips facility.busy = true, sets facility.owner = puck, removes the puck from the wait list, calls the user callback (to tag the customer with which server won), and reactivates the puck. When the reactivated customer reaches .Waiting, it already owns the facility — it does not re-check or re-seize. This is the same contract as the tool_crib inline dispatcher, but pushed into a reusable helper.
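The walk-and-assign contract can be sketched independently of the library. Every name below (Facility, Target, dispatch, on_assign) is illustrative, not the sim package's API; the point is the shape: predicates as data, assignment done by the dispatcher before the customer wakes.

```python
# Illustrative sketch of predicate-based dispatch. Names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Facility:
    name: str
    busy: bool = False
    owner: Optional[dict] = None

@dataclass
class Target:
    facility: Facility
    predicate: Callable[[dict, Facility, "Shop"], bool]

@dataclass
class Shop:
    wait_list: list = field(default_factory=list)
    threshold: int = 5

def dispatch(shop: Shop, targets: list, on_assign: Callable) -> int:
    """For each waiter (front to back), try each target in declaration
    order until a predicate passes; skip waiters no target will take."""
    assigned = 0
    for puck in list(shop.wait_list):          # snapshot: we mutate the list
        match = next((t for t in targets
                      if t.predicate(puck, t.facility, shop)), None)
        if match is None:
            continue
        f = match.facility
        f.busy = True            # the dispatcher assigns the facility...
        f.owner = puck           # ...so the customer owns it before it wakes
        shop.wait_list.remove(puck)
        on_assign(puck, f)       # user callback: tag which server won
        assigned += 1
    return assigned

# The two predicates from the model, in sketch form:
def prefer_a(puck, f, shop):
    return not f.busy

def overflow_b(puck, f, shop):
    return not f.busy and len(shop.wait_list) > shop.threshold
```

Adding a third server here is one more Target entry; the dispatch loop never changes, which is the extension property the prose above claims.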

c.server is stored on the customer, not inferred at release. When service completes, the In_Service branch needs to know which facility to release. Re-inferring ("which facility has me as owner?") would work but is O(n_servers). Storing the assignment at dispatch time is O(1) and reads as documentation of what happened. For two servers the difference is academic; for a server farm it starts to matter.
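The two release strategies, side by side in sketch form (dict-based customers and facilities are illustrative, not the model's types):

```python
# Release via stored assignment: O(1). The facility was recorded on the
# customer at dispatch time (the "server" key here is hypothetical).
def release_stored(customer, facilities):
    facility = customer["server"]
    facility["busy"] = False
    facility["owner"] = None
    return facility

# Release via re-inference: an O(n_servers) scan for "who owns me?".
def release_inferred(customer, facilities):
    for facility in facilities:
        if facility["owner"] is customer:
            facility["busy"] = False
            facility["owner"] = None
            return facility
    raise LookupError("customer owns no facility")
```

Both produce the same release; the stored form also reads as a record of which server actually won the dispatch.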

Richer stats than the other worker queues. This model maintains both L (area under items-in-system) and Lq (area under queue length) integrations, plus four time-series captured on every state change. That's because threshold_routing is the one example intended to feed the qsimhealth.com Report viewer — the time-series objects are ingested as charts. For other examples, counters-at-end are sufficient.
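The L and Lq integrations are ordinary time-weighted averages: on every state change, accumulate value times elapsed time, then divide by total elapsed time at the end. A minimal sketch of that accumulator (class and method names are illustrative):

```python
class TimeWeighted:
    """Area under a piecewise-constant curve, e.g. items in system (L)
    or queue length (Lq)."""
    def __init__(self, t0: float = 0.0, value: int = 0):
        self.t = t0
        self.value = value
        self.area = 0.0

    def update(self, now: float, new_value: int) -> None:
        self.area += self.value * (now - self.t)   # old value held until now
        self.t = now
        self.value = new_value

    def mean(self, now: float) -> float:
        total = self.area + self.value * (now - self.t)
        return total / now if now > 0 else 0.0

# Queue length 0 on [0,2), 3 on [2,5), 1 on [5,10):
# mean = (0*2 + 3*3 + 1*5) / 10 = 1.4
lq = TimeWeighted()
lq.update(2.0, 3)
lq.update(5.0, 1)
```

The same accumulator, fed items-in-system instead of queue length, yields L.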

Time-series uses a step-record pattern. ts_step_record (threshold_routing.odin:115-128) writes a duplicate sample at the old value and timestamp before recording the new value. This gives a piecewise-constant rendering — the step-function shape — rather than a linear interpolation between points. Essential for correctly visualizing queue length over time.
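The step-record trick in plain form: before appending the new sample, duplicate the previous value at the new timestamp, so a plotter that linearly interpolates still draws vertical risers. A sketch (ts_step_record's actual signature lives in the model source; this list-of-tuples version is illustrative):

```python
def step_record(series: list, t: float, value: float) -> None:
    """Append (t, value), first duplicating the previous value at time t
    so the series renders as a step function under linear interpolation."""
    if series:
        _, prev = series[-1]
        series.append((t, prev))   # hold the old value up to the new timestamp
    series.append((t, value))

queue_len = []
step_record(queue_len, 0.0, 0)
step_record(queue_len, 2.5, 3)
step_record(queue_len, 4.0, 1)
# queue_len == [(0.0, 0), (2.5, 0), (2.5, 3), (4.0, 3), (4.0, 1)]
```

Without the duplicate samples, a chart would show the queue "ramping" from 0 to 3 between t=0 and t=2.5, which misrepresents a quantity that is constant between events.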

Alternatives considered

Inline dispatcher (tool_crib-style)

Two if !shop.server_a.busy { ... } else if ... { ... } clauses in a hand-written dispatcher would be shorter than the Dispatch_Target array. We don't do that because this example was the motivating use case for sim.dispatch: writing it with the helper demonstrates the helper's value (predicates as data, target list open to extension). For one-facility worker queues the inline form remains the right choice; here, with two targets and a state-based guard, the helper earns its keep.

Two separate wait lists

wait_a and wait_b, with the generator picking a list at arrival based on the current queue length. This is the "static routing" variant: decision at enqueue, not at dispatch. It's simpler but wrong for this model — if Server A becomes free while customers are sitting in wait_b, they should snap back to A. Dynamic routing at dispatch time is the only way to express that cleanly, which is exactly why sim.dispatch re-evaluates on every head walk.

Threshold as a Notify_Var

If the threshold were itself modeled as a variable that could be changed mid-run (e.g., a shift-dependent policy), Notify_Var + wait_until would be a cleaner fit: B's waiters could sleep on a predicate and wake automatically on threshold change. For a fixed threshold, the predicate-in-dispatch pattern is lighter.

Collapse A and B into a two-server Storage

A sim.storage_create(capacity = 2) would model two interchangeable servers, with no per-server stats. That's the right shape for a genuinely pooled resource (multi-trunk phone system, nurses in a ward). This model distinguishes A and B — "A preferred, B overflow" is the whole point — so Storage would erase the distinction it's meant to capture.

What this example teaches

This is the reference for:

- Multi-facility sim.dispatch with predicate-based routing (a Dispatch_Target list instead of a hand-written if/else dispatcher).
- The "preferred-plus-overflow" pattern: a preferred server plus a threshold-gated overflow server sharing one wait list.
- Step-recorded time-series attached to a sim.Report for the qsimhealth.com viewer.

CLI

./threshold_routing [options]

Add --json to emit the uniform envelope (metadata, execution_stats, metrics, details) instead of the default text output.

Flag          Type    Default   Description
--end-time    float   100000    Simulation end time in seconds.
--threshold   int     5         Queue length at which Server B becomes eligible.
--runs        int     5         Number of replications (seed sweep).
--json        bool    false     Emit uniform JSON envelope instead of text.

Example runs:

./threshold_routing                                         # default text run
./threshold_routing --json                                  # uniform envelope
./threshold_routing --runs=2 --end-time=5000 --json         # short smoke, JSON
./threshold_routing --threshold=3 --json                    # tighter overflow

Output is a per-seed summary plus an aggregated table. Time-series objects are attached to a sim.Report for chart export.

Running it

odin run examples/threshold_routing
odin run examples/threshold_routing -- --threshold=3   # tighter overflow
odin run examples/threshold_routing -- --end-time=50000
odin run examples/threshold_routing -- --runs=10       # seed sweep

See also