Impatient customers leave the queue if not served within their patience window. This is the reference model for fork — spawning a second puck that shares data with the first — and for races in which two processes compete to determine an entity's outcome.
Exponential arrivals at a single barber, exponential service, plus a third random variable: per-customer patience, Uniform(2, 8) minutes. If the customer has not started service within that window, it abandons the queue (reneges) and leaves. The model counts served vs. reneged customers and reports the average wait among the served and the average time-until-abandon among the reneged.
This is one of the two SLX-style patterns that cannot be
expressed in pure GPSS without extensions — the other being
PREEMPT. GPSS transactions don't fork, and there is no clean way
to say "wait for A or B". SLX introduced fork and
wait until (C₁ or C₂) specifically for this case; odin-des
inherits both.
Two puck types, explicitly linked:

- Customer (reneging.odin:67-136) — the primary puck. Three active states (Arriving / Waiting / In_Service). Carries two boolean flags, reneged and in_service, which are read by the timeout puck.
- Timeout_Puck (reneging.odin:161-206) — the forked offspring. One-shot: sleeps patience, fires once, terminates. At fire time it either finds the customer already in service (stands down silently) or sets customer.reneged = true and reactivates the customer.

Shared state lives on the Customer struct; the Timeout_Puck holds a ^Customer pointer and reads those fields directly. No Notify_Var, no channel — the fork pattern is fields + a pointer.
Gen_Process (reneging.odin:222-254) is the
generator. Two separate pools — customer_pool and
timeout_pool — keep the heterogeneous puck types in sized-right
slabs.
Two pucks, one decision, one winner. The customer and its timeout are racing: whichever event fires first wins. The pattern:

- The customer tries sim.seize. If it succeeds immediately, no fork is needed — the customer is already winning. Fast path out.
- Barber frees first (in_service not yet true): the customer falls through .Waiting and calls begin_service, which sets in_service = true before any yield.
- Timeout fires first: the timeout puck sets reneged = true and reactivates the customer. The customer wakes in .Waiting, sees the flag, removes itself from the facility's wait set, records stats, terminates.

The timeout puck, on fire, checks in_service — if the customer has already started, the timeout is moot and stands down. This clean-stand-down path is the critical correctness property: without it, a served customer could be hit by a stale timeout and marked reneged after the fact.
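The race and the stand-down check are engine-agnostic. Here is a minimal single-threaded sketch in Python (a toy event calendar with invented names — `begin_service`, `fire_timeout` — not the odin-des API) showing that whichever handler runs first flips its flag and the loser stands down:

```python
import heapq

class Customer:
    def __init__(self):
        self.in_service = False  # set when service begins; a late timeout stands down
        self.reneged = False     # set by the timeout if it fires first

def run(service_start, patience):
    """Race a service-start event against a patience timeout on a toy calendar."""
    cust, cal = Customer(), []
    def begin_service():
        if not cust.reneged:          # a reneged customer is already gone
            cust.in_service = True
    def fire_timeout():
        if not cust.in_service:       # clean stand-down if already in service
            cust.reneged = True
    heapq.heappush(cal, (service_start, 0, begin_service))
    heapq.heappush(cal, (patience, 1, fire_timeout))
    while cal:                        # single-threaded: handlers never interleave
        _, _, handler = heapq.heappop(cal)
        handler()
    return "served" if cust.in_service else "reneged"
```

Because the loop is single-threaded, the two flag writes can never interleave — the same property the engine guarantees between puck ticks.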
Shared fields, not channels. The reneged and in_service
flags live on the Customer struct and are read/written across
two pucks without synchronization. This is safe because the
engine is single-threaded: between any two puck ticks there is no
interleaving. The flags work as plain data; fork does not need
locks, queues, or signals.
sim.seize is used here, unlike in the barbershop. The customer goes into the facility's internal wait set (reneging.odin:92). The reneging logic needs to remove a suspended puck from that set on timeout (reneging.odin:105) — sim.set_remove on barber.waiters directly. A model-owned wait list (barbershop style) would work equally well, but the example is short enough that reaching into the facility's waiters is both acceptable and illustrative of when such access is legitimate: when you're modeling leaving the queue, you have to be able to remove yourself from it, and the facility's set is the canonical place to do that.
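A toy Python sketch (illustrative names only, not the library's facility type) of why modeling "leave the queue" requires removing yourself from the set the facility owns:

```python
class Facility:
    """Single-server facility with an internal FIFO wait set (cf. barber.waiters)."""
    def __init__(self):
        self.busy = False
        self.waiters = []
    def seize(self, who):
        if not self.busy:
            self.busy = True
            return True            # fast path: service starts immediately
        self.waiters.append(who)   # suspended in the facility's wait set
        return False
    def release(self):
        if self.waiters:
            return self.waiters.pop(0)  # hand the server to the next waiter
        self.busy = False
        return None

def renege(facility, who):
    # leaving the queue = removing yourself from the facility's canonical set;
    # without this access, release() would later hand the server to a ghost
    facility.waiters.remove(who)
```

If "b" reneges while "a" is in service, release() must serve "c" next — which only works because the renege reached into the facility's own wait set.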
The timeout is a separate puck, not a scheduled event. One could imagine adding a "deadline" field to the customer puck and having the engine fire a special callback at that time. We don't — making the timeout a real puck means it is scheduled through engine.processes and lives on the heap like any other work.

Patience is currently deterministic-pseudo-random. The model
computes patience as 2.0 + 6.0 * f64(id % 100) / 100.0
(reneging.odin:176) — a note-to-self placeholder
for a proper Uniform(2, 8) draw from an sdidb stream. This is
the kind of detail per-example READMEs are good at flagging: the
code works and the model demonstrates the pattern, but the
RNG hygiene isn't production-grade. A one-line swap to
db.d_uniform(&patience_rng, 5.0, 3.0) would clean it up and
match the RNG discipline of the other examples.
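For comparison, a quick Python check of the placeholder against a genuine Uniform(2, 8) draw (using Python's stdlib RNG here, not sdidb):

```python
import random

def patience_placeholder(cust_id):
    # the model's deterministic placeholder: cycles through [2.0, 7.94] with period 100
    return 2.0 + 6.0 * (cust_id % 100) / 100.0

def patience_uniform(rng):
    # a genuine Uniform(2, 8) draw — what the one-line swap would produce
    return rng.uniform(2.0, 8.0)
```

Both stay inside the [2, 8) window, but the placeholder is periodic and correlated with customer id, which is exactly the RNG-hygiene concern the note flags.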
sim.wait_until with a compound predicate

SLX writes reneging as:
wait until (barber available or reneged);
We could mirror that with a Notify_Var on barber availability
and another on reneged-flag, and a sim.wait_until predicate
over both. It works, but the Notify_Var/predicate machinery is
heavier than the fork pattern for a one-shot race: the predicate
allocator runs, the Waiter registers on two vars, and on signal
the waiter chain is walked. Fork-with-flags is one advance, one
reactivate, and two field reads. For this model, fork is the
cleaner hammer.
The wait_until variant becomes the right shape when the
predicate is naturally compound — "queue non-empty AND shift
active AND not on break" — because then you'd otherwise fork N
timeouts, one per condition, and have to join them manually.
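The compound shape can be sketched in Python (a toy one-shot waiter over a state dict; none of these names are the Notify_Var API):

```python
def make_waiter(predicate):
    """One-shot waiter: re-evaluate a compound predicate whenever a watched var changes."""
    fired = []
    def notify(state):
        if not fired and predicate(state):
            fired.append(True)   # wake exactly once, then stay woken
        return bool(fired)
    return notify

# "queue non-empty AND shift active AND not on break" in one predicate —
# no per-condition timeout pucks to fork and join
ready = make_waiter(lambda s: s["queue"] > 0 and s["shift"] and not s["on_break"])
```

One predicate covers all three conditions; the fork equivalent would need a puck per condition plus a manual join.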
We could forgo the fork and have the customer advance(patience)
after joining the wait list, then check on wake-up whether it
landed in service or timed out. This is almost right and fails
subtly: if the barber frees while the customer is mid-advance,
the release mechanism would need to cancel the advance and wake
early. eng.advance has no cancel; the engine would deliver a
stale tick at arrival_time + patience regardless. Fork is the
way to get a cancelable timeout.
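The stale-tick failure is easy to demonstrate on a toy calendar. A Python sketch (hypothetical names, not eng.advance itself) in which the patience wake-up is still delivered even though service started earlier:

```python
import heapq

def run_no_fork(service_start, patience):
    """Customer sleeps `patience` with no cancel; record every wake-up delivered."""
    cal, wakeups, seq = [], [], 0
    def schedule(t, label):
        nonlocal seq
        heapq.heappush(cal, (t, seq, label))
        seq += 1
    schedule(patience, "patience tick")       # advance(patience): no way to cancel
    schedule(service_start, "service starts") # barber frees mid-advance
    while cal:
        t, _, label = heapq.heappop(cal)
        wakeups.append((t, label))            # the stale tick arrives regardless
    return wakeups
```

Service starts at t = 3, yet the patience tick at t = 5 is still delivered; the wake-up handler would have to detect and discard it, on the wrong puck, at the wrong time.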
The engine exposes a calendar — raw scheduled events with a
handler proc, no puck involved. A reneging timeout could be a
calendar event: cheaper allocation, no per-timeout pool.
engine/calendar.odin supports this. We don't use it here because
the example is deliberately showing Henriksen's fork vocabulary —
fork_timeout is pedagogically the right name and shape. A
production variant aiming for millions of short-lived timeouts
might well use the calendar directly.
If reneging thresholds varied by customer class (VIP waits
longer, walk-ins leave faster), patience would be drawn from a
class-keyed distribution. One Random_Spec per class, stored on
the shared Shop struct, is all that's needed. The fork pattern
itself doesn't change.
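Sketched in Python (hypothetical class names and ranges), class-keyed patience is one lookup plus one draw:

```python
import random

# one (lo, hi) spec per customer class, analogous to one Random_Spec each
PATIENCE_SPEC = {
    "vip":     (6.0, 12.0),  # VIPs wait longer
    "regular": (2.0, 8.0),
    "walk_in": (0.5, 3.0),   # walk-ins leave fastest
}

def draw_patience(rng, customer_class):
    lo, hi = PATIENCE_SPEC[customer_class]
    return rng.uniform(lo, hi)
```

The fork, flags, and stand-down logic are untouched; only the source of the patience value changes.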
This is the reference for:

- the fork pattern — a second puck sharing data with the first
- the in_service / reneged flag protocol that disambiguates who won the race
- heterogeneous pools (customer_pool + timeout_pool) sized-right per type
- sim.seize (and reaching into barber.waiters) vs the model-owned wait list pattern

./reneging [options]
Add --json to emit the uniform envelope (metadata, execution_stats,
metrics, details) instead of the default text output.
| Flag | Type | Default | Description |
|---|---|---|---|
| --json | bool | false | Emit uniform JSON envelope instead of text. |
Example runs:
./reneging # default text run
./reneging --json # uniform envelope
Inter-arrival and service distributions come from
examples-db-text-files/barbershop.txt; edit that file to vary
them. Patience is hardcoded — see the note above.
odin run examples/reneging
Default output is a per-event trace (verbose by design for pedagogy) and a summary block: served %, reneged %, average time in system for each cohort.
See also:

- wait_until as the alternative compound-condition shape
- coffee_shop — uses wait_until for a compound hold; contrast with fork-with-flags here
- docs/wait-until-and-helpers.md — the full technical reference on Notify_Var and wait_until compound waits