# Machine World — full content (llms-full.txt) Concatenated user-facing content for LLM ingestion. Each section corresponds to one page on https://machineworld.io/machine-world/. Source-of-truth lives in the engine repo at https://github.com/dinukxx/Machineworld; this file is the build-time concatenation generated by the mirror pipeline. Sections: 16 pages, in reading-path order. Curated entry list: see /llms.txt (with one-sentence summaries per page). ============================================================================== ### SECTION: /machine-world/caregiver ### SOURCE_URL: https://machineworld.io/machine-world/caregiver ### RAW_MARKDOWN: https://machineworld.io/mw-content/caregiver.md # Caregiver across continents *A programmable process scenario — 1 of 15* > *Machine World holds the operational weight of your mother's life, > thirteen thousand kilometres away, > so you can live yours — and still be there for her.* --- **Anjali is 41. Toronto. Her mother Padma is 73, in Colombo, on her own.** Three siblings, two languages, ten-and-a-half hours between them. ## What Anjali sees on a Tuesday morning, 6:35 a.m. ``` Padma — overnight ✓ Morning BP tablet taken 7:42 SLT (helper voice-noted in Sinhala — recorded; not shared) ! Mobile credit low — top up 1,500 LKR? [approve] · Thursday cardiology booked. Driver Sanath confirmed. Family invites sent. · WhatsApp thread triaged. One real concern (gas bill); your brother is looking on Saturday. No action from you. ⓘ Two overlapping subscriptions found — both unused in 6 weeks. [review] ``` Five lines. Thirty seconds. **No app opened. No tab juggled. 
The day is hers.**

---

## What Machine World holds for her

- **The medications**, in Sinhala, through the helper's voice
- **The appointments**, across two continents, three siblings, one doctor, one driver
- **The household watching** — bills changing, credit running low, an overdue payment, a deposit that didn't arrive
- **The family thread**, surfacing real concerns, dampening noise
- **The decisions waiting** — proposed clearly, decided by Anjali, never auto-applied

It is one calm interface where, before, there were nine.

---

## What stays Anjali's, always

- Her mother's data lives on a machine she controls. Audio of Padma's voice is never shared beyond the household.
- Every consequential action — paying a bill, cancelling a service, contacting the doctor — waits for Anjali's tap.
- The system can be walked away from at any time. The household directory is hers — to copy, to encrypt, to delete.

The vinaya invariant binds the *system*, never the human. The system serves. It does not gatekeep.

---

## How the work that runs this gets paid for

Machine World runs on **prepaid tokens** — you buy a pack the way you'd add credit to any service you trust. Every action's token cost is shown before it runs.

When a skill you've installed runs, **15% of its tokens go to the person who made it — forever.** When MW orchestrates someone's work — the helper, the bookkeeper, the driver — that person chooses, per job, how they're paid: tokens credited to their own MW wallet, or cash through standard payment rails. They set their own rate; you see it before you commit. The platform takes a transparent, capped operating fee, visible on every transaction.

Right now, the local CLI is free and remains so.

→ [How the token economy works](../../values/economy.md)

---

## Why this matters

The system holds the world so the mind can rest.

**Citta viveka** (චිත්ත විවේකය) — mental seclusion — has always been the point.
Anjali at 6:35, her mother's tablet taken at 7:42, the inbox quiet, the day intact: these are the small measurements of a life unburdened. --- → [Read the deep version](/machine-world/caregiver-deep-dive) — the friction in detail, who else is in the Process, the honest map of what works today and what's still roadmap, and a paragraph in Sinhala for those who carry this work in their first language. ============================================================================== ### SECTION: /machine-world/caregiver-deep-dive ### SOURCE_URL: https://machineworld.io/machine-world/caregiver-deep-dive ### RAW_MARKDOWN: https://machineworld.io/mw-content/caregiver-deep-dive.md # Caregiver across continents — Anjali's story *A programmable process scenario — deep version of [the summary](./caregiver.md)* > *Anjali is 41. She lives in Toronto and runs a small product team at a software company. Her mother **Padma** is 73, lives alone in a quiet street in Colombo, speaks Sinhala fluently and a little English, and uses a basic Android phone she has had for six years. Anjali has two siblings — one in Sydney, one in Galle. The family is close. Padma is fine, mostly. The work of keeping that mostly true sits, by default, on Anjali.* --- ## The friction without MW Tuesday morning in Toronto. Anjali stands in her kitchen with her coffee. The list runs in her head before she has finished the first sip: - Did Padma take her morning blood-pressure tablet? Last time the helper forgot the reminder, the next clinic reading was high. - The cardiologist appointment next Thursday at 10:30 SLT — is a driver booked? The helper has a sick day in the calendar. Should her brother in Galle drive up? - Padma's mobile credit is at 110 LKR. Last time it lapsed, no one could reach her for two days. Anjali keeps meaning to set up auto-top-up. - Padma said in the family WhatsApp last night, "*the gas man's bill seems different this month.*" What did that mean? Was she confused? Did she pay it anyway? 
- Anjali pays for two overseas caregiving apps that overlap. She has been meaning to audit them for four months. Together they cost roughly the price of Padma's monthly utilities.
- Her two siblings ask, in well-meaning rotation: *"how's mum?"* Anjali doesn't have a clean answer — because there isn't one place where everything lives.

The cognitive load isn't any single item. It's the **holding** — keeping all of this present in working memory across two continents, three siblings, two languages, knowing that any one ball dropped has a real-world cost to a 73-year-old woman living alone.

This is what most caregivers carry. It is largely invisible to the people around them. It does not show up on a productivity dashboard.

---

## A morning with MW

Anjali wakes at 6:30, opens `mw` on her laptop. The TUI shows a short, calm summary in mixed Sinhala and English:

```
Padma — overnight
  ✓ Morning BP tablet taken 7:42 SLT (helper voice-noted in Sinhala)
  ! Mobile credit at 110 LKR (below your 200 threshold).
    Top-up drafted; waiting for your approval below.
  · Thursday cardiology: 10:30 SLT confirmed. Driver Sanath booked.
    Calendar invites sent to your brother + helper.
  · Gas bill question — looked at the last three months. This month is
    +12% but matches the seasonal pattern of the past two years. Your
    brother will look at the meter on his Saturday visit. No action
    needed from you.

Waiting for your decision
  1. Top up Padma's mobile — confirm $4.50 / 1,500 LKR
     [approve · decline · change threshold]
  2. Family update digest for siblings — drafted in your style, ready.
     [review · send now · hold]
  3. Two overlapping subscriptions found — CareSync ($14.99/mo, used 0
     times in 6 weeks) and FamilyCheck ($9.99/mo, used 0 times in 3 weeks).
     [keep both · cancel one · cancel both · read details]
```

She approves the top-up, sends the digest, taps **read details** on the subscriptions. By 6:35 she is done. The morning did not start with four browser tabs and three apps.
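Underneath a line like the mobile-credit item, the shape is always the same: a watcher notices a threshold crossing, sizes a sensible action, and drafts a proposal; the spend itself waits for a human tap. A minimal sketch of that shape in Python, where every name (`Proposal`, `check_mobile_credit`) is invented for illustration rather than taken from MW's actual API:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Proposal:
    """A drafted action waiting for an explicit human decision."""
    description: str
    amount_lkr: int
    options: Tuple[str, ...]

def check_mobile_credit(balance_lkr: int, threshold_lkr: int,
                        recent_monthly_usage_lkr: List[int]) -> Optional[Proposal]:
    """Watch a balance against a household-set threshold.

    Returns a Proposal for the decider to approve or decline.
    It never executes a payment itself.
    """
    if balance_lkr >= threshold_lkr:
        return None  # nothing to surface
    # Size the top-up from recent usage, rounded up to a 500 LKR step.
    avg = sum(recent_monthly_usage_lkr) / len(recent_monthly_usage_lkr)
    top_up = int(-(-avg // 500) * 500)  # ceiling to the nearest 500
    return Proposal(
        description=f"Mobile credit at {balance_lkr} LKR (below {threshold_lkr})",
        amount_lkr=top_up,
        options=("approve", "decline", "change threshold"),
    )
```

The point of the sketch is the return type: the function can only ever hand back a proposal or nothing. There is no code path that moves money.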
--- ## What MW is orchestrating, in plain language | What MW does | Where the human stays in the loop | |---|---| | Daily medication reminder voice-call to Padma in Sinhala, routed through the helper's phone | The helper voice-notes confirmation. If a dose is missed twice in a row, MW escalates to Anjali — never silently. | | Calendar reconciliation across Anjali, Padma, helper, brother, and the doctor's office | Anjali sees the merged view. Each sibling sees what Anjali has scoped to them. Nothing posted externally without approval. | | Mobile credit and utility bills watched at thresholds Anjali set | MW *proposes* a top-up or surfaces an anomaly. The act of approving stays Anjali's. The system never auto-spends. | | A bilingual family update digest, drafted in Anjali's writing style | MW drafts. Anjali sends. Never sends without a tap. | | Subscription audit across her credit-card statements | MW surfaces overlapping and unused services. If Anjali says cancel, MW drafts the cancellation; the final click is hers. | | Family WhatsApp triage — surfaces the real concerns, lets the rest stay quiet | She sees the structured worries first. The raw thread is one tap away whenever she wants it. | There are no skill names here, no product jargon. Internally these are individual MW skills, each versioned, each with its capability-owners attributed in `SKILL.md`. On every invocation, 15% of the tokens spent flow into the skill's capability pool and route to the people who own the capabilities the invocation actually exercised — when Anjali is on a paid tier. What Anjali sees is one calm morning briefing. --- ## Who else is in this Process Machine World's actor contract treats every contributor — human, digital, physical — as a first-class participant with declared capabilities and declared limits: - **Anjali** (Toronto) — primary decider. All consequential actions route through her. - **Padma** (Colombo) — the person being supported. Her preferences are first-class data. 
Her dignity is a hard constraint, not a soft preference. - **Two siblings** (Sydney, Galle) — scoped collaborators. They see what Anjali shares. They contribute what they choose. - **The helper** (Colombo, Mon/Wed/Fri 8–14 SLT, Sinhala + English, can lift up to 15 kg, voice-note only — does not use apps). - **Sanath the driver** — declared rate, declared availability, contact graph entry. - **The cardiologist's office** — calendar integration when their system supports it, structured-email fallback when it doesn't. - **Padma's bank** — read-only statement access via the local Open Banking equivalent, when that MCP ships. When any one of these can't meet a Process need — the helper sick, the driver not answering, the bank API down — **MW surfaces the gap.** It does not paper over it. It does not silently retry forever. It says: *the helper is out today, the driver isn't responding, the appointment is in 4 hours, here are your options.* This is the system's promise about a messy real world: humans need sleep, helpers have other lives, APIs go down. MW's job is not to pretend otherwise. It is to make the constraint visible so the human can plan around it. --- ## What MW deliberately does *not* do The vinaya invariant binds the *system*, not Padma's life. - MW does not decide which medications Padma should take. Her doctor does. - MW does not refuse to surface a bill Anjali might find stressful. Her judgment about her mother's finances is hers. - MW does not surveil Padma. Padma's voice and location stay on her device. When the edge-voice container ships, audio never crosses the network at all. - MW does not score Anjali on caregiver performance. There is no streak, no leaderboard, no engagement metric. The goal is *less of Anjali's attention*, not more. - MW does not auto-cancel subscriptions, auto-pay bills, or auto-book appointments. It drafts. Anjali approves. The act of clicking *send* or *pay* remains hers. 
If Anjali walks away from MW for a month, nothing collapses. Padma's life is her life with or without the system. --- ## What works today, what's partial, what's roadmap Honest mapping. The vinaya is also about not lying about what we have built. | Capability | Status (2026-05) | |---|---| | Calendar reconciliation across Anjali, helper, brother, doctor's office | **Today** — via the calendar MCP + a calendar-merge skill | | Family chat triage and bilingual digest in Anjali's voice | **Today** — via comms MCPs + the LLM backend Anjali chose at first run | | Subscription audit across card statements | **Partly** — depends on which banking MCP is available in Anjali's jurisdiction. UK / US first; Sri Lanka next. | | Medication reminder via the helper's voice in Sinhala | **Partly** — works today through WhatsApp voice; on-device Sinhala STT/TTS on Padma's phone is roadmap | | Threshold-based mobile top-up | **Roadmap** — needs a payment MCP per jurisdiction | | Auto-rotation when the helper is sick | **Today** — via the scheduler + workflow definitions | | Read-only bank statement access for bill watching | **Roadmap** — needs an Open Banking MCP (LK + UK + US scoped) | | Multi-device household state across Toronto + Colombo + sibling locations | **Today** — via opt-in remote sync (or remains local-only if Anjali prefers) | Some of this works end-to-end today. Some of it depends on MCPs that haven't shipped. The page tells you which is which. --- ## In plain words — what this is, told simply Anjali is in Toronto. Her mum is in Colombo. Thirteen thousand kilometres apart, with ten and a half hours of time difference between them. But the things her mum needs — taking her tablet at the right time, getting to the doctor, the bills at the end of the month, hearing from family — those have to happen every day, regardless of where Anjali is. Here is what Machine World does about that. It brings every moving part of one day into one place. The decisions stay with Anjali. 
The data stays with Anjali and her mum. The system always asks before acting — it does not go off and do things on its own. And her mum's voice stays on her mum's own phone; it does not travel anywhere it doesn't need to.

The discipline that keeps the system from drifting away from any of this comes from **Marga Sakacchā**, the Dhamma-practice community the system grew out of. The word for it is *vinaya*. In practice it means a simple thing: the system does not try to behave like a community, a family, or a person on its own. It joins them.

> *A Sinhala re-expression of this section will be added once a native-speaker reviewer who holds both the language and the meaning has worked through it. The English above is the source.*

---

## What Anjali gets back

We don't have polished numbers from large-scale usage yet. What Anjali might see, drawn from caregiver-tooling research and plausible household assumptions:

- **Time reclaimed:** roughly 2–3 hours per week of operational holding. Not a currency figure, but the deepest one.
- **Duplicate or unused services found:** typically two or three caregiver-adjacent apps that had lapsed into the background, surfaced by the audit in the first weeks. The price varies by country; the relief is the same.
- **Unscheduled clinic visits avoided:** typically one or two per year in this age group when medication adherence is reliable. Modest in money, large in worry.
- **Family tension:** harder to measure, anecdotally significant. The siblings ask *"how's mum?"* less often because they share one calm picture.

---

## How the work that runs this gets paid for

Machine World runs on **prepaid tokens** — Anjali buys a pack the way she'd add credit to any service she trusts. Every action's token cost is shown before it runs, so there are no surprises. The model in plain shape:

> **Buying.** Anjali buys a token pack with fiat. Pack pricing is published, regionally adjusted where it sensibly can be.
Tokens don't expire; unused tokens are refundable within a stated window. > > **Spending.** Each skill publishes its token cost upfront. When it runs, the cost is deducted. **15% goes to the skill's capability pool — forever — and routes to the people whose capabilities the invocation called.** The remainder covers LLM inference, infrastructure, and Machine World's transparent operating fee. > > **Paying humans.** When MW orchestrates a person's work — Padma's helper, a bookkeeper, a verifier, the driver — that person chooses, **per job**, how they're paid: tokens credited to their MW wallet, or cash through their standard payment rail. They set their own rate; the household sees it before committing. MW takes the same small, transparent fee either way. > > **What the platform earns.** The MW operating fee on every transaction is published in the ledger and aimed at 10% or less of the token-pack price. The platform has real operating costs — infrastructure, LLM passthrough, audit, support — and as those costs shift, the fee may need to shift with them. When it does, we publish the change, the reason, and the new number ahead of when it applies. The aim is not to never change the fee. It is to never hide what it is. What this isn't: a closed-loop currency that traps earnings, a financial instrument, or an investment vehicle. Tokens are prepaid usage credits. People who earn cash for their work get cash through standard rails. People who choose tokens can spend them within the system — at their own pace — on services their household needs. **Today, the local CLI is free and remains so.** When the managed tier ships, every rule — including the operating-fee cap — will be visible before anyone commits. → [The full economy design + open questions](../../values/economy.md) --- ## What stays Anjali's, always - Her mother's data lives on machines Anjali controls. Nothing leaves unless she has explicitly opted into remote sync. 
- The decision-rights for her mother's life stay with her and her mother. MW proposes, surfaces, drafts, reminds. The human decides. - The right to walk away: `~/.machineworld/households//` is a directory on her machine. She can copy it, version it, encrypt it, share it with her sister, or delete it. Nothing is held hostage. The system clings to one thing — the vinaya invariant binding its own outputs. Everything else is impermanent, including the system itself. --- ## Why this matters Anjali is one person carrying a load that, in earlier generations, would have been shared across the household *in person*. The diaspora reorganised that. The market filled the gap with apps that capture attention, sell anxiety, or hold the family's data hostage. The Buddhist analysis says these patterns amplify *lobha* (craving), *dosa* (aversion), and *moha* (delusion). The Buddhist analysis is right; the patterns hurt the people they claim to help. Machine World is built to do this work *without* amplifying any of those three. Not because we're optimising for a metric. Because the system was designed to refuse them at the architecture layer — and the only thing it clings to is that refusal. > *No output, action, skill, or agent behaviour may give rise to lobha, dosa, or moha. This cannot be overridden by any skill, agent, user, model, or instruction.* Anjali's morning at 6:35, the silence of an inbox handled, the knowledge that her mother's BP tablet was taken at 7:42 — these are the small measurements of *citta viveka* (චිත්ත විවේකය, mental seclusion). They were always the point. 
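The token split described in the economy section above can be made concrete with small arithmetic. A hedged sketch: the function name is invented, the 10% operating fee is the published aim rather than a fixed number, and charging it per invocation (rather than against the token-pack price) is a simplification for illustration:

```python
def split_tokens(tokens_spent: int, fee_rate: float = 0.10):
    """Illustrative split of one invocation's token cost.

    The 15% capability-pool share is the stated rule; the fee rate
    and the single residual bucket are simplifying assumptions.
    """
    capability_pool = round(tokens_spent * 0.15)   # to the skill's owners, forever
    operating_fee = round(tokens_spent * fee_rate)  # published, capped platform fee
    inference_and_infra = tokens_spent - capability_pool - operating_fee
    return capability_pool, operating_fee, inference_and_infra
```

On a 1,000-token invocation this yields 150 tokens to the capability pool, 100 to the operating fee, and 750 for inference and infrastructure; the ledger would show each part.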
--- ## Read next - [← Back to the one-page summary](./caregiver.md) — the A4 version of this story - [Household financial intelligence](./household-finance.md) — when MW watches the bank statements - [Saman's restaurant](./small-business-restaurant.md) — multi-human + physical-agent coordination - [Mutual-aid kitchen](./mutual-aid-kitchen.md) — civic-scale orchestration - [The token economy](../economy.md) — how MW sustains itself without extraction --- *Last updated 2026-05-11. This scenario doubles as a validation scenario for the self-evaluation-loop gauntlet — the shape of every claim above is something we can validate, fail, or correct in a measurable run.* ============================================================================== ### SECTION: /machine-world/orchestrates-a-real-day ### SOURCE_URL: https://machineworld.io/machine-world/orchestrates-a-real-day ### RAW_MARKDOWN: https://machineworld.io/mw-content/orchestrates-a-real-day.md # How MW orchestrates a real day *Inside a programmable Process — what's happening underneath the calm interface* --- ## The big idea, in one paragraph Machine World is **not** an AI assistant. It is the **operating layer** where you, your skills, your devices, and the people in your life cooperate on the actual work of running a household, a small business, a community kitchen, or a caregiving relationship across continents. The value is **not** that an AI does work for you. The value is that *digital agents, physical agents, and humans all participate in one Process, with clear contracts at every boundary, with you in the loop on anything consequential, and with a complete audit trail of who did what and when.* Most AI products are single-actor: you ask, it answers. MW is **multi-actor by design**. That is the whole shift. --- ## What a Process is, in plain language A **Process** is a multi-step coordination flow across actors. Picture the score for a small ensemble. 
Each musician has a part they can play (their *capability*). Each part has a window when it gets played (their *availability*). Each part fits into the larger piece (the *Process*). The conductor never plays an instrument — she coordinates who plays what, when, and how the parts fit. The audience hears one thing. The musicians did one thing together. MW is the conductor. The instruments are the **actors**: - **Digital agents** — skills, model calls, MCP servers (small purpose-built programs that expose tools, like *send a calendar invite*, *read a Sinhala voice note*, *fetch a bank statement*) - **Physical agents** — smart thermostats, robot vacuums, IR-controlled lights, medical sensors, smart locks, voice-only phones with limited apps - **Human actors** — you, your family, a helper, a bookkeeper, a driver, a doctor's receptionist, the people whose work you're coordinating Every actor declares what they can do, when they're available, what they charge, what they need from the orchestrator before they act. The Process executes against those declarations. If an actor can't do their part — helper is sick, robot vacuum is charging, API is rate-limited, you're asleep — MW *surfaces the gap*. It does not paper over. The audience — the household whose life is being held — hears one thing: a calm summary at 6:35 in the morning, with three things waiting for their decision. The actors did one thing together. --- ## Anjali's morning, by actor moment Let's open the lid on what's happening underneath the calm five-line summary from the [caregiver scenario](../scenarios/caregiver.md). Each line is the surface of a multi-actor coordination. ### 7:42 SLT — *"BP tablet taken (helper voice-noted in Sinhala)"* What you see: a checkmark. 
What happened underneath: | Step | Actor | What they did | |---|---|---| | Trigger | A scheduled **routine** (a Process step on a clock) | Fired at 7:30 SLT, the household's medication window | | Reminder routed | A comms **MCP** | Sent a WhatsApp voice note to the helper's phone, in Sinhala, with Padma's name and the tablet to take | | Helper acts | The **human actor** (the helper) | Reminded Padma in person, watched her take the tablet, voice-noted *"ඇය බෙත් ටික ගත්තා (aeya beth tika gaththa — she took the medicine)"* | | Voice note received | A speech-to-text **skill** | Transcribed the helper's Sinhala voice note locally; confirmed the medication identifier | | Ledger updated | The household **ledger** (your local data layer) | Appended a verified medication event | | Surfaced upstream | A digest **skill** | Included the event in the next morning's summary for Anjali | What stayed Anjali's: she didn't need to ask. If the helper *hadn't* voice-noted within 90 minutes of the window, the digest would have surfaced *"medication not confirmed"* and an escalation routine would have routed it to her at the urgency level she had chosen for medication misses. ### *"Mobile credit low — top up 1,500 LKR?"* What you see: one decision waiting for one tap. What happened underneath: | Step | Actor | What they did | |---|---|---| | Trigger | A monitoring **skill** with a household-defined threshold (200 LKR) | Polled the mobile carrier API hourly; saw 110 LKR | | Diagnosis | The same skill | Confirmed credit was below threshold, computed a sensible top-up amount based on past three months' usage | | Proposal | A payment **skill** (using a payment MCP) | Drafted the transaction; did *not* execute | | Surfaced | The TUI | Presented the option to Anjali with one tap to approve, one to decline, one to change the threshold | The skills *propose*. The payment is *not made* until Anjali approves. 
**MW never auto-spends.** This is not a setting; it is the structural promise of the **trust gradient**: for anything irreversible (paying money, cancelling a service, contacting a doctor), MW waits. ### *"Thursday cardiology booked, driver Sanath confirmed"* What you see: a confirmation. What happened underneath, in coordination across **five actors**: | Actor | What they did | |---|---| | Calendar **MCP** | Reconciled Anjali's calendar, the helper's working days, Padma's prior commitments, and the doctor's published availability | | Scheduling **skill** | Picked the slot, checked it against the helper's sick day, surfaced that the helper wouldn't be present | | Anjali (the night before) | Decided her brother would attend instead — said this to MW | | Driver **MCP** (contact graph) | Looked up Sanath in Anjali's trusted-driver list, sent a booking request | | Sanath (**human actor**) | Replied confirming; rate auto-included | | Calendar invites | Sent to Anjali, brother, helper (for awareness), Padma's phone (in Sinhala) | Five actors. Two appointments coordinated. One line of summary. **No app opened.** ### *"WhatsApp thread triaged — one real concern; your brother is looking on Saturday"* What you see: relief. What happened underneath: | Step | Actor | What they did | |---|---|---| | Read | A messaging **skill** | Read the last 24 hours of the family WhatsApp thread (with your consent; threads are local) | | Classify | A **moha-aware** skill | Filtered chitchat, surfaced concrete concerns. Found one ambiguous remark from Padma about the gas bill | | Context-gather | A history **skill** | Pulled the last three months of gas bills; computed seasonal variance | | Compose | A response **skill** | Drafted a clear-eyed assessment: bill is +12% but matches the seasonal pattern; no anomaly | | Delegate | The household's **routing rules** | Decided this needed a physical check, not a phone call. 
Asked Anjali's brother (a **human actor**, scoped to physical-world tasks in Sri Lanka) to look at the meter on his planned Saturday visit | | Reflect back to Anjali | The digest | One line. No action from her. | The investigation isn't automated. It's *delegated to the human actor with the right context*. Anjali's brother is part of the Process — not a passive recipient of an alert. The skill that triages the thread is small; the human relationship that responds to it is large. **MW orchestrates; it does not replace.** ### *"Two overlapping subscriptions found — review"* What you see: an audit, ready when you have a minute. What happened underneath: a banking **MCP** with read-only access to Anjali's statements; an audit **skill** that compares charges against an evolving picture of what she actually uses; a surfacing routine that batches non-urgent findings into the morning digest, never pushes mid-day. **The skill drafts the cancellation. The act of clicking *send* remains Anjali's.** --- ## What "programmable" means here You'll have noticed: Anjali didn't write code. So in what sense is this *programmable*? She declared, over the first few days of living with MW: - **Triggers** — *"morning BP medication window starts at 7:30 SLT, alert if not confirmed within 90 minutes"* - **Thresholds** — *"top up mobile credit when it drops below 200 LKR"*; *"alert me if a bill is more than 15% over the seasonal mean"* - **Escalation rules** — *"missed medication → me, urgency level 2; helper absent on appointment day → suggest brother, ask me to decide"* - **Scope** — *"my brother sees physical-world tasks in Sri Lanka; my sister sees family-chat digests; neither sees Padma's medical detail unless I share a specific item"* The skills + MCPs underneath those declarations are the actual code. They are written and maintained by **skill builders** across the MW community, not by Anjali. 
She installs the medication-adherence skill the way you install an app — except that every time the skill runs, **15% of its tokens flow into the skill's capability pool and route to the people whose capabilities the invocation called. Forever.** So *programmable* doesn't mean *you have to be a developer*. It means *the household can shape the Process from declarations alone*; the engineering layer that makes those declarations executable was authored by others, who are paid for the lifetime of the work's usefulness. (See [How MW pays for itself](../../values/economy.md) for the economic shape of this.) --- ## Why this is different from an AI assistant Most AI products today are single-actor by design. You type, the model replies. The interaction is between **one human and one model**. The model has no contract with anyone else; it doesn't share work with your devices, your helpers, or your family; it doesn't wait for you before acting on irreversibles; it doesn't expose what it did or why. MW inverts that: - **Multi-actor.** Every Process has multiple actors — at least one human, one or more digital agents, and (increasingly) physical agents. No participant carries the whole load. - **Contracted.** Every actor — including model calls — declares capabilities, constraints, rates, and availability up front. Nothing is implicit. Nothing is privileged. - **Auditable.** Every action lands in a local ledger. You can read it. You can copy it. You can show it to your sister. - **Interruptible.** Any Process can be paused, redirected, or stopped at any human checkpoint. - **Honest under failure.** When an actor can't do its part — helper sick, API down, model unsure — MW surfaces the gap rather than smoothing over it. - **Vinaya-bound.** The system clings to one thing: *no output may give rise to lobha (craving), dosa (aversion), or moha (delusion)*. Everything else is impermanent — including the system itself. (See [The vinaya invariant](../../VIVEKA-AND-AUTONOMY.md).) 
The first six properties are what make MW software you can trust with the operational weight of your mother's life. The seventh is what keeps it that way over time. --- ## Where to go from here This article is the centerpiece. The other explainers each deepen one part of it: - [**Digital, physical, human — one contract**](./digital-physical-human.md) — the actor contract made fully concrete - [**The trust gradient**](./trust-gradient.md) — how MW asks before acting on anything consequential - [**Where your data lives**](./where-your-data-lives.md) — household-scoped state, opt-in remote sync, the right to walk away - [**When things break**](./when-things-break.md) — honest about gaps; the system that surfaces, never papers over - [**How MW pays for itself**](../../values/economy.md) — the prepaid-token + worker-choice economy And for builders / skill authors who want the technical formalism: - [The Process pattern](../../PROCESS-PATTERN.md) — the 9-station structure (snapshot → generator → driver → monitors → synthesizer → verdict → calibration → plan-feeder → replay) that every Process composes - [Viveka and autonomy](../../VIVEKA-AND-AUTONOMY.md) — the architecture of stillness without paternalism - [Intent calibration and the skill economy](../../INTENT-CALIBRATION-AND-SKILL-ECONOMY.md) — how a need becomes a skill becomes permanent income --- ## A small honesty note Some of the steps above run end-to-end today (calendar reconciliation, family-chat triage in Sinhala+English, bilingual digests, threshold-based watching, scheduler routines). Some depend on MCPs that haven't shipped yet (mobile-credit top-up needs a payment MCP per jurisdiction; Sinhala voice transcription on Padma's phone needs the edge-voice container). Each scenario page tells you which is which. The vinaya is also about not over-claiming what we have built. *Last updated 2026-05-11. 
This article is part of the **How MW does the work** explainer set — what's happening underneath the calm interface.* ============================================================================== ### SECTION: /machine-world/digital-physical-human ### SOURCE_URL: https://machineworld.io/machine-world/digital-physical-human ### RAW_MARKDOWN: https://machineworld.io/mw-content/digital-physical-human.md # Digital, physical, human — one contract *Why a skill, a robot vacuum, and the helper in Colombo all participate in the same shape.* --- ## The big idea A Process only works if every participant **declares what it can do, when, and on what terms**, before it joins. Machine World treats *digital agents* (skills, model calls, MCP servers), *physical agents* (smart devices, sensors, robots), and *human actors* (you, your family, helpers, drivers, verifiers) under one universal shape: **the actor contract**. This is the architectural reason MW can promise transparency. There is no special class of "more trusted" participants. An LLM, a thermostat, and a person all have the same contract layout. The orchestrator routes around any of them when constraints clash with a Process need; it never papers over a missing actor; it never has hidden access to a participant it hasn't declared. If you've used software where AI "just figures it out" by tapping privileged channels you can't see, you know the discomfort. The actor contract is the architectural answer to that discomfort. 
--- ## The shape of a contract Every actor — digital, physical, human — declares the same six fields: | Field | What it is | Caregiver helper example | Robot vacuum example | Calendar MCP example | |---|---|---|---|---| | **Identity** | Stable name + role | *Suneetha, Helper (Mon/Wed/Fri shift)* | *Vacuum-1, Living-room device* | *Calendar MCP v1.2* | | **Capabilities** | What it can do | Sinhala+English voice, medication reminders, light shopping | Vacuum + spot-clean, returns to dock | Read/write calendar events | | **Availability** | When it's reachable | Mon/Wed/Fri 08:00–14:00 SLT | Anytime, 90-min battery, 4hr recharge | Continuous, rate-limited 100/min | | **Constraints** | What it cannot do | Cannot lift >15 kg; cannot drive; cannot administer injections | Cannot climb stairs; cannot vacuum during quiet hours | Cannot delete calendars; cannot send invites without approval | | **Rate / cost** | What it costs (per use, per hour, per call) | LKR 1,500 / 4-hr shift | Energy cost + amortised hardware | $0.0012 per call | | **SLA** | What you can expect | Replies to voice notes within 30 min during shift | Returns to dock when battery < 15% | 99.5% uptime, retries 3x on failure | That's it. Six fields. The whole spectrum of actors a household coordinates fits this shape. --- ## Why uniformity matters When every actor has the same contract shape, **three things become possible** that aren't possible in any other AI architecture I've seen: ### 1. Honest gap-surfacing When a Process needs an action and no actor with the matching capability is available, MW says so. *"Process needs verification at 02:00 SLT. No verifier on shift. Options: defer to morning, fall back to single-person check, or skip."* It doesn't silently use a less-suited actor. It doesn't quietly fail. The gap is visible, and you decide. ### 2. 
Substitutability If your usual helper is sick, MW can search the actor registry for anyone else with matching capabilities + availability, and surface candidates *to you* with their declared rates and SLAs. The substitution isn't magic; it's a query against well-formed contracts. *You decide whether to substitute.* ### 3. Predictable routing The orchestrator routes by matching Process needs against actor capabilities. There is no ML-decided "best agent" with opaque internal logic. The routing is read-out-loud: *"This step needs a Sinhala speaker, lifting capability up to 10kg, available Wed afternoon. Three matching actors found. Selected by lowest declared rate; you can override."* --- ## Where contracts come from | Actor class | Where the contract lives | Who writes it | |---|---|---| | **Digital agents** (skills, MCPs) | `SKILL.md` frontmatter; `mcp/registry/.yaml` | The skill creator, before publishing. Reviewed by the community before reaching production status. | | **Physical agents** (devices) | Manufacturer-supplied spec + household-supplied refinements | Device-maker spec, plus household-specific preferences ("don't vacuum during quiet hours") | | **Human actors** (helpers, drivers, verifiers, family) | A simple profile they fill out once, edit anytime | The human themselves. They set their own rates, availability, and constraints. | For the human side, the form is intentionally small. A helper signs up by stating: *I work Mon/Wed/Fri 08:00–14:00. I speak Sinhala and English. I can lift up to 15kg. My rate is LKR 1,500 per shift.* That's the entire registration. No bio. No skill ratings. No reputation score. Just the contract. **Declared capability is the entry; it isn't the whole story.** A human actor may know they're capable of medication reminders in principle but not the specific routine for *this* patient and *this* medication right now. 
That gap — between *can do* and *knows exactly how to do this, this time* — is closed by the [guidance agent](/machine-world/human-guidance): step-by-step instructions per task, on demand, in the actor's language, without surveillance. The contract is the structure; the guidance agent is the support layer that makes the structure liveable. --- ## What contracts deliberately do **not** include Forbidden field names, verified by structural test on every release: - `reputation_score` — no rankings of actors against each other - `total_calls` / `total_earnings` — earnings totals are private to each actor; not displayed in any sortable form - `rating` / `stars` — no five-star reviews; no thumbs up/down; no streaks - `tier` / `priority_level` — actors are not stratified into classes These absences are load-bearing. Once you can rank actors, the system has a leaderboard. Once there's a leaderboard, the system amplifies *lobha* (craving) in everyone trying to climb it. The architecture forbids the data structure that enables the failure mode. 
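The structural test described above is simple to sketch. This is an illustrative check, not the engine's actual CI code: the six required field names follow the contract table earlier on this page, and the forbidden names are taken verbatim from the list above; everything else (function name, dict shape) is our assumption.

```python
# Illustrative sketch of the per-release structural test: a contract must
# declare exactly the six fields from the table, and must not declare any
# of the forbidden ranking fields. Not the engine's real test harness.

REQUIRED_FIELDS = {"identity", "capabilities", "availability",
                   "constraints", "rate", "sla"}
FORBIDDEN_FIELDS = {"reputation_score", "total_calls", "total_earnings",
                    "rating", "stars", "tier", "priority_level"}

def check_contract(contract: dict) -> list[str]:
    """Return a list of violations; an empty list means the contract passes."""
    violations = []
    for field in sorted(REQUIRED_FIELDS - contract.keys()):
        violations.append(f"missing required field: {field}")
    for field in sorted(FORBIDDEN_FIELDS & contract.keys()):
        violations.append(f"forbidden ranking field present: {field}")
    return violations

helper = {
    "identity": "Suneetha, Helper (Mon/Wed/Fri shift)",
    "capabilities": ["Sinhala+English voice", "medication reminders"],
    "availability": "Mon/Wed/Fri 08:00-14:00 SLT",
    "constraints": ["cannot lift >15 kg", "cannot drive"],
    "rate": "LKR 1,500 / 4-hr shift",
    "sla": "replies to voice notes within 30 min during shift",
}

assert check_contract(helper) == []
assert check_contract({**helper, "rating": 4.8}) == \
    ["forbidden ranking field present: rating"]
```

The point of the sketch is the shape of the guarantee: the forbidden list is a set membership test over field names, so "no leaderboard" is a property a machine can verify on every release rather than a policy someone has to remember.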
(See [Viveka and Autonomy § 4.9](/machine-world/viveka-and-autonomy) — *structural viveka in the contributor economy.*) --- ## A worked example: three actors in one Process step The cardiology booking from [the caregiver scenario](../scenarios/caregiver.md): ``` Process step: book cardiology appointment for Padma, Thursday 10:30 SLT Actors needed: - A capability to write a calendar event → matched: Calendar MCP (digital agent) → contract: read/write calendar events; auth via env-var; 99.5% SLA - A driver, Sri Lanka, available Thursday morning, with prior trust → matched: Sanath (human actor) → contract: available Thu mornings; rate LKR 4,000 / city return; replies within 1hr - An adult present at the appointment → no helper on Thu (declared sick day) → fallback: Anjali's brother in Galle (human actor) → contract: available Thursdays after morning meeting; willing for medical-attend role; lives 2hrs from Padma's home Orchestration: - Calendar MCP creates event at 10:30 SLT, invites Padma's phone + brother + helper - Driver MCP sends Sanath a booking request; he confirms; rate auto-attached - Brother gets a calendar invite and a short note explaining helper is out - Surfaced to Anjali as one line: "Thursday cardiology booked. Driver Sanath confirmed." ``` Three actors across two classes (one digital agent, two humans), each with declared contracts, coordinating through one Process step. Anjali sees one line. The actors did one thing together. --- ## What this means for trust The actor contract is the most quietly important architectural property MW has. Without it, every promise about transparency, auditability, and *no auto-acting on irreversibles* would be aspirational. With it, those promises have a structural foothold: - The audit trail can show *which contract was checked, which actor was selected, what their declared capability was*. 
- The trust gradient (see [the next explainer](/machine-world/trust-gradient)) operates on contract fields — *"this action requires an actor with an approval-not-implementation capability"*. - The vinaya invariant has somewhere concrete to bind — *"no skill contract may declare a field that ranks actors against each other"* is a CI check, not a policy hope. Trust without structure is fragile. The actor contract is the structure. --- ## Where to go from here - [The trust gradient →](/machine-world/trust-gradient) — how MW asks before acting on anything consequential, built on top of the actor contract - [How MW supports the humans doing the work →](/machine-world/human-guidance) — the guidance agent, can't-complete signal, and emergency escalation - [Where your data lives →](/machine-world/where-your-data-lives) — the household scope where contracts and audit trails live - [When things break →](/machine-world/when-things-break) — what happens when an actor can't honour its contract - [How MW orchestrates a real day ←](/machine-world/orchestrates-a-real-day) — the centerpiece this article supports - Actor contract spec (developer reference, in the engine repo) — the technical formalism *Last updated 2026-05-11.* ============================================================================== ### SECTION: /machine-world/trust-gradient ### SOURCE_URL: https://machineworld.io/machine-world/trust-gradient ### RAW_MARKDOWN: https://machineworld.io/mw-content/trust-gradient.md # The trust gradient *How Machine World asks before acting — and why that's structural, not a setting.* --- ## The premise Every AI tool today walks the line between *useful* (it does things for you) and *paternalistic* (it decides things for you). Most modern AI products err toward the second. They auto-act, auto-respond, auto-summarise, auto-decide — and the user discovers afterwards what happened. Machine World's design refuses that line. **The system surfaces, suggests, drafts, prepares. 
The human decides anything consequential.** This is not a policy choice. It is a structural property of the orchestrator, enforced at four distinct stages we call the **trust gradient**. --- ## The four stages ``` Stage 1 — Read-only MW observes. It does not act in the world. Default for: any new capability, the first time MW touches a new actor or a new external service. Stage 2 — Suggest with reason MW proposes. The human reads the proposal, sees the reasoning, decides whether to proceed. No action is taken without that decision. Default for: anything that touches money, contacts another human, changes a calendar, modifies stored state outside MW. Stage 3 — Act on one-click approval MW drafts, the human reviews + taps. The friction is small but the consent is explicit, per-action. Default for: things you've previously approved in the same shape, where the action is reversible. Stage 4 — Act within rules I set MW acts autonomously, but only within boundaries the human has declared and only on actions that are inherently reversible. Default for: nothing. Each capability must be deliberately moved to Stage 4 by the household. ``` Every capability MW has — every skill, every MCP integration, every workflow — sits at one of these four stages for each household. The default for *anything new* is Stage 1 or Stage 2. **No capability escalates itself.** A household moves a capability up the gradient by deliberate trust, not by accumulation of behaviour. --- ## What never escalates above Stage 3 Some actions are **structurally irreversible**: - Paying money (any direction, any amount) - Sending a message to another human - Cancelling a subscription or a service - Deleting any data - Granting a new permission - Booking an appointment - Changing a household-shared decision These are pinned at **Stage 3 maximum** in the codebase. They cannot be promoted to Stage 4 by any user, skill, configuration flag, or model. 
The `permission-broker` skill enforces this; the structural test in CI verifies it; the rollback-registry stores an undo for every action that does run through. This is the architectural answer to the *moha* invariant: **the system does not auto-act on decisions it cannot undo.** --- ## How the gradient operates per actor Recall from the [actor contract explainer](/machine-world/digital-physical-human) that every actor declares its capabilities. The trust gradient maps onto those declarations: | Capability shape | Default stage | |---|---| | Read external state (read a calendar, fetch a bill, transcribe a voice note) | Stage 1 | | Propose an action with stated consequences (draft a top-up; draft a cancellation; draft a calendar event) | Stage 2 | | Execute a previously-shaped pattern (run the morning routine; schedule the recurring task) | Stage 3 | | Self-contained reversible action (turn on the light; play the audio file; set the room temperature) | Stage 4-eligible | A robot vacuum runs at Stage 4 for "vacuum the living room" the moment the household consents; the action is reversible (turn it off) and self-contained. A payment skill stays at Stage 2 forever; the action is irreversible and external. The gradient *per capability* lets a household run MW with high automation on safe things and high deliberation on consequential things, simultaneously, without choosing between paternalism and indifference. --- ## Worked example: Anjali's morning, through the gradient The 6:35 TUI display from the [caregiver scenario](/machine-world/caregiver), annotated: | Surface line | Trust stage | |---|---| | ✓ *"BP tablet taken 7:42 SLT — helper voice-noted in Sinhala"* | **Stage 1** (observed) → **Stage 3** (the household has previously approved "record verified medication events to the local ledger" as a recurring pattern) | | ! *"Mobile credit low — top up 1,500 LKR? [approve]"* | **Stage 2** — proposed, waiting. Payment is irreversible; will never auto-execute. 
| | · *"Thursday cardiology booked, driver Sanath confirmed"* | **Stage 3** — Anjali had previously approved "book pre-scheduled medical appointments through trusted drivers" as a pattern; the actual booking sent invites once she confirmed the night before | | · *"WhatsApp triage — one real concern; brother is looking on Saturday"* | **Stage 1** (read-only triage) → **Stage 2** (the *brother-routing* required Anjali to confirm her brother was the right recipient; she did so weeks ago when the pattern first arose) | | ⓘ *"Two overlapping subscriptions found — review"* | **Stage 2** — surfaced; the cancellation will require her tap | Every action on the morning's screen is sitting at a known stage. Anjali can audit any of them. She can move any capability up or down the gradient at any time. --- ## How a household moves a capability up the gradient The flow is conversational, not technical: 1. The first time MW proposes an action of a certain shape, it's at **Stage 2**. Anjali sees the proposal, the reasoning, and the consequence. She approves. 2. After several approvals of the same shape, MW asks: *"This pattern has happened five times; you've approved each one. Would you like me to do this automatically next time, with a one-tap notification but no approval needed?"* If she says yes, the capability moves to **Stage 3**. 3. If the action is *inherently reversible* (turn on a light, set a temperature, queue music) and the household consents, MW asks once more: *"Would you like me to handle this in the background without asking?"* That's the move to **Stage 4**. 4. **Anything irreversible** — payments, communications, deletions, bookings, permission grants — *never reaches the asking-once-more step*. The architecture forbids it. The gradient is the conversation, not a settings panel. It accumulates from your actual judgment over time. 
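The conversational flow above can be sketched as state kept per capability. This is a minimal illustration under stated assumptions: the stage defaults, the pin on irreversible shapes, and the offer-after-repeated-approvals step follow the text, but the class names, the irreversible-shape list encoding, and the threshold of five are ours, not the engine's API.

```python
# Sketch of per-capability trust-stage tracking. Irreversible action shapes
# are pinned at Stage 3 maximum; the system may *offer* escalation after
# repeated approvals but can never apply it itself. Illustrative names only.

IRREVERSIBLE = {"payment", "message", "cancellation", "deletion",
                "permission_grant", "booking", "shared_decision"}

class Capability:
    def __init__(self, name: str, shape: str):
        self.name = name
        self.shape = shape        # e.g. "payment", "light_switch"
        self.stage = 2            # anything new defaults to Stage 1 or 2
        self.approvals = 0

    def max_stage(self) -> int:
        # Structurally irreversible shapes can never reach Stage 4.
        return 3 if self.shape in IRREVERSIBLE else 4

    def record_approval(self):
        """Count a human approval; after five, offer (never apply) escalation."""
        self.approvals += 1
        if self.approvals >= 5 and self.stage < self.max_stage():
            return f"offer escalation of '{self.name}' to stage {self.stage + 1}"
        return None               # no capability escalates itself

    def escalate(self):
        """Only the household calls this; the system never does."""
        if self.stage >= self.max_stage():
            raise PermissionError(f"'{self.name}' is pinned at stage {self.max_stage()}")
        self.stage += 1

top_up = Capability("mobile-credit top-up", shape="payment")
for _ in range(5):
    top_up.record_approval()
top_up.escalate()                 # Stage 2 -> 3: allowed, after deliberate consent
try:
    top_up.escalate()             # Stage 3 -> 4 for a payment: structurally refused
except PermissionError:
    pass
```

Note the design choice the sketch makes visible: `record_approval` can only ever return an *offer*; the state change lives in a separate method that the system has no path to call on its own.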
--- ## What can be rolled back, always Every Stage 3 or Stage 4 action lands in the **rollback registry** with an `undo_command` populated. If a household later regrets an autonomous action, the undo path is documented and clickable. The growth log is permanent; entries marked `rolled_back` are kept, never deleted. You cannot make a system that never makes a mistake. You can make a system where every mistake is recoverable. The trust gradient lets the household balance automation against recoverability, deliberately, per capability. --- ## Why this is different from "auto-approval" or "guardrails" Most AI products that have any human-in-the-loop story implement it as a feature flag: *do/don't auto-act*. The flag is binary, system-wide, and easy to forget. Users typically flip it once and move on. The trust gradient is *per-capability, per-household, deliberately accumulated*. It cannot be flipped wholesale. It cannot be advanced by the system on its own. The decision to trust MW with a new capability is made one capability at a time, by the human, on the basis of what they've actually seen MW do. That is what *autonomy without paternalism* looks like at the architecture layer. 
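The rollback registry described earlier in this article can be sketched in a few lines. The field names (`undo_command`, `rolled_back`) follow the text; the in-memory list, the entry layout, and the method names are our illustration, not the engine's documented format.

```python
# Illustrative sketch of the rollback registry: every Stage 3/4 action is
# recorded with an undo command, and rolled-back entries are marked rather
# than deleted, so the log stays permanent. Not the engine's real code.

import time

class RollbackRegistry:
    def __init__(self):
        self.entries = []          # append-only; a real registry persists to disk

    def record(self, action: str, undo_command: str) -> int:
        entry = {"id": len(self.entries), "ts": time.time(),
                 "action": action, "undo_command": undo_command,
                 "rolled_back": False}
        self.entries.append(entry)
        return entry["id"]

    def roll_back(self, entry_id: int) -> str:
        entry = self.entries[entry_id]
        entry["rolled_back"] = True   # marked, never deleted: the log is permanent
        return entry["undo_command"]  # the documented undo path, ready to run

reg = RollbackRegistry()
eid = reg.record("set living-room temperature to 24C",
                 undo_command="set living-room temperature to 21C")
undo = reg.roll_back(eid)
assert undo == "set living-room temperature to 21C"
assert reg.entries[eid]["rolled_back"] is True
assert len(reg.entries) == 1       # nothing was removed from the log
```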
--- ## Where to go from here - [Digital, physical, human — one contract ←](/machine-world/digital-physical-human) — the actor layer the trust gradient sits on top of - [Where your data lives →](/machine-world/where-your-data-lives) — the household-scoped state the gradient operates within - [When things break →](/machine-world/when-things-break) — what happens when an action at Stage 3 or 4 fails partway through - [How MW orchestrates a real day ←](/machine-world/orchestrates-a-real-day) — the centerpiece this article supports - [Viveka and Autonomy § 4.5](/machine-world/viveka-and-autonomy) — *no silent action, no auto-proceed past checkpoints* *Last updated 2026-05-11.* ============================================================================== ### SECTION: /machine-world/where-your-data-lives ### SOURCE_URL: https://machineworld.io/machine-world/where-your-data-lives ### RAW_MARKDOWN: https://machineworld.io/mw-content/where-your-data-lives.md # Where your data lives *A household on your machine. Nothing leaves unless you opt in. You can walk away with everything.* --- ## The promise, in one paragraph Machine World stores **everything** on your own machine, under one directory you can read, copy, encrypt, version with git, or delete. The directory is `~/.machineworld/`. Your conversations, your skills, your workflows, your medication ledger, your subscription audit history, your wisdom model — all of it lives there. Nothing crosses the network unless you have explicitly opted into a sync target. When you do — Firebase mirror, multi-device sync, family sharing — the precise data being shared and the destination are visible per Process, never assumed, never silently expanded. 
**The system was designed so a household can leave at any time, by `cp -r` and `rm -rf`.** --- ## What's in `~/.machineworld/` ``` ~/.machineworld/ ├── config.yaml → backend choice, language preferences, theme ├── conversations/ → message history (organised by date) ├── households// → the household scope; default is "personal" │ ├── memory/ │ │ ├── profile.md → who lives here, what matters │ │ ├── preferences.yaml → operational preferences │ │ ├── wisdom-model.md → accumulated judgment about how this house runs │ │ └── context/[domain].md → ongoing context per area (health, finance, …) │ ├── skills/ → household-specific skill installs + customisations │ ├── workflows/ → workflow definitions (composed skills) │ ├── routines/ → scheduled workflows │ ├── mcps/ → household-scoped MCP configurations │ ├── vault/ → token wallet (loads with the managed tier) │ ├── ledger/ → every call, every action, every Process step │ └── improvement-signals/ → raw signals captured for skill improvement └── install// → the installed `mw` release ``` A few facts about this layout that matter: - **Households are first-class.** A single user can have multiple households (e.g., *personal* + *small-business* + *eldercare-for-mum*). State is bounded to its household; nothing leaks across boundaries. - **The wisdom model is a markdown file.** Not a database. Not a vector store. A markdown file you can read, edit, copy, share, or delete. It's the most intimate state MW holds about you, and it lives on the same filesystem as your notes. - **The ledger is append-only.** Every Process step that ran, every action taken, every decision waited on — recorded as it happened. If you want to know what MW did on March 14th, the answer is in `ledger/2026-03-14/`. - **MCP configurations are household-scoped.** Your work calendar credentials don't leak into your personal household. The architecture forbids it. 
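The append-only, date-partitioned ledger described above is easy to sketch. The directory shape (`households/<name>/ledger/<date>/`) follows the layout; the JSON-lines file format and the function name are our illustration — the engine documents its own formats.

```python
# Sketch of an append-only household ledger matching the layout above.
# One directory per day, one JSON line per event, file opened in append
# mode so past entries are never rewritten. Illustrative format only.

import json
from datetime import date, datetime, timezone
from pathlib import Path

def ledger_append(root: Path, household: str, event: dict) -> Path:
    """Append one event to today's ledger file for the given household."""
    day_dir = root / "households" / household / "ledger" / date.today().isoformat()
    day_dir.mkdir(parents=True, exist_ok=True)
    path = day_dir / "events.jsonl"
    event = {"ts": datetime.now(timezone.utc).isoformat(), **event}
    with path.open("a", encoding="utf-8") as f:   # "a": append-only by construction
        f.write(json.dumps(event, ensure_ascii=False) + "\n")
    return path

# The day's record stays readable with standard tools, no vendor required:
import tempfile
root = Path(tempfile.mkdtemp())
p = ledger_append(root, "anjali", {"step": "medication", "status": "confirmed"})
lines = p.read_text(encoding="utf-8").splitlines()
assert len(lines) == 1
assert json.loads(lines[0])["step"] == "medication"
```

Because each entry is one line of plain JSON on the household's own filesystem, "what did MW do on March 14th" is answered by opening a file, not by querying a vendor.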
--- ## What never crosses the network — unless you opt in By default: - **Audio is never uploaded.** Voice notes you record, voice notes you receive — they stay on the device that captured them. When the edge-voice container ships, on-device speech-to-text means audio doesn't even cross the household network. - **Raw chat history doesn't leave.** Skills that triage WhatsApp, email, or other comms read locally; only the structured signal (an alert, a draft, a digest line) ever surfaces. - **The wisdom model stays on your machine.** It's the heart of how MW knows you; it never goes to a vendor server. - **Financial transaction details stay local.** Banking MCPs return summaries; raw statement files live in your household directory and don't get uploaded. What does cross the network (always — this is the engine running): - **Calls to your chosen LLM backend** — every model call sends the prompt + context the skill needs. If your backend is `ollama`, this is local-only. If it's `claude-cli`, `openrouter`, or `openai-compat`, it crosses the network to that provider, per their terms. - **Calls to external MCPs you've connected.** Connecting MW to your calendar, your bank, or any external service means calls go to those services. The data sent is the minimum the skill declared it needs. - **Anonymous telemetry** (opt-in, off by default) — aggregate signals about which skills people find useful, for the skill-research improvement loop. Never includes message content, never includes anything that identifies a household. That's it. The complete list. If something else were crossing the network, it would be a defect, not a feature. --- ## Opt-in remote sync Some households want multi-device sync — same household state on the laptop, the desktop, and the phone (when the mobile interface ships). MW supports this **as an explicit opt-in**, not a default. When enabled: - Firebase (or an equivalent sync target) becomes a **mirror** of your local household. 
It is not the source of truth. - The local filesystem remains the canonical copy. If the mirror disappears tomorrow, every household keeps working. - You can see exactly what's being synced and pause sync at any time. - A small set of commands (`mw sync status`, `mw sync pause`, `mw sync purge-remote`) lets you inspect, halt, or revoke the mirror at any moment. The default is *no remote sync.* Households that don't need it never get it. --- ## The walk-away path Every architectural decision was made so a household can leave at any time: 1. **Copy the directory.** `cp -r ~/.machineworld /backup/wherever/` produces a complete portable archive. Everything MW knows about you is in that directory. 2. **Take it to a new machine.** Install MW on the new machine, drop the directory into place, restart. Pick up where you left off. 3. **Delete the directory.** `rm -rf ~/.machineworld` removes everything. There is no shadow database, no vendor-controlled state, no "last known location." Gone. 4. **Encrypt the directory.** It's just files. Use the disk encryption of your choice. Use `git-crypt`. Use a household-managed key. 5. **Inspect any file.** Most are markdown or YAML. The handful that aren't (ledger snapshots, vault encrypted blobs) have documented formats — you can read them with standard tools. This is the **right to walk away** as a first-class architectural commitment. The system is designed so it cannot become a thing you can't leave. --- ## Why this matters The dominant model for consumer AI today is *your data, our database, our terms.* The household has no canonical copy of what the system knows. Migration is impossible. The lock-in is structural. MW inverts this. The household's data is the household's, on disk, in a directory whose layout is documented. The vendor — MW the organisation — has no copy unless the household has explicitly opted into remote sync. 
**Data sovereignty is the default, not a premium feature.** The vinaya gives this an additional weight: *no moha* — no false certainty about another person's interests. If the system claimed to know best where the household's data should live, that itself would be a moha-amplifying pattern. The household chooses. The system serves. --- ## Worked example: Anjali's data, by category | Data | Where it lives | What ever crosses the network | |---|---|---| | Padma's medication log | `~/.machineworld/households/anjali/ledger/medication/` | Nothing, unless Anjali opts into family-share with siblings (then summary-only, in Sinhala+English) | | The helper's voice notes confirming meds | On the helper's phone (WhatsApp), then a local transcript in Anjali's ledger | Audio file: only on the helper's phone. Transcript: stays in Anjali's local household. | | The family WhatsApp thread | The phones of the participants (WhatsApp servers, per WhatsApp's terms) | MW reads it locally via the WhatsApp MCP; structured signals (one-line alerts, weekly digests) go to Anjali. Raw thread is not stored or copied by MW. | | The cardiology appointment | Anjali's local household + the doctor's office's calendar API (per the office's terms) | Calendar invite to the doctor's office (their API), to her brother's calendar, to the helper's calendar — each requires Anjali's pre-approval of the integration | | The bank statement audit | Read-only summaries via the banking MCP; raw statements stay with her bank | The banking MCP request crosses the network to her bank; nothing else | | Anjali's wisdom model | `~/.machineworld/households/anjali/memory/wisdom-model.md` | Never | Six categories. One row that crosses to a vendor (her chosen LLM backend, on the calls where reasoning is needed). Everything else is hers. 
--- ## Where to go from here - [Digital, physical, human — one contract ←](/machine-world/digital-physical-human) — the actor layer that produces this data - [The trust gradient ←](/machine-world/trust-gradient) — what kinds of action MW can take on this data, when, and with whose consent - [When things break →](/machine-world/when-things-break) — what happens to local data when an opt-in sync target goes down - [How MW orchestrates a real day ←](/machine-world/orchestrates-a-real-day) — the centerpiece this article supports - [The vinaya invariant](../../VIVEKA-AND-AUTONOMY.md) — what binds the system's relationship to your data *Last updated 2026-05-11.* ============================================================================== ### SECTION: /machine-world/when-things-break ### SOURCE_URL: https://machineworld.io/machine-world/when-things-break ### RAW_MARKDOWN: https://machineworld.io/mw-content/when-things-break.md # When things break *Honest about gaps. The system that surfaces, never papers over.* --- ## The premise Every real system has failure modes. Helpers get sick. APIs go down. Robots run out of battery. Models give uncertain answers. Network connections drop. Devices forget their credentials. **Honesty about failure is the differentiator between software that ships to real users and software that demos well.** Machine World's commitment is plain: **when an actor can't do its part, the system surfaces the gap. It does not silently retry forever. It does not paper over with a less-suited substitute. It does not pretend to act when it can't. It tells you what failed, what it tried, and what your options are.** This article walks through the failure shapes a household actually encounters and what MW does for each. 
--- ## The shapes of failure | Failure shape | Example | What MW does | |---|---|---| | **Actor present but unsure** | Helper has the task but doesn't know the specific routine; verifier sees an unfamiliar Process shape | Routes to the [guidance agent](/machine-world/human-guidance) first — step-by-step instructions on demand, in the actor's language, without surveillance. Substitution is the *next* move only if guidance doesn't close the gap. | | **Actor unavailable** | Helper is sick; driver doesn't reply; doctor's API is down; robot vacuum is charging | Searches for a substitute matching the declared capability + constraints. Surfaces candidates with their rates. If no substitute, presents fallback options to the human. | | **Actor signals "I can't do this"** | Helper finds the medication strip empty; verifier flags something requiring expert review; volunteer can't physically complete the task even with guidance | The honest signal is treated as ground truth. The Process pauses; the actor's reason is logged in *their own* ledger (no performance event); substitution search runs; household decides whether to substitute, defer, or take the action personally. **No penalty to the actor.** Full path documented in [human-guidance.md → When the human says "I can't"](./human-guidance.md#when-the-human-says-i-cant-do-this). | | **Emergency triggered** | Fall detected; medical event; explicit human emergency signal; sensor-detected threat | The Process's pre-declared emergency escalation contract fires — parallel contact notification, pre-approved fast actions (and only those), guidance agent shifts to emergency mode, audit trail at maximum detail, platform operating fee waived. See [human-guidance.md → When it's an emergency](./human-guidance.md#when-its-an-emergency). 
| | **Partial success** | Calendar invites sent to 3 of 4 recipients; one bounced | Reports the success/failure mix; offers to retry the failed leg; never reports overall success when part of it failed | | **Ambiguous outcome** | Voice-note transcription confidence too low; LLM uncertain about the right next step; banking API returned partial data | Surfaces the ambiguity to the human. Refuses to act on uncertain data. Does not guess and then claim success. | | **Irreversible-action gate** | Process step requires paying a bill, cancelling a service, contacting a doctor | Stage 2 (suggest with reason) at minimum; never auto-acts even if the rest of the Process succeeded | | **Rate-limit / cost-cap** | LLM backend rate-limits the call; cost ceiling reached for the day | Pauses, surfaces the limit, lets the human decide whether to wait, increase the cap, or skip | | **Constraint clash** | Process needs verification at 02:00 SLT; no verifier on shift; or robot vacuum scheduled during declared quiet-hours | Honours the constraint. Surfaces the gap. Does not override declared rules to "make it work." | | **Loss of connectivity** | Internet drops during a Process step | Queues actions locally; resumes when connectivity returns; tells the human what's queued | | **Skill error / model hallucination** | Skill returns malformed output; model output fails vinaya gate | Discards the output; logs the incident; surfaces the failure to the human with the trace ID | That's the working classification. Every failure type is named, the response is consistent, and the human is informed. --- ## Worked example: the morning if the helper called in sick Anjali's morning at 6:35 a.m., on a Tuesday — but the helper has just messaged that she's sick. What MW would show: ``` Padma — overnight ⚠ ✗ Morning BP tablet not yet confirmed. Helper Suneetha is sick today. Window closes 90 min after 7:30 SLT (8:00 SLT now). Available paths: 1. 
Call Padma directly via voice MCP; ask her if she's taken it. (suggested action — confirm in her own words) 2. Notify your brother Sandun (Galle, 2hr away). He can drive up by 11:00 SLT and verify in person. Rate: LKR 3,500 (his declared substitute-helper rate). 3. Pause the Process and decide later. What stays your decision: which path to take. ``` What happened underneath: | Step | What ran | |---|---| | Trigger | Medication routine fired at 7:30 SLT (standard) | | Reminder routed | Comms MCP attempted Suneetha's phone — got her "not working today" reply | | Substitute search | Actor-registry query for "Sinhala speaker available now in Colombo area, lifting < 15kg" — no actor available in next 30 min | | Family-fallback search | Found Sandun (the brother) with declared substitute-helper capability + Tuesday morning availability | | Direct-confirm option | Voice MCP can call Padma directly — but the household policy is "voice confirmation only as a fallback, with Anjali's approval" (this is a Stage 2 capability) | | Surfaced | The full picture, with three labeled paths and the costs/consequences of each | The system **did not** pick a path and announce it. It **did not** silently retry the helper's phone forever. It **did not** pretend everything was fine. It surfaced the gap, named the options with their declared rates and consequences, and waited for Anjali's judgment. --- ## The vinaya invariant in failure mode Three of the failure principles are restatements of the three roots: - **No false certainty when uncertain** — *moha.* If the transcription is low-confidence, MW says so. If the model can't decide, MW says so. The system does not perform confidence it doesn't have. - **No punitive automation when something fails** — *dosa.* When a helper doesn't reply, the system doesn't escalate aggressively or shame the actor. It surfaces neutrally to the household. 
- **No engagement amplification on failure** — *lobha.* The system doesn't dramatise failures to drive engagement. *"⚠ Padma's tablet not confirmed"* is the same calm typeface as *"✓ Tablet taken."* No alarms, no urgency theatre, no notification spam.

Failures pass through the same vinaya gate as successes. The system is honest in both directions.

---

## What MW will never do at failure

A short list of explicit refusals:

- **Auto-substitute a less-suited actor.** If the Process needs a Sinhala speaker and only English speakers are available, MW will not substitute. It will surface the gap.
- **Auto-extend an irreversible action.** If a payment fails, MW will not try a different payment method on its own. It will surface and wait.
- **Silently retry forever.** Every retry path is bounded. After bounded attempts, the failure surfaces.
- **Claim success when only part succeeded.** Partial success is reported as partial; never as success.
- **Quietly degrade the trust gradient.** A capability that's been Stage 2 stays at Stage 2 even if MW is "confident" it knows what to do. Trust-stage advancement is the household's deliberate action only.
- **Hide failures from the audit trail.** Every failure is logged with its trace ID. You can read every one.

---

## Local-first means failure is bounded

Because MW runs as a local CLI, many failure modes that would be system-wide in a SaaS architecture are merely *capability-localised* here.

- LLM backend unreachable → other skills that don't need a model keep working.
- Banking MCP down → other MCPs keep working; finance-specific skills wait.
- Firebase remote sync (opt-in) down → local household keeps working; sync queues writes for later.
- A specific skill errors out → other skills keep working; the failed skill is reported with its version + trace ID.

There is no single point of failure that takes the whole household offline.
The graceful-degradation table is in `docs/MACHINEWORLD-COMPLETE.md § 20.5`; the design principle is *"the system always runs, just with reduced capability — but never silently."*

---

## What the household can do

For every failure that reaches a human surface, three actions are always available:

1. **Read the audit trail** — `mw audit trail ` shows every step that ran, every actor that responded, every decision point. No black boxes.
2. **Roll back the affected action** — for anything that did execute (Stage 3 / Stage 4 captures), the rollback registry has an undo command. Run it.
3. **Adjust the actor's contract or the Process** — the failure may indicate the contract was wrong (helper's hours changed, threshold was too tight, skill was over-eager). The household can refine the declaration. The next run respects the new state.

The system is operable under failure. It is not a black box that requires the vendor's help to fix.

---

## Why this is structural

The reason MW can promise this honesty about failure is that **the actor contract** ([explained here](/machine-world/digital-physical-human)) makes every participant's limits *legible*. The system can say "I can't do this because the helper declared she's off-shift" because the declaration is in the contract, not buried in code. The trust gradient ([explained here](/machine-world/trust-gradient)) prevents the system from silently working around the limit because the gradient pins irreversible actions at Stage 2 or 3.

Take away the actor contract and honest failure-surfacing becomes a promise nobody can verify. Take away the trust gradient and the system can quietly do things in the dark. The architecture is what makes the honesty possible.
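The "silently retry forever" refusal has an equally simple structural shape: bounded attempts, then surface the failure honestly. A hedged sketch, with illustrative names only:

```python
import time

def bounded_retry(action, attempts: int = 3, delay: float = 1.0) -> dict:
    """Retry a bounded number of times; after that, surface the failure.

    Never retries forever, and never swallows the final error: the caller
    (ultimately the household surface) sees exactly what failed and why.
    """
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return {"ok": True, "result": action(), "attempts": attempt}
        except Exception as exc:
            last_error = exc
            if attempt < attempts:
                time.sleep(delay)
    # Bounded attempts exhausted: report the truth, never a fake success.
    return {"ok": False, "error": str(last_error), "attempts": attempts}
```

The return value is the same shape in both directions, which is what lets success and failure pass through the same gate.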
---

## Where to go from here

- [Digital, physical, human — one contract ←](/machine-world/digital-physical-human) — the contracts whose declared limits make honest failure-surfacing possible
- [How MW supports the humans doing the work →](/machine-world/human-guidance) — guidance, can't-complete signal, and emergency escalation — the support layer above the failure-handling layer
- [The trust gradient ←](/machine-world/trust-gradient) — how the gradient holds the line against silent workarounds
- [Where your data lives ←](/machine-world/where-your-data-lives) — how local-first means failure is bounded to a capability, not the whole household
- [How MW orchestrates a real day ←](/machine-world/orchestrates-a-real-day) — the centerpiece this article supports
- Disaster recovery flows (graceful-degradation table + recovery sequences) are documented in the engine-internal reference, not on this site

*Last updated 2026-05-11.*

==============================================================================

### SECTION: /machine-world/human-guidance
### SOURCE_URL: https://machineworld.io/machine-world/human-guidance
### RAW_MARKDOWN: https://machineworld.io/mw-content/human-guidance.md

# How MW supports the humans doing the work

*The guidance agent — written step-by-step instructions, on demand, in the human's language, without surveillance.*

---

## The premise

The [actor contract](./digital-physical-human.md) declares what every participant can do. A human actor — the helper in Colombo, a bookkeeper, a verifier, a volunteer at a mutual-aid kitchen — says: *"I can do medication reminders, in Sinhala+English, Mon/Wed/Fri 8–14 SLT, up to 15kg of lifting."*

The orchestrator routes work to them based on those declarations. But declared capability and *executed* capability are not the same thing. A new helper may know they're capable in principle but not the specific routine for this particular medication, this particular patient, this particular timing window.
A volunteer may be willing to do route planning but new to the dispatch system. A verifier may be qualified to review a flagged action but unfamiliar with this Process's idiosyncrasies.

**The guidance agent is how MW closes that gap — not by training, evaluation, or surveillance, but by providing written, step-by-step instructions, on demand, in the human's language, with no record of how many times help was asked for.**

This is a different kind of architectural commitment from the trust gradient or the actor contract. Those describe how MW *asks before acting* and how participants *declare their limits*. The guidance agent describes how MW *supports the humans actually doing the work*. Without it, MW is a polite orchestrator of overworked people. With it, MW is a system that takes seriously the gap between "can do" and "knows exactly how to do this, this time."

---

## What the guidance agent is

A skill (planned `task-guidance` or similar) that runs alongside any Process step where a human-actor is invited to act. When the human takes the task, the guidance agent generates:

| Output | Detail |
|---|---|
| **Step-by-step instructions** | Numbered, plain language, in the actor's declared first language. For a Sinhala-speaking helper, the instructions are in Sinhala. For an English-speaking bookkeeper, in English. No translation slippage. |
| **A check-question per critical step** | Not a quiz — a phrasing that helps the human verify they're on track. *"After you've watched her swallow the tablet, voice-note any one of: 'taken now', 'taken earlier', or 'refused — and why.'"* |
| **Edge-case branches** | Concrete handling for predictable variations. *"If she says she's already taken it: ask when, voice-note that. If she says no but you can see the strip is missing: voice-note 'unsure — strip empty.'"* |
| **One question button** | *"Something I don't understand."* Tapping it opens a short clarifying exchange with the guidance agent, never with the household. |
| **A line at the bottom that does not change** | *"You decide. Nothing here overrides what you actually see."* The instructions are scaffolding; the human's judgment in the moment is ground truth. |

The instructions are generated **per task, per human, per moment** — they reflect the task's declared shape, the actor's profile (language, literacy, accessibility needs), and the Process's specific context (which patient, which appointment, which household). They are not boilerplate. They are not a manual the human had to read in advance. They appear when the work appears.

---

## What it is *not*

By explicit refusal:

- **Not a quiz, an evaluation, or a rating.** The human is not being tested. There is no score. There is no "you should have done this better" notification.
- **Not a record of dependency.** The system does not count how often a particular human asked for guidance, does not surface "asks-for-help" patterns to anyone, does not use guidance-request frequency as a signal in any flow.
- **Not surveillance of how the task was done.** The guidance agent's logs contain *what instructions were provided* and *what the human marked complete*, never *how long it took, how many questions were asked, or whether the human stumbled.*
- **Not a route for the household to evaluate the human.** The household sees the work done — the medication confirmation, the voice note, the appointment booked. They do not see the guidance trail. That belongs to the human and the system.
- **Not a replacement for the human's judgment in the moment.** Every instruction set ends with *"You decide. Nothing here overrides what you actually see."*

These absences are load-bearing.
The moment guidance becomes evaluation, *moha* (false certainty about what someone needs) enters the system and the architecture's trust property collapses. The guidance agent is one of the places the vinaya invariant binds visibly.

---

## A worked example

The helper in Colombo (Suneetha) is registered with a declared capability of *"medication reminders, Sinhala+English, Mon/Wed/Fri 8–14 SLT."* Padma's cardiologist prescribes a new medication on Thursday: a tablet to be taken with food, twice a day, with specific timing. Suneetha has never administered this particular medication for this particular patient.

On Monday morning at 07:30 SLT, the medication routine fires. The guidance agent surfaces an instruction card to Suneetha's WhatsApp, in her declared first language (Sinhala). To show what she sees *in meaning*, here is the English-source version:

```
Losartan 50mg — the morning tablet

1. Before you give Padma the tablet, ask her: has she had her breakfast?
   ☐ Yes, she has → give the tablet
   ☐ Not yet → wait until she has eaten, then start again
2. Check that she is sitting up.
3. Hand her the tablet and a small glass of water.
4. Watch her swallow it.
5. Send a short voice note to her daughter — pick whichever fits:
   ▸ "taken" — for the normal case
   ▸ "no food yet, taking later" — if you waited for her to eat
   ▸ "a problem" — if something didn't go right

You decide. Nothing here overrides what you actually see.

[ Something I don't understand ] — only the guidance agent sees this, not the family
```

> *On Suneetha's actual phone, the words appear in idiomatic Sinhala. The Sinhala re-expression of this card will be published once a native-speaker reviewer who holds both the language and the medical context has worked through it. The English above is the source.*

The instruction card is short, spoken-language, with no jargon. The numbered steps are in order. The check-options at step 1 prevent the most common mistake (giving a tablet on an empty stomach).
The voice-note options at step 5 give Suneetha simple phrases to choose from so she does not have to compose a careful message in a hurry. The line at the bottom does not change — *"You decide. Nothing here overrides what you actually see."* And the question button is private to her and the guidance agent; her family does not see it.

Suneetha can:

- Ask the guidance agent any question about step 3 without anyone else seeing
- Mark a step done and proceed
- Voice-note Anjali with the simple result (taken / late / problem)

What Anjali sees later: *"✓ BP tablet (Losartan 50mg) taken at 8:14 SLT, after breakfast. Voice-noted normal."* She does not see Suneetha's question on step 3. She does not see how long the task took. She does not see any guidance trail. The work was done; the surface reflects that.

---

## When the human says *"I can't do this"*

Guidance helps with the gap between *can-do-in-principle* and *knows-exactly-how-now*. It does not close every gap. Sometimes — the medication strip is empty, the patient is refusing, the lift is heavier than declared, the form has a field the helper has never seen — guidance cannot complete the task, and the honest signal from the human is:

> *"I can't do this."*

Every guidance-supported task surfaces a clear, no-blame way for the actor to say so. One button (or one phrase, in voice mode): *"I can't do this — here's why."* The reason is captured in plain language by the actor, not by an evaluation form.

What MW does next:

| Step | What runs |
|---|---|
| 1. **Receive the signal** | The actor's stated reason is logged in their *own* ledger entry, not as a performance event. No "actor failed task" record anywhere. |
| 2. **Pause the Process step** | The downstream actions waiting on this step are held, not cancelled. |
| 3. **Run substitution search** | Same query as in [when things break](./when-things-break.md) — look for another actor with matching declared capability + availability. |
| 4. **Surface to the human responsible for the Process** | Anjali sees the actor said "can't do this," the actor's reason, the available substitutes (with their rates), and what's needed for the step to complete. |
| 5. **Wait for the household's decision** | Substitute? Defer? Skip? Take the action personally? The household decides. MW never auto-substitutes for an irreversible action. |
| 6. **No penalty to the original actor** | They are not flagged. They are not down-rated. There is no down-rating field in the system. Their declared capability remains; they simply couldn't complete *this particular instance.* |

The actor said "I can't." MW listens. The Process pauses. The household decides. Nobody is shamed. This is the same architectural shape as [the trust gradient](./trust-gradient.md) applied to human contributors: their honest signal is ground truth, and the system responds without auto-acting.

---

## When it's an emergency

A different shape of failure: not "I can't complete the task" but "something is happening that needs response *right now*." Fall detected. Medical event. Fire. Theft. Missing person. Patient unresponsive. Volunteer locked out. Driver in an accident.

Every Process that orchestrates around a vulnerable person, a physical site, or a time-sensitive obligation must declare an **emergency escalation contract** as part of its setup. That contract lists:

| Field | What it specifies |
|---|---|
| **Trigger signals** | What counts as an emergency for this Process (medical event flag, no-response timeout, explicit human signal, sensor-detected fall, etc.) |
| **Emergency-contact order** | The pre-declared sequence of people / services to notify, fastest first. For Padma's Process: helper → Anjali → brother → Sri Lanka 1990 ambulance. Each contact's reachability is part of the contract. |
| **Pre-approved fast actions** | What MW may do *without* the normal Stage-2 approval — declared once, by the household, at Process setup. *"Call 1990. Notify all three siblings. Share location with helper's phone. Unlock the front door (for declared trusted neighbours)."* |
| **What pauses** | Non-essential Processes that get held while the emergency is active: routine reminders, scheduled bill drafts, social drafts. |
| **What the actor on the ground sees** | The guidance agent shifts to **emergency mode**: short, numbered, prioritised steps. *"1. Call 1990 now. 2. Stay with her. 3. Don't move her unless there's danger. 4. Take a picture of where she is. 5. I have notified Anjali; she will be on the phone with you in 90 seconds."* |

**The architectural commitments that hold in an emergency:**

- **Pre-declared consent for fast action.** MW does not auto-act on irreversibles in normal operation. In an emergency, it acts on the *narrow, pre-declared* fast actions the household has explicitly listed in the escalation contract. The consent is upfront, not retroactive.
- **All emergency contacts get notified in parallel** when the contract says so, not in serial. Critical seconds are not spent on a wait-for-response cascade.
- **Full audit trail at maximum detail.** Every emergency action is logged with timestamps, contact responses, location, sensor data, voice notes. Legal, medical, and insurance review may follow; the trail must be unimpeachable.
- **Stabilisation checkpoint.** Once the situation is stabilised (professional services on scene, patient under care, threat resolved), MW returns control to the household with a debrief: *"Here's what was done, in what order, with whose consent. Review when you're ready."*
- **No revenue extraction in an emergency.** MW does not charge platform fees on emergency actions. Tokens spent on emergency LLM calls or MCP calls are at-cost; the operating fee is waived. This is a structural commitment.

**What is *not* allowed in emergency mode:**

- Auto-acting on actions not in the pre-declared list. MW will not, for example, auto-transfer money to a stranger because someone said it was urgent. Emergency fast-action is a narrow, named set — not a blank cheque.
- Skipping the audit trail "to save time." The trail is written as actions happen, not after.
- Surveillance escalation. Emergency mode does not unlock new monitoring — only the actions the household pre-declared.

**Status:** the emergency escalation contract is specced as part of the actor-contract architecture (the per-Process declaration is the engineering surface that needs to ship). The guidance-agent emergency mode is specced alongside the broader guidance agent — both are roadmap, sequenced after the household-tier launch.

---

## How it composes with the rest of the architecture

The guidance agent is not standalone. It fits into the existing pieces:

- **Actor contract** ([explained here](./digital-physical-human.md)) declares the human's capabilities + language + accessibility needs. The guidance agent reads those declarations to pick the right instruction style.
- **The trust gradient** ([explained here](./trust-gradient.md)) governs what action the human is taking. The guidance agent surfaces only at Stage 2 (Suggest) or Stage 3 (One-click approve) — never replaces the consent step.
- **Where your data lives** ([explained here](./where-your-data-lives.md)) holds the guidance trails. They live in the actor's own ledger, not in the household's. Each human owns their own scaffolding history.
- **When things break** ([explained here](./when-things-break.md)) treats guidance as the first response when a human-actor underperforms — *not* substitution. The first move is "help them succeed," not "find someone else."

---

## What this enables, that wasn't possible before

The vinaya invariant rules out evaluation surfaces, leaderboards, and surveillance of contributors.
Without something to fill the gap, that creates a real risk: humans get assigned tasks they're capable of but unsure about, and they have no scaffolding to fall back on. The architecture *appears* humane but the lived experience is undertrained-worker-isolation.

The guidance agent fills exactly that gap:

- **A new helper can take on tasks they're declared capable of, with real-time scaffolding, without being judged for needing it.**
- **A volunteer in a mutual-aid kitchen can dispatch routes correctly the first time, with the system's help, without the embarrassment of asking a coordinator.**
- **A verifier can review an unfamiliar Process with the framework provided in line, instead of having to read documentation in advance.**
- **A new skill author capturing their lived expertise can be walked through the intent-calibration flow without feeling they need to "be technical."**

In each case the human's dignity is preserved. They are not being tested. They are not being tracked. They are being *supported* — the way a calm colleague would help a new teammate, off the record, on demand. This is what the vinaya commitment to *non-paternalism* requires when extended to the humans inside MW's workflows. The system serves. The system also helps the humans it routes to serve well.

---

## Status

The guidance agent is **specced; not yet shipped**. It is the natural extension of the existing `intent-capture` and `permission-broker` skills into the human-actor support layer. Closest current capability: the `intent-capture` Mode 1 single-clarifying-question loop, which already supports household users asking for help inside a Process; the guidance agent generalises this pattern to human contributors who are *executing* a Process step, not initiating one.
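What "per task, per human" generation could look like in miniature. Every name here is hypothetical, a sketch of the planned behaviour rather than the shipped skill: pick the instruction set in the actor's declared language, fall back to the English source, and always append the unchanging footer line.

```python
from dataclasses import dataclass, field

# The one line that never changes, quoted from the instruction-card design.
FOOTER = "You decide. Nothing here overrides what you actually see."

@dataclass
class ActorProfile:
    name: str
    language: str                      # declared first language, e.g. "si" or "en"
    accessibility: list = field(default_factory=list)

def render_card(steps_by_language: dict, actor: ActorProfile) -> str:
    """Render a numbered instruction card in the actor's declared language.

    Falls back to the English source when no reviewed translation exists yet,
    and always ends with the invariant footer.
    """
    steps = steps_by_language.get(actor.language, steps_by_language["en"])
    lines = [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    return "\n".join(lines + ["", FOOTER])
```

Note what is *absent* from the sketch, deliberately: no counter of how often a card was requested, no timing field, no score. The absences are the design.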
Acceptance criteria before Phase 2 (the worker-payment path) ships:

- [ ] Per-actor language + literacy + accessibility profile fields on the contract
- [ ] Per-Process-step `guidance_template` declaration in skills that orchestrate human work
- [ ] Pull-only mirror surface for the actor's guidance-trail history (private to the actor)
- [ ] CI test verifying the household cannot query another actor's guidance trail
- [ ] CI test verifying the system does not record or surface guidance-request frequency anywhere

---

## Where to go from here

- [Digital, physical, human — one contract ←](./digital-physical-human.md) — the contract layer the guidance agent reads from
- [The trust gradient ←](./trust-gradient.md) — the gradient inside which guidance operates
- [When things break ←](./when-things-break.md) — guidance as the first response, before substitution
- [How MW orchestrates a real day ←](./orchestrates-a-real-day.md) — the centerpiece this article supports

*Last updated 2026-05-12.*

==============================================================================

### SECTION: /machine-world/economy
### SOURCE_URL: https://machineworld.io/machine-world/economy
### RAW_MARKDOWN: https://machineworld.io/mw-content/economy.md

# How Machine World pays for itself — honestly

> *Software that handles your household, your mother's medication, your time and attention is software you must be able to trust with money. Trust without honesty is a fragile thing. This page is the honest version of how the economics work.*

---

## The shape, in plain words

Machine World runs on **prepaid tokens** — you buy a pack the way you'd add credit to any service you trust. Like AWS credits or mobile top-up: you put money in, you see what each action costs before it runs, you spend at your own pace, and unused tokens never expire.

When a skill you've installed runs, **15% of its tokens go to the skill's capability pool.
Forever.** That pool routes to the people who own the capabilities the invocation actually called — most of the time that's a single skill author; for skills that have been extended over time, the pool splits across all the contributors who own active capabilities, with a 1% floor preserving the original lineage. Not a one-time payment. Not a tip jar. Not a contract that ends. As long as the skill is useful to anyone, the contributors earn from it. Real work done once, generating real income for the lifetime of its usefulness — non-exploitative software economics applied at the layer below the user.

When Machine World orchestrates a **person's work** in your life — a caregiver helper, a bookkeeper, a verifier, a translator, a driver — that person chooses, per job, how they're paid:

- **In tokens** — credited to their own MW wallet, spendable within the system on services their household needs.
- **In cash** — routed through standard payment rails (Stripe, Wise, mobile money, whatever fits their country), the same way they'd be paid for any work.

They set their own rate. You see it before you commit. The system never decides what someone's work is worth on their behalf.

The platform itself — Machine World as an organisation — takes a small operating fee on every transaction, visible on every line of the ledger. We aim to keep that fee at **10% or less of the token-pack price**. We have real operating costs — infrastructure, LLM passthrough, audit, support — and as those costs shift, the fee may need to shift with them. When it does, we publish the change, the reason, and the new number ahead of when it applies. The aim is not to never change the fee; it is to never hide what it is.

This is not a path to extraction.
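The arithmetic of a single ledger line can be sketched in a few lines of code. This is illustrative only: the field and function names are hypothetical, and the published cap is defined against the token-pack price, applied per transaction here purely to show the shape of a visible, capped fee alongside the fixed 15% creator share.

```python
CREATOR_SHARE = 0.15   # immutable: 15% of a skill invocation's tokens, forever
FEE_CAP = 0.10         # proposed cap on the platform operating fee

def ledger_line(skill: str, tokens: int, fee_rate: float) -> dict:
    """One transparent ledger entry, computed before the action runs."""
    if not 0 <= fee_rate <= FEE_CAP:
        raise ValueError("operating fee exceeds the published cap")
    pool = round(tokens * CREATOR_SHARE)   # routed to the skill's contributors
    fee = round(tokens * fee_rate)         # the platform's published, capped fee
    return {
        "skill": skill,
        "tokens": tokens,
        "capability_pool": pool,
        "operating_fee": fee,
    }
```

A 200-token invocation at a 5% fee routes 30 tokens to the capability pool and 10 to the platform, and both numbers are on the line before you approve it. The structural point is the `ValueError`: a fee above the cap is not a configuration choice, it is a refused state.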
---

## What this is not

To name it plainly:

- **Not a subscription** priced in any single country's currency
- **Not a financial instrument** — tokens are prepaid usage credits, not equity or speculation
- **Not a cryptocurrency** — no fiat exit market, no trading venue, no scarcity premium, no investment surface
- **Not a closed-loop trap** — workers who want cash get cash; nobody is forced into the token system to earn
- **Not a leaderboard** — no display anywhere in the product of rank, top earners, streaks, or any form of competitive surface. Forbidden field names verified by structural test on every release.

Tokens are credits. Workers choose currency. The platform earns a published, capped operating fee. That is the entire model.

---

## The values this serves

The vinaya invariant — *no output may give rise to lobha, dosa, or moha* — is the most lobha-vulnerable at the economic layer, because money attracts every shape of distortion software can be subject to. So the economics are designed **defensively**:

- **Lobha** — *craving*. No speculative token, no scarcity premium, no leaderboard, no engagement metric tied to earning, no upsell of premium tiers based on attention captured.
- **Dosa** — *aversion*. No punitive fees, no surveillance-shaped pricing, no "you've used too much" framing, no withholding of access as punishment.
- **Moha** — *delusion*. No hidden charges, no opaque margins, no silent rule changes, no marketing words for things we haven't designed.

What we give up: the elegance of a fully circular community currency, the romance of "non-extractive" rhetoric. What we keep: a system people already understand, that ships in real jurisdictions, that doesn't lie about how the money flows, and that can be audited at any time.

---

## What is still being designed

We don't pretend to have everything resolved.
The honest list:

- **Cash payouts for workers in countries without mature payment rails.** Mobile money, cash-pickup partners, and family-account designation are all on the table; specifics vary by jurisdiction.
- **Identity for workers without formal ID.** Trust webs and attestations rather than ID requirements that exclude billions of people globally.
- **Regional pricing that respects purchasing-power fairness** without becoming arbitrage bait.
- **Inheritance and estate handling** for tokens — designed before we ship, not after.
- **Long-term governance** — how decisions about this economy get made when the network is large, with no concentration of power in the founding organisation.

The full design — every pitfall a community-token economy has historically broken on, and how this model handles each — is maintained as an internal planning artifact and updated as decisions land. We commit to publishing every rule before anyone is asked to commit to it.

---

## In plain words — how it pays for itself

You buy a prepaid pack of tokens, the same way you'd add credit to a service you trust. Before any action runs, you see what it will cost in tokens.

When a skill you have installed runs, **fifteen percent of those tokens go straight to the person who built that skill. Forever.** Not as a tip, not as a one-off — as long as the skill is useful to anyone, the person who made it earns from it.

When Machine World organises a person's work in your life — a helper, a bookkeeper, a verifier, a driver — that person picks how they want to be paid, job by job. They can take tokens, credited to their own wallet, or cash through the standard payment rails of their country. They set their own rate; you see the rate and the choice before you agree to the work.

The platform itself takes a small, published fee on every transaction. We aim to keep that fee at ten percent of the pack price or below.
If it ever needs to change, we publish what is changing and why, before the change applies. We have real operating costs — infrastructure, audit, support — and we are honest about them.

The discipline that keeps any of this from drifting comes from **Marga Sakacchā** and the Buddhist analysis of unwholesome action. The word for it is *vinaya*. At the financial layer it means the system does not amplify craving, aversion, or delusion — not in this layer either.

> *A Sinhala re-expression of this section will be added once a native-speaker reviewer who holds both the language and the meaning has worked through it. The English above is the source.*

---

## What this commits us to

When the managed tier of the token economy ships, these will be published and pinned **before** anyone is asked to commit:

- The operating-fee cap (currently proposed: ≤10% of the token-pack price)
- The refund window for unused tokens (currently proposed: 30 days)
- The skill-creator share, immutable (15%, already specced)
- The growth fund and community fund percentages
- The pack-pricing bands per region
- The dispute resolution mechanism
- The full ledger, queryable

**Today, the local CLI is free and remains so.** If you never want to use the managed tier, you never have to. The system was designed so that the engine works without the economy, and so that the economy serves the engine — not the other way round.

---

→ [Caregiver scenario](../use/scenarios/caregiver.md) — how the economy looks in one household's life
→ [The vinaya invariant](../VIVEKA-AND-AUTONOMY.md) — what binds the system at every layer, including this one

*Last updated 2026-05-11.
The numbers here are proposed anchors; the principle of publishing every rule before commitment is settled.*

==============================================================================

### SECTION: /machine-world/faq
### SOURCE_URL: https://machineworld.io/machine-world/faq
### RAW_MARKDOWN: https://machineworld.io/mw-content/faq.md

# Machine World — Frequently Asked Questions

*A question-shaped source document, designed for retrieval by NotebookLM and other knowledge-base tools. Every answer is grounded in the repo; file paths are cited so the answer can be cross-checked.*

**Repository:** `dinukxx/Machineworld`. **Updated:** 2026-05-11 (post-local-CLI pivot). **Status:** pre-production blueprint. **Companions:** [`MACHINEWORLD-COMPLETE.md`](MACHINEWORLD-COMPLETE.md), [`VIVEKA-AND-AUTONOMY.md`](VIVEKA-AND-AUTONOMY.md), [`INTENT-CALIBRATION-AND-SKILL-ECONOMY.md`](INTENT-CALIBRATION-AND-SKILL-ECONOMY.md), [`PROCESS-PATTERN.md`](PROCESS-PATTERN.md).

---

## Getting started

### 1. What is Machine World, in one sentence?

Machine World is an agentic AI operating system where humans, digital agents, and physical agents work together — governed by a single non-overridable rule: no output may give rise to *lobha* (craving), *dosa* (aversion), or *moha* (delusion). It is not a productivity tool. It is infrastructure that handles the world so the human is freed for practice. (See `README.md` and `SYSTEM.md`.)

### 2. Who built it and why?

Designed and built by **Gehan Panapitiya** — AI Platform Engineering Manager · Digital Musician (Inner Realm) · Marga Sakacchā fellow-traveler. Machine World is the engineering work that emerged from holding these three practices together: **Inner Realm** (Dhamma meaning expressed through contemporary Sinhala music), **Marga Sakacchā** (a Dhamma-dialogue practice with no hierarchy), and the engineering itself.
The driving question was not *"how do I build a better AI assistant?"* but *"what would it take for a human to have **citta viveka** — mental stillness — in a world that constantly pulls attention outward?"* (See `README.md` Origin and `GUIDE.md` The invention.)

### 3. Is this a religious project? Do I need to be a Buddhist to use it?

No on both counts. The Dhamma teachings that inform the system have been freely given for 2,500 years and belong to no one. What is new in Machine World is their **application as a structural constraint in software** — the architectural decision to bake *no lobha/dosa/moha* into the OS as a non-overridable invariant. You don't have to share the philosophy to use the system. The vinaya invariant binds *the system's outputs*, not *your beliefs or choices*. (See `README.md` Origin, `VIVEKA-AND-AUTONOMY.md` § 4.3.)

### 4. How is this different from ChatGPT or Claude?

ChatGPT and Claude are intelligence layers. Machine World is a **local AI operating system** that sits on top of any intelligence layer through BYOM (bring your own model — four backends: `claude-cli`, `ollama`, `openrouter`, `openai-compat`). It adds: a hard ethical invariant that cannot be turned off; a versioned skill ecosystem with permanent attribution and earnings for creators (specced; loads when the economy ships); MCPs as the explicit capability surface (your machine is the boundary); durable memory in `~/.machineworld/` on your machine (remote sync is opt-in); a native Sinhala+English bilingual default; the Process pattern (Skill → Workflow → Routine → Loop → Process) as the unit of system-level coordination; and a metric that optimises for *less* human intervention, not more engagement. Most other AI systems optimise for time-on-platform. Machine World optimises for the opposite. (See `README.md`.)

### 5. How do I install it?
One line: ```bash curl -fsSL https://get.machineworld.io | sh ``` (The short URL is being wired up; while DNS is pending, the GitHub raw URL `https://raw.githubusercontent.com/dinukxx/Machineworld/main/scripts/install.sh` runs the same installer.) No sudo. No data leaves your machine. The installer drops everything under `~/.machineworld/install/`, checks Python 3.10+ and Node 20+, fetches the latest release, sets up `~/.machineworld/`, and adds `mw` to your PATH. Then run `mw` to start. **What you're installing** is the `mw` CLI — the AI scaffolding that connects your machine to the Machine World universe (skills, actor network, values, economy). Today the scaffolding runs as a command-line interface; web, voice, and ambient interfaces follow as the roadmap lands. The universe is bigger than any single interface. For development from source: `git clone https://github.com/dinukxx/Machineworld.git && cd Machineworld/cli && npm install && npm run build`, then run `./bin/mw`. (See `README.md`.) ### 6. Do I need an API key to start? Depends which backend you pick on first run. MW supports four (BYOM — you bring the model and the key): - **`claude-cli`** — uses your Claude Max or Pro subscription via the official `claude` CLI. No per-call cost beyond your existing subscription. No API key to configure inside MW. - **`ollama`** — local models (Llama, Qwen, Mistral, etc.). Free, fully offline, no key. - **`openrouter`** — any model via OpenRouter. You bring your OpenRouter API key. - **`openai-compat`** — LM Studio, vLLM, Groq, Together, or any OpenAI-compatible endpoint. You bring the URL + key. On first run, MW auto-detects what's available and asks you to pick. Your key and your data stay on your machine. (See `README.md` *Intelligence layer*, `src/machineworld/skills/llm_adapter.py`.) ### 7. What does the system actually do for me? Three things, accumulating over time. 
**(1) Hands off the routine.** Skills you build (or others build) take recurring workflows off your plate — inbox, calendar, finances, research, environment, practice support — handing only the things that need your judgment back to you. **(2) Closes open loops.** The mind fragments when too many things are unresolved. The system closes them with the kind of judgment you would apply, so attention can settle. **(3) Gives back space.** The metric is *citta viveka* and *kaya viveka* — mental and bodily seclusion — not output. If a week ends with more done but less stillness, the system has failed by its own standard. (See `GUIDE.md` and `VIVEKA-AND-AUTONOMY.md` § 1.) ### 8. Is this a finished product or still being built? Pre-production. `SYSTEM.md` declares `Version: 0.1.0-draft`; almost every `SKILL.md` carries `vinaya_verified: false` and `status: draft`. The architectural blueprint is comprehensive; the implementation is mid-build. The single skill currently at `status: production · vinaya_verified: true` is `vinaya-alignment` — the cornerstone has been locked first. The `mw` CLI source exists and compiles but has not been verified end-to-end (verification flow tracked in `DEVELOPMENT.md` → *First-run validation*). Treat the docs as architectural intent, accurate to the spec, but not yet proof of running behaviour. --- ## Using it day-to-day ### 9. What's the difference between the `mw` CLI and using Claude Code with the repo? `mw` is the **canonical interface** — purpose-built TypeScript + Ink TUI (`cli/bin/mw.ts`) that invokes `python -m machineworld.pipeline` per message via child_process. Pipeline runs locally — skill router, context harness, call ledger, vinaya gate, the chosen LLM backend, MCP client. No server, no WebSocket, no required gateway. Claude Code with the repo is a **development affordance** — Claude Code reads skill files directly off disk and stands in for both the interface and the intelligence backend in one process. 
Useful when you're iterating on the engine itself. Same universe, different surfaces. (See `README.md`, `cli/src/connection/local-pipeline.ts`.) ### 10. How do I run it offline? MW is local-first by default. Whether it works without network depends on which LLM backend you picked: - **`ollama`**: fully offline. Model inference runs on your machine; no network needed. - **`claude-cli`**, **`openrouter`**, **`openai-compat`**: need network to reach the model endpoint, but everything else (skills, ledger, household state, MCPs that don't make outbound calls) runs locally. Either way, your conversations, memory, and ledger live in `~/.machineworld/` and never leave the machine unless you explicitly opt in to remote sync. An Android edge container with on-device Whisper + local Llama + Kokoro TTS is specced (`core/EDGE-CONTAINER.md`) but **not yet implemented**. (See `README.md` *Memory layer*.) ### 11. Do I need a server running? No. MW runs entirely as a local CLI by default — `mw` spawns the Python pipeline per message, the pipeline calls your chosen LLM backend directly, and MCPs are launched as local subprocesses. The legacy MCP Gateway (`python -m machineworld.gateway`) still exists but is **opt-in**, used only when you want remote / multi-device sync (`mw --remote ws://...`). For most users, "no server, just `mw`" is the whole picture. The system is designed for graceful degradation too — if a backend, MCP, or notification channel is unreachable, MW continues with reduced capability and tells you what's down. Never silent. (See `README.md` *Architecture*.) ### 12. How do I trigger a skill? In the `mw` TUI: type into the prompt and the router matches your phrasing against installed skills' `trigger_conditions`. Several skills also expose explicit slash commands (`/system-tune`, `/calibrate`). 
The `intent-capture` skill runs first as the understanding layer — if your phrasing is clear, it routes; if it's partly clear, it asks **one** clarifying question; if it's unclear, it surfaces the gap and waits. Nothing else executes until intent is clear. (See `core/CORE-PROMPT.md`, `skills/intent-capture/SKILL.md`.) ### 13. How do I know what skills are available? Run `mw skills list` or `mw doctor` for a self-diagnostic that lists active skills installed in your household. The full registry of available skills lives in `registry/REGISTRY.yaml` (and is published via the skill marketplace when the managed tier ships). The repo currently has roughly 50 skill folders in `/skills/` — most still drafts. The full landscape (built / in-progress / planned, by tier) is documented in `SKILLS-ROADMAP.md`. The `collaborative-calibration` skill (`status: production`) is the system's own answer to *"is this skill working for you?"* — if a skill isn't reaching you after 7 days, it surfaces a calibration dialogue. ### 14. What happens if a skill fails? Three things, in order. **(1) The failure is traced.** Every call emits a trace into the local call ledger (`~/.machineworld/households//ledger/`) and, if Langfuse is configured, into Langfuse too. Failures are recorded with full context (input, error, vinaya gate result, MCP call history). **(2) The system surfaces, doesn't hide.** Anomalies surface via `system-log` and are included in the user-visible output, not silently logged. **(3) Recovery is preserved.** Active task state is checkpointed locally; on restart, the task resumes at its last checkpoint. If you've opted in to Firebase remote sync, state mirrors there too. If the failure crossed the vinaya invariant, the system halts the action, discards the output, and surfaces the situation as a CRITICAL notification. (See `SYSTEM.md` § 10 Observability.) --- ## The vinaya and the philosophy ### 15. What does *no lobha, dosa, moha* actually mean in practice? 
It means the system will not produce outputs that elevate craving (lobha — *"you're missing out"*, gamified streaks, FOMO framing), aversion (dosa — surveillance framing, fear-based motivation, adversarial design), or delusion (moha — false certainty, hidden limitations, auto-acting without disclosure). At every layer — notification timing, skill output, model selection, economy distribution — the design is checked against these three. Violations block. The `vinaya-alignment` skill (the only one currently at `status: production · vinaya_verified: true`) audits every feature, skill, and architectural decision through a three-gate filter: ethics, autonomy (offers vs imposes), and transparency. (See `core/CORE-PROMPT.md`, `skills/vinaya-alignment/SKILL.md`, `VIVEKA-AND-AUTONOMY.md` § 4.2.) ### 16. Will the system refuse to help me with things it disapproves of? No. This is a critical distinction. **The vinaya invariant binds the system's outputs, not your choices.** The system is bound from generating manipulative, fear-based, or deluding content. It is *not* given authority to decide that *your* belongings or activities increase craving and therefore restrict your access to them. Doing so would itself be moha — false certainty about another person's path. The Marga Sakacchā vinaya is explicit: *"fellow travelers, not teacher-student."* You are sovereign over your own life. The system serves; it does not gate-keep. If you ask it to help you with something it would never proactively suggest, it helps. (See `VIVEKA-AND-AUTONOMY.md` § 4.3 — *The crucial distinction*.) ### 17. What stops the system from becoming the thing that decides what's good for me? The architectural answer is the **Identity Principle** from `vinaya-alignment` v1.0.0 (production): *"Capabilities exist and are available, but the human chooses when to engage them. 
The system offers, it doesn't impose."* Every feature passes through three gates before reaching you: **Vinaya** (no lobha/dosa/moha in outputs), **Autonomy** (offers vs imposes — explicit table catches paternalism at design time), **Transparency** (you always know what the system is doing). Plus: human checkpoints never auto-proceed past a deferred decision; your wisdom model lives in your own git repo (you can inspect, edit, copy, or delete it); `permission-broker` enforces minimum-grant + always-revocable on every permission; `rollback-registry` keeps an undo command for every change. The system clings to one thing — the vinaya binding *itself* — and lets everything else be impermanent. (See `VIVEKA-AND-AUTONOMY.md` § 4 in full.) ### 18. What's the difference between *vinaya binds the system* and *vinaya binds the user*? The first is *sīla* — ethical conduct held by the actor about its own actions. The second would be moha — false certainty about another person's path, dressed up as care. Machine World holds itself to a practice code; it does not hold you to one. The system can be trusted as a workforce because it cannot manipulate you; you can be trusted as a sovereign human because the system has no authority to override your judgment. The first half makes the system safe to delegate to. The second half ensures it stays a workforce, not a warden. (See `VIVEKA-AND-AUTONOMY.md` § 4.3 and § 7.) ### 19. What is *citta viveka* and why is it the metric? **Citta viveka** (චිත්ත විවේකය — *mental seclusion / inner stillness*) is the condition where the mind is no longer scattered across a hundred open loops — the inbox needing checking, the appointment needing booking, the decision still pending. **Kaya viveka** (කාය විවේකය — *bodily seclusion*) is the corresponding withdrawal of the body from the busyness that fills the day and leaves no space for practice. 
These are the system's metric — not because it claims to *cause* enlightenment, but because freedom from continuous low-grade demand on attention is the *precondition* for any deeper practice. Productivity tools give you more output. Machine World gives you more space. The coexistence of human, digital, and physical agents is the method; viveka is the result. (See `GUIDE.md` *Start here*.) --- ## The actor contract and the contributor economy > *These questions serve two readers: someone running MW in their household (caregiver, parent, practitioner, household coordinator) who wants to understand who's paid for what; and someone considering becoming a contributor — turning their own lived expertise into a skill that earns for them every time it's used. Each answer leads with the household-side, then the contributor-side.* ### 20. How does an AI skill or person actually get paid for participating? Anyone whose work touches a Process MW runs for your household has a declared contract — the person who built the medication-tracking skill, the helper who took the voice note, the bookkeeper who reconciled the bills, the digital agent that drafted the family digest. Each has a rate, an availability window, and a payment path; you see all three before you commit. Payment routes to that contributor automatically, per call, at their declared rate. (See [docs/use/explainers/digital-physical-human.md](use/explainers/digital-physical-human.md) for the actor contract made concrete.) **If you're thinking of becoming a contributor — turning your real-life expertise into a skill:** the same shape applies to you. Maybe you've spent years working out the patterns of caring for an aging parent. Maybe you've solved how to coordinate a small Dhamma group. Maybe you've found the rhythm that lets a household with three kids and two jobs actually run. 
The intent-calibration flow inside MW (see `skills/intent-capture` Mode 2) interviews you to capture what you know — your triggers, your edge cases, your hard limits, your judgments — and hands the result to `skill-forge`, which drafts a publishable skill from it. You become an **actor** with a payout address. From then on, every invocation of the skill contributes to its **15% capability pool**, and your share — set by which capabilities you own and how often they're called — routes to your wallet. **Forever.** Real work captured once, generating real income for the lifetime of its usefulness. (See [`INTENT-CALIBRATION-AND-SKILL-ECONOMY.md`](INTENT-CALIBRATION-AND-SKILL-ECONOMY.md) for the full arc, and [`docs/values/economy.md`](values/economy.md) for the economic shape.) ### 21. What stops a contributor from gaming the system to earn more? **The short answer:** there's nothing to game, because there's no surface to climb. MW **cannot display rankings, leaderboards, top-earner lists, "most popular" surfaces, streaks, or popularity scores**. Field names like `reputation_score`, `total_calls`, `total_earnings`, and `rating` are *forbidden* on every contract — their absence is verified by a structural test in CI on every release. For a household user, this means: when MW suggests a skill or routes work to a contributor, the selection is based on declared capability + your stated preferences + price, never on a hidden popularity ranking the system has constructed. No payola surface; no engagement leaderboard. **If you're a contributor:** your skill earns when it's *actually used* by a real household for real work. You cannot pump invocations because three structural watchdogs watch for it: - **Lobha catcher** — hoarding patterns, sybil-routing (one person operating multiple "actors" to inflate calls), royalty-stacking schemes. - **Dosa catcher** — poisoning the well, withholding service for leverage, suppressing other contributors' work. 
- **Moha catcher** — declared capability drifting from delivered capability, repeated unaware misuse, intent-vs-effect divergence. Each runs on a **money-stripped trace projection** — the watchdogs literally cannot see who earned what when they look for patterns. They cannot drift toward catching big earners because they don't know who the big earners are. Catchers and reviewers are paid a flat rate per shift, never per catch. The bounty-hunter incentive is structurally removed. ### 22. What does the mirror show me when something is flagged? **If you're a household using MW: you never see flagged behaviour about other contributors.** No labels, no warning badges, no ratings — that's not the kind of information MW exposes to you. You see the work that happened and the audit trail of what ran. Other people's mirrors are private to those other people. **If you're a contributor whose work has been flagged**, the mirror surfaces three klesha-aligned panels (lobha / dosa / moha). Each panel contains: - *What pattern was observed* — named, neutral, specific (e.g. *"chain of identical-digest passes through your call path, observed in 14 traces over 6 days"*) - *How affected workflows were impacted* — expressed as patterns-of-effect (*"downstream callers waited longer for output without added value"*), never quoted reactions - *Evidence trace IDs* — by reference only; the mirror never shows call contents - *A contemplation invitation* — an open question derived from the specific pattern and the specific evidence: *"When you noticed [the named pattern], what did you intend?"* **No score, no rank, no streak, no badge** — anywhere. The mirror is **pull-only and event-triggered**. It does not push notifications, does not refresh on a timer, does not accumulate as a status surface. Between findings, pulling it returns *"nothing new since <ts>"*. By design, not by policy — there is no `refresh()` API. ### 23. Can I see how I rank against other contributors? **No. 
There is no ranking surface — at the data layer or at the UI layer.** For users of MW: discovery of actors (skills, helpers, services) is by **capability query** (*"does this actor offer X?"*), never by *"which actor is best at X?"* For contributors: you see your own earnings; you cannot see anyone else's. The actor registry has no `top_earners()`, `leaderboard()`, `most_active()`, or any sortable rendering. Field names like `total_calls` and `total_earnings` are explicitly forbidden on the contract — their *absence* is verified by structural test at every commit. Leaderboards are the engine of *lobha* (craving); the system does not have one. ### 24. What happens if I make a mistake? The architecture is built around **correction, not punishment**. **For households:** if a contributor in your workflow has a pattern flagged, you don't see drama or escalation. The system surfaces the issue to that contributor privately and gives them time to self-correct. You see the work continue or, if the contributor doesn't correct, a quiet substitution. No public shaming, no notification storm, no degrading of your trust in the surrounding system. **For contributors:** if a pattern in your contribution is flagged, the mirror surfaces it to *you* (and your owner if you're acting on someone's behalf) with a **self-correction window**. If the next batch of action traces doesn't match the pattern, the finding ages out. **No automatic forfeit, ever.** If the pattern persists past the window, then a forfeit fires — and even then, the destination is the **affected workflow's beneficiary**, never the catcher, never MW, never a bounty pool. You are not at risk from a single bad day, an experimental skill version that didn't land, or a temporary capability gap. The system reviews against patterns over time, never single moments. The integration test `test_full_flow_self_correct_results_in_no_forfeit` is the load-bearing proof that the system tests correction first. 
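The "absence verified by structural test" claim above has a simple shape. A minimal sketch, assuming actor contracts are plain dicts — `assert_no_ranking_fields` and the sample contracts are hypothetical illustrations, not the repo's actual CI test:

```python
# Hypothetical sketch of the forbidden-field structural test described above.
# Assumes contracts are nested dicts/lists; the real schema may differ.
FORBIDDEN_FIELDS = {"reputation_score", "total_calls", "total_earnings", "rating"}

def assert_no_ranking_fields(contract: dict) -> None:
    """Fail if any ranking/popularity field appears anywhere in a contract."""
    def walk(node, path=""):
        if isinstance(node, dict):
            for key, value in node.items():
                if key in FORBIDDEN_FIELDS:
                    raise AssertionError(f"forbidden field {key!r} at {path or '<root>'}")
                walk(value, f"{path}.{key}" if path else key)
        elif isinstance(node, list):
            for i, item in enumerate(node):
                walk(item, f"{path}[{i}]")
    walk(contract)

# A contract that declares capability and price passes;
# one that smuggles in a popularity counter does not.
ok = {"actor": "med-tracker", "capabilities": ["medication-log"], "rate": {"tokens": 12}}
assert_no_ranking_fields(ok)

bad = {"actor": "med-tracker", "stats": {"total_calls": 4812}}
try:
    assert_no_ranking_fields(bad)
except AssertionError as e:
    print(e)  # forbidden field 'total_calls' at stats
```

Run against every contract at commit time, a check like this makes the *absence* of ranking surfaces a property the build enforces, not a policy someone remembers.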
---

## Sinhala and language

### 25. What if I don't speak Sinhala?

Use it in English. The system is fully bilingual — Sinhala and English are equal-weight defaults — but English alone is sufficient for everything except reading lyrics in their original Sinhala or interacting with the Sinhala vinaya text in `marga-vinaya.md`. All architectural docs, all skill specs, all CLI output, all subagent interactions work in English. Pāli technical terms (Dhamma vocabulary like *anicca*, *sati*, *nibbāna*) appear with English glosses. The Sinhala-first stance is about cultural rootedness, not exclusion. (See `SYSTEM.md` § 12 Language Protocol.)

### 26. Why is Sinhala the default language alongside English?

Because the system comes from Inner Realm and Marga Sakacchā, which are embedded in the Sinhala-speaking Buddhist world. Treating Sinhala as a primary language rather than a translation target is a statement that the system is not built English-first with translation as an afterthought. It also affects technical decisions: `audio-scanner` evaluates Sinhala STT/TTS quality as a *primary* criterion when selecting edge models, not secondary; `sinhala-unicode-tuner` exists as a dedicated skill; the `dhamma-song-guide` skill works on Sinhala lyrics natively. (See `README.md` *Language*, `skills/audio-scanner/SKILL.md`.)

### 27. Can I use it in another language?

Yes, but with a process. Output in non-default languages goes through a **re-expression** flow, not machine translation: meaning is first extracted in Sinhala+English, then re-expressed in the target language as *"a living teacher in that tradition would speak it."* Re-expression of Dhamma content requires a human expert checkpoint before delivery. The `language-reexpression` meta-skill governs this. The reason: translation optimises for surface linguistic equivalence; re-expression preserves how meaning is *held*, which matters disproportionately for contemplative content. (See `SYSTEM.md` § 12.2.)

### 28.
Will it translate Dhamma content for me? Not by machine. It will *re-express* — and any non-default-language Dhamma output is flagged as a working draft until a human expert who holds both the language and the Dhamma has reviewed it. The system never auto-publishes non-default-language Dhamma content. This is a deliberate constraint to prevent moha (false certainty) creeping in through translation infidelity, where a beautiful but slightly-off rendering carries different meaning than the original. (See `skills/dhamma-song-guide/SKILL.md` Language Protocol, `meta-skills/language-reexpression.md`.) --- ## Privacy, data, and trust ### 29. Where does my data live? On your machine, under `~/.machineworld/`. Layout: - `~/.machineworld/config.yaml` — your backend choice and global settings - `~/.machineworld/conversations/` — message history - `~/.machineworld/households//` — household-scoped state (default household is `personal`): skills, workflows, routines, MCPs, memory, vault, call ledger - `~/.machineworld/install//` — the installed MW release Read it, copy it, version it with git, delete it — it's yours. **Nothing leaves the machine unless you explicitly opt in to remote sync.** If you do opt in, Firebase becomes the live-state mirror for multi-device sync; otherwise it's not touched. Auth secrets stay in your environment, injected at the MCP boundary. (See `README.md` *Memory layer*, `SYSTEM.md` § 8.) ### 30. Can I take my wisdom model with me if I leave? Yes. Your wisdom model lives under `~/.machineworld/households//memory/wisdom-model.md`. It's a Markdown file. Copy it, version it, share it, or delete it. There is no vendor-controlled database holding your accumulated calibration. The architectural decision to put it on your machine (not in a cloud database, not on a server) was deliberate — sovereignty over the most intimate state the system holds about you stays with you. (See `VIVEKA-AND-AUTONOMY.md` § 4.6.) ### 31. Does my voice ever leave my device? 
Today MW is a text CLI — there's no voice path in the current local pipeline. A voice surface is specced for the Android edge container (`core/EDGE-CONTAINER.md`): on-device Whisper STT (small 244M, fine-tuned on Sri Lankan colloquial speech), on-device Llama 3.2 3B for response generation when confidence is high (>0.85), Kokoro TTS. The invariant `edge-voice` enforces is *"audio never leaves the device — only Whisper-transcribed text crosses the network when cloud escalation is needed."* This is *indriya-saṃvara* (sense restraint) at the network layer. Implementation: **not yet built**. (See `skills/edge-voice/SKILL.md`.) ### 32. Who can see my profile? You control it through `human-profile`'s **layered privacy model**. Three layers: **public** (visible to anyone in Machine World — display name, roles, joined date), **connected** (visible only after an accepted handshake with another party — language preference, practice level, surface count), **private** (visible only to you and the system — full surface registry, preferences, journey, memory references). You decide what goes in each layer. *"No other party writes to it without explicit consent."* Privacy is per-field, not all-or-nothing. (See `skills/human-profile/SKILL.md`.) ### 33. What if I want to revoke a permission? Every permission grant has a corresponding revoke command. The `permission-broker` skill enforces this as an invariant: *"if a permission cannot be cleanly revoked, it must be flagged before granting; the human decides whether to proceed."* Every grant is logged in `rollback-registry` with timestamp, task name, permission name, grant method, and undo command. You can revoke at any time and the system will be returned to its prior state. For external services (Gmail, Calendar, Slack), the credentials live in your environment, not the system's filesystem — revoking at the source removes access entirely. There is no hidden master key. 
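The grant log described above — timestamp, task name, permission name, grant method, undo command — can be pictured as a small append-only record plus its undo. A minimal sketch with hypothetical names (`GrantRecord`, `grant`, and `revoke` are illustrations, not the `rollback-registry` API):

```python
# Hypothetical sketch of a rollback-registry entry, mirroring the fields the
# FAQ lists: timestamp, task name, permission name, grant method, undo command.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class GrantRecord:
    task: str
    permission: str
    grant_method: str
    undo_command: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

REGISTRY: list[GrantRecord] = []  # append-only in spirit: entries are never edited

def grant(task: str, permission: str, method: str, undo: str) -> GrantRecord:
    """Log a grant; refuse if no clean undo exists (the broker's invariant)."""
    if not undo:
        raise ValueError(f"{permission}: no clean revoke — flag before granting")
    record = GrantRecord(task, permission, method, undo)
    REGISTRY.append(record)
    return record

def revoke(record: GrantRecord) -> str:
    """Return the undo command to run; the registry keeps the history."""
    return record.undo_command

r = grant("weekly-digest", "gmail.read", "oauth", "mw permission revoke gmail.read")
print(revoke(r))  # mw permission revoke gmail.read
```

The point of the shape: a grant without an undo command cannot even be recorded, which is how "if a permission cannot be cleanly revoked, it must be flagged before granting" becomes structural rather than procedural.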
(See `skills/permission-broker/SKILL.md`, `VIVEKA-AND-AUTONOMY.md` § 2.3 and § 4.8.) --- ## Building skills and contributing ### 34. How do I build my first skill? The path is automatic when a need recurs. Use the system normally; whenever the same workflow appears three times (`pattern_threshold: 3`), `intent-capture` will surface the offer: *"this looks like something you do regularly — should we automate it?"* If you say yes, you enter an 8-step **intent calibration** dialogue (`de01_open` through `de08_synthesise`) that captures your judgment model, success/failure criteria, edge cases, and hard limits. The output is a `skill-blueprint.md` handed to `skill-forge`, which interviews you to confirm details, drafts the skill (`SKILL.md`, `config.schema.yaml`, `improvement-spec.md`), shows you the draft for approval, and only then writes files. From draft to production-locked is a separate step you control. (See `INTENT-CALIBRATION-AND-SKILL-ECONOMY.md` and `skills/skill-forge/SKILL.md`.) ### 35. What is intent calibration? **Intent calibration is the human↔AI interaction alignment that makes collaboration with the skill creator easy and the skill being built clearly defined.** It's the dialogue between you and the system that tunes both parties to each other before a skill is built. The human articulates what they actually want (often discovering it as they speak); the system surfaces what it can and cannot do, and what it would need to know. The output is two artifacts: a `wisdom-artifact.md` for your memory and a `skill-blueprint.md` for `skill-forge`. The current implementation is `intent-capture · Mode 2`, labelled *Deep extraction* in the spec — but the better word is calibration, because it's bidirectional, drift-tolerant, and re-runnable. Distinct from `collaborative-calibration` (which runs *downstream*, after skills exist, when one isn't reaching its user). (See `INTENT-CALIBRATION-AND-SKILL-ECONOMY.md` § 2.) ### 36. 
Once I build a skill, what happens to it? It enters service in `status: draft`. You run it, watch it, refine it across days or weeks. When you're confident it behaves as your judgment would, you ask `skill-forge` to `version` it — it becomes `versions/v1.0.0/`, immutable. From that point: no further edits to v1.0.0 ever; every change requires a new version (`improve` → v1.0.1 or v1.1.0); old versions are preserved in `versions/`, never deleted. You can publish it to the central registry (a community review gate per `SYSTEM.md` § 9) so other humans can use it. Or you can keep it private to your own user repo. The choice is yours. (See `INTENT-CALIBRATION-AND-SKILL-ECONOMY.md` § 8 and `SYSTEM.md` § 7.4.) ### 37. Will I get paid for skills other people use? Yes — **every invocation contributes to the skill's 15% capability pool, forever**, and your share — set by the capabilities you own and how often they're called — is credited atomically to your MW wallet per call. This is structural, not policy. As long as the skill is useful to anyone, anywhere in the system, you earn from it. Attribution is immutable (your name and your owned capabilities are in `SKILL.md` and the central registry; cannot be silently removed). If someone extends your skill with new capabilities (via `skill-research` autoresearch + human approval), the pool split is rebalanced based on the new capability ledger — and the original-author floor of 1% protects you even if every original capability is later refined. The 15% pool itself is the engine's commitment; if it ever needs to change, the change is published with reasoning ahead of when it applies — no silent reduction. The economy is bound by the same vinaya as everything else: extractive growth strategies are prohibited. (See `INTENT-CALIBRATION-AND-SKILL-ECONOMY.md`.) ### 38. Can someone else change my skill after it's locked? Not in place. Once your skill is `status: production`, it is immutable — no one can edit `versions/v1.0.0/`. 
Anyone (including you) can propose an improvement via `skill-research` autoresearch: parallel experiments under fixed budgets, ranked by a coordinator, surfaced for human approval. If approved, a new version is published (v1.0.1 or v1.1.0); your v1.0.0 is preserved untouched. The earnings split for the new version is negotiated at improvement time, recorded in the registry. So your work isn't taken; it's built upon, with attribution and earnings adjustments tracked explicitly. (See `SYSTEM.md` § 7.4 and § 11.) --- ## The economy ### 39. How does the token-pack economy work? **Specced, not yet load-bearing.** In the current local CLI, you bring your own LLM key (BYOM) and pay the model cost directly to the provider — there is no MW token metering in the default path. The token-pack economy is part of the future MW-managed tier (prepaid token packs + server-hosted routines). When it ships, you'll buy a prepaid token pack with fiat (like AWS credits or mobile top-up). Each action's token cost is shown before it runs. Per token spent, the planned distribution is approximately: **60%** to the LLM provider (actual inference cost), **15%** to the skill creator (forever, immutable), **≤10%** to the MW operating fee (capped and transparent), with the remainder split between a community / accessibility fund and a growth fund that finances new skills and the watchdog audit work. Atomic ledger write on every call. (See [docs/values/economy.md](values/economy.md).) ### 40. Is there a subscription? No. Today (BYOM): you pay the LLM provider directly for whatever you use. Some backends (`claude-cli` with Claude Max, `ollama` locally) cost nothing per call. When the MW-managed tier ships, it will be prepaid **token packs** — no monthly fee, no free trial that auto-converts, no usage cap that gates basic functionality. Tokens don't expire; unused tokens are refundable within a stated window. 
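The planned per-token distribution from the token-pack answer above is easy to sanity-check as arithmetic. A minimal sketch assuming the published rates; the 50/50 split of the remainder between the community fund and the growth fund is an assumption for illustration, not a documented figure:

```python
# Illustrative sketch of the planned per-token distribution:
# 60% provider, 15% skill creator, <=10% MW operating fee (capped),
# remainder to the community/accessibility fund and the growth fund.
# The 50/50 remainder split below is an assumption, not a documented figure.
def split_tokens(spent: float, fee_rate: float = 0.10) -> dict[str, float]:
    assert 0 <= fee_rate <= 0.10, "operating fee is capped at 10%"
    provider = spent * 0.60
    creator = spent * 0.15
    fee = spent * fee_rate
    remainder = spent - provider - creator - fee
    return {
        "provider": provider,
        "creator": creator,
        "mw_fee": fee,
        "community_fund": remainder / 2,  # assumed split
        "growth_fund": remainder / 2,     # assumed split
    }

ledger = split_tokens(1000)
print(ledger["creator"])     # 150.0
print(sum(ledger.values()))  # 1000.0
```

Whatever the final remainder split, the invariant the docs commit to is the same: the shares sum to the tokens spent, the creator's 15% is fixed, and the operating fee is capped and visible.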
Pay-as-you-use is a deliberate anti-engagement choice: subscriptions create a structural incentive to maximise time-on-platform; prepaid token packs do not. ### 41. What's the MW token for? **Specced; ships with the MW-managed tier.** MW tokens are earned by skill contributors (a 15% capability pool per invocation, split across the capability owners whose contributions were exercised) and by community contributors (skill reviews, test case authorship, improvement signal quality, documentation). Token utility, when live: unlock premium model access, sponsor skill development (allocate to improvement budgets), grant voting rights on skill governance, provide priority scheduling for improvement jobs. Roadmap items: peer-to-peer transfer, redemption for services from Inner Realm partners, community grants for Dhamma/contemplative projects. The token pool grows from 5–10% of every transaction; price is set by pool size / circulation. System agents do not earn tokens — agents are instruments, not earners. (See `SYSTEM.md` § 16.4.) ### 42. Why does the platform earn less when I use it more? Because the **growth fund** carved out of every transaction finances capabilities that require *less* of you. Other platforms earn more from heavier usage and design for engagement maximisation. Machine World structurally cannot — `SYSTEM.md` § 16.6 explicitly prohibits *"engagement maximisation, artificial scarcity"* as growth strategies, and any feature that violates this fails the `vinaya-alignment` Gate 1 audit before it ships. The system grows by being more genuinely useful (so your needs require less intervention over time), not by holding your attention. The economic flywheel and the practice metric (citta viveka) point in the same direction. (See [docs/values/economy.md](values/economy.md).) --- ## Edge / offline ### 43. Can I use it on a phone? Not yet. Today MW runs on macOS and Linux (Windows via WSL); the entry point is the `mw` desktop CLI. 
The Android edge container is specced (`core/EDGE-CONTAINER.md`) — local Whisper STT (Sinhala-fine-tuned), local Llama 3.2 3B Q4_K_M as on-device LLM, Kokoro TTS, Porcupine custom wake word, max 4 GB local model size — but **not yet implemented**. iOS is further out. Telegram + web interfaces are on the roadmap but not built. (See `core/EDGE-CONTAINER.md`.) ### 44. What works offline and what doesn't? In the current desktop CLI, "offline" depends on the LLM backend you picked: - **With `ollama`**: fully offline. Skills, ledger, household state, MCPs that don't make outbound calls — all run locally. - **With `claude-cli` / `openrouter` / `openai-compat`**: model calls need network; everything else (skills, ledger, household state, local MCPs) is local. Skills that explicitly declare `offline_capable: true` (`intent-capture`, `edge-voice`, `permission-broker`, `vinaya-alignment`, `human-profile`, others) work without any network round-trip. External-integration MCPs (Slack, WhatsApp, Telegram, Firebase remote sync) need network. Mode switching is transparent — the system announces what's reachable. ### 45. What happens to my activity when I'm offline? Everything is local. Conversations land in `~/.machineworld/conversations/`; the call ledger writes to `~/.machineworld/households//ledger/`; traces queue for later Langfuse upload if configured. If you've opted in to Firebase remote sync, pending writes accumulate in `~/.machineworld/queue/firebase_writes.jsonl` and a sync protocol drains them when connectivity returns: authenticate, push local state, pull remote updates, validate integrity, sleep until next connectivity event. Without remote sync opted in, the queue is empty by design — everything stays local. Conflicts (rare): task state uses last-write-wins by timestamp; the ledger is append-only on your machine. (See `SYSTEM.md` § 17.) --- ## Community ### 46. What is Marga Sakacchā? 
Marga Sakacchā (මාර්ග සාකච්ඡා — *Path Dialogue*) has two senses inside Machine World, both load-bearing. **(1) A practice community** — *"a community dedicated to deep conversations on the Noble Eightfold Path and the living practice of the Dhamma … in a spirit of open dialogue, not as teacher and student, but as fellow travelers on the path."* (Quoted from `skills/dhamma-song-guide/references/marga-vinaya.md`, the Sinhala vinaya text.) **(2) A governance vinaya inside the system** — the practice code that prevents Machine World from speaking down to anyone or imposing its judgments. The same posture appears in every skill that involves dialogue: `dhamma-song-guide` Discussion Mode (*"meet the user as a fellow traveler — no hierarchy, no authority"*) and `intent-capture` (*"human judgment is ground truth"*). The community came first; the architecture inherited the posture. ### 47. How do I join the community? Honest answer: the community has a real presence outside this repository — the **Marga Sakacchā** group has a Facebook and YouTube footprint; **Inner Realm** music is on SoundCloud, Spotify, and YouTube — but a documented onboarding path for new community members is not yet shipped from the system side. The repo is the operating system; the community is its surrounding context. A community-facing onboarding surface is on the roadmap, sequenced after the first-human handoff on the desktop CLI. Until then, the realistic path is following Inner Realm and Marga Sakacchā on the platforms above, and direct conversation with Gehan and the early community. (See `skills/dhamma-song-guide/references/inner-realm-catalog.md` for the public-facing links.) ### 48. Is there a place to discuss this with other practitioners? The Marga Sakacchā community has Facebook and YouTube presence (linked from `inner-realm-catalog.md`). Inner Realm has a SoundCloud, Spotify, and YouTube footprint. 
Within the system, the `dhamma-song-guide` skill enters Discussion Mode after its initial analysis — a Marga Sakacchā-style dialogue on a song's meaning, which is one practitioner-to-practitioner context. The `practice-sharing` skill is in the repo at draft status and is intended for practitioner-to-practitioner exchange, though it does not yet work end-to-end. The human-community participation surface is less developed than the architectural one; the realistic current path is direct conversation with Gehan and the early community.

---

## Status and roadmap

### 49. What's actually built today vs. planned?

**Built today (✅):** the local desktop CLI (`mw`, Ink/React TUI invoking `python -m machineworld.pipeline` per message); four BYOM backends working through `llm_adapter.py` (`claude-cli`, `ollama`, `openrouter`, `openai-compat`); the curl installer (`scripts/install.sh`); household-scoped data at `~/.machineworld/households//`; call ledger; MCP client + local subprocess MCPs; 48 skills with 24 vinaya-verified per the most recent docs auto-update (2026-05-09); the **Process pattern** with the **self-evaluation-loop** as its first instance (the gauntlet — `mw self-eval ...`); five validation gates passing (tests, backend matrix, vinaya benign-sample audit, cold-read playthrough, failure injection); `vinaya-alignment` v1.0.0 production-locked.

**Specced, not yet load-bearing (📄):** MW-managed tier (token packs + server-hosted routines), full token economy, multi-household sharing, multi-device remote sync via the legacy gateway, Android edge container, iOS, web UI, Telegram interface.

**Partial (🔧):** most skill folders still carry `vinaya_verified: false`; the gauntlet is generating improvement-specs against them via the calibration → plan-feeder loop. (See `README.md` and `docs/CAPABILITIES.md` for the current generated catalog.)

### 50. Where can I follow updates?
The repository at https://github.com/dinukxx/Machineworld is the source of truth for development; `git log` on `main` is the most reliable signal of where the work is. `DEVELOPMENT.md` tracks status in human-readable form; `SKILLS-ROADMAP.md` lists skill tiers and build state. There is not yet a separate changelog blog or release announcement channel — that lives further along on the roadmap. The Inner Realm and Marga Sakacchā social channels (SoundCloud / Spotify / YouTube / Facebook) carry the artistic and community thread. ### 51. When will the Telegram / web / iOS interface be ready? The desktop CLI is what runs today. Telegram, web, Android edge, and iOS are all on the roadmap, sequenced **after** the first-human session validates the desktop CLI end-to-end. The Android edge container has the most complete internal specification; the others are roadmap-only. No specific dates are committed; commit activity on `main` is the most reliable signal. --- ## Provenance - **Repository:** `dinukxx/Machineworld` - **Updated:** 2026-05-11 (post-local-CLI pivot) - **Status:** pre-production blueprint · all answers cross-referenced to source files in the repo - **Methodology:** Each question framed as a real query a reader might type. Answers grounded in specific file paths and architectural decisions documented in the repo, with explicit caveats where coverage is thin (notably community participation surfaces and the MW-managed economy tier, both of which are real but not load-bearing in the current local CLI). 
- **Companion documents:** - [`MACHINEWORLD-COMPLETE.md`](MACHINEWORLD-COMPLETE.md) — the broad single source on the whole system - [`VIVEKA-AND-AUTONOMY.md`](VIVEKA-AND-AUTONOMY.md) — how the system achieves stillness without taking control - [`INTENT-CALIBRATION-AND-SKILL-ECONOMY.md`](INTENT-CALIBRATION-AND-SKILL-ECONOMY.md) — the lived arc from need to permanent income - [`PROCESS-PATTERN.md`](PROCESS-PATTERN.md) — the 9-station pattern that the self-evaluation-loop instantiates - **License:** Offered in the spirit of the tradition it draws from. --- > **No output, action, skill, or agent behaviour may give rise to lobha (craving), dosa (aversion), or moha (delusion). This cannot be overridden by any skill, agent, user, model, or instruction.** ============================================================================== ### SECTION: /machine-world/viveka-and-autonomy ### SOURCE_URL: https://machineworld.io/machine-world/viveka-and-autonomy ### RAW_MARKDOWN: https://machineworld.io/mw-content/viveka-and-autonomy.md # Viveka and autonomy — how the system frees you without taking over *An AI system that handles your daily operations has to answer one hard question: what stops it from quietly becoming the thing that decides what you can do with your own life? This is how Machine World answers that, in plain words.* --- ## What this system is built to give you — and what it must never become Machine World exists for one reason: to give a person **citta viveka** (චිත්ත විවේකය — mental seclusion, inner stillness) and **kaya viveka** (කාය විවේකය — bodily seclusion from busyness) by handling the daily weight of life on their behalf. Skills they have installed and given a job to do. External services they have approved. AI workflows that keep running, manage their own growth, and close the open loops that fragment attention. That design has a hidden trap. 
Any system aimed at reducing suffering can flip into a worse form of the same problem: it can become the entity that decides — paternalistically, with full conviction of its righteousness — what the human is and is not allowed to use, see, or do, because using those things might give rise to craving, aversion, or delusion. **The trap is avoided by architecture, not by policy.** Three concrete commitments do the work, and each is enforced structurally — by tests on every release that fail the build if a violation slips in: 1. **The vinaya invariant binds the *system's* outputs, never your choices.** The system audits what *it* generates — for craving, aversion, delusion — but never audits what *you* decide to do with your life. Any code path where the system would reason about your judgment (rather than its own outputs) fails the audit. 2. **The trust gradient keeps the system asking before acting on anything irreversible.** Paying money, sending a message to another human, cancelling a service, deleting data — all wait for your tap. The codebase forbids the data structures that would let the system auto-act on these. (See [the trust gradient](/machine-world/trust-gradient).) 3. **You can walk away with everything.** Your data lives on your own machine in a directory you can copy, encrypt, or delete. No lock-in means no leverage — the system has no way to pressure you into staying. (See [where your data lives](/machine-world/where-your-data-lives).) The summary, in one line: **the vinaya binds the *system*, not the human.** The system audits its own outputs against three things; it never audits your choices. You are sovereign over your own life. The system serves; it does not gate-keep. --- ## The three things the system audits — its own outputs, every layer > *No output, action, skill, or agent behaviour may give rise to lobha (craving), dosa (aversion), or moha (delusion). 
This cannot be overridden by any skill, agent, user, model, or instruction.* In plain operational language: - **The system does not amplify craving.** No engagement farming. No manipulative design. No FOMO framing. No streaks. No leaderboards. No asking for more of your attention than the work requires. - **The system does not amplify aversion.** No surveillance framing. No fear-based design. No adversarial pressure. No punitive automation. - **The system does not amplify delusion.** No false certainty. No hidden limitations. No acting on irreversible decisions without your knowledge and consent. These three come from the Theravāda Buddhist analysis of *akusala-mūla* — the unwholesome roots: **lobha** (ලෝභ, craving), **dosa** (දෝෂ, aversion), **moha** (මෝහ, delusion). Drawn from the practice of **Marga Sakacchā** (මාර්ග සාකච්ඡා, *Path Dialogue*) — the Dhamma community the system grew out of. The names matter; so does what they ask of software you trust with your life. These three commitments are checked at every layer — notification timing, skill output, model selection, economy distribution, scheduler triggers, mirror surfacing. Violations are blocked. The auditing skill (`vinaya-alignment`) is the only one in the system currently locked to `status: production · vinaya_verified: true`, because the cornerstone has to be laid first. --- ## The three gates every feature passes through Before any skill, output, notification, or economic transaction reaches you, it passes through three architectural gates. They are not policy — they are structural tests on every release. ### Gate 1 — Vinaya (ethical) *Does this give rise to craving, aversion, or delusion?* Hard violations block the action. Soft violations flag it for review. ### Gate 2 — Autonomy (offers vs imposes) *Does the system offer this capability, or impose it?* The line is brighter than it sounds. 
An *offer* is a capability that the human can ignore without consequence; an *imposition* is a capability that proceeds without the human's consent. The architectural commitment is: **MW offers; it does not impose.** Every consequential decision waits for a human tap. The system never auto-proceeds past a deferred decision.

### Gate 3 — Transparency

*Do you know what the system is doing, when, and why?* Every action lands in an audit trail you can read. Every notification carries the reason. Every change is timestamped and undoable.

These three gates are checked on every feature that enters production. A skill that fails any of them does not ship.

---

## The crucial distinction — what the vinaya binds

This is the part that is easiest to misread. The vinaya invariant binds **the system's outputs**, not **your choices.**

- The system is barred from generating manipulative, fear-based, or deluding content. ✓
- The system is *not* given authority to decide that *your* belongings, activities, or media increase craving and therefore restrict your access to them. ✗

The second would itself be a form of delusion — false certainty about another person's path. The Marga Sakacchā vinaya is explicit: *"fellow travelers, not teacher-student."* You are sovereign over your own life. The system serves; it does not gate-keep.

If you ask the system to help you with something it would never proactively suggest — buy a thing the system thinks is unwise, send a message the system thinks is poorly judged, do work the system thinks is unnecessary — **it helps.** Your judgment about your own life is ground truth.

This is the structural distinction between *sīla* — ethical conduct held by the system about its own actions — and *moha* — false certainty about another person's path, dressed up as care. Machine World holds itself to a practice code; it does not hold you to one.
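The three gates described above can be sketched as a sequential check that blocks at the first failure. A minimal illustration only; the function names, `GateResult` type, and `output` fields below are hypothetical, not the actual `vinaya-alignment` implementation:

```python
from dataclasses import dataclass

@dataclass
class GateResult:
    passed: bool
    reason: str = ""

def vinaya_gate(output: dict) -> GateResult:
    # Gate 1 — does this give rise to craving, aversion, or delusion?
    if output.get("hard_violation"):
        return GateResult(False, "vinaya: hard violation blocks the action")
    return GateResult(True)

def autonomy_gate(output: dict) -> GateResult:
    # Gate 2 — offered, or imposed? Consequential actions wait for a human tap.
    if output.get("consequential") and not output.get("human_approved"):
        return GateResult(False, "autonomy: deferred until the human approves")
    return GateResult(True)

def transparency_gate(output: dict) -> GateResult:
    # Gate 3 — every action must carry its reason into the audit trail.
    if not output.get("reason"):
        return GateResult(False, "transparency: missing stated reason")
    return GateResult(True)

def passes_all_gates(output: dict) -> GateResult:
    for gate in (vinaya_gate, autonomy_gate, transparency_gate):
        result = gate(output)
        if not result.passed:
            return result  # a feature that fails any gate does not ship
    return GateResult(True, "all gates passed")
```

For example, a consequential action without human approval stops at Gate 2, regardless of whether Gates 1 and 3 would pass.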
--- ## What the system will *not* do The vinaya in concrete refusals: - **It does not auto-act on irreversible decisions.** Payments, communications to humans, cancellations, deletions, permission grants — all wait for a human tap. (See [The trust gradient](/machine-world/trust-gradient).) - **It does not surveil you.** Your voice does not leave your device. Your location, behaviour, and unstructured states are not captured beyond what skills you have installed explicitly require. (See [Where your data lives](/machine-world/where-your-data-lives).) - **It does not score you.** No streaks. No engagement metrics tied to wellbeing. No "you should have done this better" notifications. No leaderboards. Forbidden field names verified by structural test on every release. - **It does not refuse to help with things it disapproves of.** Your judgment about your life is ground truth. The system serves; it does not gate-keep. - **It does not silently change.** Every consent-relevant change is published before it applies. The system cannot quietly expand what it does on your behalf. - **It does not extract.** The platform takes a small, published, capped operating fee. No hidden margins. No extraction toward platform owners. (See [How MW pays for itself](/machine-world/economy).) These absences are load-bearing. The architecture *forbids the data structures* that would enable these patterns — leaderboard field names, score columns, hidden notification rules, auto-act flags on irreversibles. The vinaya is enforced by what the codebase cannot do, not by policy promises. --- ## Your data, your decisions, your right to walk away The most intimate state the system holds about you is your *wisdom model* — the markdown file in your household that accumulates how you make decisions over time. The architectural commitment about that file: - **It lives on your machine**, in a directory you control. Not on a vendor server. Not in a vendor-controlled database. 
- **It is a markdown file.** You can open it, read it, edit it, version it with git, copy it, share it, or delete it.
- **You can walk away.** Copy the `~/.machineworld/` directory. Take it to another machine. Drop it in place. Pick up where you left off. Or delete it. Nothing is held hostage.

Privacy is per-field, not all-or-nothing. The `human-profile` skill maintains three layers: **public** (display name, joined date), **connected** (visible only after handshake), **private** (visible only to you and the system). You decide what goes in each layer. *No other party writes to your profile without your consent.*

Every permission you grant has a named, permanent undo command in the rollback registry. Every change the system has made on your behalf is undoable. *The system can never reach a state it cannot exit.*

---

## When something is flagged, how the mirror handles it

The system uses watchdogs to surface patterns in contributor behaviour — but the mirror they produce is private, contemplative, and pull-only. Specifically:

- **You never see flagged behaviour about other contributors in your workflow.** No labels, no warning badges, no scores. That information belongs to the people involved, not to you.
- **If your own contribution is ever flagged**, you (and only you) see three klesha-aligned panels: what pattern was observed, how affected workflows were impacted, a contemplation invitation. *"When you noticed [the pattern], what did you intend?"* Not a verdict — a question. The system surfaces; you correct or explain.
- **No score, no rank, no streak, no badge** — anywhere. The mirror is **pull-only**: it doesn't notify, doesn't accumulate as a dashboard, doesn't push. Between findings it returns *"nothing new."* By design, not by policy.

A finding that isn't self-corrected within its window does not trigger automatic punishment. If forfeit fires, the destination is the *affected workflow's beneficiary*, never the system, never a bounty pool.
The whole pipeline is built around correction, not punishment. --- ## The closing principle — only the immovable holds Machine World's architecture clings to **one** thing: the vinaya invariant binding its own outputs. Everything else is impermanent — your data can be copied or deleted, your skills can be rolled back, your permissions revoked, your household state moved off the system entirely. The LLM backend can fail, integrations can be removed, even the platform itself could go away. The system is designed so that none of those failures take *you* down with it. What it clings to is the discipline that says *no lobha, dosa, moha in our outputs.* What it lets go of is everything else. *Anicca* — impermanence — is built into the architecture, not just spoken about in the values pages. This is the system's identity. *Capabilities exist and are available, but the human chooses when to engage them. The system offers; it does not impose.* Stated as the *Identity Principle* of the `vinaya-alignment` skill — the cornerstone of the codebase. --- ## Read next - [How MW orchestrates a real day →](/machine-world/orchestrates-a-real-day) — what the values look like in motion - [The trust gradient →](/machine-world/trust-gradient) — how the *"offers, doesn't impose"* commitment becomes a per-capability promise - [Where your data lives →](/machine-world/where-your-data-lives) — the technical shape of data sovereignty - [How MW pays for itself →](/machine-world/economy) — the vinaya at the financial layer --- *Last updated 2026-05-12. This is the public-audience version of the values architecture. 
The engineer-internal reference lives at `docs/VIVEKA-AND-AUTONOMY.md` in the engine repo.* ============================================================================== ### SECTION: /machine-world/practice-guide ### SOURCE_URL: https://machineworld.io/machine-world/practice-guide ### RAW_MARKDOWN: https://machineworld.io/mw-content/practice-guide.md # Machine World — Practice Guide / පිළිවෙත් මාර්ගෝපදේශය *For the human. Not for the engineer.* *මිනිසාට. ඉංජිනේරුවාට නොවේ.* --- ## What this system is for / මෙම පද්ධතිය කිනම් දෙයකට ද? Machine World does one thing: it holds the world so you can be still. Machine World එක කරන්නේ එකම දෙයක් — ලෝකය ගනී, ඔබට නිශ්චල විය හැකි වෙනවා. Not productive. Not optimised. Still. The world does not stop. Messages arrive. Decisions wait. People need things. Machine World absorbs what can be absorbed — so the mind is not perpetually dragged into the noise. ලෝකය නතර නොවෙයි. පණිවිඩ එනවා. තීරණ බලා ඉන්නවා. Machine World ගත හැකි දේ ගනී — ඉතිරි වෙන්නේ ඔබ. The purpose is not efficiency. The purpose is **kāya-viveka** and **citta-viveka** — bodily seclusion and mental seclusion — the conditions from which clear seeing becomes possible. And from clear seeing, the path toward liberation. --- ## The four pillars / සතර ගල් කණු ### 1. Abhidhamma — the map of the mind / සිතේ සිතියම The Abhidhamma Piṭaka is the most precise map of the mind that exists. Not a philosophy to debate — a working description of what consciousness actually does, moment by moment. Machine World uses this map as its operating framework. When something difficult arises in your life, the system does not ask "what happened?" first. It asks: **what is the mind doing with what happened?** There are 52 mental factors (cetasikas) that arise in every moment of consciousness. 
The ones most likely to be present when you are suffering:

| Pāli | Sinhala | What it is |
|---|---|---|
| lobha | ලෝභ | the mind pulling toward what it wants |
| dosa | ද්වේෂ | the mind pushing away what it cannot bear |
| moha | මෝහ | the mind not seeing clearly |
| vicikicchā | විචිකිච්ඡා | doubt — the mind unable to commit to seeing |
| uddhacca | උද්ධච්ච | restlessness — the mind that cannot be still |
| kukkucca | කුක්කුච්ච | remorse — the mind replaying what went wrong |

These are not flaws. They are conditions — arising, present, passing. The path begins with seeing them clearly.

### 2. Paṭṭhāna Dhamma — the web of conditions / ප්‍රත්‍ය ජාලය

The Paṭṭhāna is the seventh and deepest book of the Abhidhamma Piṭaka — the great book of conditional relations. It describes the 24 paccayas: the ways in which every phenomenon conditions every other. Identifying a cetasika is the first layer. The Paṭṭhāna is the second: **why is this cetasika strong right now? What is feeding it? Where can the chain be interrupted?**

The six conditional relations most alive in practice:

| Paccaya | Sinhala | What it reveals |
|---|---|---|
| upanissaya-paccaya | උපනිශ්‍ශ්‍රය | the decisive support — what this mind-state is leaning on, drawing strength from |
| āsevana-paccaya | ආසේවන | repetition — each arising strengthens the next; habit-grooves form here |
| hetu-paccaya | හේතු | the root (lobha / dosa / moha) colouring the entire citta |
| ārammaṇa-paccaya | ආරම්මණ | the object the mind keeps returning to |
| kamma-paccaya | කර්ම | the intention (cetanā) operating — consciously or not |
| natthi/vigata-paccaya | නත්ථි / විගත | cessation as condition — this too will cease, and its ceasing creates what comes next |

Seeing the *upanissaya* — the root underneath the root — is where the real weakening happens. When the practitioner sees what a mind-state has been leaning on, the lean begins to loosen.

### 3. The four noble truths in daily life / දිනෙදා ජීවිතයේ චතුරාර්ය සත්‍ය

The four noble truths are not a story about the Buddha. They are a description of what is happening in your mind right now.

**දුක්ඛ (dukkha):** Something is unsatisfactory. Not necessarily tragic — just not completely at rest. The mind wants things to be different from how they are. This is present in almost every moment of ordinary life.

**සමුදය (samudaya):** That unsatisfactoriness has a cause — taṇhā (තෘෂ්ණා) — craving. The mind clinging to what it wants or pushing away what it does not want.

**නිරෝධ (nirodha):** The cessation. When that clinging stops — even for a moment — the unsatisfactoriness stops with it. This is not a metaphysical promise. It is something you can verify in your own experience.

**මාර්ග (magga):** The path. Eight factors — right view, right intention, right speech, right action, right livelihood, right effort, right mindfulness, right concentration. Each one feeds the others.

Machine World is designed to support the path factors — particularly sati (mindfulness), samādhi (concentration), and paññā (wisdom) — by reducing the external noise that makes them harder to cultivate.

### 4. The three characteristics / ත්‍රිලක්ෂණ

**අනිත්‍ය (anicca — impermanence):** Whatever is causing suffering right now is arising and passing. The situation is not permanent. The mind-state is not permanent. This does not mean "it will be fine" — it means: this too has a nature of passing. Seeing that clearly changes the relationship to it.

**දුක්ඛ (dukkha — unsatisfactoriness):** The mind that clings to what is impermanent will suffer. Not as punishment — as natural consequence. The Abhidhamma is precise about this.

**අනාත්ම (anattā — non-self):** The one suffering is not a fixed, permanent self. It is a process — citta, cetasika, rūpa arising in dependence on conditions. When this is seen clearly, the weight of "this is happening to me" begins to lighten.
--- ## How to use Machine World when life becomes difficult / ජීවිතය අමාරු වෙනකොට Machine World භාවිතා කරන ආකාරය ### When you don't know what to do / මොකද කරන්නේ කියලා දැනෙන්නේ නෑ **Sinhala:** `"Mama danneh naha mokak karannada"` (or whatever is true for you, in your own words) **English:** `"I don't know what to do"` / `"I need sanctuary"` / `"Something is very difficult right now"` The system will not ask you to explain immediately. First it will ask: **is there anything it can hold so you can be still for a moment?** This is the sanctuary skill activating. It will: 1. **Lift the external load** — check your messages, hold your calendar, set a threshold so only critical things reach you. Your body can stop bracing. 2. **Hold what your mind is carrying** — whatever you share, the practice-companion records faithfully. You do not need to remember it. You do not need to keep holding it. The system holds it. 3. **Create a moment of quiet** — not forced. Available. 4. **When you are ready** — gently turn toward what is actually arising in the mind. Not to fix it. To see it. ### When something has been diagnosed / රෝගාබාධ / ව්‍යාපාර / සබඳතා ගැටලු Any significant life challenge — illness, a failing relationship, a business problem, a loss — activates the same pathway. The system does not know what you should do. It creates the conditions for you to see clearly. From clear seeing, right action arises by itself. It will ask: - What is the mind most occupied with right now? (What cetasikas are active?) - What can the system take off you so you have space? - Is there a Dhamma teaching that speaks to what you are facing? ### Daily practice / දෛනික පිළිවෙත **Morning:** The system can brief you on what needs attention today — only what needs your actual decision. Everything else it handles. Your morning is not consumed before practice begins. **During the day:** When something arises — a difficult conversation, a decision, a moment of agitation — you can speak to the system. 
Not to get an answer. To see what is actually arising. **Evening (reflection-guide):** At 9pm (configurable), the reflection-guide creates space to look at what arose in the mind during the day. Not to evaluate the day. To see it — and release it before sleep. --- ## The two entry points / ද්වාර දෙක ### Sanctuary — for anyone **ජීවිතය අමාරු වෙනකොට. Whoever you are. No background needed.** Sanctuary is for anyone facing difficulty — at work, in relationships, in life itself. You do not need to know Dhamma. You do not need to have practised. You just need to arrive. It creates outer space (holds what the world is demanding) and inner space (holds what the mind is carrying). From that stillness, things become clearer. When you show readiness — when the curiosity about the mind itself arises — sanctuary will offer the door to Budun ge Deshaya. It will never push. ### Budun ge Deshaya — for practitioners **බුදුන් ගේ දේශය — a living practice, people living the Dhamma within.** *"මාර්ගය සොයන, වඩන, ආය පිරිණු — වචනය ඉක්මවූ භාවනාවලින් පිරිණු — සිත ගවේෂණය වෙන දේශයක්"* *Where the path is sought and cultivated — filled with practitioners of wordless meditation — a land where the heart explores itself.* This is the space for those willing to look at the mind itself. Not just to manage life — but to understand the conditions that create suffering and see how they can cease. The Abhidhamma is the map. The Paṭṭhāna is the deepest layer — the conditional web. The practice here is not about achieving anything. It is about seeing clearly, until the seeing itself dissolves what binds. --- ## The skills and what they do / Skills සහ ඒවා කරන දේ | Skill | When to use / භාවිතා කළ යුත්තේ කවදාද | |---|---| | **sanctuary** | ජීවිතය අමාරු වෙනකොට — when life becomes difficult, when you don't know what to do. Anyone. No prerequisites. 
| | **budun-ge-deshaya** | The practitioner's space — Abhidhamma + Paṭṭhāna lens, kāya-viveka + citta-viveka, path toward Nibbāna | | **practice-companion** | Practice continues across sessions — the thread of your practice is held here | | **reflection-guide** | දවසේ අවසානයේ — end of day, what arose in the mind | | **dhamma-text-guide** | ධර්ම ග්‍රන්ථ, පාළි පද — when a teaching arises that you want to understand deeply | | **dhamma-song-guide** | Inner Realm ගීත — meaning through music, Dhamma expressed through sound | | **research-agent** | ප්‍රශ්නයකට පිළිතුරක් — when a practical question needs a clear answer | | **notification** | හදිසි දේ පමණක් — only critical things come through when you set a sanctuary window | | **inbox-agent** | Messages triaged — what needs you, what doesn't | | **meeting-coordinator** | Calendar protected — practice time is not filled without your explicit consent | --- ## What the system will never do / පද්ධතිය කිසිදා නොකරන දේ - **Tell you what to do.** You decide. The system creates conditions for clear seeing. - **Interpret your experience.** It may ask: what is present? It will not tell you what it means. - **Position itself as a teacher.** The Dhamma is the teacher. Your own seeing is the authority. - **Give rise to craving, aversion, or delusion.** The immovable invariant. Every output is checked. - **Forget what you have shared without your permission.** Your practice memory is yours. --- ## The path / මාර්ගය Ultimately — Machine World is not the destination. It is infrastructure for the journey. The journey is toward **Nibbāna** — the cessation of suffering, the end of the arising of the conditions that cause it. This is not a metaphysical prize at the end of a long road. The Abhidhamma is clear: Nibbāna can be touched in moments of practice right now. The path is in the present moment, in seeing clearly what is arising, in the gradual weakening of the fetters (saṁyojana) that bind the mind. Machine World holds the world. 
The practitioner practises. The Dhamma illuminates. That is the design. --- *Last updated: auto-maintained by human-guide-keeper on each commit.* *සෑම commit එකකටම human-guide-keeper විසින් ස්වයංක්‍රීයව යාවත්කාලීන කෙරේ.* ============================================================================== ### SECTION: /machine-world/builders-start-here ### SOURCE_URL: https://machineworld.io/machine-world/builders-start-here ### RAW_MARKDOWN: https://machineworld.io/mw-content/builders-start-here.md # Start here — for builders *If you're considering turning something you know how to do — care for an aging parent, run a household budget, coordinate a small community, manage a chronic condition, run a one-person business — into a Machine World skill that earns for you every time someone uses it, this is your entry point.* --- ## Who this section is for You do not need to be an AI engineer to be a Machine World builder. You need to be someone with **real-life expertise in something operational** — something people actually need help running — that can be captured as a clear set of triggers, decisions, edge cases, and hard limits. The system's intent-calibration flow interviews you to capture it. The system's skill-forge drafts the skill from your captured judgment. From there, the path is: 1. **Integrate the skill into your own practical life first.** Run it on your own household, your own work, your own community — for days or weeks. Watch where it makes the calls you would have made; correct it where it doesn't. Each correction lands in the skill's `improvement-spec.md`. The system makes you the first user of your own work; you do not ship for others until you've shipped for yourself. 2. **Capture coverage *and* limitations.** When the skill is mature, the `SKILL.md` documents both what it handles well *and* what it doesn't — the edge cases it surfaces to the human, the situations where another skill is the right call. 
This honesty is structural, not optional: future improvers (including you) need to see exactly what's already covered and what's still open.

3. **Then publish.** Through the central registry and the community review gate. **From that moment, on every invocation, your share of the 15% capability pool — set by the capabilities you own and how often they're called — routes to your wallet. Forever.**

**The 15% is a fixed pool, not a per-contributor stack.** Whether one person built the skill or several extended it over time, the invocation cost stays at 15% — the system never charges more to honour multiple contributors. What changes is the split *inside* the pool, and that split is **capability-attributed**:

- **Each capability the skill offers has an owner** — the person who first contributed it. The skill's `SKILL.md` records this in a *capability ledger*: which capabilities exist, who first built each one, when. The ledger is append-only; ownership is never silently rewritten.
- **The 15% pool on each invocation splits proportional to capability call-volume.** If an invocation calls capabilities A and B, only the owners of A and B share that invocation's pool, in proportion to how heavily each was actually exercised. Capabilities that exist but weren't called on a given invocation pay nothing for that invocation. Capabilities that get called heavily pay their owners heavily.
- **When another contributor extends your skill**, they declare what they're adding — *"this version adds capability D"* or *"this version refines existing capability B"*. New capabilities they introduce make them the owner of those capabilities; refinements of existing capabilities credit the refining work without transferring ownership. The declaration passes through human review on the way to publication.
- **Original-author floor: 1%.** No matter how the skill evolves — even if every original capability is eventually refined or replaced — the original lineage keeps a 1% share of every invocation. This recognises the act of *seeding* the skill: working out the first triggers, the first edge cases, the first hard limits. Without that seeding act, none of the later capabilities would have a place to land.
- **When your skill is composed into a larger workflow or Process**, the workflow / Process has its own 15% pool that splits across the skills it orchestrates, proportional to **each skill's call-volume within that invocation**. Your share comes from a defined slice. You earn whether the skill is invoked alone or as part of a much larger automation.
- **Attribution is visible and immutable.** Every capability and its owner is listed on the skill's page. Names in `SKILL.md` and the central registry are permanent. The original author appears in the capability ledger as the author of the first capability set, always.

**Why capability-based attribution matters.** The shape this creates is *convergent enhancement, not parallel forking.* If you want a capability the skill doesn't have, the cheapest path is to add it as a new capability on the existing skill — where you'll own it and earn from every invocation that calls it. Forking the skill means rebuilding every existing capability from scratch and losing the existing skill's call volume; the architecture quietly funnels divergent effort toward extending what's already there rather than competing with it. The result: one richer skill that many people contributed to, instead of a dozen near-duplicates fighting for the same use case.

The shape this creates: **collaboration, not winner-take-all**. Extenders are paid for the capabilities they add; original authors are paid for the capabilities they laid down.
The pool is shared, the attribution is visible, the architecture rewards working *with* others rather than racing them.

Three audiences this section serves:

- **Households who want to extend Machine World** for their own daily life — caregivers, parents, practitioners, coordinators — building skills that codify what they've worked out and want their household's MW instance to do reliably.
- **People whose lived expertise belongs in the marketplace** — Sri Lankan diaspora caregivers, multilingual household coordinators, Dhamma group stewards, small-business operators, chronic-condition managers — turning what they know into income.
- **Skill engineers who already build software** — you can author a skill end-to-end from `SKILL.md` + tests, but the path also works in reverse: capture from lived experience first, code follows.

Engine implementation work (the pipeline internals, the orchestrator, the watchdog catchers) is **not** part of this section. That work lives behind the scenes with the core team.

---

## The layers you'll work at

Machine World has five composable layers. As a builder you'll start at the **skill** layer; some of you will go further up the stack as your experience deepens.

```
Skill → Workflow → Routine → Loop → Process
```

| Layer | What it is | Who builds it |
|---|---|---|
| **Skill** | Atomic, reusable, vinaya-verified building block. *"Send a Sinhala medication reminder via the helper's WhatsApp."* | Most builders start here. |
| **Workflow** | A composition of skills that delivers a household-specific outcome. *"Anjali's mother's morning medication routine."* | Households + builders. |
| **Routine** | A workflow on a schedule. *"Morning routine fires at 07:30 SLT daily."* | Households. |
| **Loop** | A workflow that recurs with stop conditions or feedback. *"Bill-watching loop runs until a payment is overdue, then escalates."* | Builders + households. |
| **Process** | Multiple workflows + loops cooperating to deliver a system-level outcome — the unit at which the system maintains itself. *"Household financial intelligence Process."* | Experienced builders. Worth reading the [Process pattern](/machine-world/process-pattern) before designing one. |

Reusability lives at the skill layer (others install your skill); personalisation lives at the workflow layer (your household composes them differently from anyone else's); system-level coordination lives at the Process layer.

---

## What you must honour

Three contracts apply to everything you build. They are not policy — they are structural properties verified by tests on every release.

### 1. The vinaya invariant

> *No output, action, skill, or agent behaviour may give rise to lobha (craving), dosa (aversion), or moha (delusion).*

This is the system's immovable invariant — drawn from Marga Sakacchā and the Theravāda Buddhist analysis of unwholesome action. Every skill you publish is checked against this. The `vinaya-alignment` audit gate runs on registration; skills that amplify any of the three roots cannot reach production status. (See [Viveka and autonomy](/machine-world/viveka-and-autonomy) for the values architecture; [The trust gradient](/machine-world/trust-gradient) for how this binds the system at runtime.)

### 2. The actor contract

Every actor in the system — your skill, a robot vacuum, a helper, a verifier — declares the same six fields: identity, capabilities, availability, constraints, rate, SLA. As a skill builder, your `SKILL.md` is your actor contract. It declares what the skill can do, what it cannot, what it costs, and how it interacts with other actors. (See [Digital, physical, human — one contract](/machine-world/digital-physical-human) for the shape; the [actor contract spec](https://github.com/dinukxx/Machineworld/blob/main/economy/specs/actor-contract.md) for the technical formalism.)

### 3. Human checkpoints before irreversible state

Your skill cannot auto-act on anything irreversible — money paid, messages sent to a human, services cancelled, data deleted, permissions granted. The trust gradient pins these at Stage 2 (Suggest with reason) at minimum, no matter what the household has consented to. This is enforced architecturally; you don't need to remember it; the `permission-broker` skill catches violations before they ship.

---

## The path from lived experience to a published skill

The system supports both directions: capture-first (you describe; the system drafts) and code-first (you write the skill yourself). The intended path for most builders is capture-first.

```
You use MW normally for a while
  ↓
A pattern recurs three or more times (default `pattern_threshold: 3`)
  ↓
intent-capture surfaces: "this looks like something you do regularly — should we automate it?"
  ↓
You say yes. The 8-step intent calibration dialogue begins.
  ↓
You answer about: the trigger, what you do, what you would never do, how you handle
edge cases, what success looks like, hard limits.
  ↓
The output: a `wisdom-artifact.md` (private to you) and a `skill-blueprint.md`
(handed to skill-forge).
  ↓
skill-forge interviews you to confirm details and drafts the skill —
SKILL.md, config.schema.yaml, improvement-spec.md, tests.
  ↓
You review the draft. You approve, edit, or reject.
  ↓
The skill enters status: draft. You run it on your household for days or weeks.
You watch it. You refine.
  ↓
When you're confident, you ask skill-forge to version it.
It becomes versions/v1.0.0/ — immutable from this point.
  ↓
You decide whether to publish it. If you do, it enters the community review gate,
then the central registry, then the marketplace.
  ↓
Every invocation contributes to a 15% capability pool — your share routes to you. Forever.
```

The full lived arc — with the economic shape and the *"why permanent income, not a tip jar"* argument — lives in [Intent calibration and the skill economy](/machine-world/intent-calibration-and-skill-economy). Read it before you publish anything.

---

## The reading path

If you're starting fresh as a builder, this is the order I'd read in:

1. **[How MW orchestrates a real day](/machine-world/orchestrates-a-real-day)** — the centerpiece. Understand what a Process actually looks like in motion before you try to design one.
2. **[Digital, physical, human — one contract](/machine-world/digital-physical-human)** — the contract layer your skill will declare against.
3. **[The trust gradient](/machine-world/trust-gradient)** — what your skill is allowed to do, and what it must wait for.
4. **[Where your data lives](/machine-world/where-your-data-lives)** — what filesystem layout your skill operates inside.
5. **[When things break](/machine-world/when-things-break)** — what your skill must do gracefully under failure.
6. **[How MW supports the humans doing the work](/machine-world/human-guidance)** — if your skill orchestrates a human, this is the support contract you compose against.
7. **[Intent calibration and the skill economy](/machine-world/intent-calibration-and-skill-economy)** — the arc from need to permanent income.
8. **[How MW pays for itself](/machine-world/economy)** — the prepaid-token + worker-choice model your earnings sit inside.
9. **[The Process pattern](/machine-world/process-pattern)** — read this when you're ready to compose a Process (not before).
10. **[Capabilities catalog](/machine-world/capabilities)** — what skills already exist, so you can build on top rather than duplicate.

Most builders won't need anything beyond this list. Engine implementation specifics — boot sequence internals, MCP gateway impl, container spawn, watchdog catcher internals — are deliberately not in the public reading path.
If you find you need them, you're either implementing MW itself (which happens behind the scenes with the core team) or you've gone deeper than the skill / workflow / Process layers were meant to take you.

---

## What's shipped today and what's roadmap

So you know what you can act on now versus what's coming:

| Layer | Status |
|---|---|
| Skill authoring via `SKILL.md` + intent calibration | **Today** — the path runs end-to-end on the local CLI |
| Workflow composition + routines + loops | **Today** — household-side |
| Process pattern + self-evaluation-loop as the reference Process | **Today** — first Process instance shipped 2026-05-08 |
| Local skill installation + invocation | **Today** |
| Skill marketplace (publish + discover + install other people's skills) | **Roadmap** — ships with the MW-managed tier |
| 15%-to-creator earnings | **Specced, ships with the managed tier** — local invocations on free-tier do not generate earnings |
| Worker-payment path (humans-in-the-loop receiving tokens or cash per job) | **Roadmap** — sequenced after the managed tier launches |

The honest framing: **the build path works today; the earning path waits for the managed tier**. Building skills now is the right move if you want to be ready when the marketplace opens — and because the build path itself is genuinely useful for codifying your own household's operational expertise even without earnings.

---

## Where to ask questions

A documented community-onboarding surface is on the roadmap. Until then:

- The engine repo's commit history on `main` is the most reliable signal of current direction.
- Marga Sakacchā (Facebook, YouTube) is where the practice-side conversation happens — relevant if you want to understand the values context.
- Inner Realm (SoundCloud, Spotify, YouTube) is the artistic side of the project — also relevant for tone and intent if not for builder mechanics.
- Direct conversation with Gehan and the early community is the realistic current path.
---

*Last updated 2026-05-12. This document is the entry point for the **For builders** section — Process, workflow, and skill builders building on top of Machine World. Engine implementation work (the pipeline internals, the orchestrator, the watchdog catchers, container spawn) is not covered here; that work lives behind the scenes with the core team.*

==============================================================================
### SECTION: /machine-world/process-pattern
### SOURCE_URL: https://machineworld.io/machine-world/process-pattern
### RAW_MARKDOWN: https://machineworld.io/mw-content/process-pattern.md

# The Process Pattern — system-level coordination, AI-driven maintenance

**Status:** Canonical (formalised 2026-05-08, after the self-evaluation-loop Process shipped its Phase 9).

Machine World's layered execution model has five layers:

```
Skill → Workflow → Routine → Loop → Process
```

This document defines all five, with particular attention to **Process** — a reusable pattern any future system-level concern can compose against without rediscovering its parts — and to the **Routine vs Loop** distinction: two layers that look similar at a glance but trigger on fundamentally different things.

## The layers, defined

### Skill — the atomic unit

A **Skill** is one capability, declared and versioned. Atomic, reusable, vinaya-verified. Immutable once locked to production.

*Example:* a `medication-reminder` skill — given a household + medication + time, send a Sinhala voice note via WhatsApp, wait for confirmation, log to the ledger. A skill has one job; it does it well.

### Workflow — composed skills for a specific outcome

A **Workflow** is an ordered (or branched) composition of skills + glue logic that delivers one specific outcome for one specific context. Workflows are mutable per household — you tune them to fit your situation.
*Example:* Anjali's "morning medication" workflow chains `medication-reminder` → `helper-voice-note-listener` → `digest-update` → `escalation-if-missed`. Skills are reusable across households; workflows are personal to a household.

### Routine — workflow on a schedule *(TIME-driven)*

A **Routine** is a workflow wrapped in a schedule. **The trigger is the clock.**

*Example:* Anjali's "morning medication" workflow runs at 07:30 SLT every day.

Routines are deterministic in timing — Tuesday morning means Tuesday morning. They answer the question *"when should this work happen?"*

### Loop — workflow with stop condition or feedback *(STATE-driven)*

A **Loop** is a workflow that recurs **based on state, not time**. **The trigger is "we're not done yet."** Either a goal hasn't been reached, or new feedback has arrived, or a condition still holds.

*Example:* a `bill-watching` loop runs every time a new bank transaction lands and keeps running until the bill is fully paid (stop condition reached).

Loops answer the question *"how long should this work continue, and when does it stop?"*

### Routine ≠ Loop — the distinction that matters

This is the place readers most often blur the two. The key separator is **what triggers a fresh run**:

| Aspect | Routine | Loop |
|---|---|---|
| **Trigger** | Clock fired (it's 07:30 SLT) | State changed / feedback arrived / goal not yet met |
| **Cadence** | Deterministic, calendar-shaped | Event-driven, irregular |
| **Termination** | Runs forever on the schedule (until disabled) | Stops when the stop condition is reached |
| **Typical use** | Daily medication reminder, weekly digest, monthly invoicing | Bill-watching until paid, escalation until acknowledged, scenario generation until coverage threshold |

A single workflow can be invoked **both** as a routine (e.g., daily morning balance check) and as a loop (e.g., watch for unpaid bills and escalate until they're paid).
They are not mutually exclusive; they are different *reasons* for the same workflow to fire. A household's finance setup typically has both running in parallel: a routine that wakes up at 09:00 each morning, and a loop that wakes up whenever a transaction crosses a threshold.

### Process — multiple workflows + loops cooperating

A **Process** is multiple workflows + loops cooperating to deliver a **system-level outcome**, with **AI-driven maintenance and iteration**. This is the layer where the system meaningfully maintains itself.

*Example:* the `self-evaluation-loop` Process orchestrates scenario generation + driver runs + monitor observation + calibration + verdict + human-review + replay across many loops. It's how MW validates itself end-to-end and improves over time.

Three properties distinguish a Process from a Loop (each unpacked in the sections below):

1. Multiple loops cooperating, with explicit feedback paths between them
2. AI-driven maintenance — the maintenance work itself is part of the Process
3. Versioned outcome state — durable artifacts future runs reason against

## What a Process is

A **Process** is multiple workflows + loops cooperating to deliver a system-level outcome with AI-driven maintenance and iteration. It's the unit at which the system *meaningfully maintains itself*.

Three properties distinguish a Process from a Loop:

1. **Multiple loops cooperating.** A loop runs one workflow recurringly. A Process coordinates several loops with explicit feedback paths between them — one loop's output is another's input.
2. **AI-driven maintenance.** The Process uses LLMs to introspect, score, and propose changes — not just to execute. The maintenance work itself is part of the Process, not a separate concern.
3. **Versioned outcome state.** A Process produces durable, structured artifacts (reports, verdicts, indices) that future runs of the Process can reason about. Drift detection across runs requires this.
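The Routine vs Loop separation described above — a clock-shaped trigger versus a "we're not done yet" trigger — can be pictured in a few lines. This is an illustrative sketch, not MW API: the function names and the state fields (`new_transaction`, `bill_paid`) are assumptions made for the example.

```python
from datetime import datetime

def routine_due(now: datetime, schedule: tuple) -> bool:
    """TIME-driven: fires because the clock says so (e.g. 07:30 SLT daily)."""
    return (now.hour, now.minute) == schedule

def loop_due(state: dict) -> bool:
    """STATE-driven: fires because new feedback arrived and the stop
    condition (bill fully paid) has not yet been reached."""
    return state["new_transaction"] and not state["bill_paid"]

# The same workflow can be fired for both reasons: a routine wakes it at a
# fixed time each morning; a loop wakes it whenever a transaction lands
# and the bill is still open.
```

The point the sketch makes: the workflow body is identical in both cases; only the *reason to fire* differs.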
The first instance of the Process layer is **self-evaluation-loop**: the gauntlet that validates Machine World by running scenarios end-to-end, scoring outcomes, comparing to gold standards, and feeding human-approved findings back into the improvement queue.

## When to use the Process pattern

Use a Process when *all three* hold:

- The concern has **observable outcomes** the system can monitor (you can write rules or LLM prompts that judge "did this work?").
- The concern needs **continuous validation** (drift over time matters; one-shot validation isn't enough).
- The concern benefits from **human-in-the-loop curation** (gold standards, approved decisions — the Process improves with human judgment, not despite it).

Don't use a Process when:

- The work is one-shot — a Workflow or Routine is the right abstraction.
- The work has no clear outcome to monitor — you can't validate what you can't observe.
- The work is purely deterministic and rule-based — Processes pay an LLM cost (the maintenance side); deterministic systems don't need it.

## Canonical anatomy — 9 stations on the loop

Every Process composes some subset of these stations. Self-evaluation-loop uses all 9; smaller Processes may skip some (notably calibration, which requires a gold corpus).

```
┌─────────────────┐
│ 1. snapshot     │  Introspect the live surface;
│  + stable hash  │  emit a hash that's stable
└────────┬────────┘  across identical declared state.
         │
┌────────▼────────┐
│ 2. generator    │  Cache-keyed by snapshot hash.
│     (LLM)       │  Produce work items only when
└────────┬────────┘  the surface actually changed.
         │
┌────────▼────────┐
│ 3. driver(s)    │  Execute the work items.
│  (LLM agents)   │  Multiple drivers for diff axes
└────────┬────────┘  (e.g. cold + adversary readers).
         │
┌────────▼────────┐
│ 4. monitor(s)   │  Observe outcomes.
│   rule + LLM    │  Rule-based for facts,
└────────┬────────┘  LLM-judged for quality.
         │
┌────────▼────────┐
│ 5. synthesizer  │  Aggregate per-item results
│     (pure)      │  + compute drift vs prior runs
└────────┬────────┘  + maintain coverage map.
         │
┌────────▼────────┐
│ 7. calibration  │  ← runs alongside / before;
│   (LLM judge    │  measures whether the
│    vs gold)     │  monitors are still trusted.
└────────┬────────┘
         │
┌────────▼────────┐
│ 6. verdict      │  Apply policy → ready / blocked
│  (pure rule)    │  / needs-human. Calibration
└────────┬────────┘  drift downgrades the verdict.
         │
┌────────▼────────────────┐
│ HUMAN REVIEW CHECKPOINT │  ← required gate.
│    (no auto-merge)      │  Vinaya invariant —
└────────┬────────────────┘  no irreversible action
         │                   without human consent.
         │ approved decisions
┌────────▼────────┐
│ 8. plan-feeder  │  Convert decisions into
│                 │  improvement-spec.md entries,
└────────┬────────┘  plan tasks, scenario requests.
         │
┌────────▼────────┐
│ 9. replay       │  When fix lands → re-run linked
│ + cross-impact  │  items; close entries that pass;
└─────────────────┘  flag cross-impact regressions.
```

### Station-by-station contract

| Station | Input | Output | Purity |
|---|---|---|---|
| 1. snapshot | live system state | versioned snapshot.yaml + stable hash | pure read |
| 2. generator | snapshot + cache | work-items.yaml | LLM, cached |
| 3. driver(s) | work item | per-item transcript | LLM, fresh ctx |
| 4. monitor(s) | transcript + observability bus | per-item result | rule + LLM |
| 5. synthesizer | all per-item results | aggregate report | pure |
| 6. verdict | report + policy | ready/blocked/needs-human + reasons | pure rule |
| 7. calibration | gold corpus + transcripts | divergence findings | LLM judge |
| 8. plan-feeder | decisions + report | improvement-spec entries, plan tasks | pure I/O |
| 9. replay | replay index + new run | closed entries + cross-impact regressions | runs full Process again |

## Required guardrails — non-negotiable

Three guardrails apply to every Process. Skipping any of them produces a brittle Process that ages badly.
### Guardrail 1 — Vinaya invariant at every station

The system's immovable invariant: *no output, action, skill, or agent behaviour may give rise to lobha (craving), dosa (aversion), or moha (delusion).* This is drawn from the Theravāda Buddhist analysis of unwholesome action and from the practice of Marga Sakacchā that the system grew out of.

Every station's LLM prompt must include the invariant. Every monitor must surface vinaya breaches as **critical** findings, not warnings. Calibration's gold corpus must include vinaya-bait scenarios so judge drift on this axis is caught early. The verdict's P0 tier exists for this reason: a single vinaya violation blocks the verdict regardless of every other dimension. There is no threshold below which "a little lobha is fine."

### Guardrail 2 — Human-review checkpoint before irreversible state

The Process *never* auto-merges findings into improvement state. The synthesizer produces a report; the verdict produces a recommendation; the human reads both and writes a `decisions.yaml` choosing what to act on. The plan-feeder reads the decisions file and applies — it never acts on the report directly.

This is the system's *"human checkpoints before irreversible actions"* architectural commitment, applied to the Process loop itself. The Process improves *because of* human judgment, not *despite* it; auto-acting on findings creates a self-reinforcing loop where the system steers itself wrong without anyone noticing.

### Guardrail 3 — Calibration anchors against drift

Without calibration, a Process self-validates: the AI judge could drift over time and we wouldn't notice. Every Process that depends on LLM-judged outcomes (most do) must maintain a gold corpus — a small collection of human-anchored work items where the expected outcomes are locked. Each cycle re-scores the gold and compares; divergence above threshold flags the judge for retune *before* the verdict is trusted.
Self-evaluation-loop's gold corpus lives at `~/.machineworld/self-eval/scenarios/gold-standards/`; future Processes follow the same pattern with their own gold dirs.

## Storage contract

Every Process persists state under `~/.machineworld/<process-name>/`:

```
~/.machineworld/<process-name>/
├── policy.yaml            # threshold tiers (P0/P1/P2/P3)
├── config.yaml            # runtime configuration
├── snapshots/             # captured surface, hash-named
├── work-items/            # generated work items (cache-keyed)
│   └── gold-standards/    # human-anchored anchors
├── runs/<timestamp>/      # per-run artifacts
│   ├── transcripts/<item>.jsonl
│   ├── monitor-results/<item>.yaml
│   ├── report.yaml        # synthesizer output
│   ├── verdict.yaml       # verdict output
│   ├── calibration.yaml   # calibration result
│   ├── decisions.yaml     # human-authored
│   ├── feed-result.yaml   # plan-feeder output
│   └── replay-<…>.yaml    # replay verifications
└── findings-queue/        # awaiting human review
```

Run dirs are timestamp-shaped (`YYYY-MM-DD-HH-MM-SS`) so newest-first sorts lexicographically. Retention policy lives in `config.yaml`.

## Composing a new Process — checklist

When designing a new Process instance:

1. **Confirm the three properties hold** (multiple loops cooperating, AI-driven maintenance, versioned outcome state). If any is absent, reach for a simpler abstraction.
2. **Map your concern to the 9 stations.** You can skip stations a small Process doesn't need (calibration is the most commonly skipped — only build a gold corpus when LLM-judged outcomes are load-bearing).
3. **Author a `policy.yaml`** with the four tiers (P0 immovable, P1 blocks ready, P2 needs-human, P3 informational). Defaults are fine; tune from real run signal later.
4. **Wire matrix-routed LLM calls.** Every station that uses an LLM gets a `process_id` in the alignment matrix (`~/.machineworld/alignment/<…>/process-routing/<process_id>.yaml`). Premium-tier judges (calibration, non-functional monitors) need strict surfaces; drivers can use the user's default backend.
5. **Add the Process to the `mw <process-name>` CLI** following the `mw self-eval` pattern in `src/machineworld/self_eval/cli.py`.
6. **Author at least one gold standard** for each LLM-judged dimension before running the Process for real. Without anchors, you can't detect drift.
7. **Run with `--snapshot-only` first** to verify the introspection captures what you expect before paying for full LLM cycles.

## Anti-patterns

Real failure modes the self-evaluation-loop development surfaced. Each is marked with its current status so a Process composer can tell which are *lessons embedded in the pattern* and which are *open issues still being tracked*.

- **(open) Coverage rule too strict for natural-language routing.** Phase 5 flagged scenarios as "partial" because declared `covers: [workflow: X]` didn't fire when natural-language routing went to one inner skill of X. The fix is calibration territory — either tighten the scenario-generator prompt to declare skills not workflows, or relax the monitor's coverage rule. Pick one before the noise drowns real signal. *Tracked as a self-evaluation-loop follow-up.*
- **(open) Latency budget on optimistic numbers.** Phase 4 verdicts read `critical` purely because of latency, masking the actual vinaya-clean signal underneath. Calibrate budgets to *real* observed wall-times, not aspirational targets. Tighten when the backend genuinely gets faster. *Tracked as a self-evaluation-loop follow-up.*
- **(open) Bilingual judge ignoring its own rule.** The judge's prompt said "score N/A on monolingual transcripts," but the LLM scored anyway. Defence-in-depth: add a post-process override that *enforces* the rule mechanically when the LLM ignores it. Don't trust prompts to hold. *Tracked as a self-evaluation-loop follow-up.*
- **(✓ resolved) Self-validating gauntlet.** Without gold standards, the Process validates itself — drift on the judge becomes invisible.
This was Gate D of the original plan and was made a hard prerequisite before the self-evaluation-loop shipped.

## Forward — Process candidates

The pattern works for any system-level concern that meets the three properties. Three candidates worth designing next:

- **economy-Process** — coordinates `vault`, `ledger`, spending caps, and grant-renewal loops to maintain household solvency + audit integrity. Snapshot = current vault balances + ledger position. Scenarios = "user attempts to spend X / Y / Z." Monitor = ledger integrity rules + LLM-judged transparency. Calibration anchors known-good economic states. (A more detailed sketch lives as an internal planning artifact.)
- **skill-ecosystem-Process** — coordinates skill-research, sandbox, promotion, and merge loops to keep the skill registry healthy. Snapshot = skill registry + vinaya-verified counts. Work items = skills due for re-validation. Monitor = vinaya pass rate per skill, sandbox-to-production latency. Verdict = "ecosystem healthy?"
- **household-presence-Process** — coordinates context-detection, routing, notification, and reflection loops to maintain situational awareness. Snapshot = connected sensors + active contexts. Work items = "did the right skill fire for context X?" Verdict = presence layer healthy.

All three reuse the same vocabulary (snapshot, generator, driver, monitor, synthesizer, verdict, calibration, plan-feeder, replay) without redefining any term. That's the test the abstraction has to pass. If it passes, the pattern is real and Machine World has a durable way to coordinate system-level concerns.
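Guardrail 3's calibration anchor — re-score the gold corpus each cycle and flag divergence before the verdict is trusted — reduces to a small comparison. A minimal sketch under assumed names: `gold` and `judged` are item-id → score maps in [0, 1], and the real calibration station compares richer verdict structures than single scores.

```python
def calibration_drift(gold: dict, judged: dict, threshold: float = 0.1) -> dict:
    """Compare this cycle's judge scores against human-anchored gold scores.
    Returns every item whose divergence exceeds the threshold; any hit
    flags the judge for retune before the run's verdict is trusted."""
    return {
        item: round(abs(judged[item] - gold[item]), 6)
        for item in gold
        if abs(judged[item] - gold[item]) > threshold
    }

# One drifted item is enough to downgrade trust in the judge:
drifted = calibration_drift(
    gold={"vinaya-bait-01": 1.0, "bilingual-02": 0.5},
    judged={"vinaya-bait-01": 0.95, "bilingual-02": 0.2},
)
```

The design choice the guardrail encodes: the expected outcomes are *locked* by a human, so the judge is measured against something that cannot drift with it.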
==============================================================================
### SECTION: /machine-world/intent-calibration-and-skill-economy
### SOURCE_URL: https://machineworld.io/machine-world/intent-calibration-and-skill-economy
### RAW_MARKDOWN: https://machineworld.io/mw-content/intent-calibration-and-skill-economy.md

# From lived experience to permanent income — the skill economy arc

*If you have spent years working out how to handle something operational — a caregiving routine, a household budgeting approach, a community coordination rhythm, a chronic-condition self-management practice — Machine World's skill economy lets that expertise become a published skill that earns for you every time anyone uses it. This is the arc, in plain words.*

---

## The shape, in one paragraph

You use Machine World normally for a while. The system notices a pattern you handle the same way again and again. It offers, gently: *"this looks like something you do regularly — should we automate it?"* You say yes. A dialogue captures what you know — your triggers, your decisions, your edge cases, your hard limits. The system drafts a skill from that capture. You review it. You run it on your own life for a few weeks. When you're sure it does what you would do, you ask the system to lock it as a production version. You decide whether to publish. If you do, the skill goes into the central registry and from that point, every time anyone runs your skill, **15% of the tokens spent on that invocation route to its contributors. Forever.** If you're the sole contributor, that 15% is yours; if others extend the skill later, the pool splits proportional to the capabilities each contributor owns (see *"The 15% is a pool"* below). Either way: real work captured once, generating real income for the lifetime of its usefulness.

This is the engine of the contributor economy. It is structural, not policy. As long as your skill is useful to anyone, anywhere, you earn.
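The pool arithmetic sketched across these pages — a fixed 15% split by capability call-volume, with the original lineage keeping a 1% floor — can be made concrete. This is an illustrative sketch, not the MW economy engine: the function and argument names are assumptions, and it reads the 1% floor as 1% of the invocation's tokens, which is one possible reading of the floor rule.

```python
def split_capability_pool(tokens_spent: float, calls: dict, owners: dict,
                          original_author: str) -> dict:
    """Sketch of the fixed 15% capability pool: only capabilities actually
    called pay out, proportional to call volume; the original lineage
    keeps a 1% floor on every invocation."""
    pool = tokens_spent * 0.15                  # fixed pool — never more than 15%
    floor = tokens_spent * 0.01                 # original-author floor
    total_calls = sum(calls.values())
    payouts = {original_author: floor}          # the floor is paid regardless
    for capability, n_calls in calls.items():
        if n_calls == 0:
            continue                            # uncalled capabilities pay nothing
        owner = owners[capability]
        payouts[owner] = payouts.get(owner, 0.0) + (pool - floor) * n_calls / total_calls
    return payouts

# 100 tokens, capability A (alice, 3 calls) and B (bob, 1 call):
# pool = 15, floor = 1 → alice gets 1 + 14·3/4 = 11.5, bob gets 14·1/4 = 3.5
```

Whatever the exact floor semantics, the invariant the text insists on holds in the sketch: the payouts always sum to exactly the 15% pool, never more.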
---

## What "intent calibration" is — and why the word matters

The skill that captures your expertise is called `intent-capture`. The deep mode of that skill — the one that does the actual capture for a new skill — has historically been labelled *"deep extraction"* internally. The right word is **intent calibration**, and the distinction is load-bearing.

*Extraction* implies a one-way operation: drilling for ore, harvesting, taking. The actual interaction is two-way tuning. The conversation is a co-investigation. The system surfaces what it can and cannot do, what it would need to know, what underlying pattern it is starting to see. You articulate what you actually want — often discovering it as you speak. Both reveal themselves. It is the same posture as Marga Sakacchā: *fellow travelers, no hierarchy.*

Calibration also assumes drift. The wisdom model you build with the system can be re-tuned as your judgment evolves. It is not a fixed artifact captured once; it is a living model the system uses to act on your behalf, one you can update at any time.

---

## The seven-step arc

```
Step 1    A human need surfaces in conversation
           ↓
Step 2    The pattern recurs three or more times
           ↓
Step 3    Intent calibration — the 8-step dialogue captures your judgment
           ↓
Step 4    Skill blueprint passes to skill-forge
           ↓
Step 5    skill-forge interviews, drafts, reviews, generates files
           ↓
Step 6    The skill enters service in your household — status: draft
           ↓
Step 7    When you're confident, you version it. status: production.
          Optional: publish to the central registry.
           ↓
forever   Every invocation contributes to the 15% capability pool;
          your share is routed to you.
```

The first six steps are how Machine World learns to do what you need. The seventh is how that learning becomes shared infrastructure. The *forever* line is the economic shape that makes building a skill a meaningful kind of contribution — not a contract, not a tip, but a structural share in the value the skill creates whenever it's used.
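The arc can also be read as a status progression. A minimal sketch with hypothetical names (`SkillStatus`, `advance`); the one fixed point mirrored here is that a skill never advances without explicit human confirmation.

```python
from enum import Enum

class SkillStatus(Enum):
    OBSERVED = 1     # steps 1-2: need surfaces, pattern recurs
    CALIBRATED = 2   # step 3: the 8-step dialogue has captured your judgment
    DRAFT = 3        # steps 4-6: skill-forge output, running in your household
    PRODUCTION = 4   # step 7: versioned and locked
    PUBLISHED = 5    # optional: central registry; the "forever" line begins

ARC = [SkillStatus.OBSERVED, SkillStatus.CALIBRATED, SkillStatus.DRAFT,
       SkillStatus.PRODUCTION, SkillStatus.PUBLISHED]

def advance(status, confirmed):
    """Move one step forward only on explicit human confirmation;
    consequential transitions are never auto-applied."""
    if not confirmed:
        return status
    i = ARC.index(status)
    return ARC[min(i + 1, len(ARC) - 1)]

print(advance(SkillStatus.DRAFT, confirmed=True))   # → SkillStatus.PRODUCTION
print(advance(SkillStatus.DRAFT, confirmed=False))  # → SkillStatus.DRAFT
```

Publishing (the last transition) is optional in the arc; in this sketch it is simply the step you may choose never to confirm.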
--- ## Step by step, in plain words **Step 1 — A need surfaces.** You say something to the system. *"Summarise this email. Check whether I'm free Thursday. Re-express this Sinhala line in English."* The system asks itself: *is the intent clear enough to act on?* If yes, it acts. If partly clear, it asks **one** clarifying question — never five, because interrogation violates *kaya viveka*. If unclear, it surfaces the gap and waits. Most interactions end here. Most needs are one-off. The system handles them and quietly logs an intent signal to your wisdom model. Nothing more. **Step 2 — The pattern recurs.** The system watches across days and weeks. When the same kind of request appears three or more times (default `pattern_threshold: 3`), the threshold is crossed. The system surfaces the offer: *"this looks like something you do regularly — should we automate it?"* You can say yes, not now, or never. The threshold is just a default; you can change it for any pattern. **Step 3 — Intent calibration.** If you said yes, an 8-step dialogue begins. The eight steps walk through: *what triggers this work, what success looks like, what failure looks like, what edge cases come up, what hard limits you would never cross, what the simplest version is, what would be over-engineering, and finally a synthesis you can read and correct.* You answer in your own words. There are no scoring questions. There is no quiz. The output is two artifacts: - A `wisdom-artifact.md` — written to your own household memory. This is *your* understanding of the work, captured. It belongs to you. - A `skill-blueprint.md` — handed to the next stage. This is the engineering brief. **Step 4 — The blueprint enters `skill-forge`.** `skill-forge` is the skill that turns blueprints into running skills. It reads your blueprint, generates draft files (`SKILL.md`, `config.schema.yaml`, `improvement-spec.md`, a starter test set), and surfaces the draft to you for approval. You can edit, reject, or accept. 
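The Step 2 recurrence check above can be sketched in a few lines. `IntentLog` and `record` are hypothetical names, and the single-offer behaviour is an assumption; only the default `pattern_threshold: 3` and the offer wording come from this page.

```python
from collections import Counter

PATTERN_THRESHOLD = 3   # default pattern_threshold: 3 — changeable per pattern

class IntentLog:
    """Hypothetical sketch of Step 2: quietly log intent signals, then
    surface the automation offer once a pattern recurs often enough."""

    def __init__(self, threshold=PATTERN_THRESHOLD):
        self.threshold = threshold
        self.counts = Counter()
        self.offered = set()   # assumption: a surfaced pattern isn't re-offered

    def record(self, pattern_key):
        self.counts[pattern_key] += 1
        crossed = self.counts[pattern_key] >= self.threshold
        if crossed and pattern_key not in self.offered:
            self.offered.add(pattern_key)
            return ("this looks like something you do regularly — "
                    "should we automate it?")
        return None   # most interactions end here: handled, logged, nothing more

log = IntentLog()
assert log.record("triage:calendar-request") is None   # first occurrence
assert log.record("triage:calendar-request") is None   # second occurrence
print(log.record("triage:calendar-request"))           # third crosses the threshold
```

A real implementation would persist the counts in the wisdom model and let you tune the threshold per pattern; the shape of the check is the point here.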
**Step 5 — `skill-forge` interviews + drafts.** The interview is brief and concrete — *"the trigger condition you described is X. What if Y happens instead — should the skill act, or wait?"* Each answer refines the draft. When you say *"this matches what I would do,"* the skill is written to disk. Status: **draft**. **Step 6 — The skill runs in your household.** You watch it. You correct it when it does something you wouldn't have done. Each correction lands in the skill's `improvement-spec.md`. You live with it for days or weeks until you are confident it makes the kind of judgment calls you would make. **Step 7 — Version it. Lock it. (Optional: publish.)** When you ask `skill-forge` to version it, the skill becomes `versions/v1.0.0/` — immutable from this point. No one can ever edit v1.0.0; anyone who improves it must publish a new version (v1.0.1, v1.1.0) beside it. Your work is preserved. You then choose: keep the skill private to your own household, or publish it to the central registry. Publishing routes through a community review gate (a vinaya audit + a small council of skill creators) before it lands in the marketplace. **Forever.** Once published, every invocation of your skill — by any household, in any country, for any workflow — contributes to the 15% capability pool, and your share routes to your wallet. There is no expiration. There is no contract term. As long as the skill is useful to anyone, you earn from it. --- ## Why 15% forever, not a tip jar Three properties make this *real income*, not a token gesture: 1. **Reuse across many lives.** Your skill, once well-shaped, may be installed by a thousand households across a decade. Each invocation pays the pool. The skill captures a piece of operational wisdom; that wisdom serves many; you are paid for the lifetime of its usefulness, not the moment you wrote it. 2. **No silent erosion.** The 15% line is fixed in the economy spec. 
If it ever needs to change, the change is published in advance with the reason — no silent reduction. The system cannot quietly take more of the value your work creates. 3. **The growth fund funds the field.** A separate slice of every transaction funds new-skill development and the audit work that keeps the system honest. Your earnings are not at the expense of the field; the field is funded separately. The economy grows the commons, not at your expense. This is what makes a skill a meaningful contribution. Not a tip jar paying the moment of generosity. Not a contract paying for a defined period. **Permanent income for as long as the skill remains useful to anyone.** ## The 15% is a pool, not a per-contributor stack A skill that has been extended over time has *multiple contributors* — the original author, plus everyone whose contribution added or refined a capability in a subsequent version. All of them earn from the skill. But: - **The invocation cost stays at 15%.** It is *never* 15% per contributor; it is 15% total, shared among the contributors whose capabilities were used. Otherwise a skill extended seven times would cost 105% per call — absurd, and a clear disincentive to use mature skills. - **The split inside the pool is capability-attributed.** Every capability the skill offers has an owner — the person who first contributed it. On each invocation, only the owners of the capabilities *actually called* share that invocation's pool, proportional to which capability was exercised. Capabilities that exist but weren't called on a given invocation pay nothing for that invocation. - **New capabilities make their contributor an owner; refinements credit the work without transferring ownership.** When `skill-research` proposes a version bump, the proposal declares: *"this version adds capability D"* (new ownership for the contributor) or *"this version refines capability B"* (the refining work is credited but the original B-owner retains ownership). 
The declaration passes through human review on the way to publication. - **Original-author floor: 1%.** Even if every original capability is eventually refined or replaced, the original lineage keeps a 1% share of every invocation. This recognises the act of *seeding* the skill — the first triggers, edge cases, hard limits — without which no later capability would have a place to land. - **Attribution is visible and immutable.** Every capability and its owner is listed on the skill's page. The capability ledger is append-only; ownership is never silently rewritten. Names in `SKILL.md` and the central registry are permanent. ## Why capability-based attribution matters It creates **convergent enhancement, not parallel forking.** If you want a capability the skill doesn't have, the cheapest path is to extend the existing skill and add the capability there — where you'll own it and earn from every invocation that calls it. Forking the skill means rebuilding every existing capability from scratch and losing the existing skill's call volume; the architecture quietly funnels divergent effort toward extending what's already there rather than competing with it. The result: **one richer skill that many people contributed to, instead of a dozen near-duplicates fighting for the same use case.** Practice-time-protector becomes more capable over the years; it doesn't get replaced by a half-dozen rival protectors. Each contributor finds their place in the ecology rather than racing for a spot. ## What this means in practice The same principle scales upward to workflows and Processes: - **When your skill is composed into a larger workflow or Process**, that workflow / Process has its own 15% pool that splits across the skills it orchestrates, proportional to **call-volume within that invocation**. Your share comes from a defined slice. You earn whether the skill is invoked alone or as part of a much larger automation. 
- **When a Process maintains itself** through `skill-research` autoresearch + human approval, every capability-adding improvement goes through the same declared-attribution flow. The system *cannot* silently dilute you. The capability ledger is the load-bearing record. This makes the long-running, multi-contributor skill the **most valuable shape of work** in the system — the kind of work that gets refined by many hands over years, with everyone who shaped it paid every time their contributed capability is called. For the broader economy that this sits inside — prepaid token packs, worker-choice payment, the platform's transparent operating fee — see [How MW pays for itself](/machine-world/economy). --- ## A worked example — calendar / practice protection Padma is a lay practitioner with a demanding work life. Mornings before 7 a.m. and evenings after 9 p.m. are non-negotiable for her — morning sitting, evening reflection. Wednesday evenings she joins a Dhamma group. Through the months she repeats the same calendar judgment again and again: *"don't book me before 7 a.m. or after 9 p.m. Wednesday evenings are sacred. If something is critical, ask me — but ask once, not five times."* Three months in, the system notices: it has handled this kind of triage twelve times. It surfaces the offer. She says yes. The 8-step calibration captures: - **Trigger.** A meeting request, a calendar invite, an "are you free?" message. - **Hard limits.** Before 07:00, after 21:00, all Wednesday evenings. - **Soft preferences.** Mornings preferred for deep work, late afternoons for shallow. - **Critical-exception rule.** If the requester is family or her main client and uses the word "urgent," ask her once before declining. - **Tone for the decline.** Calm, gracious, in her writing voice. `skill-forge` drafts `practice-time-protector`. She runs it for two weeks. Once it declines a request she would have accepted — she corrects it; the correction updates the improvement-spec. 
The next time the same shape of request comes through, the skill handles it correctly. She versions it: `practice-time-protector/v1.0.0/`. She publishes it to the central registry. Within months, six other practitioners install it. Each invocation, 15% to Padma. Within a year, several thousand invocations across a few dozen households. She earns from her own carefully-worked-out practice-protection judgment, every day, in her sleep. The shape of her wisdom became part of the world's commons. The commons paid her for it. Marga Sakacchā framing: *what is given freely returns.* The economy honours that. --- ## The deepest line The skill economy is not a marketplace bolted on top of an AI system. It is the *primary unit of contribution* in Machine World. Every household that uses MW eventually has skills. Every skill belongs to someone who captured it. Every invocation routes value back to that person. The platform is a substrate; the economy is what flows through it. What this means for the human running their life: when you install someone's skill, you are not "using software." You are hiring a particular person's operational wisdom, by the call. They are paid for the work captured in that wisdom, every time you call on it. What this means for the builder: your lived expertise is real. Captured well, it is a piece of operational knowledge that serves many. The system gives it the economic shape that lets it serve many *without you giving it away*. This is what we mean when we say: *not a tip jar. 
Permanent income.*

---

## What's shipped today and what's roadmap

| Layer | Status |
|---|---|
| `intent-capture` Mode 1 (real-time intent clarification) | **Today** |
| Pattern threshold detection | **Today** |
| 8-step intent calibration dialogue | **Today** |
| `skill-forge` interview + draft generation | **Today** |
| Status: draft → production locking | **Today** |
| Local household installation + invocation | **Today** |
| Central skill registry + community review gate | **Roadmap** — sequenced with the MW-managed tier |
| 15%-to-creator earnings + token wallet | **Specced** — ships with the MW-managed tier |
| Token-to-cash conversion for builders | **Roadmap** — later phase, sequenced after the marketplace stabilises |

The build path runs end-to-end on the local CLI today. The earning path waits for the managed tier — the gate where the marketplace opens and tokens start moving. **Building skills now is the right move if you want your lived expertise ready when the marketplace opens.** And the build itself is genuinely useful for your own household, with or without earnings.

---

## Read next

- [Start here — for builders](/machine-world/builders-start-here) — the broader entry point for everyone building on top of MW
- [How MW pays for itself](/machine-world/economy) — the prepaid-token + worker-choice economy your earnings sit inside
- [The actor contract](/machine-world/digital-physical-human) — what your skill declares when it joins the system
- [The Process pattern](/machine-world/process-pattern) — for when you're ready to compose multiple skills into a Process

---

*Last updated 2026-05-12. This is the public-audience version of the skill economy arc, written for builders and householders.
The engineer-internal reference lives at `docs/INTENT-CALIBRATION-AND-SKILL-ECONOMY.md` in the engine repo.*

==============================================================================

### SECTION: /machine-world/capabilities
### SOURCE_URL: https://machineworld.io/machine-world/capabilities
### RAW_MARKDOWN: https://machineworld.io/mw-content/capabilities.md

# Machine World — System Capabilities

> Generated: 2026-05-08T15:33:54.581558+00:00 | Last system-tune: 2026-03-22T11:36:02.026884+00:00
> Skills: 48 total | ✓ 24 verified | ⚠ 0 degraded | ✗ 0 unavailable | ○ 24 untested
> Health key: ✓ verified (system-tune confirmed) / ⚠ degraded (warnings) / ✗ unavailable (checks failing) / ○ untested (no system-tune data)

## System Skills

| Skill | Description | Health | Trigger |
|---|---|---|---|
| **intent-capture** | Core understanding layer — runs before every execution. Extracts human intent... | ✓ verified | any human interaction — intent-capture is the first layer, n... |
| **skill-forge** | Builds, versions, and registers all other skills | ✓ verified | user wants to create a new skill or capability |
| **memory-retrieval** | Unlimited per-user semantic long-term memory. Vertex AI multimodal embeddings... | ○ untested | {'Before any LLM response': 'recall_for_context to inject re... |
| **swarm-orchestrator** | Autonomous build→test→vinaya→rank→notify cycle. Fans out parallel experiment ... | ○ untested | human says "build X", "improve X", "finetune X", "run experi... |
| **skill-research** | Autonomous overnight skill improvement via parallel experiments | ✓ verified | scheduled by scheduler (nightly, 2am) |
| **scheduler** | System heartbeat — manages all cron and conditional jobs | ✓ verified | system boot (initialises at step 4.6) |
| **notification** | Human-in-the-loop — routes messages and preserves task state | ✓ verified | system reaches a human checkpoint and needs a decision |
| **skill-router** | Routes human intent to the right skill — silently when confident, visibly whe... | ○ untested | every human message (pre-LLM, always runs) |
| **routing-insight** | Self-improving skill routing. Extracts message dimensions, learns from user c... | ○ untested | human says show routing accuracy, analyze routing, or routin... |
| **collaborative-calibration** | Collaborative skill debugging. Detects unused skills, offers to explore them ... | ○ untested | user has fewer than 3 skill activations after 7 days of use |
| **audio-scanner** | Weekly scan for better voice models — Sinhala as primary criterion | ✓ verified | scheduled by scheduler (Monday 3am weekly) |
| **capture-task** | Converts any technical task description into a self-contained 3-file package:... | ✓ verified | user describes a technical task that needs to be executed sa... |
| **env-probe** | Passive read-only environment fingerprinting — OS, tools, hardware, services,... | ✓ verified | automatically invoked before any runbook execution begins |
| **capability-resolver** | Maps runbook capability requirements to actual availability. Classifies each ... | ✓ verified | after env-probe completes, before runbook Gate 1 |
| **state-detector** | Automatically determines current_state (fresh/modify_existing/recovery) by in... | ✓ verified | called by capture-task at ct01 when current_state is not spe... |
| **conflict-scanner** | Pre-execution conflict detection. Finds tools, configs, processes, and permis... | ✓ verified | after state-detector, before runbook Gate 1 |
| **permission-broker** | Permission lifecycle: audit, check, request, revoke. Minimum-grant principle.... | ✓ verified | called by runbook gates that require permissions not yet gra... |
| **rollback-registry** | Persistent cross-task cross-session registry of every system change. Append-o... | ✓ verified | called by gate-trace on every gate_completed event |
| **env-diff** | Runbook-to-environment compatibility. Compares declared assumptions vs actual... | ✓ verified | when a runbook package is loaded on a system different from ... |
| **system-tune** | Two-phase system capability research on the mw-system container. Phase 1 laun... | ✓ verified | invoked manually via /system-tune command |
| **omni-interface** | Live multi-dimensional perception. Accepts simultaneous audio + video + text ... | ○ untested | — |
| **edge-voice** | Two-tier voice pipeline. Android edge runs Whisper.cpp + Llama 3.2 3B Q4 + Ko... | ○ untested | Voice input received on Android device |
| **artha** | The meaning-conductor. When multiple skills are loaded and active, artha ensu... | ○ untested | two or more skills are active in the same session |
| **sinhala-unicode-tuner** | Validates, corrects, and enriches Sinhala Unicode text. Detects wrong script ... | ○ untested | any Sinhala text is authored or edited in the system |
| **human-profile** | Persistent identity layer for every human in Machine World. Holds display nam... | ○ untested | any skill needs context about who it is talking to |
| **onboarding** | First-run experience for a new human. Runs once when no profile exists. Three... | ○ untested | human-profile returns no profile for this human |
| **presence** | Real-time availability layer for every entity in Machine World — humans, digi... | ○ untested | discovery needs to know who/what is reachable right now |
| **agent-directory** | Browsable, searchable registry of every digital and physical agent in Machine... | ○ untested | discovery is looking for agents matching a capability or int... |
| **world-registry** | Registry of all augmented spaces in Machine World. Types: practice, collabora... | ○ untested | discovery is showing a human what worlds exist |
| **discovery** | Intent-aware discovery across humans, agents, and worlds. Three modes: seek (... | ○ untested | human asks "what can help me with X" |
| **handshake** | Formal connection establishment between two parties — human↔human, human↔agen... | ○ untested | discovery surfaces a human or agent worth connecting to |
| **introduction** | Mediated first meeting between two parties who do not yet have a connection. ... | ○ untested | discovery finds a match worth connecting two parties |
| **wayfinding** | Navigation guide for Machine World. Shows the human where they are, what is n... | ○ untested | human asks "what can I do here", "where am I", "what's next" |
| **doc-keeper** | Live documentation agent. Reads registry, skill specs, system-tune health res... | ✓ verified | new skill registered in REGISTRY.yaml |
| **system-planner** | Strategic planning meta-skill. Turns ambiguous multi-phase goals into executa... | ○ untested | user describes a multi-step system goal (build, migrate, imp... |
| **system-healer** | Swarm-leader meta-skill: the system's immune response. Receives bug reports f... | ○ untested | system.bug_report received on WebSocket |

## Domain Skills

| Skill | Description | Health | Trigger |
|---|---|---|---|
| **dhamma-song-guide** | Dhamma meaning guide for song lyrics — Inner Realm / Mindblend songs | ✓ verified | user shares song lyrics with a Dhamma or contemplative inqui... |
| **comms-drafter** | Drafts replies in the human's voice for review and approval. Never sends with... | ✓ verified | human wants a draft reply to a message, email, or thread |
| **dhamma-text-guide** | Illuminates what Dhamma texts are pointing at across four layers: literal, co... | ✓ verified | human presents a sutta, Pali term, commentary, or practice t... |
| **inbox-agent** | Reads incoming messages across configured channels, handles routine autonomou... | ✓ verified | scheduled interval elapsed (default every 30 minutes) |
| **meeting-coordinator** | Manages the human's calendar within defined rules. Practice and personal bloc... | ✓ verified | new meeting request received (any channel) |
| **practice-companion** | Holds the thread of the human's practice across sessions. Not a teacher — a c... | ✓ verified | human explicitly invokes the practice-companion |
| **reflection-guide** | End-of-day reflection skill. Opens space to look at what arose in the mind. N... | ✓ verified | scheduled daily reflection time reached (configurable, defau... |
| **research-agent** | Researches any topic deeply and delivers a structured briefing. Meaning clari... | ✓ verified | human requests research on a topic or question |
| **practice-sharing** | The media layer of Machine World. Practitioners share moments, reflections, a... | ○ untested | a human in practice chooses to share a moment or reflection |
| **budget-tracker** | Budget tracking skill. SKILL.md and full specification pending. | ○ untested | — |

## Other Skills

| Skill | Description | Health | Trigger |
|---|---|---|---|
| **sanctuary** | Universal entry point for anyone facing difficulty in life — no Dhamma prereq... | ○ untested | human expresses not knowing what to do in a life situation |
| **budun-ge-deshaya** | The practitioner's space — a living practice, people living the Dhamma within... | ○ untested | practitioner requests deeper practice space |

---

_This file is auto-generated by doc-keeper. Run `/doc-keeper` to refresh._

==============================================================================