Voice: Green‑Growth Leninism (1935 echo: mass mobilisation meets 1995 market mechanisms)
Tension: Managed acceleration of Artificial Intelligence (AI) and Large Language Model (LLM)‑guided robotics for climate action vs. governance capacity, social legitimacy, and ecological limits (Kohei Saito & John Bellamy Foster’s metabolic rift).
*HRC = Human–Robot Collaboration.
Principle: “Capability partitioning by context” — no single agent holds full end‑to‑end powers outside controlled zones.
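A minimal sketch of how this principle might be encoded; the zone names, capability labels, and `grant` helper are illustrative assumptions, not a specification:

```python
# Hypothetical capability labels; no single agent holds all of them
# outside a controlled zone.
PERCEIVE, PLAN, ACTUATE, DISPATCH = "perceive", "plan", "actuate", "dispatch"

# Illustrative context -> capability partition. "controlled_zone" is the
# only context where end-to-end powers may be co-located.
PARTITION: dict[str, frozenset[str]] = {
    "public_street":   frozenset({PERCEIVE, PLAN}),      # no actuation authority
    "warehouse":       frozenset({PERCEIVE, ACTUATE}),   # plans come from elsewhere
    "controlled_zone": frozenset({PERCEIVE, PLAN, ACTUATE, DISPATCH}),
}

def grant(context: str, requested: set[str]) -> set[str]:
    """Return only the capabilities the current context permits."""
    return requested & PARTITION.get(context, frozenset())

# A delivery robot on a public street may perceive and plan,
# but its actuation request is stripped out.
assert grant("public_street", {PERCEIVE, PLAN, ACTUATE}) == {PERCEIVE, PLAN}
```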
Written “as if” from late 2030 to seed the next cycle (2030–2035). “Breaks” are now framed as Reversals to better capture dynamic setbacks.
They met in a library because the city hall lights kept flickering. On the table: a draft of the Civic Open‑Weight Charter, with margins full of competing pencil marks—engineers arguing for model provenance by default, organisers demanding audit rights with teeth. The debate wasn’t abstract. Two blocks away, a heatwave shelter had lost power for an hour, and a volunteer used her phone hotspot to run a local model that triaged those waiting. “Plan is law,” someone joked, quoting a Soviet epigraph. “But whose law?” The room laughed, weary. By midnight, they voted to require provenance for any algorithm involved in public decisions. Outside, a delivery robot paused at a curb, its firmware waiting on the same policy the humans were finalising. It rolled on only after the city clerk uploaded the signed document at 02:13. The next morning, a newspaper called it overreach. The shelter called it relief.
In the elder‑care residence the robots learned to announce themselves at doorways. “Good morning, Mrs Aziz. I’m here to help with your exercises.” The staff had voted on the script, and the residents tried their lines back. What changed wasn’t speed; it was attention. The aides could spend ten minutes explaining a new medication without worrying about laundry or lifting. When the remote‑attestation update arrived, the robots stopped mid‑task, one by one, to verify their policies. A few seconds of stillness, like a breath held. In the break room, an aide watched a video: a delivery drone elsewhere refusing to land. Her colleague muttered, “Better to pause than to guess.” After dinner, Mrs Aziz asked the robot to play the song her husband liked. It didn’t know, so it asked her to hum. The tune drifted down the corridor, and for a moment everyone worked in time.
The new High‑Voltage Direct Current (HVDC) line exists as a thin ribbon on a satellite map, a rumour across three provinces. In the control room, an operator traces the path with his finger while a guardian AI narrates status—converter‑station temperatures, vibration anomalies, a forecast of wind farms coming online at dawn. At 03:12 a synthetic voice announces the first beaming test from orbit; the room cheers, then goes silent as the output wobbles. “Within tolerance,” the AI says, a phrase that will become a slogan and a sneer. Later, the data‑trust treaty signing is delayed by a residency clause no one can explain without invoking old maps and new fears. On the drive home, the operator passes a dark industrial park. A single warehouse glows: robots working the night shift, their attestations just renewed. The bridge isn’t there to the eye, but the lights stay on.
At 04:23 the guardian AI flags an odd pattern—delivery drones are loitering where there are no orders. Three warehouses in different cities receive identical pick lists, each ending in a blank SKU. The lists are valid; the signatures check out. Only the routes make no human sense. By 04:31, a junior analyst notices that every compromised robot has passed attestation—because the policy was signed, just not by the city. The signature belongs to a supplier’s integration environment, trusted by default. The kill‑switches work, eventually, but the ten‑minute latency is the whole story: four thousand parcels go astray, and one ambulance is delayed by a drone that refuses to yield. By noon the mayor pauses last‑mile autonomy for seventy‑two hours while policies are compartmentalised and provenance enforced. No one says “criminal AI,” but later a darknet listing appears: “Citystack‑Black v0.3 — only for testing.”
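The failure above hinges on who may sign, not whether a signature verifies. A minimal sketch of the post-incident check, assuming hypothetical key identifiers and using HMAC as a stand-in for a real signature scheme:

```python
import hashlib
import hmac
import secrets

# HMAC keys stand in for signing keys in this sketch.
city_key = secrets.token_bytes(32)       # pinned city signing key
supplier_key = secrets.token_bytes(32)   # supplier's integration key

# Deployment trust anchors: after the fix, only the city key is pinned.
# Before the incident, the supplier's key was trusted "by default".
TRUSTED = {"city-prod-2030": city_key}

def sign(key: bytes, policy: bytes) -> bytes:
    return hmac.new(key, policy, hashlib.sha256).digest()

def attest(policy: bytes, signature: bytes, signer_id: str) -> bool:
    """Accept a policy only if the signature verifies AND the signer
    is a pinned trust anchor for this deployment."""
    key = TRUSTED.get(signer_id)
    if key is None:      # valid signature, wrong signer: the missing check
        return False
    return hmac.compare_digest(sign(key, policy), signature)

policy = b"last-mile routing policy v7"
# The supplier-signed policy carries a perfectly valid signature...
assert attest(policy, sign(supplier_key, policy), "supplier-integration") is False
# ...while the city-signed policy passes.
assert attest(policy, sign(city_key, policy), "city-prod-2030") is True
```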
On a bright day in August, the fusion demo hums. It is not elegant—more like a shipyard than a laboratory—but the grid swallows the first hundred megawatts without complaint. In a neighbourhood across town, a Universal Basic Services (UBS) office opens in a former bank. People line up to enrol: transit passes, clinic access, food credits that can be redeemed for fermented staples. A counter‑protest forms outside, arguing this is dependency by another name. That night, spot prices jump for a critical alloy and a factory pauses its HVDC components line. In a group chat, engineers share a photo of a handwritten sign: “Outage due to tomorrow.” The sign becomes a meme. At sunrise, the demo still feeds the grid. A clerk at the UBS office brews coffee for the line, and an aide checks a robot’s diagnostics while it lifts a patient with steady arms.
In the review hall, five clocks hang on the wall: Energy, Agency, Metabolic, Risk, and a new one labelled “Reversals.” The panel reads aloud the numbers—cities adopting open civic models, kilometres of HVDC, attestation coverage, incident rates. A student testifies about the outage that cancelled her exam; a nurse speaks about the robot that kept her grandmother from falling. Someone quotes the old epigraph—plan is law, fulfillment is duty—and the chair replies: “Plan is a hypothesis.” The audience murmurs approval. Outside, a small crowd argues over whether the guardian AIs should have more power or less. Inside, the calibration chart shows our forecasts were overconfident at 70%. The chair circles the error with a red pen, then smiles. “Good. Now we know what to fix.” The clocks tick together, and the hall votes to begin the next plan.
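For readers of the calibration chart: “overconfident at 70%” means that events we forecast at 70% happened less than 70% of the time. A minimal sketch with invented forecast records:

```python
from collections import defaultdict

# Invented forecast records for illustration: (stated probability, outcome).
forecasts = [(0.7, True), (0.7, False), (0.7, False), (0.7, True),
             (0.7, False), (0.9, True), (0.9, True), (0.5, False)]

# Group outcomes by the probability we stated at forecast time.
buckets: dict[float, list[bool]] = defaultdict(list)
for p, outcome in forecasts:
    buckets[p].append(outcome)

for p, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    # Overconfidence at 70% shows up as hit_rate < 0.7 in the 0.7 bucket.
    print(f"stated {p:.0%}: realised {hit_rate:.0%} over {len(outcomes)} forecasts")
```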
Epigraph: “Plan is law, fulfillment is duty, over‑fulfillment is honor!” — Pereslavl Week. Our reframing: Plan is a hypothesis, fulfillment is responsibility, over‑fulfillment is a signal (audit for Goodhart effects).
Special Reports: ad‑hoc bulletins during the cycle (e.g., CAIMM surges, materials shocks). Future option: autonomous report generation when indicators cross thresholds.
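A minimal sketch of such threshold-triggered bulletins; the indicator names and threshold values are hypothetical (CAIMM is left undefined here, as above):

```python
# Hypothetical indicator thresholds that would trigger a Special Report.
THRESHOLDS = {"caimm_index": 0.80, "alloy_spot_price_delta": 0.25}

def check_triggers(readings: dict[str, float]) -> list[str]:
    """Return a bulletin line for every indicator crossing its threshold."""
    return [
        f"SPECIAL REPORT: {name} at {value:.2f} crossed {THRESHOLDS[name]:.2f}"
        for name, value in readings.items()
        if name in THRESHOLDS and value >= THRESHOLDS[name]
    ]

# e.g. a CAIMM surge plus a materials shock would yield two bulletins.
print(check_triggers({"caimm_index": 0.83, "alloy_spot_price_delta": 0.31}))
```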
For the Macro Arc 2025–2055, see: Macro Arc 2025–2055.