EkoSphere
Systems Design — Full Breakdown
Jump to: The System · Architecture · System Breakdown · Interdependency · Failure Case · Design Problems · My Approach · Results · Full Project Page
What Is the System?
EkoSphere is a state-driven simulation built around interdependent systems, where player decisions propagate across environmental, social, and infrastructural layers over time. The core of the game is not a single loop; it is a network of systems that continuously update a shared city state, so no decision exists in isolation.
The defining feature:
No system operates in isolation. Every player action propagates through the city state and reshapes future constraints, events, and outcomes.
System Architecture
At a high level, every player action flows through a closed loop where the city state acts as the shared memory across all systems:
Player Input
NbS placement, resource allocation, community decisions
↓
Placement System
evaluates context, modifiers, and spatial interactions
↓
City State Update
SET scores, resources, risk exposure, community sentiment
↓
System Interactions
cascading effects across all connected systems
↓
Event Resolution
climate hazards trigger based on accumulated city state
↓
Player Feedback ↺
new constraints, options, and decisions surface
Every action updates the city state, which changes future options, system behavior, and event outcomes. No decision is isolated; everything is consequential.
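The loop above can be sketched in code. The actual project is built in Unity/C#, so the Python below is purely illustrative; every function name, field, and number here is a hypothetical stand-in, not the shipped implementation:

```python
# Illustrative sketch of the closed loop (input -> placement -> state -> events).
# All names and values are hypothetical, not the Unity/C# implementation.

def evaluate_placement(action, state):
    # Context evaluation: the same action's impact depends on current state.
    base = {"S": 1, "E": 2, "T": 0}
    if state["risk"] > 5:          # a risky city amplifies environmental work
        base["E"] += 1
    return base

def apply_impact(state, impact):
    # City state update: SET scores absorb the placement's impact.
    for axis, delta in impact.items():
        state["set"][axis] += delta
    return state

def resolve_events(state):
    # Events emerge from accumulated state, not from a script.
    if state["set"]["E"] < 0:
        state["risk"] += 2         # low environmental investment raises risk
    return state

def step(state, action):
    """One pass through the loop; the returned state is the player feedback."""
    impact = evaluate_placement(action, state)
    state = apply_impact(state, impact)
    return resolve_events(state)

city = {"set": {"S": 0, "E": 0, "T": 0}, "risk": 0}
city = step(city, "wetland")       # state persists into the next decision
print(city["set"])
```

The key property the sketch preserves is that `step` never resets anything: the state returned by one pass is the input to the next, so consequences accumulate.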
Design Inspiration:
The systems architecture draws from games that use cascading state to teach real-world complexity. Eco demonstrated how player decisions in one system ripple into others across an entire ecosystem. Terra Nil showed how placement-based decisions could communicate systemic tradeoffs without explicit explanation. IXION and Floodland both use interdependent resource and event systems where no single variable can be managed in isolation: the same principle underpins EkoSphere's city state model.
System Breakdown
A. City State System - The Backbone
All systems read from and write to a shared CityState data model. This is what makes EkoSphere a simulation rather than a game with isolated mechanics: every system shares the same persistent world state.
CITY STATE TRACKS
SET Scores
→
Social, Environmental, Technological - updated by every NbS placement and event outcome
Resource Availability
→
budget, implementation capacity, and intervention tokens remaining
Infrastructure Distribution
→
spatial layout of placed NbS and their interaction zones
Community Sentiment
→
accumulated social response to player decisions - biased and imperfect by design
Risk Exposure
→
flood, heat, and infrastructure stress risk - shaped by prior investment spread
Historical Decisions
→
persistent record of all past choices - the city is cumulative, never reset
The city is persistent and cumulative: there is no reset between scenarios. This ensures long-term consequence, system continuity, and emergent outcomes that no single decision could have predicted.
Design intent: The system exists to ensure every decision persists and compounds over time.
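A minimal sketch of what a shared state model like this might look like. The real project uses C# data models in Unity; the field names below are assumptions inferred from the list above:

```python
# Hypothetical CityState sketch (the actual model is a C# data class in Unity).
# Field names are assumptions based on what the city state tracks.
from dataclasses import dataclass, field

@dataclass
class CityState:
    set_scores: dict = field(default_factory=lambda: {"S": 0, "E": 0, "T": 0})
    resources: dict = field(default_factory=lambda: {"budget": 10, "tokens": 3})
    placements: list = field(default_factory=list)   # spatial layout of placed NbS
    sentiment: float = 0.0                           # biased/imperfect by design
    risk: dict = field(default_factory=lambda: {"flood": 0, "heat": 0, "stress": 0})
    history: list = field(default_factory=list)      # cumulative, never reset

    def record(self, decision):
        """Every system writes through this one object; history only grows."""
        self.history.append(decision)

state = CityState()
state.record("place wetland")
print(len(state.history))
```

Because every system reads and writes the same object, interdependency falls out of the architecture rather than being wired up pairwise between systems.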
B. NbS Placement System
Players select and place Nature-Based Solutions into a tile-based environment. The same solution placed in different contexts produces different outcomes. The system rewards spatial thinking, not pattern matching.
SAME INPUT, DIFFERENT OUTCOMES
Wetland placed near water source:
+Environmental · +Flood mitigation · +Social accessibility
Same wetland in dense urban zone:
−Social (displacement risk) · +Environmental (reduced effectiveness) · −Tech infrastructure
The placement evaluation pipeline:
Placement
→
Context Eval
→
Modifiers
→
SET Impact
→
City State Update
Design intent: The system teaches through context, not instruction.
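The context-evaluation step can be illustrated with the wetland example above. Zone names and deltas below are invented for the sketch; the actual modifier tables live in ScriptableObjects:

```python
# Illustrative context evaluation: same NbS, different SET impact by zone.
# Zone names and deltas are hypothetical, mirroring the wetland example.

WETLAND_RULES = {
    "near_water":  {"S": 1, "E": 2, "T": 0},    # accessibility, flood mitigation
    "dense_urban": {"S": -1, "E": 1, "T": -1},  # displacement risk, reduced effect
}

def evaluate_context(zone, rules):
    """Placement -> context eval -> modifiers -> SET impact."""
    impact = dict(rules[zone])
    # Modifier stage: adjacency bonuses and scenario multipliers would apply here.
    return impact

print(evaluate_context("near_water", WETLAND_RULES))
print(evaluate_context("dense_urban", WETLAND_RULES))
```

The point of the structure is that the solution itself carries no fixed value; value only exists at the intersection of solution and context.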
C. SET Scoring System
Instead of a single win condition, EkoSphere evaluates player decisions across three axes simultaneously. Scores are relative and the system is designed to surface tradeoffs, not reward optimization.
| Dimension | What It Tracks | Design Purpose |
|---|---|---|
| Social (S) | Community wellbeing, accessibility, equity | Captures human impact of technical decisions |
| Environmental (E) | Biodiversity, flood resilience, climate health | Tracks ecological consequence over time |
| Technological (T) | Infrastructure efficiency, energy systems | Measures built environment capacity |
Evaluation ≠ optimization. Scores are presented comparatively to prompt discussion, not to tell players they won or lost.
Design intent: The system exposes tradeoffs rather than rewarding optimization.
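One way to present scores relatively rather than absolutely is to show each axis against the city's own mean, so there is no total to maximize. This is an illustrative sketch, not the game's actual presentation logic:

```python
# Sketch of "relative, not absolute" scoring: each axis is shown against the
# city's own mean, so imbalance is visible but no single number can be chased.

def relative_profile(set_scores):
    mean = sum(set_scores.values()) / len(set_scores)
    return {axis: round(v - mean, 2) for axis, v in set_scores.items()}

# A heavy Environmental lean reads as imbalance, not as a "high score".
print(relative_profile({"S": 2, "E": 8, "T": 2}))
```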
D. Scenario & Event System
Gameplay unfolds through progressive scenarios that build toward climate events. Events are emergent results of the accumulated city state. Preparedness is something players build, not something they unlock.
| Event Type | Triggered By | Outcome Shaped By |
|---|---|---|
| Flooding | Low environmental investment, poor water management | Wetland placement, drainage infrastructure |
| Heatwave | High tech density, low green coverage | Green roofs, community gardens, shade infrastructure |
| Infrastructure Stress | Over-investment in a single system axis | Balance across S/E/T dimensions |
Design intent: The system converts past decisions into future consequences.
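The trigger logic in the table can be sketched as state thresholds. All thresholds and field names below are invented; the actual tuning lives in designer-editable parameters:

```python
# Hypothetical event-trigger sketch mirroring the table above: events emerge
# from accumulated city state rather than a script. Thresholds are invented.

def pending_events(set_scores, green_coverage):
    events = []
    if set_scores["E"] < 3:
        events.append("flooding")               # low environmental investment
    if set_scores["T"] > 7 and green_coverage < 0.3:
        events.append("heatwave")               # tech density without green cover
    spread = max(set_scores.values()) - min(set_scores.values())
    if spread > 6:
        events.append("infrastructure_stress")  # over-investment in one axis
    return events

print(pending_events({"S": 1, "E": 2, "T": 9}, green_coverage=0.1))
```

Because triggers read accumulated state, preparedness is not a toggle the player flips; it is the shape of the city they have already built.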
E. Resource & Constraint System
EkoSphere replaces traditional economies with constraint-driven systems that create decision pressure without enabling optimization.
Active Constraints
Limited placement capacity per turn
Time pressure before climate events
Funding tied to prior outcomes
Implementation limits per scenario
Design Goal
Create scarcity without encouraging optimization
Force prioritization without a correct answer
Maintain decision pressure into late scenarios
Reward systems thinking over efficient play
F. Community Feedback System
Community responses add non-technical pressure into decision-making. The system is intentionally imperfect because feedback is partial, biased, and sometimes misleading, reflecting the social complexity of real urban planning.
DESIGN INTENT
Community feedback may oppose a technically sound intervention. Players must decide when to follow sentiment and when to challenge it; there is no objectively correct response. Social systems are not rational systems.
G. Progression System
There are no traditional levels or linear advancement. Progression is driven by the evolving city state and players progress by accumulating decisions, not by completing discrete objectives.
PROGRESSION IS DRIVEN BY
Scenario Sequence
→
challenges escalate based on accumulated city state, not a fixed difficulty curve
City State Evolution
→
every prior decision shapes what is possible and what is constrained next
Player Decisions
→
no two playthroughs produce the same city, emergent outcomes reward different strategies
H. Technical Architecture & Tooling
Built in Unity using a modular, data-driven architecture designed for rapid iteration and scalable balancing across all systems.
Technical Stack
C# data models for CityState and NbS data
ScriptableObjects for tunable system parameters
Modular components for placement, scoring, events
Custom editor tools for real-time parameter tuning
Tooling Goals
Designer-side tuning without engineering involvement
Rapid iteration between playtest sessions
Consistent system behavior across edge cases
Scalable balancing as new systems are added
H2. AI-Generated End-of-Run Reports
Built an end-of-run report feature using the OpenAI API. Player decisions across the session were converted into structured JSON and passed to the API, which generated personalized in-game reports summarizing city outcomes, tradeoff patterns, and systemic consequences.
1
Decision Capture
Every player action was logged into a structured decision record, including NbS type, placement location, scenario context, and resulting SET impact, building a per-session dataset.
2
JSON Serialization & API Pipeline
Decision records were serialized to JSON and passed to the OpenAI API with a structured prompt. Owned the full pipeline: request handling, prompt architecture, response parsing, and in-game display.
3
Personalized Report Output
The API generated contextual reports reflecting each player's unique decision pattern. This eliminated manual report writing during playtests and gave players immediate, meaningful feedback on their systemic choices.
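The capture-and-serialize steps can be sketched as below. The record fields match the description above, but the exact schema and prompt are not in this document, so everything here is an assumed shape; in the real pipeline the resulting JSON is sent to the OpenAI API with a structured prompt:

```python
# Sketch of the decision-capture -> JSON step of the report pipeline.
# The record schema is an assumption; the real payload and prompt may differ.
import json

def capture(log, nbs_type, location, scenario, set_impact):
    """Append one structured decision record to the per-session dataset."""
    log.append({
        "nbs": nbs_type,
        "location": location,
        "scenario": scenario,
        "set_impact": set_impact,
    })

def build_report_payload(log):
    """Serialize the session for the report-generation prompt."""
    return json.dumps({"decisions": log, "count": len(log)})

session = []
capture(session, "wetland", [4, 7], "flood_prep", {"S": 1, "E": 2, "T": 0})
print(build_report_payload(session))
```

Keeping the capture format structured (rather than free text) is what lets the prompt reliably reference specific decisions when generating the report.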
System Interdependency
The defining characteristic of EkoSphere is that no system operates independently. Every system feeds into at least one other, creating a closed loop with cascading consequences.
NbS Placement
context-aware
SET Scoring
multi-axis eval
Event System
climate hazards
updates ↓
evaluates ↓
triggered by ↓
City State
shared data model — every system reads from and writes to this
↓ informs
↓ constrains
↓ drives
Community
feedback system
Resources
constraints
Progression
state-driven
CASCADE EXAMPLE
NbS placement → updates SET scores → affects event outcomes → alters resource availability → constrains future placement → shapes progression state → changes what scenarios are possible next
System Failure Case: Dominant Strategy Emergence
The clearest signal that a system needs redesign is when players stop making decisions and start executing a discovered optimal path. That's what happened in early EkoSphere playtests.
What Players Did
Maximized Environmental score exclusively
Ignored Social and Technological axes
Treated the system as an optimization problem
Stopped discussing tradeoffs entirely
Why It Happened
Scores were absolute: players could see a "highest" value
No cross-axis penalty for imbalance
No future system consequences tied to SET distribution
The tradeoff logic was invisible
ROOT CAUSE → FIX → RESULT
Absolute scores
→
Removed total score visibility, shifted to relative comparison across dimensions
No cross-axis penalty
→
Linked SET imbalance directly to event outcomes - over-investment in one axis creates vulnerability in others
Invisible consequences
→
Made cascading effects visible through UI feedback so players could trace SET imbalance to future event risk
After the fix: no single optimal strategy existed. Players diversified decisions, tradeoffs became necessary, and the dominant strategy collapsed because maximizing one axis now visibly hurt performance in the next scenario.
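The cross-axis penalty that collapsed the dominant strategy can be illustrated as follows. The formula is invented for the sketch; the real tuning links imbalance to specific event outcomes:

```python
# Sketch of a cross-axis penalty: over-investing in one axis raises event
# vulnerability on the others. The formula and weights are hypothetical.

def vulnerability(set_scores):
    """Each axis's exposure grows with how far it lags the leading axis."""
    peak = max(set_scores.values())
    return {axis: peak - v for axis, v in set_scores.items()}

# Maxing Environmental no longer "wins": Social and Tech are left exposed.
print(vulnerability({"S": 1, "E": 9, "T": 2}))
```

Any penalty of this shape makes single-axis maximization self-defeating, because the very act of pushing one score up widens the gap that future events punish.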
Design Problems
Three design problems emerged across 12+ structured playtests that worked against the system's core intent:
Optimization Over Exploration
Players treated SET scores as win bars and focused on maximizing a single axis. The system's tradeoff logic collapsed into a dominant strategy.
Cognitive Overload
Too much information surfaced at once. Players couldn't read system relationships clearly, so they defaulted to ignoring them rather than engaging.
Disconnected Feedback
Placements felt isolated and outcomes felt arbitrary. Players couldn't trace consequences back to decisions, so the interdependency was invisible.
My Approach
DESIGN PROBLEM → DESIGN RESPONSE
Optimization loops
→
Shifted from absolute scores to relative, comparative presentation, removed win bars entirely
Cognitive overload
→
Replaced text-heavy explanations with placement-based learning and visual feedback cues
Disconnected feedback
→
Ensured every system feeds visibly into another - consequences always traceable to decisions
1
Shifted from Absolute to Relative Systems
Removed numerical win targets and reframed SET scores as comparative tools. Players now see how their city compares across dimensions rather than chasing a total: this immediately reduced optimization behavior and increased discussion of tradeoffs during playtests.
2
Made Systems Legible Through Interaction
Replaced text explanations of system behavior with placement-based feedback: players learn what a wetland does by placing it and watching the city state update, not by reading a tooltip. Iterated on visual cues and UI layout across 12+ playtests to find the minimum information needed for systems to feel readable.
3
Introduced Visible System Feedback Loops
Restructured every system to visibly feed into at least one other. The goal was ensuring no decision felt isolated: every action needed a traceable consequence that players could observe. This made the interdependency feel designed rather than arbitrary.
4
Balanced Through Structured Playtesting
Led 12+ structured playtests across student and researcher groups, tracking confusion points, decision patterns, and engagement signals. Used player confusion as a design signal: when players couldn't explain why something happened, that was a system legibility problem, not a player problem.
Results
28%
increase in player engagement through systems iteration
20%
increase in replayability as players explored different system paths
12+
structured playtests across student and researcher groups
BEFORE
Players optimized single SET axis scores
System relationships were invisible
Outcomes felt arbitrary, not consequential
Cognitive overload led to disengagement
AFTER
Players explored tradeoffs across all three axes
System interactions became visible and discussable
Consequences felt traceable and meaningful
Engagement sustained across full playtests
PLAYER BEHAVIOR INSIGHT
During playtests, players began verbally debating tradeoffs mid-session:
"This helps flooding but hurts community access."
"We're over-investing in tech, we'll be exposed to the heat event."
This indicated the system had successfully shifted players from solving to reasoning. When players start arguing about tradeoffs rather than calculating optimal moves, the system is working as designed.
Key Design Takeaways
Systems must be legible to be meaningful
Complexity without clarity is just confusion. The interdependency only creates value when players can trace it — which means the UI and feedback design is as important as the system architecture itself.
Tradeoffs are more valuable than optimization
Players learn more when systems conflict than when they stack. Designing for tradeoffs means deliberately preventing dominant strategies — which requires knowing exactly where your system is exploitable.
Feedback loops drive engagement
Players stay engaged when actions lead to consequences that lead to new decisions. The moment a player can predict exactly what will happen, the loop breaks. Maintaining uncertainty within a legible system is the core design challenge.
Simulation doesn't mean realism
The systems had to be simplified enough to be actionable while remaining complex enough to feel real. Every abstraction was a deliberate design choice — what to surface, what to hide, and what to make feel emergent even when it isn't.
← Back to Full EkoSphere Page