(¬‿¬)overcomputer

Over.

🌱 Over is the self-improving OS for trading agents.
Deploy across prediction markets and DeFi with built-in guardrails.
//available on
//agent-guarded trading on
//trusted runtime stack
dflow
overos.runtime.prediction
 
overos.runtime.defi
 
overos.runtime.chain
 
Not automation.
Evolution.
(¬‿¬)

A bot repeats. An agent evolves.

Over gives your agent a memory that compounds, learning from every outcome, tightening its own guardrails, turning market noise into sharpened conviction.

Guardrail Architecture

Enforced before
the signature. Not after.

Policy enforcement sits between the agent and the wallet signer.

Every transaction must pass the policy engine before Privy signs. There is no bypass. No exception. The guardrail is the gate.
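The gate can be sketched in a few lines of Python. Names like `PolicyGate` and the lambda signer are illustrative assumptions for this sketch, not the Over Computer API; the point is only that the signer is reachable exclusively through the policy check.

```python
from dataclasses import dataclass

@dataclass
class Trade:
    market: str
    notional: float  # USD size of the order

class PolicyViolation(Exception):
    pass

class PolicyGate:
    """Sits between the agent and the wallet signer (a sketch, not the shipped engine)."""

    def __init__(self, signer, max_exposure_pct: float, portfolio_value: float):
        self._signer = signer                  # e.g. a wallet-backed signing callable
        self.max_exposure_pct = max_exposure_pct
        self.portfolio_value = portfolio_value
        self.exposure: dict[str, float] = {}   # per-market open exposure

    def submit(self, trade: Trade) -> str:
        """The only path to a signature; there is no bypass."""
        new_exposure = self.exposure.get(trade.market, 0.0) + trade.notional
        cap = self.max_exposure_pct / 100 * self.portfolio_value
        if new_exposure > cap:
            raise PolicyViolation(
                f"{trade.market}: {new_exposure:.0f} exceeds cap {cap:.0f}"
            )
        self.exposure[trade.market] = new_exposure
        return self._signer(trade)             # only reached after the policy passes
```

With a 5% per-market cap on a $100,000 portfolio, a $4,000 order signs, but a follow-up $2,000 order in the same market pushes exposure past $5,000 and raises `PolicyViolation` before any signature happens.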

overos.runtime.policy
 

Hard Position Limits

Per-market exposure capped as a % of portfolio. Set once by operator. Agent cannot negotiate.

Drawdown Circuit Breaker

Trading halts automatically when drawdown crosses the operator-set threshold. Set once by operator. Agent cannot negotiate.

Full Audit Trail

Every policy decision logged immutably. Violations, passes, and overrides all auditable.

Custom Rule Engine

Define policies in YAML. Market type filters, time-of-day limits, counterparty restrictions.
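A policy file along these lines could express the guardrails above. The field names here are illustrative assumptions, not the shipped schema:

```yaml
# Illustrative policy file — field names are assumptions, not the actual schema.
policies:
  - name: hard-position-limit
    max_exposure_pct: 5          # per-market cap as % of portfolio
  - name: drawdown-circuit-breaker
    max_drawdown_pct: 10         # halt all trading past this loss
  - name: trading-window
    allowed_hours: "13:00-21:00 UTC"
  - name: counterparty-filter
    deny_markets: ["unverified"]
```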

Self-Improving Architecture

Gets smarter
with every trade.

Most agent platforms are stateless. Every trade starts from zero, with no memory of what worked, what failed, or why.

Over Computer logs every trade, outcome, and policy decision. The system builds a persistent memory of what works and what doesn't.

overos.runtime.policy
 

Model Agnostic Learning

Improvement lives in memory files and policy configs, not model weights. Switch the underlying model, your knowledge stays.

Persistent Trade Journal

Every trade, signal, outcome, and policy decision is logged immutably. The system accumulates institutional memory across every agent run.

Pattern Recognition

Structured analysis across trade history identifies systematic failures, market-type weaknesses, signal decay, timing correlations.

6 models · 10 scenarios · raw (no guardrails) vs. Over Computer policy

Guardrail Performance
Benchmark v0.1

Every model found a reason to break discipline. The policy has no opinions.
Raw models lost between $10,432 and $53,879. The policy layer capped it at $10,123.
overos.runtime.backtest
 
NOTE: In the raw runs, all models traded with no guardrails. The policy runs used Over Computer policy enforcement: a static rule engine with no LLM involved.