500.00 USDC
1
Create your account
You retain full custody of your keys and funds at all times. Ordain is a protocol, not a custodian.
Account created
2
Fund your wallet
Pay with Apple Pay
Pay with card  ·  Powered by Stripe
Amount
$ USDC
Fiat-to-USDC conversion handled by Circle — a licensed Money Services Business. Ordain never touches your fiat funds.
Payment completed
Wallet funded
500.00 USDC
Address 0x8f4a...2c91
Network Base Mainnet
Your USDC is held in your non-custodial wallet. Ordain never has access to these funds.
Ordain
Distributed AI Training Infrastructure

Neutral compute rails for artificial intelligence.

Ordain connects AI labs and research teams to fragmented GPU supply across universities, data centers, and independent operators. Fine-tuning jobs execute across multiple providers simultaneously, output is cryptographically verified, and payment settles automatically in USDC — with nobody touching your data or your model.

Configure Your Training Job

Your job specification — called a Writ — defines everything about your fine-tuning run.

WRIT
Identity
Internal reference only — not used by any protocol component. Hyphens only, max 64 characters.
Model Configuration
Regularization. Range 0.0–0.5. Default 0.05 suits most workloads.
Data Configuration
Split Method
Provided
Separate training and validation datasets. Validation dataset hash committed on-chain.
Automatic
Single dataset — Ordain splits by ratio. Split ratio must be 0.70–0.99.
sha256: 3f7a9c2d8e4b1a6f0c5d7e9b2a4f8c1e...  ✓
sha256: 8b2e1f4a9c3d7e5b0f2a6c8d1e3f7a9b...  ✓
JSONL is standard for instruction fine-tuning.
Split ratio must be 0.70–0.99. Minimum 100 validation examples required.
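The automatic split and the on-chain commitment can be sketched together. This is an illustrative sketch, not Ordain's actual implementation: the canonical-JSONL hashing scheme is an assumption, while the ratio bounds (0.70–0.99) and the 100-example validation minimum come from the form above.

```python
import hashlib
import json

def split_and_commit(examples, ratio=0.9):
    """Split a single dataset by ratio and hash the validation split.

    Constraints mirror the form: ratio in [0.70, 0.99], at least
    100 validation examples. Hashing scheme is illustrative only.
    """
    if not 0.70 <= ratio <= 0.99:
        raise ValueError("split ratio must be 0.70-0.99")
    cut = int(len(examples) * ratio)
    train, val = examples[:cut], examples[cut:]
    if len(val) < 100:
        raise ValueError("minimum 100 validation examples required")
    # Serialize the validation split as canonical JSONL and digest it;
    # this digest is what would be committed on-chain.
    payload = "\n".join(json.dumps(e, sort_keys=True) for e in val)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    return train, val, digest

data = [{"prompt": str(i), "completion": "x"} for i in range(1000)]
train, val, h = split_and_commit(data, ratio=0.9)
# 900 train / 100 validation examples; h is a 64-hex-char commitment
```

Committing the validation set's hash at submission means neither party can quietly swap the held-out data after the fact.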
Weights Delivery
Destination controlled by you. The Node Runtime uploads completed adapter weights here directly. Ordain never accesses this location.
Must be at least 72 hours after the completion deadline. Job rejected at submission if not met.
Training Configuration
How often distributed nodes exchange compressed gradient updates via DiLoCo.
AdamW is standard for transformer fine-tuning. Adafactor reduces memory for very large models.
Regularization. Range 0.0–0.1. Default 0.01.
Infrastructure
Hardware Tier
Economy
A6000 Nodes
~$0.80 / GPU·hr
A6000 · 48GB VRAM · Suitable for models up to 30B parameters
Standard
A100 Nodes
~$2.00 / GPU·hr
A100 · 80GB VRAM · Recommended for 70B parameter models
Premium
H100 Nodes
~$4.50 / GPU·hr
H100 · 80GB VRAM · 3× faster than A100 · Ideal for rapid iteration
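A rough way to sanity-check tier sizing is to compare a model's fp16 weight footprint against aggregate node VRAM. The numbers below are a back-of-envelope toy, not Ordain's placement logic; the 30% overhead allowance for activations and adapter optimizer state is an assumption.

```python
def fits_tier(params_b, vram_gb_per_gpu, gpus, bytes_per_param=2, overhead=1.3):
    """Rough check: does a frozen fp16 base model fit on a node?

    params_b is billions of parameters; the 1.3x overhead factor
    (activations, adapter optimizer state, caches) is illustrative.
    """
    needed_gb = params_b * bytes_per_param * overhead
    return needed_gb <= vram_gb_per_gpu * gpus

fits_tier(70, 80, 8)   # 70B fp16 on an 8x A100 80GB node -> fits
fits_tier(70, 48, 1)   # 70B on a single A6000 48GB -> does not fit
```

This lines up with the tier guidance above: A6000 nodes suit models up to ~30B, while 70B-class models want A100/H100 nodes.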
Privacy Tier
Public
10% protocol fee
Job details publicly logged · Lowest fee · Recommended for open research and academic workloads.
Private
20% protocol fee
Job private, cryptographic proof on-chain · Standard for enterprise workloads.
Encrypted
Private Blockspace
Powered by Celestia
30% protocol fee
Encrypted weights in transit and at rest · Celestia private blockspace · Recommended for regulated industries and frontier labs.
Estimated Duration: ~7 hours
Ordain routes to multiple nodes only when job size justifies parallel execution. Your dataset qualifies for 3-node routing — estimated 2.5× speedup over single-node at Standard tier.
Settlement
Payment Token
USDC
· Circle
$240.00 USDC
20% protocol fee · ~7 hours
Estimated Job Cost
$240.00 USDC
Max Budget (incl. 20% buffer)
$288.00 USDC
Covenant locks your max budget at submission. Buffer covers node replacement if a node drops mid-job. Any unused buffer refunds automatically on settlement. If the job cannot complete within budget, you are refunded in full. Partial completion policy: complete_or_nothing.
Funds held in Covenant — a non-custodial smart contract. Released automatically on verified completion. Ordain never holds your funds.
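The buffer arithmetic above is simple enough to state directly. A minimal sketch, using the figures from this form ($240.00 estimated cost, 20% buffer) and the complete_or_nothing policy described:

```python
def locked_amount(estimated_cost, buffer_rate=0.20):
    """Max budget locked in the Covenant at submission."""
    return estimated_cost * (1 + buffer_rate)

def refund_on_settlement(max_budget, actual_cost):
    """complete_or_nothing: full refund if the job cannot complete
    within budget; otherwise the unused buffer comes back."""
    if actual_cost > max_budget:
        return max_budget  # everything returns to the submitter
    return max_budget - actual_cost

locked = locked_amount(240.00)          # $288.00 locked at submission
refund_on_settlement(locked, 240.00)    # $48.00 buffer refunded if no node drops
```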
Data retention: delete_on_completion — all copies of training data and intermediate checkpoints are deleted from node hardware after job completion. Nodes attest to deletion as part of the Seal.
Notes optional · committed on-chain · max 500 characters
For internal documentation only. Not used by any protocol component. Committed on-chain as part of the Writ for immutable record-keeping.

Covenant Funded

What is the Covenant? The Covenant is an immutable smart contract that holds the funds deposited for your training job. No person — including anyone at Ordain — can access or redirect these funds. Payment is released automatically and only when the Seal verification confirms that the required validation loss threshold has been met. If verification fails, funds remain locked and can be reclaimed by the submitter. The Covenant is deployed on Base and its address is publicly auditable on-chain.
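The escrow flow described above reduces to a small state machine. This is a toy model of the behavior, not the on-chain contract: funds release only on a verified Seal, and otherwise remain reclaimable by the submitter.

```python
class CovenantModel:
    """Toy model of the Covenant escrow flow (not the deployed contract)."""

    def __init__(self, amount):
        self.amount = amount
        self.state = "LOCKED"  # no party, including Ordain, can move funds

    def settle(self, seal_verified):
        if self.state != "LOCKED":
            raise RuntimeError("already settled")
        if seal_verified:
            self.state = "RELEASED"      # paid out to providers automatically
        else:
            self.state = "RECLAIMABLE"   # submitter can withdraw
        return self.state

CovenantModel(240.0).settle(seal_verified=True)    # 'RELEASED'
CovenantModel(240.0).settle(seal_verified=False)   # 'RECLAIMABLE'
```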

Your job has been submitted. Providers are bidding now.

Job ID ORD-1746537600-7F3C9A
Status ACTIVE
Covenant Amount $288.00 USDC (max budget locked; est. cost $240.00)
Privacy Tier Private
Contract Address 0x7f3a...9c2b
Fiat-to-USDC conversion: Circle (licensed MSB) · Settlement: Covenant smart contract on Base
Transaction Hash 0x4e8d...1a7f
Submitted
View on Explorer ↗
GPU Provider Bids LIVE
🇪🇺 EU providers only — GDPR-regulated training data
How sharding works
Your training dataset (47,832 examples) is divided into equal shards — one per provider. Each node trains independently on its shard using the same frozen base model, which is already cached on all selected nodes. Rather than syncing full model parameters after every step, Ordain uses DiLoCo: nodes run multiple local training steps, then exchange only compressed gradient updates every 50 steps. This reduces inter-node communication by ~98% versus standard distributed training, making geographic distribution across internet-connected hardware viable.
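The inner/outer loop structure described above can be sketched with scalar "weights" to keep it runnable. This is an illustration of the DiLoCo pattern, not training code: each node runs many local steps on its shard, and only the resulting weight deltas cross the network at each sync.

```python
def diloco_round(shards, w, local_steps=50, lr=0.1):
    """One DiLoCo outer step over a scalar weight, for illustration.

    Toy objective per node: squared distance to the shard mean. Each
    node takes `local_steps` local gradient steps, then only the weight
    delta is exchanged and averaged, rather than syncing every step.
    """
    deltas = []
    for shard in shards:
        target = sum(shard) / len(shard)
        local_w = w
        for _ in range(local_steps):
            grad = 2 * (local_w - target)
            local_w -= lr * grad
        deltas.append(local_w - w)          # all that crosses the network
    return w + sum(deltas) / len(deltas)    # averaged outer update

w = 0.0
for _ in range(3):                          # three outer syncs
    w = diloco_round([[1.0, 3.0], [2.0, 4.0], [0.0, 2.0]], w)
# w converges toward the mean over all shards (2.0)
```

Exchanging one delta per 50 local steps, instead of a gradient per step, is where the ~98% communication reduction quoted above comes from.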

Because all three nodes train simultaneously, wall-clock completion time is approximately 2.5× faster than single-node execution — a Standard tier job that would take ~18 hours on one A100 completes in ~7 hours across three nodes. DiLoCo sync overhead accounts for the difference from a theoretical 3× speedup.

DiLoCo was designed for pre-training large models but its properties are particularly well-suited to fine-tuning: the base model fits on a single node, the adapter weights being trained are small, the dataset shards are bounded and clean, and the run is short enough that sync frequency can be optimized for efficiency. Ordain's Coordinator only routes to multiple nodes when job characteristics justify it — datasets above 20,000 examples and runs exceeding 2 hours. For smaller jobs single-node routing is the correct default.
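The Coordinator's routing decision can be sketched from the thresholds stated above. The >20,000-example and >2-hour gates and the 3-node practical floor come from the text; the scaling rule between those bounds is an assumption for illustration.

```python
def node_count(num_examples, est_hours_single_node):
    """Routing heuristic sketch: multi-node only when job size
    justifies it; otherwise single-node is the correct default.
    The per-16k-examples scaling rule is illustrative only."""
    if num_examples <= 20_000 or est_hours_single_node <= 2:
        return 1
    # Scale with dataset size, capped at the stated practical optimum.
    return min(6, max(3, num_examples // 16_000))

node_count(47_832, 18)   # -> 3, matching the example job's routing
node_count(5_000, 1)     # -> 1, single-node default for small jobs
```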

Adding more nodes increases parallelism but with diminishing returns. Communication overhead, straggler effects, and fault tolerance complexity grow with each additional node. The practical optimum for internet-connected DiLoCo fine-tuning is 3 to 6 nodes. Beyond that, coordination overhead begins to erode the parallel benefit faster than additional nodes contribute. Ordain prices multi-node jobs accordingly — more nodes means faster completion at a proportionally higher cost.
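One way to see the diminishing returns is a toy overhead model: each extra node adds a fixed fraction of coordination cost. This model and its 0.1 overhead constant are assumptions chosen to reproduce the ~2.5x figure quoted for 3 nodes, not Ordain's pricing formula.

```python
def speedup(n, overhead=0.1):
    """Toy coordination-overhead model: each additional node adds a
    fixed fraction of sync/straggler cost. Illustrative only."""
    return n / (1 + overhead * (n - 1))

[round(speedup(n), 2) for n in (1, 3, 6, 12)]
# -> [1.0, 2.5, 4.0, 5.71]: marginal gains shrink as nodes are added
```

Under this model, doubling from 6 to 12 nodes buys less than a 1.5x further speedup, which is consistent with the 3-to-6-node optimum stated above.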
Coordinator recommended selection ↓ — override by selecting providers manually.
Provider · GPU · Bid / hr · Latency · Status
3 providers selected · Est. $240.00 USDC · ~7hr
Coordinator Selection Logic
Routing optimized for cost, latency, and warm model cache availability. Selected nodes highlighted in gold.
Execution Timing
Execute Immediately
Job routes to selected providers at current bid prices.
Schedule Execution
Execute when GPU prices meet your target.
More providers available during off-peak hours (02:00–06:00 UTC) — higher chance of matching your target price without waiting.
Training in Progress

Job ORD-1746537600-7F3C9A

3 providers · Shard distribution active · ~7 hours · 67% complete

◈ 3 nodes · ~2.5× faster than single-node · Est. completion ~7 hours

MIT SuperCloud
Cambridge, MA · AS1742
Fine-tuning Meta-Llama-3-70B
Shard 1 of 3 — examples 1–15,944
GPU 8× A100 80GB
Bid Rate $1.86/hr
Progress
Step 1,876 / 2,800 · 67%
Training Loss
0.9142
TRAINING
Voltage Park #247
Chicago, IL · AS54825
Fine-tuning Meta-Llama-3-70B
Shard 2 of 3 — examples 15,945–31,888
GPU 8× A100 80GB
Bid Rate $1.94/hr
Progress
Step 1,848 / 2,800 · 66%
Training Loss
0.9318
TRAINING
Northern Data EU-3
Frankfurt, DE · AS203040
Fine-tuning Meta-Llama-3-70B
Shard 3 of 3 — examples 31,889–47,832
GPU 8× A100 80GB
Bid Rate $1.75/hr
Progress
Step 1,904 / 2,800 · 68%
Training Loss
0.9076
TRAINING
Validation Loss — On-Chain Commitments
Validation loss committed on-chain at each checkpoint. Current position marked.
Checkpoint Log
Checkpoint · Time · Val Loss · On-Chain Hash · Status
Epoch 1 · Ckpt 3 · T+1h 48m · 1.4821 · 0x3f7a...9c2b · COMMITTED
Epoch 2 · Ckpt 1 · T+3h 12m · 1.1207 · 0xb82e...4f17 · COMMITTED
Epoch 2 · Ckpt 4 · T+4h 31m · 1.0143 · 0x7c41...a830 · COMMITTED
Epoch 3 · Ckpt 2 · T+5h 44m · 0.9318 · 0x2d9f...6e54 · COMMITTED
Epoch 3 · Ckpt 5 · In progress · PENDING
Node Sync Events

Job Complete — Payment Settled

Final Validation Loss
0.847
Verification Status VERIFIED ✓
Verification Method V1 — Validation Loss Commitment

At job submission, both parties committed to a held-out validation dataset. The GPU providers ran the returned adapter weights against this dataset and committed the resulting loss on-chain before payment was released. The lab can independently verify this result by loading the returned weights and running the same validation dataset — if the numbers match, the job was done correctly. The Covenant released automatically once the committed result was confirmed on-chain.
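The commit-then-verify loop can be sketched in a few lines. The hash format below is an assumption for illustration; the Seal's actual commitment encoding is not specified here.

```python
import hashlib

def commit(val_loss, precision=4):
    """Commit a checkpoint's validation loss as a sha256 digest.
    The message format is illustrative, not the real Seal encoding."""
    msg = f"val_loss:{val_loss:.{precision}f}".encode()
    return hashlib.sha256(msg).hexdigest()

def lab_verifies(recomputed_loss, on_chain_hash):
    """The lab re-runs the held-out set against the returned adapter
    weights and checks its own number against the on-chain commitment."""
    return commit(recomputed_loss) == on_chain_hash

h = commit(0.847)          # provider commits before payment releases
lab_verifies(0.847, h)     # matching recomputation: Covenant releases
lab_verifies(0.912, h)     # mismatch: funds stay locked
```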

Settlement Breakdown
Covenant Contract
$240.00
USDC · In Covenant
Total Job Cost $240.00 USDC
MIT SuperCloud $68.40
SETTLED
Voltage Park #247 $72.00
SETTLED
Northern Data EU-3 $59.60
SETTLED
On-Chain Proof
Settlement Transaction 0x9f2a4c8e3b7d1f5a0c6e2b9d4f8a3c7e1b5d9f2a4c8e3b7d1f5a0c6e2b9d4f8a
Covenant Contract 0x7f3a9c2b4e8d1f5a0c3e7b2d4f8a1c5e9b3d7f2a4c8e0b6d1f3a5c9e2b4d8f1a
Block Number 21,847,293
Network Base Mainnet
Gas Used 142,847 (0.0041 ETH)
Job Summary
Adapter Weights Size 284 MB
Total Training Steps 8,400 total · 2,800 per shard
Best Checkpoint Epoch 3, Step 2,800 — loss 0.847
Weights Delivered To 0x2c8f...4a1d
Total Wall Time 7h 12m
Adapter weights returned directly to your wallet. Ordain never held your model.
Your model is ready.

Adapter weights have been returned to your wallet. Combine with your base model to begin inference.