Compute Protocol: Inference (Live Sync)
Selected Architecture:
- Llama 3.3 70B
- DeepSeek V3.2
- Mistral Large
- Qwen 2.5 VL
- Llama 3.2 90B Vision
Compute Cost: $0.002 / 1K tokens
Distributed GPU Node: "DEPINfer Inference Engine V1.0 connected. I am utilizing a distributed RTX 4090 cluster via io.net. How can I assist your compute workload today?"
Latency: 184 ms (io.net Edge)
Token Speed: 48.2 tk/s
Cluster ID: RU-421
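As a rough sketch of what the quoted figures imply for a single request, the snippet below estimates cost and wall-clock time from the dashboard's price, token speed, and latency. The function name and the simple latency-plus-streaming time model are illustrative assumptions, not an official API.

```python
# Illustrative estimate based on the dashboard figures shown above.
PRICE_PER_1K_TOKENS = 0.002   # USD per 1K tokens, as quoted
TOKENS_PER_SECOND = 48.2      # quoted token speed
LATENCY_S = 0.184             # quoted latency, 184 ms

def estimate_request(total_tokens: int) -> tuple[float, float]:
    """Return (cost in USD, approximate wall-clock seconds) for a request.

    Assumes one initial latency hit plus steady streaming at the
    quoted token speed; a hypothetical model, not a guarantee.
    """
    cost = total_tokens / 1000 * PRICE_PER_1K_TOKENS
    seconds = LATENCY_S + total_tokens / TOKENS_PER_SECOND
    return cost, seconds

cost, seconds = estimate_request(2000)
print(f"${cost:.4f}, {seconds:.1f}s")  # a 2K-token request: $0.0040, ~41.7s
```

At these rates, throughput rather than price dominates long completions: a 2,000-token response costs well under a cent but takes roughly 42 seconds to stream.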
Provider Link: DEPINFER (referral pipeline active)