SX Bet’s WebSocket API delivers real-time updates on orderbook changes, trade executions, market status, and live scores. All channels are powered by Centrifugo and require an API key. Centrifugo provides official client SDKs for JavaScript, Python, Go, Dart, Swift, Java, and C#.

Rather than polling REST endpoints, subscribe to the channels relevant to your workflow. The recommended pattern for most use cases is: fetch current state via REST, then subscribe to stay updated. This avoids gaps between your initial snapshot and the live feed.
1. Authenticate: pass your API key via the getToken callback. Token refresh is handled automatically.
2. Connect: create a Centrifuge client pointed at the WebSocket URL.
3. Subscribe: create a subscription for each channel you need and attach a publication handler.
To connect, you need a realtime token from the relayer. Fetch it from /user/realtime-token/api-key, passing your API key in the x-api-key header. In the client SDK, provide a getToken callback that returns this token.

The SDK calls getToken again whenever the current token expires, so passing a function instead of a static token is all that’s needed to keep the connection alive. See Common failures → Auth for how to signal a permanent auth failure vs. a transient fetch error.
Create one Centrifuge client per process and reuse it for all subscriptions. All channel subscriptions are multiplexed over the same connection.
```javascript
import { Centrifuge } from "centrifuge";

const RELAYER_URL = "https://api.sx.bet"; // Mainnet (use https://api.toronto.sx.bet for testnet)
const WS_URL = "wss://realtime.sx.bet/connection/websocket"; // Mainnet (use wss://realtime.toronto.sx.bet/connection/websocket for testnet)

async function fetchToken(apiKey) {
  const res = await fetch(`${RELAYER_URL}/user/realtime-token/api-key`, {
    headers: { "x-api-key": apiKey },
  });
  if (!res.ok) throw new Error(`Token endpoint returned ${res.status}`);
  const { token } = await res.json();
  return token;
}

const client = new Centrifuge(WS_URL, {
  getToken: () => fetchToken(YOUR_API_KEY),
});

client.connect();
```
All channel subscriptions are multiplexed over the single connection. If you need more than 512 subscriptions, create additional client instances — each connection supports up to 512 channels.
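If you expect to exceed the cap, one way to manage it is a small pool that creates a fresh client whenever the current one is full. The sketch below is illustrative (the pool helper and its names are not part of the SDK); `makeClient` is assumed to return a connected Centrifuge instance.

```javascript
// Shard subscriptions across multiple clients so that no single
// connection exceeds the 512-channel cap. `makeClient` should return a
// connected Centrifuge instance.
const MAX_CHANNELS_PER_CLIENT = 512;

function createClientPool(makeClient) {
  const slots = []; // each entry: { client, channels }
  return {
    newSubscription(channel, options) {
      // Reuse the first client that still has capacity.
      let slot = slots.find((s) => s.channels < MAX_CHANNELS_PER_CLIENT);
      if (!slot) {
        slot = { client: makeClient(), channels: 0 };
        slots.push(slot);
      }
      slot.channels += 1;
      return slot.client.newSubscription(channel, options);
    },
    clientCount() {
      return slots.length;
    },
  };
}
```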
Use client.newSubscription(channel, options) to create a subscription, attach event handlers, then call .subscribe(). If you need at-least-once delivery across reconnects, pass positioned: true and recoverable: true together for channels that support recovery; see Recovery & reliability for details.
```javascript
const sub = client.newSubscription("markets:global");

sub.on("publication", (ctx) => {
  console.log(ctx.data);
});

sub.subscribe();
```
Pass options to newSubscription to control reliability behavior:
| Flag | Type | Description |
| --- | --- | --- |
| positioned | boolean | Enables stream position tracking for the subscription. This lets the client keep its current offset and allows the server to signal when the stream position has become invalid. |
| recoverable | boolean | Enables automatic recovery for the subscription. On resubscribe, the client sends its last known stream position and the server tries to replay missed publications from history. |
Both flags are required together to get at-least-once delivery across reconnects. Think of it in two steps: positioned bookmarks your place in the stream while you’re connected; recoverable uses that bookmark to fetch the missed pages when you come back.
```javascript
// Correct: both flags together
const sub = client.newSubscription("order_book:market_abc123", {
  positioned: true,
  recoverable: true,
});
```
For channels without history where you always re-seed from REST on reconnect (e.g., best_odds:global, parlay_markets:global), both flags can be omitted.
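For those stateless channels, a simple pattern is to re-seed from REST inside the subscribed handler, since that event fires on every (re)connect. A minimal sketch (the helper name is illustrative; `client` is assumed to be a Centrifuge instance, and `refresh`/`onUpdate` are your own functions):

```javascript
// Subscribe to a channel without recovery flags and re-seed from REST
// on every `subscribed` event, which fires on each (re)connect.
function subscribeStateless(client, channel, refresh, onUpdate) {
  const sub = client.newSubscription(channel); // no positioned/recoverable
  sub.on("subscribed", () => refresh()); // fresh snapshot on every (re)connect
  sub.on("publication", (ctx) => onUpdate(ctx.data));
  sub.subscribe();
  return sub;
}
```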
After a reconnect, the subscribed event fires with context that tells you whether your local state is still consistent. The subscribed context includes:

- wasRecovering: the client attempted to recover from a previous stream position
- recovered: the server successfully replayed all missed publications
- positioned: the subscription has stream position tracking enabled
- recoverable: the subscription supports automatic recovery
```javascript
sub.on("subscribed", (ctx) => {
  if (ctx.wasRecovering && ctx.recovered) {
    // Reconnected and gap was filled via history replay.
    // No need to re-fetch from REST; all missed messages were replayed.
  } else if (ctx.wasRecovering && !ctx.recovered) {
    // Reconnected but history was pruned before recovery could complete.
    // Too much time passed; re-seed your local state from REST.
  } else {
    // Fresh connect (first connection, or after a clean disconnect).
    // Seed initial state from REST, then rely on the subscription for updates.
  }
});
```
| wasRecovering | recovered | State | What to do |
| --- | --- | --- | --- |
| true | true | Recovered | History replay filled the gap; no action needed |
| true | false | Unrecovered | History was pruned; re-seed from REST |
| false | — | Fresh connect | First connection or clean reconnect; seed from REST |
For namespaces with history enabled, Centrifugo provides at-least-once delivery within the recovery window: missed messages are replayed from server-side history on reconnect. The recovery window is 5 minutes, but may be shorter if the namespace’s message cap is reached first. After the window expires, wasRecovering: true, recovered: false fires and you must re-seed from REST.

best_odds and parlay_markets do not have history enabled, so recovery is not available on those channels.

Note on epoch: recovery can fail even after a short disconnect if the server no longer has the missed publications in history or if the saved stream position is no longer valid. This is rare, but when it happens recovered will be false and you must re-seed from REST.
At-least-once delivery means a message may occasionally be replayed more than once during recovery. Every publication includes a messageId in ctx.tags — use it to deduplicate on the client side:
```javascript
const seen = new Set();
const MAX_SEEN = 10_000;

sub.on("publication", (ctx) => {
  const id = ctx.tags?.messageId;
  if (id !== undefined) {
    if (seen.has(id)) return; // duplicate replay, skip
    seen.add(id);
    if (seen.size > MAX_SEEN) {
      // Sets iterate in insertion order, so this evicts the oldest ID.
      seen.delete(seen.values().next().value);
    }
  }
  applyUpdate(ctx.data);
});
```
For long-running processes, bound the size of your dedup set, as the example above does by evicting the oldest ID past a cap, to avoid unbounded memory growth.
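The bounding logic can also be wrapped in a small reusable helper. This is an illustrative sketch (the class name is not part of any SDK), relying on the fact that JavaScript Sets iterate in insertion order, so the oldest entry is always first:

```javascript
// A FIFO-bounded dedup set: remembers at most `max` IDs, evicting the
// oldest when the cap is exceeded.
class BoundedDedup {
  constructor(max = 10_000) {
    this.max = max;
    this.seen = new Set();
  }
  // Returns true the first time an id is observed, false for replays.
  firstTime(id) {
    if (this.seen.has(id)) return false;
    this.seen.add(id);
    if (this.seen.size > this.max) {
      this.seen.delete(this.seen.values().next().value); // evict oldest
    }
    return true;
  }
}
```

Usage then collapses the handler to a single guard: `if (!dedup.firstTime(ctx.tags?.messageId)) return;`.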
Each namespace with history enabled maintains a server-side log of recent publications. You can fetch this directly with sub.history() — useful for seeding initial state or auditing recent activity without a separate REST call.
The response contains a publications array and the current stream offset and epoch. Each entry in publications has data, offset, tags, and info fields — access the payload via pub.data, the same as ctx.data in a live publication event.
History fetches are bounded by the per-namespace caps in Namespace history capabilities and the global limit of 1,000 items per request. Calling sub.history() on a channel with no history enabled returns error code 108.
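A minimal sketch of pulling recent payloads via sub.history() (the helper name is illustrative; `sub` is assumed to be a centrifuge-js Subscription on a history-enabled channel):

```javascript
// Fetch up to `limit` recent publications from server-side history and
// return just their payloads. history() resolves to an object with
// { publications, offset, epoch }; each publication carries its payload
// in `data`, like ctx.data in a live publication event.
async function fetchRecentPayloads(sub, limit = 100) {
  const result = await sub.history({ limit });
  return result.publications.map((pub) => pub.data);
}

// Usage sketch:
// const payloads = await fetchRecentPayloads(sub, 50);
// payloads.forEach(applyUpdate);
```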
Subscribe with positioned: true, recoverable: true and seed from REST inside the subscribed handler. The handler fires on every connect and tells you whether recovery filled the gap — so you only hit REST when you actually need to:
```javascript
let ready = false;
const buffer = [];

const sub = client.newSubscription(`order_book:market_${marketHash}`, {
  positioned: true,
  recoverable: true,
});

sub.on("publication", (ctx) => {
  if (!ready) {
    buffer.push(ctx.data);
  } else {
    applyUpdate(ctx.data);
  }
});

sub.on("subscribed", async (ctx) => {
  if (ctx.wasRecovering && ctx.recovered) {
    // Centrifugo replayed all missed messages; no REST call needed.
    ready = true;
    return;
  }
  // Fresh connect or failed recovery (history gap too large / expired).
  // Reset and re-seed from REST.
  ready = false;
  buffer.length = 0;
  const res = await fetch(`https://api.sx.bet/orders?marketHashes=${marketHash}`);
  applySnapshot(await res.json());
  // Drain buffered publications on top of the snapshot.
  // applyUpdate deduplicates by entity ID so any
  // overlap between the snapshot and the buffer is handled safely.
  for (const data of buffer) applyUpdate(data);
  buffer.length = 0;
  ready = true;
});

sub.subscribe();
client.connect();
```
Use the client events to understand what happened:
- connecting: fired on the initial connect() and on retryable reconnects. The event includes a code and reason.
- connected: fired when the transport is established and the client is ready.
- disconnected: fired only when the client reaches the terminal disconnected state. After this, the SDK will not reconnect automatically.
- error: fired for internal errors that do not necessarily cause a state transition, such as transport errors during initial connect or reconnect, or connection token refresh errors.
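The four client events can be wired into a logger for telemetry with a small helper. A sketch (the helper name is illustrative; `client` is assumed to be a Centrifuge instance and `log` any logging function):

```javascript
// Attach the four client lifecycle events to a logger. The connecting
// and disconnected contexts carry the code and reason described above.
function attachClientTelemetry(client, log) {
  client.on("connecting", (ctx) => log(`connecting: code=${ctx.code} reason=${ctx.reason}`));
  client.on("connected", () => log("connected"));
  client.on("disconnected", (ctx) => log(`terminal disconnect: code=${ctx.code} reason=${ctx.reason}`));
  client.on("error", (ctx) => log(`client error: ${JSON.stringify(ctx)}`));
}
```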
Use subscription events to understand what happened:
- subscribing: fired on the initial subscribe() and on retryable resubscribe paths.
- subscribed: fired when the subscription becomes active.
- unsubscribed: fired only when the subscription reaches the terminal unsubscribed state. After this, the SDK will not resubscribe automatically.
- publication: fired whenever a new message arrives on the subscription while it is active.
- error: fired for internal subscription errors that do not necessarily cause a state transition, such as temporary subscribe errors or subscription token related errors.
In summary:

- Temporary transport loss moves the client back to connecting.
- Reconnectable subscription interruptions move the subscription back to subscribing.
- Calling client.disconnect() or hitting a terminal disconnect condition moves the client to disconnected.
- Calling sub.unsubscribe() or hitting a terminal subscription condition moves the subscription to unsubscribed.
Recovery outcome is reported on the next subscribed event via wasRecovering and recovered. See Recovery & reliability for how to interpret those fields.
```javascript
import { Centrifuge } from "centrifuge";

const client = new Centrifuge("wss://realtime.sx.bet/connection/websocket", {
  getToken: () => fetchToken(YOUR_API_KEY), // see Getting started above
});

const sub = client.newSubscription("markets:global");

sub.on("publication", (ctx) => {
  for (const market of ctx.data) {
    console.log(`${market.marketHash}: status=${market.status}`);
  }
});

sub.subscribe();
client.connect();
```
In most cases, you do not need to write custom retry logic around these errors. The SDK already handles reconnect and resubscribe automatically when the condition is retryable. The codes below are most useful for telemetry, debugging, and contacting support if an issue persists.
The getToken callback is called on initial connect and whenever the token needs to be refreshed. How you throw from it controls what the SDK does next:
```javascript
import { Centrifuge, UnauthorizedError } from "centrifuge";

const client = new Centrifuge(WS_URL, {
  getToken: async () => {
    const res = await fetch(`${RELAYER_URL}/user/realtime-token/api-key`, {
      headers: { "x-api-key": apiKey },
    });
    if (res.status === 401 || res.status === 403) {
      throw new UnauthorizedError(); // permanent: stops all reconnect attempts
    }
    if (!res.ok) throw new Error(`Status ${res.status}`); // transient: retries with backoff
    const { token } = await res.json();
    return token;
  },
});
```
If your realtime-token endpoint returns 401 or 403, throw UnauthorizedError so the connection stops retrying and moves to the terminal disconnected state. For transient failures like 429 or 5xx, throw a normal error so the SDK keeps retrying.

The server may also issue a terminal auth disconnect such as code 3500 ("invalid token"). In that case, the client stops reconnecting automatically.
If Centrifugo detects that recovery cannot continue from the current stream position, it may either resubscribe the affected subscription or reconnect the client, depending on where the problem is detected. This can surface as unsubscribe code 2500 or disconnect code 3010, both with reason "insufficient state".

This is not terminal by itself. The next subscribed event tells you whether the replay succeeded:
- wasRecovering: true, recovered: true: replay filled the gap.
- wasRecovering: true, recovered: false: replay could not fill the gap, so re-seed from REST.
If you see insufficient state frequently, it usually indicates a stream continuity problem rather than a client bug.
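To measure how often it happens, you can count the signals from the retryable events, which carry the server-issued code in their context. A sketch (the helper name is illustrative; per the section above, code 2500 arrives on the subscription's retry path and code 3010 on the client's):

```javascript
// Count "insufficient state" signals for telemetry. The retryable
// resubscribe path surfaces unsubscribe code 2500 on the `subscribing`
// event; the retryable reconnect path surfaces disconnect code 3010 on
// the `connecting` event.
function watchInsufficientState(client, sub, onSignal) {
  sub.on("subscribing", (ctx) => {
    if (ctx.code === 2500) onSignal(`resubscribe: ${ctx.code} ${ctx.reason}`);
  });
  client.on("connecting", (ctx) => {
    if (ctx.code === 3010) onSignal(`reconnect: ${ctx.code} ${ctx.reason}`);
  });
}
```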
The client reconnects automatically after most disconnects. It does not reconnect for built-in terminal disconnect codes in the 3500-3999 range, such as code 3500 ("invalid token").
The server buffers up to 1 MB per connection. If your publication handler is slow, that buffer fills faster than it drains and the server closes the connection. In Centrifugo this can surface as disconnect code 3008 ("slow"), which is reconnectable but indicates your consumer cannot keep up.

Keep handlers fast: receive the message and hand it off to a queue or async task immediately. Unexpected disconnects that are not auth-related are often caused by a saturated buffer.

For the full list of built-in unsubscribe and disconnect codes, see Centrifugo client protocol codes.
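One way to keep the handler fast is a small in-memory queue drained by an async loop, so the socket callback returns immediately and slow processing never backs up the connection. A sketch (the class name is illustrative):

```javascript
// Decouple receiving from processing: push() is O(1) and returns
// immediately, while drain() processes items one at a time in the
// background.
class UpdateQueue {
  constructor(process) {
    this.items = [];
    this.process = process; // async function applied to each item
    this.draining = false;
  }
  push(data) {
    this.items.push(data);
    if (!this.draining) this.drain();
  }
  async drain() {
    this.draining = true;
    while (this.items.length > 0) {
      await this.process(this.items.shift());
    }
    this.draining = false;
  }
}

// Usage sketch:
// const queue = new UpdateQueue(applyUpdate);
// sub.on("publication", (ctx) => queue.push(ctx.data));
```

For heavier workloads the same hand-off idea applies with an external queue instead of an in-process one.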