TypeScript SDK
Use the emit.run npm package for producers, workers, push wake handlers, and realtime streams
TypeScript SDK (emit.run)
The official TypeScript package is optimized for API-key workflows:
- Jobs API
- Worker lifecycle helpers
- Realtime sockets
- Claimed-job helper with batched `job.log.*`
Package: https://www.npmjs.com/package/emit.run
Install
```shell
npm install emit.run
```

Node.js 18+ is recommended.
Create a client
The client is space-scoped at initialization.
```typescript
import { EmitClient } from "emit.run";

const client = new EmitClient({
  spaceId: process.env.EMIT_SPACE_ID!,
  apiKey: process.env.EMIT_API_KEY!,
  baseUrl: process.env.EMIT_BASE_URL ?? "https://emit.run/api/v1",
});
```

Producer examples
Create one job
```typescript
const job = await client.jobs.create({
  name: "transcode",
  payload: {
    inputUrl: "s3://raw/video.mp4",
    outputKey: "outputs/video-720p.mp4",
  },
  maxRetries: 5,
  timeoutSeconds: 900,
});

console.log(job.id, job.spaceId);
```

`jobs.create()` returns minimal references (`{ id, spaceId }`). Use `jobs.get(jobId)` when you need the full record.
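Since the create response is minimal, a common pattern is create-then-fetch. Here is a sketch of that pattern, written against a small `jobs`-shaped interface so it is self-contained; `JobsApi` and `createAndFetch` are illustrative names, and in real code you would pass `client.jobs`:

```typescript
// Sketch of the create-then-fetch pattern. JobsApi mirrors only the two
// calls used here; the real client.jobs object is assumed to satisfy it.
type JobRef = { id: string; spaceId: string };

interface JobsApi {
  create(body: { name: string; payload: unknown }): Promise<JobRef>;
  get(jobId: string): Promise<JobRef & Record<string, unknown>>;
}

// Create a job, then immediately load its full record by id.
async function createAndFetch(jobs: JobsApi, name: string, payload: unknown) {
  const ref = await jobs.create({ name, payload }); // minimal { id, spaceId }
  return jobs.get(ref.id);                          // full job record
}
```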
Create many jobs in one request
```typescript
const jobs = await client.jobs.create([
  {
    name: "thumbnail",
    payload: { inputUrl: "s3://raw/video.mp4", width: 320 },
  },
  {
    name: "thumbnail",
    payload: { inputUrl: "s3://raw/video.mp4", width: 640 },
  },
]);
```

Create a delayed job
```typescript
await client.jobs.create({
  name: "send-report",
  payload: { reportId: "rpt_01" },
  scheduledFor: "2026-03-20T09:00:00.000Z",
});
```

Worker examples
Claimed job helper
When a job is claimed (polling or push), you receive a rich job instance with:
- Lifecycle helpers (`ack`, `progress`, `checkpoint`, `event`, `complete`, `fail`, `kill`, `keepalive`)
- Read helpers (`getDetails`, `getProgress`, `getLogs`)
- Logger helpers (`job.log.trace/debug/info/warn/error/fatal`)
`job.log.*` batches logs and sends NDJSON (gzip when available) to `POST /jobs/:id/logs`.
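To make the batching model concrete, here is a minimal standalone sketch of the same idea (size- and time-based flushing into NDJSON). This illustrates the behavior; it is not the SDK's actual implementation:

```typescript
// Illustrative batcher: queues entries, flushes them as NDJSON when the queue
// reaches maxBatchSize or when flushIntervalMs elapses. Not the SDK's code.
type LogEntry = { level: string; message: string };

class LogBatcher {
  private queue: LogEntry[] = [];
  private timer?: ReturnType<typeof setTimeout>;

  constructor(
    private send: (ndjson: string) => void,
    private maxBatchSize = 100,
    private flushIntervalMs = 1_000,
  ) {}

  log(entry: LogEntry): void {
    this.queue.push(entry);
    if (this.queue.length >= this.maxBatchSize) {
      this.flush(); // size-based flush
    } else if (!this.timer) {
      this.timer = setTimeout(() => this.flush(), this.flushIntervalMs); // time-based flush
    }
  }

  flush(): void {
    if (this.timer) {
      clearTimeout(this.timer);
      this.timer = undefined;
    }
    if (this.queue.length === 0) return;
    // One JSON object per line: the NDJSON wire format.
    this.send(this.queue.map((e) => JSON.stringify(e)).join("\n"));
    this.queue = [];
  }
}
```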
Polling worker loop
```typescript
type TranscodePayload = {
  inputUrl: string;
  outputKey: string;
};

type TranscodeResult = {
  outputUrl: string;
  durationMs: number;
};

await client.workers.runPolling<TranscodePayload, TranscodeResult>({
  types: ["transcode"],
  count: 5,
  autoAck: true,
  autoFailOnError: true,
  autoComplete: false,
  keepaliveIntervalMs: 60_000,
  logger: {
    source: "worker-gpu-a",
    maxBatchSize: 100,
    flushIntervalMs: 1_000,
    gzip: true,
  },
  onJob: async (job) => {
    job.log.info("Starting transcode", {
      metadata: { jobId: job.id, attempt: job.attemptNumber },
    });

    await job.progress({
      percent: 10,
      message: "Downloading source",
      subProgress: {
        download: { percent: 10, message: "Connecting" },
      },
    });

    // ...your business logic...

    await job.checkpoint({ offsetSeconds: 120 });
    await job.event({ phase: "ffmpeg", codec: "h264" });

    await job.complete({
      outputUrl: `s3://final/${job.payload.outputKey}`,
      durationMs: 42_000,
    });
  },
});
```

Push wake handler (job.pending)
```typescript
const handlePush = client.workers.createPushHandler({
  autoAck: true,
  autoFailOnError: true,
  autoComplete: true,
  onJob: async (job) => {
    job.log.info("Woken by push", { metadata: { jobId: job.id } });
    return { ok: true };
  },
});

export async function onRequestPost(context: { request: Request }) {
  const payload = await context.request.json();
  const result = await handlePush(payload);
  return Response.json(result);
}
```

Push is a wake signal. Claim + ack remains canonical.
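The snippet above targets Fetch-style runtimes (Cloudflare Workers and similar). On plain Node, the same handler function can be wired to `node:http`. This wrapper is a sketch: `makePushServer` is an illustrative name, and its `handlePush` parameter stands in for the handler created above:

```typescript
import { createServer, type Server } from "node:http";

// Wrap a push handler (e.g. the handlePush created above) in a bare
// node:http server that accepts POSTed JSON wake payloads.
function makePushServer(
  handlePush: (payload: unknown) => Promise<unknown>,
): Server {
  return createServer((req, res) => {
    if (req.method !== "POST") {
      res.writeHead(405).end();
      return;
    }
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", async () => {
      try {
        const result = await handlePush(JSON.parse(body));
        res.writeHead(200, { "content-type": "application/json" });
        res.end(JSON.stringify(result));
      } catch {
        res.writeHead(500).end();
      }
    });
  });
}

// Usage (illustrative): makePushServer(handlePush).listen(8080);
```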
Realtime examples
Job stream
```typescript
const connection = client.realtime.connectJob("job_123", {
  token: process.env.EMIT_PUBLIC_PROGRESS_TOKEN,
  autoReconnect: true,
  onEvent: (event) => {
    if (event.type === "progress") {
      console.log(event.data);
    }
    if (event.type === "completed") {
      connection.close();
    }
  },
});
```

Space stream
```typescript
client.realtime.connectSpace(client.spaceId, {
  token: process.env.EMIT_DASHBOARD_KEY,
  autoReconnect: true,
  onEvent: (event) => {
    console.log(event.type, event.jobId, event.status, event.seq);
  },
});
```

Defaults and optional options
EmitClient options
| Option | Required | Default | Notes |
|---|---|---|---|
| `spaceId` | Yes | - | Space bound to this client instance |
| `apiKey` | Yes | - | String or async resolver |
| `baseUrl` | No | `https://emit.run/api/v1` | Use for staging/self-hosted |
| `apiKeyHeader` | No | `x-api-key` | `x-api-key` or `authorization` |
| `headers` | No | `{}` | Merged into every request |
| `credentials` | No | `same-origin` | Passed to fetch |
| `fetch` | No | `globalThis.fetch` | Custom runtime fetch |
| `webSocketFactory` | No | `globalThis.WebSocket` | Needed in non-browser runtimes |
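The `fetch` option is a convenient seam for cross-cutting behavior such as request logging or tracing. Here is a sketch of a wrapping fetch; `makeLoggingFetch` is an illustrative name, and you would pass its result as the `fetch` option when constructing `EmitClient`:

```typescript
// Wrap any fetch implementation so each outgoing request is logged before
// being delegated. Usage (illustrative):
//   const client = new EmitClient({ spaceId, apiKey, fetch: makeLoggingFetch(globalThis.fetch) });
function makeLoggingFetch(baseFetch: typeof fetch): typeof fetch {
  return async (input, init) => {
    const url = input instanceof Request ? input.url : String(input);
    console.log(`emit.run → ${init?.method ?? "GET"} ${url}`);
    return baseFetch(input, init);
  };
}
```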
workers.runPolling options
| Option | Required | Default | Notes |
|---|---|---|---|
| `onJob` | Yes | - | Claimed job handler |
| `count` | No | `1` | Poll batch size |
| `types` | No | none | Name/type poll filter |
| `pollIntervalMs` | No | `1000` | Base idle delay |
| `maxBackoffMs` | No | `15000` | Backoff cap |
| `autoAck` | No | `true` | Acks before handler |
| `autoFailOnError` | No | `true` | Fails job when handler throws |
| `autoComplete` | No | `false` | Completes with handler return value |
| `keepaliveIntervalMs` | No | disabled | Sends keepalive on interval |
| `stopOnError` | No | `false` | Stops loop on errors |
| `onError` | No | none | Error callback |
| `logger` | No | defaults below | Logger behavior |
| `signal` | No | none | Abort signal |
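The `signal` option is the intended way to stop a long-running polling loop. One way to wire it to process signals is sketched below; the helper name is illustrative, and the `run` callback would be e.g. `(signal) => client.workers.runPolling({ onJob, signal })`:

```typescript
// Run a worker loop until SIGINT/SIGTERM, by bridging process signals to an
// AbortSignal the loop observes. The loop stops after aborting; in-flight
// work is handled by whatever the run callback does with the signal.
async function runUntilInterrupt(
  run: (signal: AbortSignal) => Promise<void>,
): Promise<void> {
  const controller = new AbortController();
  const stop = () => controller.abort();
  process.once("SIGINT", stop);
  process.once("SIGTERM", stop);
  try {
    await run(controller.signal);
  } finally {
    // Clean up so repeated calls don't accumulate listeners.
    process.removeListener("SIGINT", stop);
    process.removeListener("SIGTERM", stop);
  }
}
```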
Logger options (logger)
| Option | Required | Default | Notes |
|---|---|---|---|
| `flushIntervalMs` | No | `1000` | Time-based flush interval |
| `maxBatchSize` | No | `100` | Flush when queue reaches this size |
| `maxBufferedEntries` | No | `10000` | Queue cap (drops oldest on overflow) |
| `gzip` | No | `true` | gzip when runtime supports it |
| `source` | No | none | Default source for `job.log.*` |
Push worker options
`workers.handlePush` and `workers.createPushHandler` use the same execution options as polling (`onJob`, `autoAck`, `autoFailOnError`, `autoComplete`, `keepaliveIntervalMs`, `stopOnError`, `onError`, `logger`).
Realtime options
| Option | Required | Default | Notes |
|---|---|---|---|
| `token` | No | client `apiKey` | Optional override; omit on `connectJob` for an anonymous progress-only stream |
| `autoReconnect` | No | `false` | Reconnect on close/error |
| `reconnectDelayMs` | No | `1000` | Initial reconnect delay |
| `maxReconnectDelayMs` | No | `15000` | Reconnect cap |
| `signal` | No | none | Abort closes connection |
| `onOpen` | No | none | Open callback |
| `onEvent` | No | none | Event callback |
| `onError` | No | none | Error callback |
| `onClose` | No | none | Close callback |
For `connectSpace`, `since` and `streamInstanceId` are optional reconnect cursor hints.
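When a caller only wants to await a terminal event, the callback-style stream can be wrapped in a promise. A sketch generic over any connect function with the `onEvent`/`close` shape used by `connectJob`; the helper and the minimal types here are illustrative, not SDK exports:

```typescript
// Resolve once a "completed" event arrives, then close the stream.
type StreamEvent = { type: string; data?: unknown };
type StreamConnection = { close(): void };

function waitForCompletion(
  connect: (opts: { onEvent: (e: StreamEvent) => void }) => StreamConnection,
): Promise<StreamEvent> {
  return new Promise((resolve) => {
    const connection = connect({
      onEvent: (event) => {
        if (event.type === "completed") {
          connection.close();
          resolve(event);
        }
      },
    });
  });
}

// Usage (illustrative):
//   await waitForCompletion((opts) => client.realtime.connectJob("job_123", opts));
```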
Endpoint mapping quick reference
| SDK API | HTTP endpoint |
|---|---|
| `jobs.create(body)` | `POST /api/v1/spaces/:spaceId/jobs` |
| `jobs.poll(body)` | `POST /api/v1/spaces/:spaceId/jobs/poll` |
| `jobs.claim(jobId)` | `POST /api/v1/jobs/:jobId/claim` |
| `jobs.ack(jobId)` | `POST /api/v1/jobs/:jobId/ack` |
| `jobs.progress(jobId, body)` | `POST /api/v1/jobs/:jobId/progress` |
| `jobs.complete(jobId, body)` | `POST /api/v1/jobs/:jobId/complete` |
| `jobs.fail(jobId, body)` | `POST /api/v1/jobs/:jobId/fail` |
| `jobs.kill(jobId, body)` | `POST /api/v1/jobs/:jobId/kill` |
| `jobs.keepalive(jobId)` | `POST /api/v1/jobs/:jobId/keepalive` |
| `realtime.connectJob(jobId)` | `GET /api/v1/realtime/:jobId` |
| `realtime.connectSpace(spaceId)` | `GET /api/v1/realtime/spaces/:spaceId` |
For raw HTTP endpoint details and auth scopes, see the API Reference.