TypeScript SDK (emit.run)

Use the emit.run npm package for producers, workers, push wake handlers, and realtime streams.

The official TypeScript package is optimized for API-key workflows:

  • Jobs API
  • Worker lifecycle helpers
  • Realtime sockets
  • Claimed-job helper with batched job.log.*

Package: https://www.npmjs.com/package/emit.run


Install

npm install emit.run

Node.js 18+ is recommended.


Create a client

The client is space-scoped at initialization.

import { EmitClient } from "emit.run";

const client = new EmitClient({
  spaceId: process.env.EMIT_SPACE_ID!,
  apiKey: process.env.EMIT_API_KEY!,
  baseUrl: process.env.EMIT_BASE_URL ?? "https://emit.run/api/v1",
});

Producer examples

Create one job

const job = await client.jobs.create({
  name: "transcode",
  payload: {
    inputUrl: "s3://raw/video.mp4",
    outputKey: "outputs/video-720p.mp4",
  },
  maxRetries: 5,
  timeoutSeconds: 900,
});

console.log(job.id, job.spaceId);

jobs.create() returns minimal references ({ id, spaceId }). Use jobs.get(jobId) when you need the full record.
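The create-then-get pattern can be sketched against a minimal interface. The JobRef, JobRecord, and JobsApi types below are illustrative stand-ins, not the SDK's own exports:

```typescript
// Illustrative types only; the real SDK ships its own.
type JobRef = { id: string; spaceId: string };
type JobRecord = JobRef & { name: string; status: string };

interface JobsApi {
  create(body: { name: string; payload: unknown }): Promise<JobRef>;
  get(jobId: string): Promise<JobRecord>;
}

// create() hands back only { id, spaceId }; get() fetches the full record.
async function createAndInspect(jobs: JobsApi): Promise<JobRecord> {
  const ref = await jobs.create({ name: "transcode", payload: {} });
  return jobs.get(ref.id);
}
```

In practice you would pass client.jobs in place of the mock; the point is that the reference from create() is only good for a follow-up lookup.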

Create many jobs in one request

const jobs = await client.jobs.create([
  {
    name: "thumbnail",
    payload: { inputUrl: "s3://raw/video.mp4", width: 320 },
  },
  {
    name: "thumbnail",
    payload: { inputUrl: "s3://raw/video.mp4", width: 640 },
  },
]);

Create a delayed job

await client.jobs.create({
  name: "send-report",
  payload: { reportId: "rpt_01" },
  scheduledFor: "2026-03-20T09:00:00.000Z",
});
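scheduledFor takes an absolute ISO-8601 timestamp, so a relative delay has to be computed on the client. A minimal helper (not part of the SDK):

```typescript
// Compute an ISO-8601 timestamp a given number of minutes in the future.
function scheduleIn(minutes: number, now: number = Date.now()): string {
  return new Date(now + minutes * 60_000).toISOString();
}

// e.g. pass scheduleIn(30) as scheduledFor to delay a job by 30 minutes.
```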

Worker examples

Claimed job helper

When a job is claimed (polling or push), you receive a rich job instance with:

  • Lifecycle helpers (ack, progress, checkpoint, event, complete, fail, kill, keepalive)
  • Read helpers (getDetails, getProgress, getLogs)
  • Logger helpers (job.log.trace/debug/info/warn/error/fatal)

job.log.* batches logs and sends NDJSON (gzip when available) to POST /jobs/:id/logs.
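The wire format can be sketched as follows: each buffered entry becomes one JSON object per line (NDJSON), and the body is gzip-compressed when the runtime supports it. This encoder is illustrative, not the SDK's internal code:

```typescript
import { gzipSync } from "node:zlib";

// Illustrative entry shape; the SDK's actual log schema may differ.
type LogEntry = {
  level: "trace" | "debug" | "info" | "warn" | "error" | "fatal";
  message: string;
  metadata?: Record<string, unknown>;
};

// One JSON object per line (NDJSON), optionally gzip-compressed.
function encodeLogBatch(entries: LogEntry[], gzip: boolean): Uint8Array {
  const ndjson = entries.map((e) => JSON.stringify(e)).join("\n");
  const bytes = new TextEncoder().encode(ndjson);
  return gzip ? gzipSync(bytes) : bytes;
}
```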

Polling worker loop

type TranscodePayload = {
  inputUrl: string;
  outputKey: string;
};

type TranscodeResult = {
  outputUrl: string;
  durationMs: number;
};

await client.workers.runPolling<TranscodePayload, TranscodeResult>({
  types: ["transcode"],
  count: 5,
  autoAck: true,
  autoFailOnError: true,
  autoComplete: false,
  keepaliveIntervalMs: 60_000,
  logger: {
    source: "worker-gpu-a",
    maxBatchSize: 100,
    flushIntervalMs: 1_000,
    gzip: true,
  },
  onJob: async (job) => {
    job.log.info("Starting transcode", {
      metadata: { jobId: job.id, attempt: job.attemptNumber },
    });

    await job.progress({
      percent: 10,
      message: "Downloading source",
      subProgress: {
        download: { percent: 10, message: "Connecting" },
      },
    });

    // ...your business logic...

    await job.checkpoint({ offsetSeconds: 120 });
    await job.event({ phase: "ffmpeg", codec: "h264" });

    await job.complete({
      outputUrl: `s3://final/${job.payload.outputKey}`,
      durationMs: 42_000,
    });
  },
});

Push wake handler (job.pending)

const handlePush = client.workers.createPushHandler({
  autoAck: true,
  autoFailOnError: true,
  autoComplete: true,
  onJob: async (job) => {
    job.log.info("Woken by push", { metadata: { jobId: job.id } });
    return { ok: true };
  },
});

export async function onRequestPost(context: { request: Request }) {
  const payload = await context.request.json();
  const result = await handlePush(payload);
  return Response.json(result);
}

Push is only a wake signal; claiming and acking the job through the Jobs API remains the canonical path.


Realtime examples

Job stream

const connection = client.realtime.connectJob("job_123", {
  token: process.env.EMIT_PUBLIC_PROGRESS_TOKEN,
  autoReconnect: true,
  onEvent: (event) => {
    if (event.type === "progress") {
      console.log(event.data);
    }
    if (event.type === "completed") {
      connection.close();
    }
  },
});

Space stream

client.realtime.connectSpace(client.spaceId, {
  token: process.env.EMIT_DASHBOARD_KEY,
  autoReconnect: true,
  onEvent: (event) => {
    console.log(event.type, event.jobId, event.status, event.seq);
  },
});

Defaults and optional options

EmitClient options

Option            Required  Default                  Notes
spaceId           Yes       -                        Space bound to this client instance
apiKey            Yes       -                        String or async resolver
baseUrl           No        https://emit.run/api/v1  Use for staging/self-hosted
apiKeyHeader      No        x-api-key                x-api-key or authorization
headers           No        {}                       Merged into every request
credentials       No        same-origin              Passed to fetch
fetch             No        globalThis.fetch         Custom runtime fetch
webSocketFactory  No        globalThis.WebSocket     Needed in non-browser runtimes
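Because apiKey accepts either a string or an async resolver, the client must normalize it before each request. A sketch of that normalization (the ApiKeyOption name is an assumption, not an SDK export):

```typescript
// apiKey may be a static string or an async resolver (per the table above).
type ApiKeyOption = string | (() => Promise<string>);

// Resolve a static key directly, or invoke the async resolver.
async function resolveApiKey(option: ApiKeyOption): Promise<string> {
  return typeof option === "function" ? option() : option;
}
```

The resolver form is useful when keys come from a secret manager and may rotate between requests.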

workers.runPolling options

Option               Required  Default         Notes
onJob                Yes       -               Claimed-job handler
count                No        1               Poll batch size
types                No        none            Name/type poll filter
pollIntervalMs       No        1000            Base idle delay
maxBackoffMs         No        15000           Backoff cap
autoAck              No        true            Acks before the handler runs
autoFailOnError      No        true            Fails the job when the handler throws
autoComplete         No        false           Completes with the handler's return value
keepaliveIntervalMs  No        disabled        Sends keepalive on an interval
stopOnError          No        false           Stops the loop on errors
onError              No        none            Error callback
logger               No        defaults below  Logger behavior
signal               No        none            Abort signal
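pollIntervalMs and maxBackoffMs interact as a capped backoff on idle polls. One plausible shape is sketched below; only the base delay and the cap come from the table above, while the doubling curve itself is an assumption about the SDK's internals:

```typescript
// Delay after `emptyPolls` consecutive polls that returned no jobs:
// starts at pollIntervalMs, grows exponentially, capped at maxBackoffMs.
function idleDelay(
  emptyPolls: number,
  pollIntervalMs = 1_000,
  maxBackoffMs = 15_000,
): number {
  return Math.min(pollIntervalMs * 2 ** emptyPolls, maxBackoffMs);
}
```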

Logger options (logger)

Option              Required  Default  Notes
flushIntervalMs     No        1000     Time-based flush interval
maxBatchSize        No        100      Flush when the queue reaches this size
maxBufferedEntries  No        10000    Queue cap (drops oldest on overflow)
gzip                No        true     gzip when the runtime supports it
source              No        none     Default source for job.log.*
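maxBufferedEntries bounds the in-memory queue by evicting the oldest entries on overflow. The behavior can be sketched as follows (illustrative, not the SDK's internal buffer):

```typescript
// Append an entry, evicting from the front once the cap is exceeded.
function pushBounded<T>(queue: T[], entry: T, cap: number): T[] {
  queue.push(entry);
  while (queue.length > cap) {
    queue.shift(); // drop oldest
  }
  return queue;
}
```

Dropping oldest-first keeps the most recent logs, which are usually the ones that explain a failure.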

Push worker options

workers.handlePush and workers.createPushHandler use the same execution options as polling (onJob, autoAck, autoFailOnError, autoComplete, keepaliveIntervalMs, stopOnError, onError, logger).

Realtime options

Option               Required  Default        Notes
token                No        client apiKey  Optional override; omit on connectJob for an anonymous progress-only stream
autoReconnect        No        false          Reconnect on close/error
reconnectDelayMs     No        1000           Initial reconnect delay
maxReconnectDelayMs  No        15000          Reconnect cap
signal               No        none           Abort closes the connection
onOpen               No        none           Open callback
onEvent              No        none           Event callback
onError              No        none           Error callback
onClose              No        none           Close callback

For connectSpace, since and streamInstanceId are optional reconnect cursor hints.


Endpoint mapping quick reference

SDK API                         HTTP endpoint
jobs.create(body)               POST /api/v1/spaces/:spaceId/jobs
jobs.poll(body)                 POST /api/v1/spaces/:spaceId/jobs/poll
jobs.claim(jobId)               POST /api/v1/jobs/:jobId/claim
jobs.ack(jobId)                 POST /api/v1/jobs/:jobId/ack
jobs.progress(jobId, body)      POST /api/v1/jobs/:jobId/progress
jobs.complete(jobId, body)      POST /api/v1/jobs/:jobId/complete
jobs.fail(jobId, body)          POST /api/v1/jobs/:jobId/fail
jobs.kill(jobId, body)          POST /api/v1/jobs/:jobId/kill
jobs.keepalive(jobId)           POST /api/v1/jobs/:jobId/keepalive
realtime.connectJob(jobId)      GET /api/v1/realtime/:jobId
realtime.connectSpace(spaceId)  GET /api/v1/realtime/spaces/:spaceId

For raw HTTP endpoint details and auth scopes, see the API Reference.
