Observability - TypeScript SDK
The observability section of the TypeScript developer guide covers the many ways to view the current state of your Temporal Application—that is, ways to view which Workflow Executions are tracked by the Temporal Platform and the state of any specified Workflow Execution, either currently or at points of an execution.
This section covers features related to viewing the state of the application, including:
- Emitting metrics
- Setting up tracing
- Logging from a Workflow
- Visibility APIs
Emit metrics
Each Temporal SDK is capable of emitting an optional set of metrics from either the Client or the Worker process. For a complete list of metrics capable of being emitted, see the SDK metrics reference.
Metrics can be scraped and stored in time series databases such as Prometheus.
Temporal also provides a dashboard you can integrate with graphing services like Grafana. For more information, see:
- Temporal's implementation of the Grafana dashboard
- How to export metrics in Grafana
Workers can emit metrics and traces. There are a few telemetry options that can be provided to Runtime.install. The common options are:
- metrics: { otel: { url } }: The URL of a gRPC OpenTelemetry collector.
- metrics: { prometheus: { bindAddress } }: Address on the Worker host that will have metrics for Prometheus to scrape.
To set up tracing of Workflows and Activities, use our opentelemetry-interceptors package. (For details, see the next section.)
For example, to expose metrics for Prometheus on port 9464:

import { Runtime } from '@temporalio/worker';

Runtime.install({
  telemetryOptions: {
    metrics: {
      prometheus: { bindAddress: '0.0.0.0:9464' },
    },
    logging: { forward: { level: 'DEBUG' } },
  },
});
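The same installation accepts the OpenTelemetry collector option instead of Prometheus. Here is a minimal sketch, assuming a collector reachable at a placeholder URL; since Runtime.install may only be called once per process, choose one configuration:

import { Runtime } from '@temporalio/worker';

// Alternative: export metrics to a gRPC OpenTelemetry collector instead of Prometheus.
// The URL below is a placeholder; point it at your own collector.
Runtime.install({
  telemetryOptions: {
    metrics: { otel: { url: 'grpc://localhost:4317' } },
  },
});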
Set up tracing
Tracing allows you to view the call graph of a Workflow along with its Activities and any Child Workflows.
Temporal Web's tracing capabilities mainly track Activity Execution within a Temporal context. If you need custom tracing specific for your use case, you should make use of context propagation to add tracing logic accordingly.
The interceptors-opentelemetry sample shows how to use the SDK's built-in OpenTelemetry tracing to trace everything from starting a Workflow to Workflow Execution to running an Activity from that Workflow.
The built-in tracing uses protobuf message headers (like this one when starting a Workflow) to propagate the tracing information from the client to the Workflow and from the Workflow to its successors (when Continued As New), children, and Activities.
All of these executions are linked with a single trace identifier and have the proper parent -> child span relation.
Tracing is compatible between different Temporal SDKs as long as compatible context propagators are used.
Context propagation
The TypeScript SDK uses the global OpenTelemetry propagator.
To extend the default (Trace Context and Baggage propagators) to also include the Jaeger propagator, follow these steps:
1. Install the Jaeger propagator: npm i @opentelemetry/propagator-jaeger
2. At the top level of your Workflow code, add the following lines:
import { propagation } from '@opentelemetry/api';
import {
CompositePropagator,
W3CBaggagePropagator,
W3CTraceContextPropagator,
} from '@opentelemetry/core';
import { JaegerPropagator } from '@opentelemetry/propagator-jaeger';
propagation.setGlobalPropagator(
new CompositePropagator({
propagators: [
new W3CTraceContextPropagator(),
new W3CBaggagePropagator(),
new JaegerPropagator(),
],
}),
);
Similarly, you can customize the OpenTelemetry NodeSDK propagators by following the instructions in the Initialize the SDK section of the README.md file.
Log from a Workflow
Logging enables you to record critical information during code execution. Loggers create an audit trail and capture information about your Workflow's operation. An appropriate logging level depends on your specific needs. During development or troubleshooting, you might use debug or even trace. In production, you might use info or warn to avoid excessive log volume.
The logger supports the following logging levels:
Level | Use |
---|---|
TRACE | The most detailed level of logging, used for very fine-grained information. |
DEBUG | Detailed information, typically useful for debugging purposes. |
INFO | General information about the application's operation. |
WARN | Indicates potentially harmful situations or minor issues that don't prevent the application from working. |
ERROR | Indicates error conditions that might still allow the application to continue running. |
The Temporal SDK core normally uses WARN as its default logging level.
Logging from Activities
Activities run in the standard Node.js environment and may therefore use any Node.js logger directly.
However, the Temporal SDK provides a convenient Activity Context logger, which funnels log messages to the Runtime's logger. Attributes from the current Activity context are automatically included as metadata on every log entry emitted using the Activity Context logger, and some key events of the Activity's lifecycle are automatically logged (at DEBUG level for most messages; WARN for failures).
Using the Activity Context logger
import { log } from '@temporalio/activity';
export async function greet(name: string): Promise<string> {
log.info('Log from activity', { name });
return `Hello, ${name}!`;
}
Logging from Workflows
Workflows may not use regular Node.js loggers because:
- Workflows run in a sandboxed environment and cannot do any I/O.
- Workflow code might get replayed at any time, which would result in duplicated log messages.
However, the Temporal SDK provides a Workflow Context logger, which funnels log messages to the Runtime's logger. Attributes from the current Workflow context are automatically included as metadata on every log entry emitted using the Workflow Context logger, and some key events of the Workflow's lifecycle are automatically logged (at DEBUG level for most messages; WARN for failures).
Using the Workflow Context logger
import { log } from '@temporalio/workflow';
export async function myWorkflow(name: string): Promise<string> {
log.info('Log from workflow', { name });
return `Hello, ${name}!`;
}
The Workflow Context Logger tries to avoid re-emitting log messages during Workflow Replays.
Limitations of Workflow logs
Internally, Workflow logging uses Sinks, and is consequently subject to the same limitations as Sinks. Notably, logged objects must be serializable using the V8 serialization.
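As an illustration of that constraint, here is a minimal sketch with a hypothetical Workflow: plain data values can be passed as log metadata, while values such as functions cannot cross the sandbox boundary.

import { log } from '@temporalio/workflow';

export async function serializationExample(): Promise<void> {
  // OK: plain objects, arrays, strings, numbers, and dates survive V8 serialization.
  log.info('order processed', { orderId: '1234', total: 42 });

  // Not OK: functions (and other non-serializable values) cannot be copied out of
  // the Workflow sandbox, so avoid passing them as log metadata.
  // log.info('bad metadata', { callback: () => undefined });
}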
What is the Runtime's Logger
A Temporal Worker may emit logs in various ways, including:
- Messages emitted using the Workflow Context Logger;
- Messages emitted using the Activity Context Logger;
- Messages emitted by the TypeScript SDK Worker itself;
- Messages emitted by the underlying Temporal Core SDK (native code).
All of these messages are internally routed to a single logger object, called the Runtime's Logger.
By default, the Runtime's Logger simply writes messages to the console (i.e. the process's STDOUT).
How to customize the Runtime's Logger
A custom Runtime Logger may be registered when the SDK Runtime is instantiated. This is done only once per process.
To register a custom Runtime Logger, you must explicitly instantiate the Runtime, using the Runtime.install() function.
For example:
import {
DefaultLogger,
makeTelemetryFilterString,
Runtime,
} from '@temporalio/worker';
// This is your custom Logger.
const logger = new DefaultLogger('WARN', ({ level, message }) => {
console.log(`Custom logger: ${level} — ${message}`);
});
Runtime.install({
logger,
// The following block is optional, but generally desired.
// It allows capturing log messages emitted by the underlying Temporal Core SDK (native code).
  // The Telemetry Filter String determines the desired verbosity of messages emitted by the
  // Temporal Core SDK itself ("core") and by other native libraries ("other").
telemetryOptions: {
logging: {
filter: makeTelemetryFilterString({ core: 'INFO', other: 'INFO' }),
forward: {},
},
},
});
A common use case for this is to write log messages to a file to be picked up by a collector service, such as the Datadog Agent. For example:
import { makeTelemetryFilterString, Runtime } from '@temporalio/worker';
import winston from 'winston';
const logger = winston.createLogger({
level: 'info',
format: winston.format.json(),
  transports: [new winston.transports.File({ filename: '/path/to/worker.log' })],
});
Runtime.install({
logger,
// The following block is optional, but generally desired.
// It allows capturing log messages emitted by the underlying Temporal Core SDK (native code).
  // The Telemetry Filter String determines the desired verbosity of messages emitted by the
  // Temporal Core SDK itself ("core") and by other native libraries ("other").
telemetryOptions: {
logging: {
filter: makeTelemetryFilterString({ core: 'INFO', other: 'INFO' }),
forward: {},
},
},
});
Implementing custom Logging-like features based on Workflow Sinks
Sinks enable one-way export of logs, metrics, and traces from the Workflow isolate to the Node.js environment.
Sinks are written as objects with methods. Similar to Activities, they are declared in the Worker and then proxied in Workflow code, and it helps to share types between both.
Comparing Sinks and Activities
Sinks are similar to Activities in that they are both registered on the Worker and proxied into the Workflow. However, they differ from Activities in important ways:
- A sink function doesn't return any value back to the Workflow and cannot be awaited.
- A sink call isn't recorded in the Event History of a Workflow Execution (no timeouts or retries).
- A sink function always runs on the same Worker that runs the Workflow Execution it's called from.
Declare the sink interface
Explicitly declaring a sink's interface is optional but is useful for ensuring type safety in subsequent steps:
packages/test/src/workflows/log-sink-tester.ts
import type { Sinks } from '@temporalio/workflow';
export interface CustomLoggerSinks extends Sinks {
customLogger: {
info(message: string): void;
};
}
Implement sinks
Implementing sinks is a two-step process.
Implement and inject the Sink function into a Worker
import { InjectedSinks, Worker } from '@temporalio/worker';
import { MySinks } from './workflows';
async function main() {
const sinks: InjectedSinks<MySinks> = {
alerter: {
alert: {
fn(workflowInfo, message) {
console.log('sending SMS alert!', {
workflowId: workflowInfo.workflowId,
workflowRunId: workflowInfo.runId,
message,
});
},
callDuringReplay: false, // The default
},
},
};
const worker = await Worker.create({
workflowsPath: require.resolve('./workflows'),
taskQueue: 'sinks',
sinks,
});
await worker.run();
console.log('Worker gracefully shutdown');
}
main().catch((err) => {
console.error(err);
process.exit(1);
});
- Sink function implementations are passed as an object into WorkerOptions.
- You can specify whether you want the injected function to be called during Workflow replay by setting the callDuringReplay option.
Proxy and call a sink function from a Workflow
packages/test/src/workflows/log-sample.ts
import * as wf from '@temporalio/workflow';
export async function logSampleWorkflow(): Promise<void> {
wf.log.info('Workflow execution started');
}
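The built-in Workflow logger shown above is itself implemented on top of a Sink. To call a custom sink such as the alerter injected in the Worker example, proxy it with proxySinks. A minimal sketch, assuming it lives in the workflows file that the Worker example imports MySinks from (the alertingWorkflow name is hypothetical):

import { proxySinks, Sinks } from '@temporalio/workflow';

export interface MySinks extends Sinks {
  alerter: {
    alert(message: string): void;
  };
}

// Proxy the sink once at the module level; calls are forwarded to the Worker's implementation.
const { alerter } = proxySinks<MySinks>();

export async function alertingWorkflow(message: string): Promise<void> {
  // Sink calls return no value, cannot be awaited, and are not recorded in Event History.
  alerter.alert(message);
}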
Some important features of the InjectedSinkFunction interface:
- Injected WorkflowInfo argument: The first argument of a Sink function implementation is a workflowInfo object that contains useful metadata.
- Limited argument types: The remaining Sink function arguments are copied between the sandbox and the Node.js environment using the structured clone algorithm.
- No return value: To prevent breaking determinism, Sink functions cannot return values to the Workflow.
Advanced: Performance considerations and non-blocking Sinks
The injected sink function contributes to the overall Workflow Task processing duration.
- If you have a long-running sink function, such as one that tries to communicate with external services, you might start seeing Workflow Task timeouts.
- The effect is multiplied when using callDuringReplay: true and replaying long Workflow histories, because the Workflow Task timer starts when the first history page is delivered to the Worker.
How to provide a custom logger
Use a custom logger to control how and where the SDK's log messages are recorded.
Logging in Workers and Clients
The Worker comes with a default logger, which by default logs any messages with level INFO and higher to STDERR using console.error.
The log levels, listed in increasing order of severity, are TRACE, DEBUG, INFO, WARN, and ERROR.
Customizing the default logger
Temporal uses a DefaultLogger that implements the basic interface:
import { DefaultLogger, Runtime } from '@temporalio/worker';
const logger = new DefaultLogger('WARN', ({ level, message }) => {
console.log(`Custom logger: ${level} — ${message}`);
});
Runtime.install({ logger });
The previous code example sets the default logger to log only messages with level WARN and higher.
Accumulate logs for testing and reporting
import { DefaultLogger, LogEntry } from '@temporalio/worker';
const logs: LogEntry[] = [];
const logger = new DefaultLogger('TRACE', (entry) => logs.push(entry));
logger.debug('hey', { a: 1 });
logger.info('ho');
logger.warn('lets', { a: 1 });
logger.error('go');
A common logging use case is logging to a file to be picked up by a collector like the Datadog Agent.
import { Runtime } from '@temporalio/worker';
import winston from 'winston';
const logger = winston.createLogger({
level: 'info',
format: winston.format.json(),
  transports: [new winston.transports.File({ filename: '/path/to/worker.log' })],
});
Runtime.install({ logger });
Visibility APIs
The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Temporal Service.
How to use Search Attributes
The typical method of retrieving a Workflow Execution is by its Workflow Id.
However, sometimes you'll want to retrieve one or more Workflow Executions based on another property. For example, imagine you want to get all Workflow Executions of a certain type that have failed within a time range, so that you can start new ones with the same arguments.
You can do this with Search Attributes.
- Default Search Attributes like WorkflowType, StartTime, and ExecutionStatus are automatically added to Workflow Executions.
- Custom Search Attributes can contain their own domain-specific data (like customerId or numItems).
  - A few generic Custom Search Attributes like CustomKeywordField and CustomIntField are created by default in Temporal's Docker Compose.
The steps to using custom Search Attributes are:
- Create a new Search Attribute in your Temporal Service using temporal operator search-attribute create or the Cloud UI.
- Set the value of the Search Attribute for a Workflow Execution:
  - On the Client, by including it as an option when starting the Execution.
  - In the Workflow, by calling UpsertSearchAttributes.
- Read the value of the Search Attribute:
  - On the Client, by calling DescribeWorkflow.
  - In the Workflow, by looking at WorkflowInfo.
- Query Workflow Executions by the Search Attribute using a List Filter:
  - With the Temporal CLI.
  - In code, by calling ListWorkflowExecutions.
Here is how to query Workflow Executions. Use WorkflowService.listWorkflowExecutions:
import { Connection } from '@temporalio/client';
const connection = await Connection.connect();
const response = await connection.workflowService.listWorkflowExecutions({
query: `ExecutionStatus = "Running"`,
});
where query is a List Filter.
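A higher-level option is the Client's list method, which wraps the same API and returns an async iterable. A minimal sketch, assuming a recent @temporalio/client version that exposes client.workflow.list, and using a hypothetical List Filter matching the earlier scenario of failed Workflow Executions of a given type:

import { Client, Connection } from '@temporalio/client';

const connection = await Connection.connect();
const client = new Client({ connection });

// Iterate over all matching Workflow Executions; pagination is handled for you.
for await (const workflow of client.workflow.list({
  query: `WorkflowType = "processOrder" AND ExecutionStatus = "Failed"`,
})) {
  console.log(workflow.workflowId);
}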
How to set custom Search Attributes
After you've created custom Search Attributes in your Temporal Service (using temporal operator search-attribute create or the Cloud UI), you can set the values of the custom Search Attributes when starting a Workflow.
Use WorkflowOptions.searchAttributes.
search-attributes/src/client.ts
const handle = await client.workflow.start(example, {
taskQueue: 'search-attributes',
workflowId: 'search-attributes-example-0',
searchAttributes: {
CustomIntField: [2],
CustomKeywordField: ['keywordA', 'keywordB'],
CustomBoolField: [true],
CustomDatetimeField: [new Date()],
CustomStringField: [
'String field is for text. When queried, it will be tokenized for partial match. StringTypeField cannot be used in Order By',
],
},
});
const { searchAttributes } = await handle.describe();
The type of searchAttributes is Record<string, string[] | number[] | boolean[] | Date[]>.
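Because each value is an array, reading a single value on the Client side means indexing into it. A minimal sketch continuing from the handle.describe() call above (the casts narrow the Record type just described):

// CustomIntField was set to [2] above; take the first element to get the value.
const customInt = searchAttributes.CustomIntField?.[0] as number;
const keywords = searchAttributes.CustomKeywordField as string[];
console.log({ customInt, keywords });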
How to upsert Search Attributes
You can upsert Search Attributes to add or update them from within Workflow code.
Inside a Workflow, we can read from WorkflowInfo.searchAttributes and call upsertSearchAttributes:
search-attributes/src/workflows.ts
import { upsertSearchAttributes, workflowInfo } from '@temporalio/workflow';
import type { SearchAttributes } from '@temporalio/common';

export async function example(): Promise<SearchAttributes> {
const customInt =
(workflowInfo().searchAttributes.CustomIntField?.[0] as number) || 0;
upsertSearchAttributes({
// overwrite the existing CustomIntField: [2]
CustomIntField: [customInt + 1],
// delete the existing CustomBoolField: [true]
CustomBoolField: [],
// add a new value
CustomDoubleField: [3.14],
});
return workflowInfo().searchAttributes;
}
How to remove a Search Attribute from a Workflow
To remove a Search Attribute that was previously set, set it to an empty array: [].
import { upsertSearchAttributes } from '@temporalio/workflow';
export async function yourWorkflow(): Promise<void> {
upsertSearchAttributes({ CustomIntField: [1, 2, 3] });
// ... later, to remove:
upsertSearchAttributes({ CustomIntField: [] });
}