
Exporting LangSmith telemetry to your observability backend

Important

This section is only applicable for Kubernetes deployments.

Self-Hosted LangSmith instances produce telemetry data in the form of logs, metrics, and traces. This section shows you how to access that data and export it to an observability collector or backend.

This section assumes that you either already have monitoring infrastructure in place, or that you plan to set it up and want to know how to configure it to collect data from LangSmith.

Infrastructure here refers to an observability collector (such as an OpenTelemetry Collector or a Prometheus server) and a backend for storing and querying the collected data.

Logs: OTel Example

All services that are part of the LangSmith self-hosted deployment write their logs to their node/container filesystem. This includes Postgres, Redis, and ClickHouse if you are running the default in-cluster versions. In order to access these logs, you need to set up your collector to read from those files. Most popular collectors support reading logs from container filesystems.

Example file system integrations include the OpenTelemetry Collector's filelog receiver and Fluent Bit's tail input.
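As a rough sketch of the approach, the following OpenTelemetry Collector config reads pod logs from the node filesystem and forwards them over OTLP. The file glob and the backend address (`my-backend:4317`) are placeholders you would adapt to your cluster:

```yaml
receivers:
  filelog:
    # Kubernetes container runtimes write pod logs under /var/log/pods on each node
    include:
      - /var/log/pods/*/*/*.log

exporters:
  otlp:
    # Hypothetical OTLP endpoint of your observability backend
    endpoint: my-backend:4317

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [otlp]
```

The collector would typically run as a DaemonSet so every node's log files are covered.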

Metrics: OTel Example

LangSmith Services

The following LangSmith services expose metrics in the Prometheus format at these endpoints:

  • Backend: http://<backend_service_name>.<namespace>.svc.cluster.local:1984/metrics
  • Platform Backend: http://<platform_backend_service_name>.<namespace>.svc.cluster.local:1986/metrics
  • Playground: http://<playground_service_name>.<namespace>.svc.cluster.local:1988/metrics

It is recommended to use a Prometheus server or an OpenTelemetry Collector to scrape these endpoints and export the metrics to the backend of your choice.
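For illustration, a Prometheus scrape configuration for the three endpoints above might look like the following (the `<...>` placeholders match the service names and namespace of your deployment; `metrics_path` defaults to `/metrics`):

```yaml
scrape_configs:
  - job_name: langsmith-backend
    static_configs:
      - targets: ["<backend_service_name>.<namespace>.svc.cluster.local:1984"]
  - job_name: langsmith-platform-backend
    static_configs:
      - targets: ["<platform_backend_service_name>.<namespace>.svc.cluster.local:1986"]
  - job_name: langsmith-playground
    static_configs:
      - targets: ["<playground_service_name>.<namespace>.svc.cluster.local:1988"]
```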

Important

The following sections apply to in-cluster databases only. If you are using external databases, you will need to configure metrics exposure and scraping for those databases yourself.

Redis

If you are using the in-cluster Redis instance from the Helm chart, LangSmith can expose metrics for you if you upgrade the chart with the following values:

```yaml
redis:
  metrics:
    enabled: true
```

This will run a sidecar container alongside your Redis container, exposing Prometheus metrics at http://langsmith-<redis_name>.<namespace>.svc.cluster.local:9121/metrics

Postgres

Similarly, to expose Postgres metrics, upgrade the LangSmith Helm chart with the following values:

```yaml
postgres:
  metrics:
    enabled: true
```

This will run a sidecar container alongside Postgres, exposing Prometheus metrics at http://langsmith-<postgres_name>.<namespace>.svc.cluster.local:9187/metrics

Note

You can modify the Redis and Postgres exporter configurations through the LangSmith Helm chart.

Clickhouse

The ClickHouse container can expose metrics directly, without the need for a sidecar. To expose the metrics endpoint, run the LangSmith Helm chart with the following values:

```yaml
clickhouse:
  metrics:
    enabled: true
```

You can then scrape metrics at http://langsmith-<clickhouse_name>.<namespace>.svc.cluster.local:9363/metrics
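Pulling the database endpoints together, a Prometheus scrape configuration for the Redis, Postgres, and ClickHouse metrics might look like this sketch (the `<...>` placeholders follow the service names shown above):

```yaml
scrape_configs:
  - job_name: langsmith-redis
    static_configs:
      - targets: ["langsmith-<redis_name>.<namespace>.svc.cluster.local:9121"]
  - job_name: langsmith-postgres
    static_configs:
      - targets: ["langsmith-<postgres_name>.<namespace>.svc.cluster.local:9187"]
  - job_name: langsmith-clickhouse
    static_configs:
      - targets: ["langsmith-<clickhouse_name>.<namespace>.svc.cluster.local:9363"]
```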

Traces: OTel Example

The LangSmith Backend, Platform Backend, Playground, and LangSmith Queue deployments have been instrumented with the OTel SDK to emit traces in the OpenTelemetry format. Tracing is disabled by default, and can be enabled and customized with the following values in your values.yaml file:

```yaml
config:
  tracing:
    enabled: true
    endpoint: "<your_collector_endpoint>"
    useTls: true # Or false
    env: "ls_self_hosted" # This value will be set as an "env" attribute in your spans
    exporter: "http" # must be either http or grpc
```

This will export traces from all LangSmith backend services to the specified endpoint.

Important

You can override the tracing endpoint for individual services. The Python apps require an endpoint in the form http://host:port/v1/traces, while the Go app requires the same endpoint in the form host:port to send to the same collector.
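To illustrate how one collector can serve both endpoint forms, the following OpenTelemetry Collector fragment enables the OTLP receiver on its default ports (4317 for gRPC, 4318 for HTTP); the `my-collector` hostname in the comments is a placeholder:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        # Go app endpoint form: my-collector:4317
        endpoint: 0.0.0.0:4317
      http:
        # Python app endpoint form: http://my-collector:4318/v1/traces
        endpoint: 0.0.0.0:4318
```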

Make sure to check the logs of your services. If the endpoint is configured correctly, the services will not log any export errors; otherwise, error logs will indicate the problem.

