API#

Tracing#

ddtrace.patch_all(**patch_modules: bool) None#

Automatically patches all available modules.

In addition to patch_modules, each module can be overridden via an environment variable of the form DD_TRACE_<module>_ENABLED.

patch_modules takes the highest precedence when overriding.

Parameters:

patch_modules (dict) –

Override whether particular modules are patched or not.

>>> patch_all(redis=False, cassandra=False)
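
For example, a single integration can be disabled through the environment before patching; a minimal sketch, using redis purely as an example module:

import os

# Must be set before patch_all() is called; "redis" is an example module name.
os.environ["DD_TRACE_REDIS_ENABLED"] = "false"

from ddtrace import patch_all
patch_all()  # every available integration except redis is patched

Passing patch_all(redis=True) would still re-enable the integration, since keyword arguments take precedence over the environment variable.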

ddtrace.patch(raise_errors: bool = True, patch_modules_prefix: str = 'ddtrace.contrib', **patch_modules: List[str] | bool) None#

Patch only a set of given modules.

Parameters:
  • raise_errors (bool) – Raise an error if a patch fails.

  • patch_modules (dict) –

    List of modules to patch.

    >>> patch(psycopg=True, elasticsearch=True)
    

class ddtrace.Tracer(url: str | None = None, dogstatsd_url: str | None = None)#

Tracer is used to create, sample and submit spans that measure the execution time of sections of code.

If you’re running an application that will serve a single trace per thread, you can use the global tracer instance:

from ddtrace import tracer
span = tracer.trace('app.request', 'web-server')
span.finish()
on_start_span(func: Callable) Callable#

Register a function to execute when a span starts.

Can be used as a decorator.

Parameters:

func – The function to call when a span starts. The started span is passed as an argument.
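
A minimal sketch of registering a callback as a decorator (the tag name is illustrative):

from ddtrace import tracer

@tracer.on_start_span
def tag_new_span(span):
    # called with every span as it starts
    span.set_tag('component', 'my-app')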

deregister_on_start_span(func: Callable) Callable#

Unregister a function registered to execute when a span starts.

Can be used as a decorator.

Parameters:

func – The function to stop calling when starting a span.

current_trace_context(*args, **kwargs) Context | None#

Return the context for the current trace.

If there is no active trace then None is returned.
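
For example (a minimal sketch):

from ddtrace import tracer

with tracer.trace('web.request'):
    ctx = tracer.current_trace_context()
    assert ctx is not None
    assert ctx.trace_id is not None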

get_log_correlation_context() Dict[str, str]#

Retrieves the data used to correlate a log with the current active trace. Generates a dictionary for custom logging instrumentation including the trace id and span id of the current active span, as well as the configured service, version, and environment names. If there is no active span, a dictionary with an empty string for each value will be returned.
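
A minimal sketch of feeding the correlation data into the standard logging module (the logger configuration is an assumption):

import logging
from ddtrace import tracer

log = logging.getLogger(__name__)

with tracer.trace('web.request'):
    # dict containing the trace id, span id, service, version and env
    correlation = tracer.get_log_correlation_context()
    log.info('handling request', extra=correlation)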

configure(enabled: bool | None = None, hostname: str | None = None, port: int | None = None, uds_path: str | None = None, https: bool | None = None, sampler: BaseSampler | None = None, context_provider: DefaultContextProvider | None = None, wrap_executor: Callable | None = None, priority_sampling: bool | None = None, settings: Dict[str, Any] | None = None, dogstatsd_url: str | None = None, writer: TraceWriter | None = None, partial_flush_enabled: bool | None = None, partial_flush_min_spans: int | None = None, api_version: str | None = None, compute_stats_enabled: bool | None = None, appsec_enabled: bool | None = None, iast_enabled: bool | None = None) None#

Configure a Tracer.

Parameters:
  • enabled (bool) – If True, finished traces will be submitted to the API, else they’ll be dropped.

  • hostname (str) – Hostname running the Trace Agent

  • port (int) – Port of the Trace Agent

  • uds_path (str) – The Unix Domain Socket path of the agent.

  • https (bool) – Whether to use HTTPS or HTTP.

  • sampler (object) – A custom Sampler instance that decides locally whether to keep or drop a trace.

  • context_provider (object) – The ContextProvider that will be used to automatically retrieve the current call context. This is an advanced option that usually doesn’t need to be changed from the default value.

  • wrap_executor (object) – A callable that is used when a function is decorated with Tracer.wrap(). This is an advanced option that usually doesn’t need to be changed from the default value.

  • priority_sampling – Enable priority sampling; this is required for complete distributed tracing support. Enabled by default.

  • dogstatsd_url (str) – URL for UDP or Unix socket connection to DogStatsD
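
For example, a minimal sketch pointing the tracer at a locally running agent (the hostname, port and DogStatsD URL are assumptions):

from ddtrace import tracer

tracer.configure(
    hostname='localhost',
    port=8126,
    dogstatsd_url='udp://localhost:8125',
)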

start_span(name: str, child_of: Span | Context | None = None, service: str | None = None, resource: str | None = None, span_type: str | None = None, activate: bool = False, span_api: str = 'datadog') Span#

Return a span that represents an operation called name.

Note that the trace() method will almost always be preferred over this method as it provides automatic span parenting. This method should only be used if manual parenting is desired.

Parameters:
  • name (str) – the name of the operation being traced.

  • child_of (object) – a Span or a Context instance representing the parent for this span.

  • service (str) – the name of the service being traced.

  • resource (str) – an optional name of the resource being tracked.

  • span_type (str) – an optional operation type.

  • activate – activate the span once it is created.

To start a new root span:

span = tracer.start_span("web.request")

To create a child for a root span:

root_span = tracer.start_span("web.request")
span = tracer.start_span("web.decoder", child_of=root_span)

Spans from start_span are not activated by default:

with tracer.start_span("parent") as parent:
    assert tracer.current_span() is None
    with tracer.start_span("child", child_of=parent):
        assert tracer.current_span() is None

new_parent = tracer.start_span("new_parent", activate=True)
assert tracer.current_span() is new_parent

Note: be sure to finish all spans to avoid memory leaks and incorrect parenting of spans.

trace(name: str, service: str | None = None, resource: str | None = None, span_type: str | None = None, span_api: str = 'datadog') Span#

Activate and return a new span that inherits from the current active span.

Parameters:
  • name (str) – the name of the operation being traced

  • service (str) – the name of the service being traced. If not set, it will inherit the service from its parent.

  • resource (str) – an optional name of the resource being tracked.

  • span_type (str) – an optional operation type.

The returned span must be finish’d or it will remain in memory indefinitely:

>>> span = tracer.trace("web.request")
    try:
        # do something
    finally:
        span.finish()

>>> with tracer.trace("web.request") as span:
        # do something

Example of the automatic parenting:

parent = tracer.trace("parent")     # has no parent span
assert tracer.current_span() is parent

child  = tracer.trace("child")
assert child.parent_id == parent.span_id
assert tracer.current_span() is child
child.finish()

# parent is now the active span again
assert tracer.current_span() is parent
parent.finish()

assert tracer.current_span() is None

parent2 = tracer.trace("parent2")
assert parent2.parent_id is None
parent2.finish()
current_root_span() Span | None#

Returns the root span of the current execution.

This is useful for attaching information related to the trace as a whole without needing to add to child spans.

For example:

# get the root span
root_span = tracer.current_root_span()
# set the host just once on the root span
if root_span:
    root_span.set_tag('host', '127.0.0.1')
current_span() Span | None#

Return the active span in the current execution context.

Note that there may be an active span represented by a context object (like from a distributed trace) which will not be returned by this method.

property agent_trace_url: str | None#

Trace agent URL

flush()#

Flush the buffer of the trace writer. This does nothing if an unbuffered trace writer is used.

wrap(name: str | None = None, service: str | None = None, resource: str | None = None, span_type: str | None = None) Callable[[AnyCallable], AnyCallable]#

A decorator used to trace an entire function. If the traced function is a coroutine, it traces the coroutine execution when it is awaited. If a wrap_executor callable has been provided in the Tracer.configure() method, it will be called instead of the default one when the function decorator is invoked.

Parameters:
  • name (str) – the name of the operation being traced. If not set, defaults to the fully qualified function name.

  • service (str) – the name of the service being traced. If not set, it will inherit the service from its parent.

  • resource (str) – an optional name of the resource being tracked.

  • span_type (str) – an optional operation type.

>>> @tracer.wrap('my.wrapped.function', service='my.service')
    def run():
        return 'run'
>>> # name will default to 'execute' if unset
    @tracer.wrap()
    def execute():
        return 'executed'
>>> # or use it in asyncio coroutines
    @tracer.wrap()
    async def coroutine():
        return 'executed'
>>> @tracer.wrap()
    @asyncio.coroutine
    def coroutine():
        return 'executed'

You can access the current span using tracer.current_span() to set tags:

>>> @tracer.wrap()
    def execute():
        span = tracer.current_span()
        span.set_tag('a', 'b')
set_tags(tags: Dict[str, str]) None#

Set some tags at the tracer level. This will append those tags to each span created by the tracer.

Parameters:

tags (dict) – dict of tags to set at tracer level
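
For example (the tag names are illustrative):

from ddtrace import tracer

# every span created by this tracer will carry these tags
tracer.set_tags({'env': 'staging', 'team': 'platform'})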

shutdown(timeout: float | None = None) None#

Shutdown the tracer and flush finished traces. Avoid calling shutdown multiple times.

Parameters:

timeout (int | float | None) – How long in seconds to wait for the background worker to flush traces before exiting or None to block until flushing has successfully completed (default: None)
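
A minimal sketch, typically called once at process exit:

from ddtrace import tracer

# flush any remaining finished traces, waiting at most 5 seconds
tracer.shutdown(timeout=5)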

class ddtrace.Span(name: str, service: str | None = None, resource: str | None = None, span_type: str | None = None, trace_id: int | None = None, span_id: int | None = None, parent_id: int | None = None, start: int | None = None, context: Context | None = None, on_finish: List[Callable[[Span], None]] | None = None, span_api: str = 'datadog')#
property start: float#

The start timestamp in Unix epoch seconds.

property duration: float | None#

The span duration in seconds.

finish(finish_time: float | None = None) None#

Mark the end time of the span and submit it to the tracer. If the span has already been finished, nothing is done.

Parameters:

finish_time – The end time of the span, in seconds. Defaults to now.

set_tag(key: str | bytes, value: Any | None = None) None#

Set a tag key/value pair on the span.

Keys must be strings; values must be stringify-able.

Parameters:
  • key (str) – Key to use for the tag

  • value (stringify-able value) – Value to assign for the tag

set_tag_str(key: str | bytes, value: str) None#

Set a value for a tag. Values are coerced to unicode in Python 2 and str in Python 3, with decoding errors in conversion being replaced with U+FFFD.

get_tag(key: str | bytes) str | None#

Return the given tag or None if it doesn’t exist.
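
A minimal sketch of setting and reading back a tag (the tag name is illustrative):

from ddtrace import tracer

with tracer.trace('web.request') as span:
    span.set_tag('customer.tier', 'gold')
    assert span.get_tag('customer.tier') == 'gold'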

get_tags() Dict[str | bytes, str]#

Return all tags.

set_tags(tags: Dict[str | bytes, str]) None#

Set a dictionary of tags on the given span. Keys and values must be strings (or stringify-able).

get_metric(key: str | bytes) int | float | None#

Return the given metric or None if it doesn’t exist.

get_metrics() Dict[str | bytes, int | float]#

Return all metrics.

set_traceback(limit: int = 30) None#

If the current stack has an exception, tag the span with the relevant error info. If not, tag the span with the current Python stack.

set_exc_info(exc_type: Any, exc_val: Any, exc_tb: Any) None#

Tag the span with an error tuple as from sys.exc_info().
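
A minimal sketch of recording a handled exception on a span:

import sys

from ddtrace import tracer

span = tracer.trace('risky.operation')
try:
    raise ValueError('boom')
except ValueError:
    # tag the span with the error type, message and traceback
    span.set_exc_info(*sys.exc_info())
finally:
    span.finish()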

property context: Context#

Return the trace context for this span.

finish_with_ancestors() None#

Finish this span along with all (accessible) ancestors of this span.

This method is useful if a sudden program shutdown is required and finishing the trace is desired.
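
For example (a minimal sketch):

from ddtrace import tracer

root = tracer.trace('request')
child = tracer.trace('db.query')
# on an abrupt shutdown, close the child and every open ancestor at once
child.finish_with_ancestors()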

class ddtrace.Pin(service: str | None = None, tags: Dict[str, str] | None = None, tracer: Tracer | None = None, _config: Dict[str, Any] | None = None)#

Pin (a.k.a Patch INfo) is a small class which is used to set tracing metadata on a particular traced connection. This is useful if you want to, say, trace two different database clusters.

>>> conn = sqlite.connect('/tmp/user.db')
>>> # Override a pin for a specific connection
>>> Pin.override(conn, service='user-db')
>>> conn = sqlite.connect('/tmp/image.db')
property service: str#

Backward compatibility: accessing pin.service returns the underlying configuration value.

static get_from(obj: Any) Pin | None#

Return the pin associated with the given object. If a pin is attached to obj but the instance is not the owner of the pin, a new pin is cloned and attached. This ensures that a pin inherited from a class is copied for the new instance, so that overrides on one instance do not affect other instances.

>>> pin = Pin.get_from(conn)
Parameters:

obj (object) – The object to look for a ddtrace.pin.Pin on

Return type:

ddtrace.pin.Pin, None

Returns:

ddtrace.pin.Pin associated with the object, or None if none was found

classmethod override(obj: Any, service: str | None = None, tags: Dict[str, str] | None = None, tracer: Tracer | None = None) None#

Override an object with the given attributes.

That’s the recommended way to customize an already instrumented client, without losing existing attributes.

>>> conn = sqlite.connect('/tmp/user.db')
>>> # Override a pin for a specific connection
>>> Pin.override(conn, service='user-db')
enabled() bool#

Return true if this pin’s tracer is enabled.

onto(obj: Any, send: bool = True) None#

Patch this pin onto the given object. If send is true, it will also queue the metadata to be sent to the server.

clone(service: str | None = None, tags: Dict[str, str] | None = None, tracer: Tracer | None = None) Pin#

Return a clone of the pin with the given attributes replaced.
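
A minimal sketch of attaching, retrieving and cloning a pin (the service names and tag are illustrative):

import sqlite3

from ddtrace import Pin

conn = sqlite3.connect('/tmp/user.db')
# attach tracing metadata to this specific connection
Pin(service='user-db', tags={'cluster': 'primary'}).onto(conn)

# later, derive a new pin with one attribute replaced
pin = Pin.get_from(conn)
if pin:
    pin.clone(service='user-db-replica').onto(conn)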

class ddtrace.context.Context(trace_id=None, span_id=None, dd_origin=None, sampling_priority=None, meta=None, metrics=None, lock=None)#

Represents the state required to propagate a trace across execution boundaries.

trace_id: int | None#
span_id: int | None#
property sampling_priority: int | float | None#

Return the context sampling priority for the trace.

property dd_origin: str | None#

Get the origin of the trace.

property dd_user_id: str | None#

Get the user ID of the trace.

class ddtrace.sampler.DatadogSampler(rules: List[SamplingRule] | None = None, default_sample_rate: float | None = None, rate_limit: int | None = None)#

Default sampler used by Tracer for determining if a trace should be kept or dropped.

By default, this sampler will rely on dynamic sample rates provided by the trace agent to determine which traces are kept or dropped.

You can also configure a static sample rate via default_sample_rate. When a default_sample_rate is configured, it is the only sample rate used; the agent-provided rates are ignored.

You may also supply a list of SamplingRule to determine sample rates for specific services or operation names.

Example rules:

DatadogSampler(rules=[
    SamplingRule(sample_rate=1.0, service="my-svc"),
    SamplingRule(sample_rate=0.0, service="less-important"),
])

Rules are evaluated in the order they are provided, and the first rule that matches is used. If no rule matches, then the agent sample rates are used.

Lastly, this sampler can be configured with a rate limit, which ensures the maximum number of sampled traces per second does not exceed the supplied limit. The default is 100 traces kept per second. The rate limiter is only used when default_sample_rate or rules are provided; it is not used when the agent-supplied sample rates are in effect.
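
For example, combining a rule with a rate limit (a minimal sketch; the service name and limit are assumptions):

from ddtrace import tracer
from ddtrace.sampler import DatadogSampler, SamplingRule

sampler = DatadogSampler(
    rules=[SamplingRule(sample_rate=1.0, service='my-svc')],
    rate_limit=50,  # keep at most 50 traces per second
)
tracer.configure(sampler=sampler)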

sample(span: Span) bool#

Decide whether the provided span should be sampled or not

The span provided should be the root span in the trace.

Parameters:

span (ddtrace.span.Span) – The root span of a trace

Returns:

Whether the span was sampled or not

Return type:

bool

class ddtrace.sampler.SamplingRule(sample_rate: float, service: ~typing.Any = <object object>, name: ~typing.Any = <object object>)#

Definition of a sampling rule used by DatadogSampler for applying a sample rate on a span

matches(span: Span) bool#

Return whether this span matches this rule.

Parameters:

span (ddtrace.span.Span) – The span to match against

Returns:

Whether this span matches or not

Return type:

bool

sample(span: Span) bool#

Return whether this rule chooses to sample the span.

Parameters:

span (ddtrace.span.Span) – The span to sample against

Returns:

Whether this span was sampled

Return type:

bool

class ddtrace.propagation.http.HTTPPropagator#

An HTTP propagator using HTTP headers as the carrier.

static inject(span_context: Context, headers: Dict[str, str]) None#

Inject Context attributes that have to be propagated as HTTP headers.

Here is an example using requests:

import requests

from ddtrace.propagation.http import HTTPPropagator

def parent_call():
    with tracer.trace('parent_span') as span:
        headers = {}
        HTTPPropagator.inject(span.context, headers)
        url = '<some RPC endpoint>'
        r = requests.get(url, headers=headers)
Parameters:
  • span_context (Context) – Span context to propagate.

  • headers (dict) – HTTP headers to extend with tracing attributes.

static extract(headers: Dict[str, str]) Context#

Extract a Context from HTTP headers into a new Context.

Here is an example from a web endpoint:

from ddtrace.propagation.http import HTTPPropagator

def my_controller(url, headers):
    context = HTTPPropagator.extract(headers)
    if context:
        tracer.context_provider.activate(context)

    with tracer.trace('my_controller') as span:
        span.set_tag('http.url', url)
Parameters:

headers (dict) – HTTP headers to extract tracing attributes from.

Returns:

New Context with propagated attributes.

OpenTelemetry API#

The dd-trace-py library provides an implementation of the OpenTelemetry API. When ddtrace OpenTelemetry support is configured, all operations defined in the OpenTelemetry trace API can be used to create, configure, and propagate a distributed trace. All operations defined in the OpenTelemetry trace API are configured to use the ddtrace global tracer (ddtrace.tracer) and generate Datadog-compatible traces. By default, all OpenTelemetry traces are submitted to a Datadog Agent.

Configuration#

When using ddtrace-run, OpenTelemetry support can be enabled by setting the DD_TRACE_OTEL_ENABLED environment variable to True (the default value is False).

OpenTelemetry support can also be enabled programmatically by setting DD_TRACE_OTEL_ENABLED=True and installing ddtrace.opentelemetry.TracerProvider as the global tracer provider. These configurations must be set before any OpenTelemetry Tracers are initialized:

import os
# Must be set before ddtrace is imported!
os.environ["DD_TRACE_OTEL_ENABLED"] = "true"

from opentelemetry.trace import set_tracer_provider
from ddtrace.opentelemetry import TracerProvider

set_tracer_provider(TracerProvider())

...

Usage#

Datadog and OpenTelemetry APIs can be used interchangeably:

# Sample Usage
import opentelemetry
import ddtrace

oteltracer = opentelemetry.trace.get_tracer(__name__)

with oteltracer.start_as_current_span("otel-span") as parent_span:
    parent_span.set_attribute("otel_key", "otel_val")
    with ddtrace.tracer.trace("ddtrace-span") as child_span:
        child_span.set_tag("dd_key", "dd_val")

@oteltracer.start_as_current_span("span_name")
def some_function():
    pass
class ddtrace.opentelemetry.TracerProvider#

Entry point of the OpenTelemetry API; provides access to OpenTelemetry-compatible Tracers. One TracerProvider should be initialized and set per application.

get_tracer(instrumenting_module_name: str, instrumenting_library_version: str | None = None, schema_url: str | None = None) OtelTracer#

Returns an OpenTelemetry-compatible Tracer.

Runtime Metrics#

class ddtrace.runtime.RuntimeMetrics#

Runtime metrics service API.

This is normally started automatically by ddtrace-run when the DD_RUNTIME_METRICS_ENABLED environment variable is set.

To start the service manually, invoke the enable static method:

from ddtrace.runtime import RuntimeMetrics
RuntimeMetrics.enable()
static enable(tracer: Tracer | None = None, dogstatsd_url: str | None = None, flush_interval: float | None = None) None#

Enable the runtime metrics collection service.

If the service has already been activated before, this method does nothing. Use disable to turn off the runtime metric collection service.

Parameters:
  • tracer – The tracer instance to correlate with.

  • dogstatsd_url – The DogStatsD URL.

  • flush_interval – The flush interval.

static disable() None#

Disable the runtime metrics collection service.

Once disabled, runtime metrics can be re-enabled by calling enable again.

Dynamic Instrumentation#

Configuration#

When using ddtrace-run, dynamic instrumentation can be enabled by setting the DD_DYNAMIC_INSTRUMENTATION_ENABLED environment variable, or programmatically with:

from ddtrace.debugging import DynamicInstrumentation

# Enable dynamic instrumentation
DynamicInstrumentation.enable()

...

# Disable the debugger
DynamicInstrumentation.disable()
ddtrace.debugging.DynamicInstrumentation#

alias of Debugger

Source Code Integration#

Datadog Source Code Integration is supported for Git by adding the repository URL and commit hash to the Python package metadata field Project-URL, under the name source_code_link.

Format of source_code_link: <repository url>#<commit hash>
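
As an illustration, with a hypothetical repository URL and commit hash, the resulting metadata entry would look like:

Project-URL: source_code_link, https://github.com/example-org/mypackage#0123456789abcdef0123456789abcdef01234567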

setuptools#

ddtrace provides automatic instrumentation of setuptools to embed the source code link into the project metadata. ddtrace must be installed as a build dependency.

Packages with pyproject.toml can update the build system requirements:

[build-system]
requires = ["setuptools", "ddtrace"]
build-backend = "setuptools.build_meta"

The setuptools instrumentation can be enabled, and the source code link embedded, with a one-line import in setup.py (placed before the setuptools import):

import ddtrace.sourcecode.setuptools_auto
from setuptools import setup

setup(
    name="mypackage",
    version="0.0.1",
    #...
)