Integrations

aiobotocore

The aiobotocore integration will trace all AWS calls made with the aiobotocore library. This integration isn’t enabled when applying the default patching. To enable it, you must run patch_all(aiobotocore=True).

import aiobotocore.session
from ddtrace import patch

# If not patched yet, you can patch aiobotocore specifically
patch(aiobotocore=True)

# This will report spans with the default instrumentation
session = aiobotocore.session.get_session()
lambda_client = session.create_client('lambda', region_name='us-east-1')

# This query generates a trace
lambda_client.list_functions()

aiopg

Instrument aiopg to report a span for each executed Postgres query:

from ddtrace import Pin, patch
import aiopg

# If not patched yet, you can patch aiopg specifically
patch(aiopg=True)

# This will report a span with the default settings
async with aiopg.connect(DSN) as db:
    with (await db.cursor()) as cursor:
        await cursor.execute("SELECT * FROM users WHERE id = 1")

# Use a pin to specify metadata related to this connection
Pin.override(db, service='postgres-users')

algoliasearch

The Algoliasearch integration will add tracing to your Algolia searches.

from ddtrace import patch_all
patch_all()

from algoliasearch import algoliasearch
client = algoliasearch.Client(<ID>, <API_KEY>)
index = client.init_index(<INDEX_NAME>)
index.search("your query", args={"attributesToRetrieve": "attribute1,attribute2"})

Configuration

ddtrace.config.algoliasearch['collect_query_text']

Whether to pass the text of your query on to Datadog. Since this may contain sensitive data, it is off by default.

Default: False
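
For example, a minimal sketch of opting in to query text collection (only enable this if your queries contain no sensitive data):

from ddtrace import config

# Record the query text on search spans
config.algoliasearch['collect_query_text'] = True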

asgi

The asgi middleware traces all requests to an ASGI-compliant application.

To configure tracing manually:

from ddtrace.contrib.asgi import TraceMiddleware

# app = <your asgi app>
app = TraceMiddleware(app)

Then use ddtrace-run when serving your application. For example, if serving with Uvicorn:

ddtrace-run uvicorn app:app

If using Python 3.6, the legacy AsyncioContextProvider will have to be enabled before using the middleware:

from ddtrace.contrib.asyncio.provider import AsyncioContextProvider
from ddtrace import tracer  # Or whichever tracer instance you plan to use
tracer.configure(context_provider=AsyncioContextProvider())

The middleware also supports using a custom function for handling exceptions for a trace:

from ddtrace.contrib.asgi import TraceMiddleware

def custom_handle_exception_span(exc, span):
    span.set_tag("http.status_code", 501)

# app = <your asgi app>
app = TraceMiddleware(app, handle_exception_span=custom_handle_exception_span)

Configuration

ddtrace.config.asgi['distributed_tracing']

Whether to use distributed tracing headers from requests received by your ASGI app.

Default: True

ddtrace.config.asgi['service_name']

The service name reported for your ASGI app.

Can also be configured via the DD_SERVICE environment variable.

Default: 'asgi'
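
Example (a minimal sketch using the options above; the service name value is illustrative):

from ddtrace import config

# Disable distributed tracing
config.asgi['distributed_tracing'] = False

# Override service name
config.asgi['service_name'] = 'custom-asgi-service'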

aiohttp

The aiohttp integration traces all requests defined in the application handlers. Auto instrumentation is available using the trace_app function:

from aiohttp import web
from ddtrace import tracer, patch
from ddtrace.contrib.aiohttp import trace_app

# patch third-party modules like aiohttp_jinja2
patch(aiohttp=True)

# create your application
app = web.Application()
app.router.add_get('/', home_handler)

# trace your application handlers
trace_app(app, tracer, service='async-api')
web.run_app(app, port=8000)

Integration settings are attached to your application under the datadog_trace namespace. You can read or update them as follows:

# disables distributed tracing for all received requests
app['datadog_trace']['distributed_tracing_enabled'] = False

Available settings are:

  • tracer (default: ddtrace.tracer): set the default tracer instance that is used to trace aiohttp internals. By default the ddtrace tracer is used.
  • service (default: aiohttp-web): set the service name used by the tracer. Usually this configuration must be updated with a meaningful name.
  • distributed_tracing_enabled (default: True): enable distributed tracing during the middleware execution, so that a new span is created with the given trace_id and parent_id injected via request headers.

Third-party modules that are currently supported by the patch() method are:

  • aiohttp_jinja2

When a request span is created, a new Context for this logical execution is attached to the request object, so that it can be used in the application code:

async def home_handler(request):
    ctx = request['datadog_context']
    # do something with the tracing Context

asyncio

This integration provides automatic instrumentation to trace the concurrent execution of asyncio.Task instances. It also provides a legacy context provider to support tracing of asynchronous execution on Python < 3.7.

For asynchronous execution tracing in Python < 3.7 to work properly the tracer must be configured as follows:

import asyncio
from ddtrace import tracer
from ddtrace.contrib.asyncio import context_provider

# enable asyncio support
tracer.configure(context_provider=context_provider)

async def some_work():
    with tracer.trace('asyncio.some_work'):
        # do something
        await asyncio.sleep(0.1)

# launch your coroutines as usual
loop = asyncio.get_event_loop()
loop.run_until_complete(some_work())
loop.close()

In addition, helpers are provided to simplify how the tracing Context is handled between scheduled coroutines and Futures invoked in separate threads (see the sketch after this list):

  • set_call_context(task, ctx): attach the context to the given Task so that it will be available from tracer.get_call_context()
  • ensure_future(coro_or_future, *, loop=None): wrapper for asyncio.ensure_future that attaches the current context to a new Task instance
  • run_in_executor(loop, executor, func, *args): wrapper for loop.run_in_executor that attaches the current context to the new thread so that the trace can be resumed regardless of when it is executed
  • create_task(coro): creates a new asyncio Task that inherits the current active Context so that generated traces in the new task are attached to the main trace
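
A minimal sketch using these helpers, assuming they are importable from ddtrace.contrib.asyncio as listed above (the span names and the work functions are illustrative):

import asyncio

from ddtrace import tracer
from ddtrace.contrib.asyncio import context_provider, ensure_future, run_in_executor

tracer.configure(context_provider=context_provider)

def blocking_work():
    # runs in a separate thread but stays attached to the parent trace
    with tracer.trace('blocking_work'):
        pass

async def child():
    # scheduled Task that inherits the parent Context
    with tracer.trace('child_coroutine'):
        await asyncio.sleep(0.1)

async def parent(loop):
    with tracer.trace('parent'):
        task = ensure_future(child())
        await run_in_executor(loop, None, blocking_work)
        await task

loop = asyncio.get_event_loop()
loop.run_until_complete(parent(loop))
loop.close()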

botocore

The Botocore integration will trace all AWS calls made with the botocore library. Libraries like Boto3 that use Botocore will also be patched.

This integration is automatically patched when using patch_all():

import botocore.session
from ddtrace import patch

# If not patched yet, you can patch botocore specifically
patch(botocore=True)

# This will report spans with the default instrumentation
session = botocore.session.get_session()
lambda_client = session.create_client('lambda', region_name='us-east-1')
# Example of instrumented query
lambda_client.list_functions()

boto2

The boto integration will trace all AWS calls made via boto2. This integration is automatically patched when using patch_all():

import boto.ec2
from ddtrace import patch

# If not patched yet, you can patch boto specifically
patch(boto=True)

# This will report spans with the default instrumentation
ec2 = boto.ec2.connect_to_region("us-west-2")
# Example of instrumented query
ec2.get_all_instances()

Bottle

The bottle integration traces the Bottle web framework. Add the following plugin to your app:

import bottle
from ddtrace import tracer
from ddtrace.contrib.bottle import TracePlugin

app = bottle.Bottle()
plugin = TracePlugin(service="my-web-app")
app.install(plugin)

Cassandra

Instrument Cassandra to report Cassandra queries.

patch_all will automatically patch your Cluster instance to make it work.

from ddtrace import Pin, patch
from cassandra.cluster import Cluster

# If not patched yet, you can patch cassandra specifically
patch(cassandra=True)

# This will report spans with the default instrumentation
cluster = Cluster(contact_points=["127.0.0.1"], port=9042)
session = cluster.connect("my_keyspace")
# Example of instrumented query
session.execute("select id from my_table limit 10;")

# Use a pin to specify metadata related to this cluster
cluster = Cluster(contact_points=['10.1.1.3', '10.1.1.4', '10.1.1.5'], port=9042)
Pin.override(cluster, service='cassandra-backend')
session = cluster.connect("my_keyspace")
session.execute("select id from my_table limit 10;")

Celery

The Celery integration will trace all tasks that are executed in the background. Functions and class-based tasks are traced only if the Celery API is used, so calling the function directly or via the run() method will not generate traces. However, calling apply(), apply_async() and delay() will produce tracing data, as shown after the snippet below. To trace your Celery application, call the patch method:

import celery
from ddtrace import patch

patch(celery=True)
app = celery.Celery()

@app.task
def my_task():
    pass

class MyTask(app.Task):
    def run(self):
        pass
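
For example, invoking the task through the Celery API produces a trace, while calling the task function or run() directly does not:

# These calls generate tracing data
my_task.delay()
my_task.apply_async()

# These calls do NOT generate tracing data
my_task()
my_task.run()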

Configuration

ddtrace.config.celery['distributed_tracing']

Whether or not to pass distributed tracing headers to Celery workers.

Can also be enabled with the DD_CELERY_DISTRIBUTED_TRACING environment variable.

Default: False

ddtrace.config.celery['producer_service_name']

Sets the service name for the producer.

Default: 'celery-producer'

ddtrace.config.celery['worker_service_name']

Sets the service name for the worker.

Default: 'celery-worker'
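
Example (a minimal sketch using the options above; the service name values are illustrative):

from ddtrace import config

# Propagate trace headers to Celery workers
config.celery['distributed_tracing'] = True

# Override the producer and worker service names
config.celery['producer_service_name'] = 'my-app-producer'
config.celery['worker_service_name'] = 'my-app-worker'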

Consul

Instrument Consul to trace KV queries.

Only supports tracing for the synchronous client.

patch_all will automatically patch your Consul client to make it work.

from ddtrace import Pin, patch
import consul

# If not patched yet, you can patch consul specifically
patch(consul=True)

# This will report a span with the default settings
client = consul.Consul(host="127.0.0.1", port=8500)
client.get("my-key")

# Use a pin to specify metadata related to this client
Pin.override(client, service='consul-kv')

Django

The Django integration traces requests, views, template renderers, database and cache calls in a Django application.

Enable Django tracing automatically via ddtrace-run:

ddtrace-run python manage.py runserver

Django tracing can also be enabled manually:

from ddtrace import patch_all
patch_all()

To have Django capture the tracer logs, ensure the LOGGING variable in settings.py looks similar to:

LOGGING = {
    'loggers': {
        'ddtrace': {
            'handlers': ['console'],
            'level': 'WARNING',
        },
    },
}

Configuration

ddtrace.config.django['distributed_tracing_enabled']

Whether or not to parse distributed tracing headers from requests received by your Django app.

Default: True

ddtrace.config.django['service_name']

The service name reported for your Django app.

Can also be configured via the DD_SERVICE_NAME environment variable.

Default: 'django'

ddtrace.config.django['cache_service_name']

The service name reported for your Django app cache layer.

Can also be configured via the DD_DJANGO_CACHE_SERVICE_NAME environment variable.

Default: 'django'

ddtrace.config.django['database_service_name']

A string reported as the service name of the Django app database layer.

Can also be configured via the DD_DJANGO_DATABASE_SERVICE_NAME environment variable.

Takes precedence over database_service_name_prefix.

Default: ''

ddtrace.config.django['database_service_name_prefix']

A string to be prepended to the service name reported for your Django app database layer.

Can also be configured via the DD_DJANGO_DATABASE_SERVICE_NAME_PREFIX environment variable.

The database service name is the name of the database appended with ‘db’. Has a lower precedence than database_service_name.

Default: ''

ddtrace.config.django['instrument_middleware']

Whether or not to instrument middleware.

Can also be enabled with the DD_DJANGO_INSTRUMENT_MIDDLEWARE environment variable.

Default: True

ddtrace.config.django['instrument_databases']

Whether or not to instrument databases.

Default: True

ddtrace.config.django['instrument_caches']

Whether or not to instrument caches.

Default: True

ddtrace.config.django['trace_query_string']

Whether or not to include the query string as a tag.

Default: False

ddtrace.config.django['include_user_name']

Whether or not to include the authenticated user’s username as a tag on the root request span.

Default: True

ddtrace.config.django['use_handler_resource_format']

Whether or not to use the legacy resource format “{method} {handler}”.

The default resource format for Django >= 2.2.0 is otherwise “{method} {urlpattern}”.

Default: False

Example:

from ddtrace import config

# Enable distributed tracing
config.django['distributed_tracing_enabled'] = True

# Override service name
config.django['service_name'] = 'custom-service-name'

Migration from ddtrace<=0.33.0

The Django integration provides automatic migration from enabling tracing using a middleware to the method consistent with our integrations. Application developers are encouraged to convert their configuration of the tracer to the latter.

  1. Remove 'ddtrace.contrib.django' from INSTALLED_APPS in settings.py.
  2. Replace DATADOG_TRACE configuration in settings.py according to the table below.
  3. Remove TraceMiddleware or TraceExceptionMiddleware if used in settings.py.
  4. Enable Django tracing automatically via ddtrace-run or manually by adding ddtrace.patch_all() to settings.py.

The mapping from old configuration settings to new ones:

  • AGENT_HOSTNAME: DD_AGENT_HOST environment variable or tracer.configure(hostname=)
  • AGENT_PORT: DD_TRACE_AGENT_PORT environment variable or tracer.configure(port=)
  • AUTO_INSTRUMENT: N/A, instrumentation is automatic
  • INSTRUMENT_CACHE: config.django['instrument_caches']
  • INSTRUMENT_DATABASE: config.django['instrument_databases']
  • INSTRUMENT_TEMPLATE: N/A, instrumentation is automatic
  • DEFAULT_DATABASE_PREFIX: config.django['database_service_name_prefix']
  • DEFAULT_SERVICE: DD_SERVICE_NAME environment variable or config.django['service_name']
  • DEFAULT_CACHE_SERVICE: config.django['cache_service_name']
  • ENABLED: tracer.configure(enabled=)
  • DISTRIBUTED_TRACING: config.django['distributed_tracing_enabled']
  • TRACE_QUERY_STRING: config.django['trace_query_string']
  • TAGS: DD_TAGS environment variable or tracer.set_tags()
  • TRACER: N/A; if a particular tracer is required for the Django integration, use Pin.override(Pin.get_from(django), tracer=)

Examples

Before:

# settings.py
INSTALLED_APPS = [
    # your Django apps...
    'ddtrace.contrib.django',
]

DATADOG_TRACE = {
    'AGENT_HOSTNAME': 'localhost',
    'AGENT_PORT': 8126,
    'AUTO_INSTRUMENT': True,
    'INSTRUMENT_CACHE': True,
    'INSTRUMENT_DATABASE': True,
    'INSTRUMENT_TEMPLATE': True,
    'DEFAULT_SERVICE': 'my-django-app',
    'DEFAULT_CACHE_SERVICE': 'my-cache',
    'DEFAULT_DATABASE_PREFIX': 'my-',
    'ENABLED': True,
    'DISTRIBUTED_TRACING': True,
    'TRACE_QUERY_STRING': None,
    'TAGS': {'env': 'production'},
    'TRACER': 'my.custom.tracer',
}

After:

# settings.py
INSTALLED_APPS = [
    # your Django apps...
]

from ddtrace import config, tracer
tracer.configure(hostname='localhost', port=8126, enabled=True)
config.django['service_name'] = 'my-django-app'
config.django['cache_service_name'] = 'my-cache'
config.django['database_service_name_prefix'] = 'my-'
config.django['instrument_databases'] = True
config.django['instrument_caches'] = True
config.django['trace_query_string'] = True
tracer.set_tags({'env': 'production'})

import my.custom.tracer
from ddtrace import Pin, patch_all
import django
patch_all()
Pin.override(Pin.get_from(django), tracer=my.custom.tracer)

Headers tracing is supported for this integration.

Elasticsearch

Instrument Elasticsearch to report Elasticsearch queries.

patch_all will automatically patch your Elasticsearch instance to make it work.

from ddtrace import Pin, patch
from elasticsearch import Elasticsearch

# If not patched yet, you can patch elasticsearch specifically
patch(elasticsearch=True)

# This will report spans with the default instrumentation
es = Elasticsearch(port=ELASTICSEARCH_CONFIG['port'])
# Example of instrumented query
es.indices.create(index='books', ignore=400)

# Use a pin to specify metadata related to this client
es = Elasticsearch(port=ELASTICSEARCH_CONFIG['port'])
Pin.override(es.transport, service='elasticsearch-videos')
es.indices.create(index='videos', ignore=400)

Falcon

To trace the Falcon web framework, install the trace middleware:

import falcon
from ddtrace import tracer
from ddtrace.contrib.falcon import TraceMiddleware

mw = TraceMiddleware(tracer, 'my-falcon-app')
falcon.API(middleware=[mw])

You can also use the autopatching functionality:

import falcon
from ddtrace import tracer, patch

patch(falcon=True)

app = falcon.API()

To disable distributed tracing when using autopatching, set the DATADOG_FALCON_DISTRIBUTED_TRACING environment variable to False.

Supported span hooks

The following is a list of available tracer hooks that can be used to intercept and modify spans created by this integration.

  • request
    • Called before the response has been finished
    • def on_falcon_request(span, request, response)

Example:

import falcon
from ddtrace import config, patch_all
patch_all()

app = falcon.API()

@config.falcon.hooks.on('request')
def on_falcon_request(span, request, response):
    span.set_tag('my.custom', 'tag')

Headers tracing is supported for this integration.

Flask

The Flask integration will add tracing to all requests to your Flask application.

This integration will track the entire Flask lifecycle including user-defined endpoints, hooks, signals, and template rendering.

To configure tracing manually:

from ddtrace import patch_all
patch_all()

from flask import Flask

app = Flask(__name__)


@app.route('/')
def index():
    return 'hello world'


if __name__ == '__main__':
    app.run()

You may also enable Flask tracing automatically via ddtrace-run:

ddtrace-run python app.py

Configuration

ddtrace.config.flask['distributed_tracing_enabled']

Whether to parse distributed tracing headers from requests received by your Flask app.

Default: True

ddtrace.config.flask['service_name']

The service name reported for your Flask app.

Can also be configured via the DD_SERVICE environment variable.

Default: 'flask'

ddtrace.config.flask['collect_view_args']

Whether to add request tags for view function argument values.

Default: True

ddtrace.config.flask['template_default_name']

The default template name to use when one does not exist.

Default: <memory>

ddtrace.config.flask['trace_signals']

Whether to trace Flask signals (before_request, after_request, etc).

Default: True

ddtrace.config.flask['extra_error_codes']

A list of response codes that should get marked as errors.

5xx codes are always considered an error.

Default: []

Example:

from ddtrace import config

# Enable distributed tracing
config.flask['distributed_tracing_enabled'] = True

# Override service name
config.flask['service_name'] = 'custom-service-name'

# Report 401, and 403 responses as errors
config.flask['extra_error_codes'] = [401, 403]

Flask Cache

The flask cache tracer will track any access to a cache backend. You can use this tracer together with the Flask tracer middleware.

To install the tracer, from ddtrace import tracer needs to be added:

from ddtrace import tracer
from ddtrace.contrib.flask_cache import get_traced_cache

and the tracer needs to be initialized:

Cache = get_traced_cache(tracer, service='my-flask-cache-app')

Here is the end result, in a sample app:

from flask import Flask

from ddtrace import tracer
from ddtrace.contrib.flask_cache import get_traced_cache

app = Flask(__name__)

# get the traced Cache class
Cache = get_traced_cache(tracer, service='my-flask-cache-app')

# use the Cache as usual with your preferred CACHE_TYPE
cache = Cache(app, config={'CACHE_TYPE': 'simple'})

def counter():
    # this access is traced
    conn_counter = cache.get("conn_counter")

futures

The futures integration propagates the current active Tracing Context between threads. The integration ensures that when operations are executed in a new thread, that thread can continue the previously generated trace.

The integration does not automatically trace thread execution, so manual instrumentation or another integration must be activated. Thread propagation is not enabled by default with the patch_all() method and must be activated as follows:

from ddtrace import patch, patch_all

patch(futures=True)
# or, when instrumenting all libraries
patch_all(futures=True)
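
A minimal sketch of how a trace started in the main thread continues inside a worker thread once the integration is enabled (the span and service names are illustrative):

from concurrent.futures import ThreadPoolExecutor

from ddtrace import patch, tracer

patch(futures=True)

def send_notification():
    # this span is attached to the trace started in the main thread
    with tracer.trace('notification.send'):
        pass  # do the actual work

with tracer.trace('web.request', service='web'):
    with ThreadPoolExecutor(max_workers=2) as executor:
        future = executor.submit(send_notification)
        future.result()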

gevent

The gevent integration adds support for tracing across greenlets.

The integration patches the gevent internals to add context management logic. It also configures the global tracer instance to use a gevent context provider to utilize the context management logic.

If custom tracer instances are being used in a gevent application, then configure them with:

from ddtrace.contrib.gevent import context_provider

# tracer = <your custom tracer instance>
tracer.configure(context_provider=context_provider)

Enabling

The integration is enabled automatically when using ddtrace-run or patch_all().

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(gevent=True)

Note: these calls need to be performed before gevent monkey patching is applied.

Example of the context propagation:

import gevent
from ddtrace import tracer


def my_parent_function():
    with tracer.trace("web.request") as span:
        span.service = "web"
        gevent.spawn(worker_function)


def worker_function():
    # this greenlet continues the trace started by its parent
    with tracer.trace("greenlet.call") as span:
        span.service = "greenlet"
        ...

        with tracer.trace("greenlet.child_call") as child:
            ...

Grpc

The gRPC integration traces the client and server using the interceptor pattern.

Enabling

The gRPC integration is enabled automatically when using ddtrace-run or patch_all().

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(grpc=True)

# use grpc like usual

Global Configuration

ddtrace.config.grpc["service"]

The service name reported by default for gRPC client instances.

This option can also be set with the DD_GRPC_SERVICE environment variable.

Default: "grpc-client"

ddtrace.config.grpc_server["service"]

The service name reported by default for gRPC server instances.

This option can also be set with the DD_SERVICE or DD_GRPC_SERVER_SERVICE environment variables.

Default: "grpc-server"
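
Example (a minimal sketch overriding the default service names above; the values are illustrative):

from ddtrace import config, patch

patch(grpc=True)

# Override the client and server service names
config.grpc["service"] = "my-grpc-client"
config.grpc_server["service"] = "my-grpc-server"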

Instance Configuration

To configure the gRPC integration on a per-channel basis use the Pin API:

import grpc
from ddtrace import Pin, patch, Tracer

patch(grpc=True)
custom_tracer = Tracer()

# override the pin on the client
Pin.override(grpc.Channel, service='mygrpc', tracer=custom_tracer)
with grpc.insecure_channel('localhost:50051') as channel:
    # create stubs and send requests
    pass

To configure the gRPC integration on the server use the Pin API:

import grpc
from grpc.framework.foundation import logging_pool

from ddtrace import Pin, patch, Tracer

patch(grpc=True)
custom_tracer = Tracer()

# override the pin on the server
Pin.override(grpc.Server, service='mygrpc', tracer=custom_tracer)
server = grpc.server(logging_pool.pool(2))
server.add_insecure_port('localhost:50051')
add_MyServicer_to_server(MyServicer(), server)
server.start()

httplib

Patch the built-in httplib/http.client libraries to trace all HTTP calls.

Usage:

# Patch all supported modules/functions
from ddtrace import patch
patch(httplib=True)

# Python 2
import httplib
import urllib

resp = urllib.urlopen('http://www.datadog.com/')

# Python 3
import http.client
import urllib.request

resp = urllib.request.urlopen('http://www.datadog.com/')

httplib spans do not include a default service name. Before HTTP calls are made, ensure a parent span has been started with a service name to be used for spans generated from those calls:

with tracer.trace('main', service='my-httplib-operation'):
    resp = urllib.request.urlopen('http://www.datadog.com/')

The library can be configured globally and per instance, using the Configuration API:

from ddtrace import config

# disable distributed tracing globally
config.httplib['distributed_tracing'] = False

# change the service name/distributed tracing only for this HTTP connection

# Python 2
connection = httplib.HTTPConnection('www.datadog.com')

# Python 3
connection = http.client.HTTPConnection('www.datadog.com')

cfg = config.get_from(connection)
cfg['distributed_tracing'] = False

Headers tracing is supported for this integration.

Jinja2

The jinja2 integration traces template loading, compilation and rendering. Auto instrumentation is available using the patch function. The following is an example:

from ddtrace import patch
from jinja2 import Environment, FileSystemLoader

patch(jinja2=True)

env = Environment(
    loader=FileSystemLoader("templates")
)
template = env.get_template('mytemplate.html')

The library can be configured globally and per instance, using the Configuration API:

from ddtrace import config

# Change service name globally
config.jinja2['service_name'] = 'jinja-templates'

# change the service name only for this environment
cfg = config.get_from(env)
cfg['service_name'] = 'jinja-templates'

By default, the service name is set to None, so it is inherited from the parent span. If there is no parent span and the service name is not overridden, the agent will drop the traces.
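
For example, one way to guarantee the template spans carry a service is to render inside a parent span (the span and service names here are illustrative, reusing the template object from the snippet above):

from ddtrace import tracer

with tracer.trace('web.render', service='my-web-app'):
    # the jinja2 spans inherit the service of this parent span
    template.render(name='datadog')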

kombu

Instrument kombu to report AMQP messaging.

patch_all will not automatically patch your Kombu client to make it work, as this would conflict with the Celery integration. You must specifically request kombu be patched, as in the example below.

Note: To permit distributed tracing for the kombu integration you must enable the tracer with priority sampling. Refer to the documentation here: https://ddtrace.readthedocs.io/en/stable/advanced_usage.html#priority-sampling

Without enabling distributed tracing, spans within a trace generated by the kombu integration might be dropped without the whole trace being dropped.
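
A minimal sketch of enabling priority sampling on the global tracer, assuming the priority_sampling option of tracer.configure() described in the advanced usage documentation linked above:

from ddtrace import tracer

# enable priority sampling so the sampling decision propagates with the trace
tracer.configure(priority_sampling=True)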

from ddtrace import Pin, patch
import kombu

# If not patched yet, you can patch kombu specifically
patch(kombu=True)

# This will report a span with the default settings
conn = kombu.Connection("amqp://guest:guest@127.0.0.1:5672//")
conn.connect()
task_queue = kombu.Queue('tasks', kombu.Exchange('tasks'), routing_key='tasks')
to_publish = {'hello': 'world'}
producer = conn.Producer()
producer.publish(to_publish,
                 exchange=task_queue.exchange,
                 routing_key=task_queue.routing_key,
                 declare=[task_queue])

# Use a pin to specify metadata related to this client
Pin.override(producer, service='kombu-consumer')

Mako

The mako integration traces template rendering. Auto instrumentation is available using the patch function. The following is an example:

from ddtrace import patch
from mako.template import Template

patch(mako=True)

t = Template(filename="index.html")

Molten

The molten web framework is automatically traced by ddtrace when calling patch:

from molten import App, Route
from ddtrace import patch_all; patch_all(molten=True)

def hello(name: str, age: int) -> str:
    return f'Hello {age} year old named {name}!'
app = App(routes=[Route('/hello/{name}/{age}', hello)])

You may also enable molten tracing automatically via ddtrace-run:

ddtrace-run python app.py

Configuration

ddtrace.config.molten['distributed_tracing']

Whether to parse distributed tracing headers from requests received by your Molten app.

Default: True

ddtrace.config.molten['service_name']

The service name reported for your Molten app.

Can also be configured via the DD_SERVICE or DD_MOLTEN_SERVICE_NAME environment variables.

Default: 'molten'
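
Example (a minimal sketch using the options above; the service name value is illustrative):

from ddtrace import config

# Disable distributed tracing
config.molten['distributed_tracing'] = False

# Override service name
config.molten['service_name'] = 'custom-molten-service'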

Mongoengine

Instrument mongoengine to report MongoDB queries.

patch_all will automatically patch your mongoengine connect method to make it work.

from ddtrace import Pin, patch
import mongoengine

# If not patched yet, you can patch mongoengine specifically
patch(mongoengine=True)

# At that point, mongoengine is instrumented with the default settings
mongoengine.connect('db', alias='default')

# Use a pin to specify metadata related to this client
client = mongoengine.connect('db', alias='master')
Pin.override(client, service="mongo-master")

mysql-connector

The mysql integration instruments the mysql library to trace MySQL queries.

Enabling

The mysql integration is enabled automatically when using ddtrace-run or patch_all().

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(mysql=True)

Global Configuration

ddtrace.config.mysql["service"]

The service name reported by default for mysql spans.

This option can also be set with the DD_MYSQL_SERVICE environment variable.

Default: "mysql"

Instance Configuration

To configure the mysql integration on a per-connection basis use the Pin API:

from ddtrace import Pin
# Make sure to import mysql.connector and not the 'connect' function,
# otherwise you won't have access to the patched version
import mysql.connector

# This will report a span with the default settings
conn = mysql.connector.connect(user="alice", password="b0b", host="localhost", port=3306, database="test")

# Use a pin to override the service name for this connection.
Pin.override(conn, service='mysql-users')

cursor = conn.cursor()
cursor.execute("SELECT 6*7 AS the_answer;")

Only the default full-Python integration works. The binary C connector, provided by _mysql_connector, is not supported.

Help on mysql.connector can be found at: https://dev.mysql.com/doc/connector-python/en/

mysqlclient/MySQL-python

The mysqldb integration instruments the mysqlclient and MySQL-python libraries to trace MySQL queries.

Enabling

The integration is enabled automatically when using ddtrace-run or patch_all().

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(mysqldb=True)

Global Configuration

ddtrace.config.mysqldb["service"]

The service name reported by default for spans.

This option can also be set with the DD_MYSQLDB_SERVICE environment variable.

Default: "mysql"

Instance Configuration

To configure the integration on a per-connection basis use the Pin API:

# Make sure to import MySQLdb and not the 'connect' function,
# otherwise you won't have access to the patched version
from ddtrace import Pin
import MySQLdb

# This will report a span with the default settings
conn = MySQLdb.connect(user="alice", passwd="b0b", host="localhost", port=3306, db="test")

# Use a pin to override the service.
Pin.override(conn, service='mysql-users')

cursor = conn.cursor()
cursor.execute("SELECT 6*7 AS the_answer;")

This package works with mysqlclient and MySQL-python. Only the default full-Python integration works. The binary C connector provided by _mysql is not supported.

Help on mysqlclient can be found at: https://mysqlclient.readthedocs.io/

pylibmc

Instrument pylibmc to report Memcached queries.

patch_all will automatically patch your pylibmc client to make it work.

# Be sure to import pylibmc and not pylibmc.Client directly,
# otherwise you won't have access to the patched version
from ddtrace import Pin, patch
import pylibmc

# If not patched yet, you can patch pylibmc specifically
patch(pylibmc=True)

# One client instrumented with default configuration
client = pylibmc.Client(["localhost:11211"])
client.set("key1", "value1")

# Use a pin to specify metadata related to this client
Pin.override(client, service="memcached-sessions")

Pylons

The pylons trace middleware will track request timings. To install the middleware, prepare your WSGI application and do the following:

from pylons.wsgiapp import PylonsApp

from ddtrace import tracer
from ddtrace.contrib.pylons import PylonsTraceMiddleware

app = PylonsApp(...)

traced_app = PylonsTraceMiddleware(app, tracer, service='my-pylons-app')

Then you can define your routes and views as usual.

PynamoDB

The PynamoDB integration traces all db calls made with the pynamodb library through the connection API.

Enabling

The PynamoDB integration is enabled automatically when using ddtrace-run or patch_all().

Or use patch() to manually enable the integration:

import pynamodb
from ddtrace import patch, config
patch(pynamodb=True)

Global Configuration

ddtrace.config.pynamodb["service"]

The service name reported by default for the PynamoDB instance.

This option can also be set with the DD_PYNAMODB_SERVICE environment variable.

Default: "pynamodb"
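
Example (a minimal sketch overriding the default service name above; the value is illustrative):

from ddtrace import config, patch

patch(pynamodb=True)

# Override the service name reported for PynamoDB spans
config.pynamodb["service"] = "my-dynamodb-service"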

PyODBC

The pyodbc integration instruments the pyodbc library to trace pyodbc queries.

Enabling

The integration is enabled automatically when using ddtrace-run or patch_all().

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(pyodbc=True)

Global Configuration

ddtrace.config.pyodbc["service"]

The service name reported by default for pyodbc spans.

This option can also be set with the DD_PYODBC_SERVICE environment variable.

Default: "pyodbc"

Instance Configuration

To configure the integration on a per-connection basis use the Pin API:

from ddtrace import Pin
import pyodbc

# This will report a span with the default settings
db = pyodbc.connect("<connection string>")

# Use a pin to override the service name for the connection.
Pin.override(db, service='pyodbc-users')

cursor = db.cursor()
cursor.execute("select * from users where id = 1")

pymemcache

Instrument pymemcache to report memcached queries.

patch_all will automatically patch the pymemcache Client:

from ddtrace import Pin, patch

# If not patched yet, patch pymemcache specifically
patch(pymemcache=True)

# Import reference to Client AFTER patching
import pymemcache
from pymemcache.client.base import Client

# Use a pin to specify metadata related all clients
Pin.override(pymemcache, service='my-memcached-service')

# This will report a span with the default settings
client = Client(('localhost', 11211))
client.set("my-key", "my-val")

# Use a pin to specify metadata related to this particular client
Pin.override(client, service='my-memcached-service')

Pymemcache HashClient will also be indirectly patched as it uses Client under the hood.

Pymongo

Instrument pymongo to report MongoDB queries.

The pymongo integration works by wrapping pymongo’s MongoClient to trace network calls. Pymongo 3.0 and greater are the currently supported versions. patch_all will automatically patch your MongoClient instance to make it work.

# Be sure to import pymongo and not pymongo.MongoClient directly,
# otherwise you won't have access to the patched version
from ddtrace import Pin, patch
import pymongo

# If not patched yet, you can patch pymongo specifically
patch(pymongo=True)

# At that point, pymongo is instrumented with the default settings
client = pymongo.MongoClient()
# Example of instrumented query
db = client["test-db"]
db.teams.find({"name": "Toronto Maple Leafs"})

# Use a pin to specify metadata related to this client
client = pymongo.MongoClient()
pin = Pin.override(client, service="mongo-master")

pymysql

The pymysql integration instruments the pymysql library to trace MySQL queries.

Enabling

The integration is enabled automatically when using ddtrace-run or patch_all().

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(pymysql=True)

Global Configuration

ddtrace.config.pymysql["service"]

The service name reported by default for pymysql spans.

This option can also be set with the DD_PYMYSQL_SERVICE environment variable.

Default: "mysql"

Instance Configuration

To configure the integration on a per-connection basis use the Pin API:

from ddtrace import Pin
from pymysql import connect

# This will report a span with the default settings
conn = connect(user="alice", password="b0b", host="localhost", port=3306, database="test")

# Use a pin to override the service name for this connection.
Pin.override(conn, service="pymysql-users")


cursor = conn.cursor()
cursor.execute("SELECT 6*7 AS the_answer;")

Pyramid

To trace requests from a Pyramid application, trace your application config:

from pyramid.config import Configurator
from ddtrace.contrib.pyramid import trace_pyramid

settings = {
    'datadog_trace_service' : 'my-web-app-name',
}

config = Configurator(settings=settings)
trace_pyramid(config)

# use your config as normal.
config.add_route('index', '/')

Available settings are:

  • datadog_trace_service: change the pyramid service name
  • datadog_trace_enabled: sets if the Tracer is enabled or not
  • datadog_distributed_tracing: set it to False to disable Distributed Tracing

If you use the pyramid.tweens settings value to set the tweens for your application, you need to add ddtrace.contrib.pyramid:trace_tween_factory explicitly to the list. For example:

settings = {
    'datadog_trace_service' : 'my-web-app-name',
    'pyramid.tweens': 'your_tween_no_1\nyour_tween_no_2\nddtrace.contrib.pyramid:trace_tween_factory',
}

config = Configurator(settings=settings)
trace_pyramid(config)

# use your config as normal.
config.add_route('index', '/')

psycopg

The psycopg integration instruments the psycopg2 library to trace Postgres queries.

Enabling

The psycopg integration is enabled automatically when using ddtrace-run or patch_all().

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(psycopg=True)

Global Configuration

ddtrace.config.psycopg["service"]

The service name reported by default for psycopg spans.

This option can also be set with the DD_PSYCOPG_SERVICE environment variable.

Default: "postgres"

Instance Configuration

To configure the psycopg integration on a per-connection basis use the Pin API:

from ddtrace import Pin
import psycopg2

# "factory" stands in for your own connection factory, if you use one
db = psycopg2.connect(connection_factory=factory)
# Use a pin to override the service name.
Pin.override(db, service="postgres-users")

cursor = db.cursor()
cursor.execute("select * from users where id = 1")

redis

The redis integration traces redis requests.

Enabling

The redis integration is enabled automatically when using ddtrace-run or patch_all().

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(redis=True)

Global Configuration

ddtrace.config.redis["service"]

The service name reported by default for redis traces.

This option can also be set with the DD_REDIS_SERVICE environment variable.

Default: "redis"

Instance Configuration

To configure particular redis instances use the Pin API:

import redis
from ddtrace import Pin

client = redis.StrictRedis(host="localhost", port=6379)

# Override service name for this instance
Pin.override(client, service="my-custom-queue")

# Traces reported for this client will now have "my-custom-queue"
# as the service name.
client.get("my-key")

redis-py-cluster

Instrument rediscluster to report Redis Cluster queries.

patch_all will automatically patch your Redis Cluster client to make it work.

from ddtrace import Pin, patch
import rediscluster

# If not patched yet, you can patch redis specifically
patch(rediscluster=True)

# This will report a span with the default settings
client = rediscluster.StrictRedisCluster(startup_nodes=[{'host':'localhost', 'port':'7000'}])
client.get('my-key')

# Use a pin to specify metadata related to this client
Pin.override(client, service='redis-queue')

Requests

The requests integration traces all HTTP calls to internal or external services. Auto instrumentation is available using the patch function that must be called before importing the requests library. The following is an example:

from ddtrace import patch
patch(requests=True)

import requests
requests.get("https://www.datadoghq.com")

If you would prefer finer grained control, use a TracedSession object as you would a requests.Session:

from ddtrace.contrib.requests import TracedSession

session = TracedSession()
session.get("https://www.datadoghq.com")

The library can be configured globally and per instance, using the Configuration API:

from ddtrace import config

# disable distributed tracing globally
config.requests['distributed_tracing'] = False

# change the service name/distributed tracing only for this session
session = requests.Session()
cfg = config.get_from(session)
cfg['service_name'] = 'auth-api'

Headers tracing is supported for this integration.

Sanic

The Sanic integration will trace requests to and from Sanic.

Enable Sanic tracing automatically via ddtrace-run:

ddtrace-run python app.py

Sanic tracing can also be enabled manually:

from ddtrace import patch_all
patch_all(sanic=True)

from sanic import Sanic
from sanic.response import text

app = Sanic(__name__)

@app.route('/')
def index(request):
    return text('hello world')

if __name__ == '__main__':
    app.run()

If using Python 3.6, the legacy AsyncioContextProvider will have to be enabled before using the middleware:

from ddtrace.contrib.asyncio.provider import AsyncioContextProvider
from ddtrace import tracer  # Or whichever tracer instance you plan to use
tracer.configure(context_provider=AsyncioContextProvider())

Configuration

ddtrace.config.sanic['distributed_tracing_enabled']

Whether to parse distributed tracing headers from requests received by your Sanic app.

Default: True

ddtrace.config.sanic['service_name']

The service name reported for your Sanic app.

Can also be configured via the DD_SERVICE environment variable.

Default: 'sanic'

Example:

from ddtrace import config

# Enable distributed tracing
config.sanic['distributed_tracing_enabled'] = True

# Override service name
config.sanic['service_name'] = 'custom-service-name'

Starlette

The Starlette integration will trace requests to and from Starlette.

Enabling

The starlette integration is enabled automatically when using ddtrace-run or patch_all().

Or use patch() to manually enable the integration:

from ddtrace import patch
from starlette.applications import Starlette

patch(starlette=True)
app = Starlette()

If using Python 3.6, the legacy AsyncioContextProvider will have to be enabled before using the middleware:

from ddtrace.contrib.asyncio.provider import AsyncioContextProvider
from ddtrace import tracer  # Or whichever tracer instance you plan to use
tracer.configure(context_provider=AsyncioContextProvider())

Configuration

ddtrace.config.starlette['distributed_tracing']

Whether to parse distributed tracing headers from requests received by your Starlette app.

Can also be enabled with the DD_TRACE_STARLETTE_DISTRIBUTED_TRACING environment variable.

Default: True

ddtrace.config.starlette['analytics_enabled']

Whether to analyze spans for starlette in App Analytics.

Can also be enabled with the DD_TRACE_STARLETTE_ANALYTICS_ENABLED environment variable.

Default: None

ddtrace.config.starlette['service_name']

The service name reported for your starlette app.

Can also be configured via the DD_SERVICE environment variable.

Default: 'starlette'

ddtrace.config.starlette['request_span_name']

The span name for a starlette request.

Default: 'starlette.request'

Example:

from ddtrace import config

# Enable distributed tracing
config.starlette['distributed_tracing'] = True

# Override service name
config.starlette['service_name'] = 'custom-service-name'

# Override request span name
config.starlette['request_span_name'] = 'custom-request-span-name'

SQLAlchemy

To trace sqlalchemy queries, add instrumentation to the engine class using the patch method that must be called before importing sqlalchemy:

# patch before importing `create_engine`
from ddtrace import Pin, patch
patch(sqlalchemy=True)

# use SQLAlchemy as usual
from sqlalchemy import create_engine

engine = create_engine('sqlite:///:memory:')
engine.connect().execute("SELECT COUNT(*) FROM users")

# Use a PIN to specify metadata related to this engine
Pin.override(engine, service='replica-db')

SQLite

The sqlite integration instruments the built-in sqlite module to trace SQLite queries.

Enabling

The integration is enabled automatically when using ddtrace-run or patch_all().

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(sqlite=True)

Global Configuration

ddtrace.config.sqlite["service"]

The service name reported by default for sqlite spans.

This option can also be set with the DD_SQLITE_SERVICE environment variable.

Default: "sqlite"

Instance Configuration

To configure the integration on a per-connection basis use the Pin API:

from ddtrace import Pin
import sqlite3

# This will report a span with the default settings
db = sqlite3.connect(":memory:")

# Use a pin to override the service name for the connection.
Pin.override(db, service='sqlite-users')

cursor = db.cursor()
cursor.execute("select * from users where id = 1")

Tornado

The Tornado integration traces all RequestHandler defined in a Tornado web application. Auto instrumentation is available using the patch function that must be called before importing the tornado library.

Note: Tornado 5 and 6 are supported only on Python 3.7.

The following is an example:

# patch before importing tornado and concurrent.futures
from ddtrace import tracer, patch
patch(tornado=True)

import tornado.web
import tornado.gen
import tornado.ioloop

# create your handlers
class MainHandler(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        self.write("Hello, world")

# create your application
app = tornado.web.Application([
    (r'/', MainHandler),
])

# and run it as usual
app.listen(8888)
tornado.ioloop.IOLoop.current().start()

When any type of RequestHandler is hit, a request root span is automatically created. If you want to trace more parts of your application, you can use the wrap() decorator and the trace() method as usual:

class MainHandler(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        yield self.notify()
        yield self.blocking_method()
        with tracer.trace('tornado.before_write') as span:
            # trace more work in the handler
            pass

    @tracer.wrap('tornado.executor_handler')
    @tornado.concurrent.run_on_executor
    def blocking_method(self):
        # do something expensive
        pass

    @tracer.wrap('tornado.notify', service='tornado-notification')
    @tornado.gen.coroutine
    def notify(self):
        # do something
        pass

If you are overriding the on_finish or log_exception methods on a RequestHandler, you will need to call the super method to ensure the tracer’s patched methods are called:

class MainHandler(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        self.write("Hello, world")

    def on_finish(self):
        super(MainHandler, self).on_finish()
        # do other clean-up

    def log_exception(self, typ, value, tb):
        super(MainHandler, self).log_exception(typ, value, tb)
        # do other logging

Tornado settings can be used to change some tracing configuration, like:

# FilterRequestsOnUrl is available from ddtrace.filters
settings = {
    'datadog_trace': {
        'default_service': 'my-tornado-app',
        'tags': {'env': 'production'},
        'distributed_tracing': False,
        'settings': {
            'FILTERS':  [
                FilterRequestsOnUrl(r'http://test\.example\.com'),
            ],
        },
    },
}

app = tornado.web.Application([
    (r'/', MainHandler),
], **settings)

The available settings are:

  • default_service (default: tornado-web): set the service name used by the tracer. Usually this configuration must be updated with a meaningful name. Can also be configured via the DD_SERVICE environment variable.
  • tags (default: {}): set global tags that should be applied to all spans.
  • enabled (default: True): define if the tracer is enabled or not. If set to False, the code is still instrumented but no spans are sent to the APM agent.
  • distributed_tracing (default: True): enable distributed tracing if this is called remotely from an instrumented application. We suggest enabling it only for internal services where headers are under your control.
  • agent_hostname (default: localhost): define the hostname of the APM agent.
  • agent_port (default: 8126): define the port of the APM agent.
  • settings (default: {}): Tracer extra settings used to change, for instance, the filtering behavior.

Vertica

The Vertica integration will trace queries made using the vertica-python library.

Vertica will be automatically instrumented with patch_all, or when using the ddtrace-run command.

Vertica is instrumented on import. To instrument Vertica manually use the patch function. Note the ordering of the following statements:

from ddtrace import patch
patch(vertica=True)

import vertica_python

# use vertica_python like usual

To configure the Vertica integration globally you can use the Config API:

from ddtrace import config, patch
patch(vertica=True)

config.vertica['service_name'] = 'my-vertica-database'

To configure the Vertica integration on an instance-per-instance basis use the Pin API:

from ddtrace import Pin, patch, Tracer
patch(vertica=True)

import vertica_python

custom_tracer = Tracer()
conn = vertica_python.connect(**YOUR_VERTICA_CONFIG)

# override the service and tracer to be used
Pin.override(conn, service='myverticaservice', tracer=custom_tracer)