Integrations#
aioredis#
The aioredis integration instruments aioredis requests. Versions 1.3 and above are fully supported.
Enabling#
The aioredis integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(aioredis=True)
Global Configuration#
- ddtrace.config.aioredis["service"]
The service name reported by default for aioredis instances.
This option can also be set with the DD_AIOREDIS_SERVICE environment variable.
Default: "redis"
Instance Configuration#
To configure the aioredis integration on a per-instance basis use the
Pin
API:
import aioredis
from ddtrace import Pin

# aioredis 2.x client creation; with aioredis 1.x, create the client
# with await aioredis.create_redis_pool(...) instead
myaioredis = aioredis.from_url("redis://localhost")
Pin.override(myaioredis, service="myaioredis")
aiobotocore#
The aiobotocore integration will trace all AWS calls made with the aiobotocore
library. This integration is not enabled by default.
Enabling#
The aiobotocore integration is not enabled by default. Use patch() to enable the integration:
from ddtrace import patch
patch(aiobotocore=True)
Configuration#
- ddtrace.config.aiobotocore['tag_no_params']
This opts out of the default behavior of adding span tags for a narrow set of API parameters.
To not collect any API parameters, set ddtrace.config.aiobotocore.tag_no_params = True or set the environment variable DD_AWS_TAG_NO_PARAMS=true.
Default: False
- ddtrace.config.aiobotocore['tag_all_params']
Deprecated: This retains the deprecated behavior of adding span tags for all API parameters that are not explicitly excluded by the integration. These deprecated span tags will be added along with the API parameters enabled by default.
This configuration is ignored if tag_no_params (DD_AWS_TAG_NO_PARAMS) is set to True.
To collect all API parameters, set ddtrace.config.aiobotocore.tag_all_params = True or set the environment variable DD_AWS_TAG_ALL_PARAMS=true.
Default: False
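As a usage sketch (the region and the list_buckets call are illustrative, not required by the integration):
import asyncio

import aiobotocore.session
from ddtrace import patch

patch(aiobotocore=True)

async def list_buckets():
    session = aiobotocore.session.get_session()
    # every AWS API call made through this client produces a span
    async with session.create_client("s3", region_name="us-east-1") as s3:
        await s3.list_buckets()

asyncio.run(list_buckets())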
aiopg#
Instrument aiopg to report a span for each executed Postgres query:
from ddtrace import Pin, patch
import aiopg
# If not patched yet, you can patch aiopg specifically
patch(aiopg=True)
# This will report a span with the default settings
async with aiopg.connect(DSN) as db:
    with (await db.cursor()) as cursor:
        await cursor.execute("SELECT * FROM users WHERE id = 1")
# Use a pin to specify metadata related to this connection
Pin.override(db, service='postgres-users')
algoliasearch#
The Algoliasearch integration will add tracing to your Algolia searches.
from ddtrace import patch_all
patch_all()
from algoliasearch import algoliasearch
client = algoliasearch.Client(<ID>, <API_KEY>)
index = client.init_index(<INDEX_NAME>)
index.search("your query", args={"attributesToRetrieve": "attribute1,attribute2"})
Configuration#
- ddtrace.config.algoliasearch['collect_query_text']
Whether to pass the text of your query on to Datadog. Since this may contain sensitive data, it's off by default.
Default: False
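For example, to opt in to collecting the query text:
from ddtrace import config

# send the query text to Datadog (off by default because it may be sensitive)
config.algoliasearch['collect_query_text'] = True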
aredis#
The aredis integration traces aredis requests.
Enabling#
The aredis integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(aredis=True)
Global Configuration#
- ddtrace.config.aredis["service"]
The service name reported by default for aredis traces.
This option can also be set with the DD_AREDIS_SERVICE environment variable.
Default: "redis"
Instance Configuration#
To configure particular aredis instances use the Pin
API:
import aredis
from ddtrace import Pin
client = aredis.StrictRedis(host="localhost", port=6379)
# Override service name for this instance
Pin.override(client, service="my-custom-queue")
# Traces reported for this client will now have "my-custom-queue"
# as the service name.
async def example():
    await client.get("my-key")
asgi#
The asgi middleware traces all requests to an ASGI-compliant application.
To configure tracing manually:
from ddtrace.contrib.asgi import TraceMiddleware
# app = <your asgi app>
app = TraceMiddleware(app)
Then use ddtrace-run when serving your application. For example, if serving with Uvicorn:
ddtrace-run uvicorn app:app
If using Python 3.6, the legacy AsyncioContextProvider
will have to be
enabled before using the middleware:
from ddtrace.contrib.asyncio.provider import AsyncioContextProvider
from ddtrace import tracer # Or whichever tracer instance you plan to use
tracer.configure(context_provider=AsyncioContextProvider())
The middleware also supports using a custom function for handling exceptions for a trace:
from ddtrace.contrib.asgi import TraceMiddleware
def custom_handle_exception_span(exc, span):
    span.set_tag("http.status_code", 501)
# app = <your asgi app>
app = TraceMiddleware(app, handle_exception_span=custom_handle_exception_span)
To retrieve the request span from the scope of an ASGI request use the span_from_scope
function:
from ddtrace.contrib.asgi import span_from_scope
def handle_request(scope, send):
    span = span_from_scope(scope)
    if span:
        span.set_tag(...)
    ...
Configuration#
- ddtrace.config.asgi['distributed_tracing']
Whether to use distributed tracing headers from requests received by your ASGI app.
Default: True
- ddtrace.config.asgi['service_name']
The service name reported for your ASGI app.
Can also be configured via the DD_SERVICE environment variable.
Default: 'asgi'
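Both options can be changed at runtime through the config API; a minimal sketch:
from ddtrace import config

# use a custom service name and ignore incoming distributed tracing headers
config.asgi['service_name'] = 'my-asgi-service'
config.asgi['distributed_tracing'] = False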
aiohttp#
The aiohttp
integration traces requests made with the client or to the server.
The client is automatically instrumented while the server must be manually instrumented using middleware.
Client#
Enabling#
The client integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(aiohttp=True)
Global Configuration#
- ddtrace.config.aiohttp_client['distributed_tracing']
Include distributed tracing headers in requests sent from the aiohttp client.
This option can also be set with the DD_AIOHTTP_CLIENT_DISTRIBUTED_TRACING environment variable.
Default: True
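For example, to stop injecting distributed tracing headers into outgoing client requests:
from ddtrace import config

# do not propagate trace headers on aiohttp client requests
config.aiohttp_client['distributed_tracing'] = False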
Server#
Enabling#
Automatic instrumentation is not available for the server, instead
the provided trace_app
function must be used:
from aiohttp import web
from ddtrace import tracer, patch
from ddtrace.contrib.aiohttp import trace_app
# create your application
app = web.Application()
app.router.add_get('/', home_handler)
# trace your application handlers
trace_app(app, tracer, service='async-api')
web.run_app(app, port=8000)
Integration settings are attached to your application under the datadog_trace
namespace. You can read or update them as follows:
# disables distributed tracing for all received requests
app['datadog_trace']['distributed_tracing_enabled'] = False
Available settings are:
- tracer (default: ddtrace.tracer): set the default tracer instance that is used to trace aiohttp internals. By default the ddtrace tracer is used.
- service (default: aiohttp-web): set the service name used by the tracer. Usually this configuration must be updated with a meaningful name.
- distributed_tracing_enabled (default: True): enable distributed tracing during the middleware execution, so that a new span is created with the given trace_id and parent_id injected via request headers.
When a request span is created, a new Context
for this logical execution is attached
to the request
object, so that it can be used in the application code:
async def home_handler(request):
    ctx = request['datadog_context']
    # do something with the tracing Context
All HTTP tags are supported for this integration.
aiomysql#
The aiomysql integration instruments the aiomysql library to trace MySQL queries.
Enabling#
The integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(aiomysql=True)
Instance Configuration#
To configure the integration on a per-connection basis use the Pin API:
from ddtrace import Pin
import asyncio
import aiomysql
# This will report a span with the default settings
conn = await aiomysql.connect(host="127.0.0.1", port=3306,
user="root", password="", db="mysql",
loop=loop)
# Use a pin to override the service name for this connection.
Pin.override(conn, service="mysql-users")
cur = await conn.cursor()
await cur.execute("SELECT 6*7 AS the_answer;")
aiohttp_jinja2#
The aiohttp_jinja2
integration adds tracing of template rendering.
Enabling#
The integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(aiohttp_jinja2=True)
asyncio#
This integration provides context management for tracing the execution flow of concurrent asyncio.Task instances.
This integration is only necessary in Python < 3.7 (where contextvars is not supported). On Python >= 3.7 this works automatically without configuration.
For asynchronous execution tracing to work properly the tracer must be configured as follows:
import asyncio
from ddtrace import tracer
from ddtrace.contrib.asyncio import context_provider
# enable asyncio support
tracer.configure(context_provider=context_provider)
async def some_work():
    with tracer.trace('asyncio.some_work'):
        # do something
        pass

# launch your coroutines as usual
loop = asyncio.get_event_loop()
loop.run_until_complete(some_work())
loop.close()
In addition, helpers are provided to simplify how the tracing Context is handled between scheduled coroutines and functions invoked in separate threads (see the sketch after this list):
- set_call_context(task, ctx): attach the context to the given Task so that it will be available from tracer.current_trace_context()
- ensure_future(coro_or_future, *, loop=None): wrapper for asyncio.ensure_future that attaches the current context to a new Task instance
- run_in_executor(loop, executor, func, *args): wrapper for loop.run_in_executor that attaches the current context to the new thread so that the trace can be resumed regardless of when it's executed
- create_task(coro): creates a new asyncio Task that inherits the current active Context so that traces generated in the new task are attached to the main trace
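A minimal sketch of the helpers (assuming they are importable from ddtrace.contrib.asyncio, the module that also exposes context_provider):
import asyncio

from ddtrace import tracer
from ddtrace.contrib.asyncio import context_provider, ensure_future

tracer.configure(context_provider=context_provider)

async def child():
    # this span is attached to the trace started in parent()
    with tracer.trace('child_work'):
        await asyncio.sleep(0.1)

async def parent():
    with tracer.trace('parent_work'):
        # ensure_future propagates the active context to the new Task
        await ensure_future(child())

loop = asyncio.get_event_loop()
loop.run_until_complete(parent())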
asyncpg#
The asyncpg
integration traces database requests made using connection
and cursor objects.
Enabling#
The integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(asyncpg=True)
Global Configuration#
- ddtrace.config.asyncpg['service']
The service name reported by default for asyncpg connections.
This option can also be set with the DD_ASYNCPG_SERVICE environment variable.
Default: postgres
Instance Configuration#
Service#
To configure the service name used by the asyncpg integration on a per-instance
basis use the Pin
API:
import asyncpg
from ddtrace import Pin
conn = asyncpg.connect("postgres://localhost:5432")
Pin.override(conn, service="custom-service")
botocore#
The Botocore integration will trace all AWS calls made with the botocore library. Libraries like Boto3 that use Botocore will also be patched.
Enabling#
The botocore integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(botocore=True)
Configuration#
- ddtrace.config.botocore['distributed_tracing']
Whether to inject distributed tracing data to requests in SQS, SNS, EventBridge, Kinesis Streams and Lambda.
Can also be enabled with the DD_BOTOCORE_DISTRIBUTED_TRACING environment variable.
Default: True
- ddtrace.config.botocore['invoke_with_legacy_context']
This preserves legacy behavior when tracing directly invoked Python and Node Lambda functions instrumented with datadog-lambda-python < v41 or datadog-lambda-js < v3.58.0.
Legacy support for older libraries is available with ddtrace.config.botocore.invoke_with_legacy_context = True or by setting the environment variable DD_BOTOCORE_INVOKE_WITH_LEGACY_CONTEXT=true.
Default: False
- ddtrace.config.botocore['operations'][<operation>].error_statuses = "<error statuses>"
Definition of which HTTP status codes mark a span as an error span.
By default, response status codes of '500-599' are considered errors for all endpoints.
Example marking 404 and 5xx as errors for s3.headobject API calls:
from ddtrace import config
config.botocore['operations']['s3.headobject'].error_statuses = '404,500-599'
See HTTP - Custom Error Codes documentation for more examples.
- ddtrace.config.botocore['tag_no_params']
This opts out of the default behavior of collecting a narrow set of API parameters as span tags.
To not collect any API parameters, set ddtrace.config.botocore.tag_no_params = True or set the environment variable DD_AWS_TAG_NO_PARAMS=true.
Default: False
- ddtrace.config.botocore['tag_all_params']
Deprecated: This retains the deprecated behavior of adding span tags for all API parameters that are not explicitly excluded by the integration. These deprecated span tags will be added along with the API parameters enabled by default.
This configuration is ignored if tag_no_params (DD_AWS_TAG_NO_PARAMS) is set to True.
To collect all API parameters, set ddtrace.config.botocore.tag_all_params = True or set the environment variable DD_AWS_TAG_ALL_PARAMS=true.
Default: False
Example:
from ddtrace import config
# Enable distributed tracing
config.botocore['distributed_tracing'] = True
boto2#
Boto integration will trace all AWS calls made via boto2.
Enabling#
The boto integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(boto=True)
Configuration#
- ddtrace.config.boto['tag_no_params']
This opts out of the default behavior of collecting a narrow set of API parameters as span tags.
To not collect any API parameters, set ddtrace.config.boto.tag_no_params = True or set the environment variable DD_AWS_TAG_NO_PARAMS=true.
Default: False
- ddtrace.config.boto['tag_all_params']
Deprecated: This retains the deprecated behavior of adding span tags for all API parameters that are not explicitly excluded by the integration. These deprecated span tags will be added along with the API parameters enabled by default.
This configuration is ignored if tag_no_params (DD_AWS_TAG_NO_PARAMS) is set to True.
To collect all API parameters, set ddtrace.config.boto.tag_all_params = True or set the environment variable DD_AWS_TAG_ALL_PARAMS=true.
Default: False
Bottle#
The bottle integration traces the Bottle web framework. Add the following plugin to your app:
import bottle
from ddtrace import tracer
from ddtrace.contrib.bottle import TracePlugin
app = bottle.Bottle()
plugin = TracePlugin(service="my-web-app")
app.install(plugin)
All HTTP tags are supported for this integration.
Configuration#
- ddtrace.config.bottle['distributed_tracing']
Whether to parse distributed tracing headers from requests received by your bottle app.
Can also be enabled with the DD_BOTTLE_DISTRIBUTED_TRACING environment variable.
Default: True
Example:
from ddtrace import config
# Enable distributed tracing
config.bottle['distributed_tracing'] = True
Cassandra#
Instrument Cassandra to report Cassandra queries.
patch_all
will automatically patch your Cluster instance to make it work.
from ddtrace import Pin, patch
from cassandra.cluster import Cluster
# If not patched yet, you can patch cassandra specifically
patch(cassandra=True)
# This will report spans with the default instrumentation
cluster = Cluster(contact_points=["127.0.0.1"], port=9042)
session = cluster.connect("my_keyspace")
# Example of instrumented query
session.execute("select id from my_table limit 10;")
# Use a pin to specify metadata related to this cluster
cluster = Cluster(contact_points=['10.1.1.3', '10.1.1.4', '10.1.1.5'], port=9042)
Pin.override(cluster, service='cassandra-backend')
session = cluster.connect("my_keyspace")
session.execute("select id from my_table limit 10;")
Celery#
The Celery integration will trace all tasks that are executed in the
background. Functions and class based tasks are traced only if the Celery API
is used, so calling the function directly or via the run() method will not generate traces. However, calling apply(), apply_async() and delay() will produce tracing data. To trace your Celery application, call the patch method:
import celery
from ddtrace import patch
patch(celery=True)
app = celery.Celery()
@app.task
def my_task():
    pass

class MyTask(app.Task):
    def run(self):
        pass
Configuration#
- ddtrace.config.celery['distributed_tracing']
Whether or not to pass distributed tracing headers to Celery workers.
Can also be enabled with the DD_CELERY_DISTRIBUTED_TRACING environment variable.
Default: False
- ddtrace.config.celery['producer_service_name']
Sets the service name for the producer.
Default: 'celery-producer'
- ddtrace.config.celery['worker_service_name']
Sets the service name for the worker.
Default: 'celery-worker'
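These options can be combined through the config API; for example (the service names are illustrative):
from ddtrace import config

# propagate trace context to workers and rename both sides
config.celery['distributed_tracing'] = True
config.celery['producer_service_name'] = 'order-producer'
config.celery['worker_service_name'] = 'order-worker'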
CherryPy#
The CherryPy trace middleware will track request timings. It uses CherryPy hooks and creates a tool to track requests and errors.
Usage#
To install the middleware, add:
from ddtrace import tracer
from ddtrace.contrib.cherrypy import TraceMiddleware
and create a TraceMiddleware object:
traced_app = TraceMiddleware(cherrypy, tracer, service="my-cherrypy-app")
Configuration#
- ddtrace.config.cherrypy['distributed_tracing']
Whether to parse distributed tracing headers from requests received by your CherryPy app.
Can also be enabled with the DD_CHERRYPY_DISTRIBUTED_TRACING environment variable.
Default: True
- ddtrace.config.cherrypy['service']
The service name reported for your CherryPy app.
Can also be configured via the DD_SERVICE environment variable.
Default: 'cherrypy'
Example: here is the end result, in a sample app:
import cherrypy
from ddtrace import tracer, Pin
from ddtrace.contrib.cherrypy import TraceMiddleware
TraceMiddleware(cherrypy, tracer, service="my-cherrypy-app")
@cherrypy.tools.tracer()
class HelloWorld(object):
    def index(self):
        return "Hello World"
    index.exposed = True

cherrypy.quickstart(HelloWorld())
Consul#
Instrument Consul to trace KV queries.
Only supports tracing for the synchronous client.
patch_all
will automatically patch your Consul client to make it work.
from ddtrace import Pin, patch
import consul
# If not patched yet, you can patch consul specifically
patch(consul=True)
# This will report a span with the default settings
client = consul.Consul(host="127.0.0.1", port=8500)
client.get("my-key")
# Use a pin to specify metadata related to this client
Pin.override(client, service='consul-kv')
Datadog Lambda#
The aws_lambda integration currently enables traces to be sent before an impending timeout in an AWS Lambda function instrumented with the Datadog Lambda Python package.
Enabling#
The aws_lambda integration is enabled automatically for AWS Lambda functions which have been instrumented with Datadog.
Global Configuration#
This integration is configured automatically. The datadog_lambda package calls patch_all when DD_TRACE_ENABLED is set to true.
Calling patch for it manually is not recommended, since it would do nothing in environments that do not meet the criteria above.
Configuration#
Important
You can configure some features with environment variables.
- ddtrace.contrib.aws_lambda.DD_APM_FLUSH_DEADLINE_MILLISECONDS
When to flush unfinished spans before an impending timeout.
Default: AWS Lambda function timeout limit.
For additional configuration refer to Instrumenting Python Serverless Applications by Datadog.
Django#
The Django integration traces requests, views, template renderers, database and cache calls in a Django application.
Enable Django tracing automatically via ddtrace-run
:
ddtrace-run python manage.py runserver
Django tracing can also be enabled manually:
from ddtrace import patch_all
patch_all()
To have Django capture the tracer logs, ensure the LOGGING
variable in
settings.py
looks similar to:
LOGGING = {
    'loggers': {
        'ddtrace': {
            'handlers': ['console'],
            'level': 'WARNING',
        },
    },
}
Configuration#
Important
Note that in-code configuration must run before Django is instrumented. This means it will not work with ddtrace-run, and it must happen before any call to patch or patch_all.
- ddtrace.config.django['distributed_tracing_enabled']
Whether or not to parse distributed tracing headers from requests received by your Django app.
Default: True
- ddtrace.config.django['service_name']
The service name reported for your Django app.
Can also be configured via the DD_SERVICE environment variable.
Default: 'django'
- ddtrace.config.django['cache_service_name']
The service name reported for your Django app cache layer.
Can also be configured via the DD_DJANGO_CACHE_SERVICE_NAME environment variable.
Default: 'django'
- ddtrace.config.django['database_service_name']
A string reported as the service name of the Django app database layer.
Can also be configured via the DD_DJANGO_DATABASE_SERVICE_NAME environment variable.
Takes precedence over database_service_name_prefix.
Default: ''
- ddtrace.config.django['database_service_name_prefix']
A string to be prepended to the service name reported for your Django app database layer.
Can also be configured via the DD_DJANGO_DATABASE_SERVICE_NAME_PREFIX environment variable.
The database service name is the name of the database appended with 'db'. Has a lower precedence than database_service_name.
Default: ''
- ddtrace.config.django["trace_fetch_methods"]
Whether or not to trace fetch methods.
Can also be configured via the DD_DJANGO_TRACE_FETCH_METHODS environment variable.
Default: False
- ddtrace.config.django['instrument_middleware']
Whether or not to instrument middleware.
Can also be enabled with the DD_DJANGO_INSTRUMENT_MIDDLEWARE environment variable.
Default: True
- ddtrace.config.django['instrument_templates']
Whether or not to instrument template rendering.
Can also be enabled with the DD_DJANGO_INSTRUMENT_TEMPLATES environment variable.
Default: True
- ddtrace.config.django['instrument_databases']
Whether or not to instrument databases.
Can also be enabled with the DD_DJANGO_INSTRUMENT_DATABASES environment variable.
Default: True
- ddtrace.config.django['instrument_caches']
Whether or not to instrument caches.
Can also be enabled with the DD_DJANGO_INSTRUMENT_CACHES environment variable.
Default: True
- ddtrace.config.django['trace_query_string']
Whether or not to include the query string as a tag.
Default: False
- ddtrace.config.django['include_user_name']
Whether or not to include the authenticated user's username as a tag on the root request span.
Can also be configured via the DD_DJANGO_INCLUDE_USER_NAME environment variable.
Default: True
- ddtrace.config.django['use_handler_resource_format']
Whether or not to use the resource format "{method} {handler}". Can also be enabled with the DD_DJANGO_USE_HANDLER_RESOURCE_FORMAT environment variable.
The default resource format for Django >= 2.2.0 is otherwise "{method} {urlpattern}".
Default: False
- ddtrace.config.django['use_handler_with_url_name_resource_format']
Whether or not to use the resource format "{method} {handler}.{url_name}". Can also be enabled with the DD_DJANGO_USE_HANDLER_WITH_URL_NAME_RESOURCE_FORMAT environment variable.
This configuration applies only for Django <= 2.2.0.
Default: False
- ddtrace.config.django['use_legacy_resource_format']
Whether or not to use the legacy resource format "{handler}". Can also be enabled with the DD_DJANGO_USE_LEGACY_RESOURCE_FORMAT environment variable.
The default resource format for Django >= 2.2.0 is otherwise "{method} {urlpattern}".
Default: False
Example:
from ddtrace import config
# Enable distributed tracing
config.django['distributed_tracing_enabled'] = True
# Override service name
config.django['service_name'] = 'custom-service-name'
Headers tracing is supported for this integration.
dogpile.cache#
Instrument dogpile.cache to report all cached lookups.
This will add spans around the calls to your cache backend (e.g. redis, memory, etc). The spans will also include the following tags:
- key/keys: The key(s) dogpile passed to your backend. Note that this will be the output of the region's function_key_generator, but before any key mangling is applied (i.e. the region's key_mangler).
- region: Name of the region.
- backend: Name of the backend class.
- hit: Whether the key was found in the cache.
- expired: Whether the key is expired. This is only relevant if the key was found.
While cache tracing will generally already have keys in tags, some caching setups will not have useful tag values: for example, when you're using consistent hashing with memcached, the key(s) will appear as a mangled hash.
# Patch before importing dogpile.cache
from ddtrace import patch
patch(dogpile_cache=True)
from dogpile.cache import make_region
region = make_region().configure(
"dogpile.cache.pylibmc",
expiration_time=3600,
arguments={"url": ["127.0.0.1"]},
)
@region.cache_on_arguments()
def hello(name):
    # Some complicated, slow calculation
    return "Hello, {}".format(name)
Elasticsearch#
The Elasticsearch integration will trace Elasticsearch queries.
Enabling#
The elasticsearch integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import Pin, patch
from elasticsearch import Elasticsearch
patch(elasticsearch=True)
# This will report spans with the default instrumentation
es = Elasticsearch(port=ELASTICSEARCH_CONFIG['port'])
# Example of instrumented query
es.indices.create(index='books', ignore=400)
# Use a pin to specify metadata related to this client
es = Elasticsearch(port=ELASTICSEARCH_CONFIG['port'])
Pin.override(es.transport, service='elasticsearch-videos')
es.indices.create(index='videos', ignore=400)
OpenSearch is also supported (opensearch-py):
from ddtrace import patch
from opensearchpy import OpenSearch
patch(elasticsearch=True)
os = OpenSearch()
# Example of instrumented query
os.indices.create(index='books', ignore=400)
Configuration#
- ddtrace.config.elasticsearch['service']
The service name reported for your elasticsearch app.
Example:
from ddtrace import config
# Override service name
config.elasticsearch['service'] = 'custom-service-name'
Falcon#
To trace the falcon web framework, install the trace middleware:
import falcon
from ddtrace import tracer
from ddtrace.contrib.falcon import TraceMiddleware
mw = TraceMiddleware(tracer, 'my-falcon-app')
falcon.API(middleware=[mw])
You can also use the autopatching functionality:
import falcon
from ddtrace import tracer, patch
patch(falcon=True)
app = falcon.API()
To disable distributed tracing when using autopatching, set the DD_FALCON_DISTRIBUTED_TRACING environment variable to False.
Supported span hooks
The following is a list of available tracer hooks that can be used to intercept and modify spans created by this integration.
- request
Called before the response has been finished.
Signature: def on_falcon_request(span, request, response)
Example:
import falcon
from ddtrace import config, patch_all
patch_all()
app = falcon.API()
@config.falcon.hooks.on('request')
def on_falcon_request(span, request, response):
    span.set_tag('my.custom', 'tag')
Headers tracing is supported for this integration.
Fastapi#
The fastapi integration will trace requests to and from FastAPI.
Enabling#
The fastapi integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
from fastapi import FastAPI
patch(fastapi=True)
app = FastAPI()
If using Python 3.6, the legacy AsyncioContextProvider
will have to be
enabled before using the middleware:
from ddtrace.contrib.asyncio.provider import AsyncioContextProvider
from ddtrace import tracer # Or whichever tracer instance you plan to use
tracer.configure(context_provider=AsyncioContextProvider())
Configuration#
- ddtrace.config.fastapi['service_name']
The service name reported for your fastapi app.
Can also be configured via the DD_SERVICE environment variable.
Default: 'fastapi'
- ddtrace.config.fastapi['request_span_name']
The span name for a fastapi request.
Default: 'fastapi.request'
Example:
from ddtrace import config
# Override service name
config.fastapi['service_name'] = 'custom-service-name'
# Override request span name
config.fastapi['request_span_name'] = 'custom-request-span-name'
Flask#
The Flask integration will add tracing to all requests to your Flask application.
This integration will track the entire Flask lifecycle including user-defined endpoints, hooks, signals, and template rendering.
To configure tracing manually:
from ddtrace import patch_all
patch_all()
from flask import Flask
app = Flask(__name__)
@app.route('/')
def index():
    return 'hello world'

if __name__ == '__main__':
    app.run()
You may also enable Flask tracing automatically via ddtrace-run:
ddtrace-run python app.py
Configuration#
- ddtrace.config.flask['distributed_tracing_enabled']
Whether to parse distributed tracing headers from requests received by your Flask app.
Default: True
- ddtrace.config.flask['service_name']
The service name reported for your Flask app.
Can also be configured via the DD_SERVICE environment variable.
Default: 'flask'
- ddtrace.config.flask['collect_view_args']
Whether to add request tags for view function argument values.
Default: True
- ddtrace.config.flask['template_default_name']
The default template name to use when one does not exist.
Default: <memory>
- ddtrace.config.flask['trace_signals']
Whether to trace Flask signals (before_request, after_request, etc.).
Default: True
Example:
from ddtrace import config
# Enable distributed tracing
config.flask['distributed_tracing_enabled'] = True
# Override service name
config.flask['service_name'] = 'custom-service-name'
# Report 401, and 403 responses as errors
config.http_server.error_statuses = '401,403'
All HTTP tags are supported for this integration.
Flask Cache#
The flask cache tracer will track any access to a cache backend. You can use this tracer together with the Flask tracer middleware.
The tracer supports both Flask-Cache and Flask-Caching.
To install the tracer, add the following import:
from ddtrace import tracer
from ddtrace.contrib.flask_cache import get_traced_cache
and the tracer needs to be initialized:
Cache = get_traced_cache(tracer, service='my-flask-cache-app')
Here is the end result, in a sample app:
from flask import Flask
from ddtrace import tracer
from ddtrace.contrib.flask_cache import get_traced_cache
app = Flask(__name__)
# get the traced Cache class
Cache = get_traced_cache(tracer, service='my-flask-cache-app')
# use the Cache as usual with your preferred CACHE_TYPE
cache = Cache(app, config={'CACHE_TYPE': 'simple'})
def counter():
    # this access is traced
    conn_counter = cache.get("conn_counter")
Use a specific Cache
implementation with:
from ddtrace import tracer
from ddtrace.contrib.flask_cache import get_traced_cache
from flask_caching import Cache
Cache = get_traced_cache(tracer, service='my-flask-cache-app', cache_cls=Cache)
futures#
The futures
integration propagates the current active tracing context
between threads. The integration ensures that when operations are executed
in a new thread, that thread can continue the previously generated trace.
Enabling#
The futures integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(futures=True)
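Once enabled, a span started in the main thread can be continued from a worker thread; a minimal sketch:
from concurrent.futures import ThreadPoolExecutor

from ddtrace import patch, tracer

patch(futures=True)

def work():
    # runs in the worker thread; this span becomes a child of "parent"
    with tracer.trace("child"):
        pass

with tracer.trace("parent"):
    with ThreadPoolExecutor(max_workers=1) as executor:
        executor.submit(work).result()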
gevent#
The gevent integration adds support for tracing across greenlets.
The integration patches the gevent internals to add context management logic.
Note
If ddtrace-run is not being used, be sure to import ddtrace.auto before calling gevent.monkey.patch_all.
If ddtrace-run is being used, no additional configuration is required.
The integration also configures the global tracer instance to use a gevent context provider to utilize the context management logic.
If custom tracer instances are being used in a gevent application, configure them with:
from ddtrace.contrib.gevent import context_provider
tracer.configure(context_provider=context_provider)
Enabling#
The integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(gevent=True)
Example of the context propagation:
def my_parent_function():
    with tracer.trace("web.request") as span:
        span.service = "web"
        gevent.spawn(worker_function)

def worker_function():
    # then trace its child
    with tracer.trace("greenlet.call") as span:
        span.service = "greenlet"
        ...
        with tracer.trace("greenlet.child_call") as child:
            ...
graphql#
This integration instruments graphql-core
queries.
Enabling#
The graphql integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(graphql=True)
import graphql
...
Global Configuration#
- ddtrace.config.graphql["service"]
The service name reported by default for graphql instances.
This option can also be set with the DD_SERVICE environment variable.
Default: "graphql"
- ddtrace.config.graphql["resolvers_enabled"]
To enable graphql.resolve spans, set DD_TRACE_GRAPHQL_RESOLVERS_ENABLED to True.
Default: False
Enabling instrumentation for resolvers will produce a graphql.resolve span for every graphql field. For complex graphql queries this could produce large traces.
To configure the graphql integration using the
Pin
API:
from ddtrace import Pin
import graphql
Pin.override(graphql, service="mygraphql")
Grpc#
The gRPC integration traces the client and server using the interceptor pattern.
Enabling#
The gRPC integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(grpc=True)
# use grpc like usual
Global Configuration#
- ddtrace.config.grpc["service"]
The service name reported by default for gRPC client instances.
This option can also be set with the DD_GRPC_SERVICE environment variable.
Default: "grpc-client"
- ddtrace.config.grpc_server["service"]
The service name reported by default for gRPC server instances.
This option can also be set with the DD_SERVICE or DD_GRPC_SERVER_SERVICE environment variables.
Default: "grpc-server"
Instance Configuration#
To configure the gRPC integration on a per-channel basis use the Pin API:
import grpc
from ddtrace import Pin, patch, Tracer
patch(grpc=True)
custom_tracer = Tracer()
# override the pin on the client
Pin.override(grpc.Channel, service='mygrpc', tracer=custom_tracer)
with grpc.insecure_channel('localhost:50051') as channel:
    # create stubs and send requests
    pass
To configure the gRPC integration on the server use the Pin
API:
import grpc
from grpc.framework.foundation import logging_pool
from ddtrace import Pin, patch, Tracer
patch(grpc=True)
custom_tracer = Tracer()
# override the pin on the server
Pin.override(grpc.Server, service='mygrpc', tracer=custom_tracer)
server = grpc.server(logging_pool.pool(2))
server.add_insecure_port('localhost:50051')
add_MyServicer_to_server(MyServicer(), server)
server.start()
gunicorn#
ddtrace works with Gunicorn.
Note
If you cannot wrap your Gunicorn server with the ddtrace-run command and it uses gevent workers, be sure to import ddtrace.auto as early as possible in your application's lifecycle.
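A sketch of that setup, assuming a gunicorn.conf.py config file and an application module app:app (both names are illustrative):
# gunicorn.conf.py
# Import ddtrace.auto before Gunicorn spawns gevent workers, so tracing is
# set up before gevent monkey-patches the standard library.
import ddtrace.auto  # noqa: F401

worker_class = "gevent"
workers = 4
The server would then be started with gunicorn -c gunicorn.conf.py app:app.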
httplib#
The httplib integration traces HTTP requests made with the standard library httplib/http.client modules.
Enabling#
The httplib integration is disabled by default. It can be enabled when using ddtrace-run with the DD_TRACE_HTTPLIB_ENABLED environment variable:
DD_TRACE_HTTPLIB_ENABLED=true ddtrace-run ....
The integration can also be enabled manually in code with patch_all():
from ddtrace import patch_all
patch_all(httplib=True)
Global Configuration#
- ddtrace.config.httplib['distributed_tracing']
Include distributed tracing headers in requests sent from httplib.
This option can also be set with the DD_HTTPLIB_DISTRIBUTED_TRACING environment variable.
Default: True
Instance Configuration#
The integration can be configured per instance:
from ddtrace import config
# Disable distributed tracing globally.
config.httplib['distributed_tracing'] = False
# Change the distributed tracing option only for this HTTP
# connection.
# Python 2
connection = httplib.HTTPConnection('www.datadog.com')
# Python 3
connection = http.client.HTTPConnection('www.datadog.com')
cfg = config.get_from(connection)
cfg['distributed_tracing'] = True
Headers tracing is supported for this integration.
httpx#
The httpx integration traces all HTTP requests made with the httpx
library.
Enabling#
The httpx integration is enabled automatically when using ddtrace-run or patch_all().
Alternatively, use patch() to manually enable the integration:
from ddtrace import patch
patch(httpx=True)
# use httpx like usual
Global Configuration#
- ddtrace.config.httpx['service']
The default service name for httpx requests. By default the httpx integration will not define a service name and will inherit its service name from its parent span.
If you are making calls to uninstrumented third-party applications you can set this setting, use the ddtrace.config.httpx['split_by_domain'] setting, or use a Pin to override an individual connection's settings (see the Instance Configuration example below for Pin usage).
This option can also be set with the DD_HTTPX_SERVICE environment variable.
Default: None
- ddtrace.config.httpx['distributed_tracing']
Whether or not to inject distributed tracing headers into requests.
Default: True
- ddtrace.config.httpx['split_by_domain']
Whether or not to use the domain name of requests as the service name. This setting can be overridden with session overrides (described in the Instance Configuration section).
This setting takes precedence over ddtrace.config.httpx['service'].
Default: False
Instance Configuration#
To configure particular httpx
client instances use the Pin
API:
import httpx
from ddtrace import Pin
client = httpx.Client()
# Override service name for this instance
Pin.override(client, service="custom-http-service")
async_client = httpx.AsyncClient()

# Override service name for this instance
Pin.override(async_client, service="custom-async-http-service")
Headers tracing is supported for this integration.
HTTP Tagging is supported for this integration.
Jinja2#
The jinja2
integration traces templates loading, compilation and rendering.
Auto instrumentation is available using patch. The following is an example:
from ddtrace import patch
from jinja2 import Environment, FileSystemLoader
patch(jinja2=True)
env = Environment(
loader=FileSystemLoader("templates")
)
template = env.get_template('mytemplate.html')
The library can be configured globally and per instance, using the Configuration API:
from ddtrace import config
# Change service name globally
config.jinja2['service_name'] = 'jinja-templates'
# change the service name only for this environment
cfg = config.get_from(env)
cfg['service_name'] = 'jinja-templates'
By default, the service name is set to None, so it is inherited from the parent span. If there is no parent span and the service name is not overridden, the agent will drop the traces.
kombu#
Instrument kombu to report AMQP messaging.
patch_all
will not automatically patch your Kombu client to make it work, as this would conflict with the
Celery integration. You must specifically request kombu be patched, as in the example below.
Note: To permit distributed tracing for the kombu integration you must enable the tracer with priority sampling. Refer to the documentation here: https://ddtrace.readthedocs.io/en/stable/advanced_usage.html#priority-sampling
Without enabling distributed tracing, spans within a trace generated by the kombu integration might be dropped without the whole trace being dropped.
from ddtrace import Pin, patch
import kombu
# If not patched yet, you can patch kombu specifically
patch(kombu=True)
# This will report a span with the default settings
conn = kombu.Connection("amqp://guest:guest@127.0.0.1:5672//")
conn.connect()
task_queue = kombu.Queue('tasks', kombu.Exchange('tasks'), routing_key='tasks')
to_publish = {'hello': 'world'}
producer = conn.Producer()
producer.publish(to_publish,
exchange=task_queue.exchange,
routing_key=task_queue.routing_key,
declare=[task_queue])
# Use a pin to specify metadata related to this client
Pin.override(producer, service='kombu-producer')
Mako#
The mako
integration traces templates rendering.
Auto instrumentation is available using patch. The following is an example:
from ddtrace import patch
from mako.template import Template
patch(mako=True)
t = Template(filename="index.html")
MariaDB#
The MariaDB integration instruments the MariaDB library to trace queries.
Enabling#
The MariaDB integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(mariadb=True)
Global Configuration#
- ddtrace.config.mariadb["service"]
The service name reported by default for MariaDB spans.
This option can also be set with the DD_MARIADB_SERVICE environment variable.
Default: "mariadb"
Instance Configuration#
To configure the mariadb integration on a per-connection basis use the Pin API:
from ddtrace import Pin
from ddtrace import patch
# Make sure to patch before importing mariadb
patch(mariadb=True)
import mariadb

# This will report a span with the default settings
conn = mariadb.connect(user="alice", password="b0b", host="localhost", port=3306, database="test")
# Use a pin to override the service name for this connection.
Pin.override(conn, service="mariadb-users")
cursor = conn.cursor()
cursor.execute("SELECT 6*7 AS the_answer;")
Molten#
The molten web framework is automatically traced by ddtrace
when calling patch
:
from molten import App, Route
from ddtrace import patch_all; patch_all(molten=True)
def hello(name: str, age: int) -> str:
    return f'Hello {age} year old named {name}!'
app = App(routes=[Route('/hello/{name}/{age}', hello)])
You may also enable molten tracing automatically via ddtrace-run
:
ddtrace-run python app.py
Configuration#
- ddtrace.config.molten['distributed_tracing']
Whether to parse distributed tracing headers from requests received by your Molten app.
Default: True
- ddtrace.config.molten['service_name']
The service name reported for your Molten app.
Can also be configured via the DD_SERVICE or DD_MOLTEN_SERVICE environment variables.
Default: 'molten'
All HTTP tags are supported for this integration.
Mongoengine#
Instrument mongoengine to report MongoDB queries.
patch_all
will automatically patch your mongoengine connect method to make it work.
from ddtrace import Pin, patch
import mongoengine
# If not patched yet, you can patch mongoengine specifically
patch(mongoengine=True)
# At that point, mongoengine is instrumented with the default settings
mongoengine.connect('db', alias='default')
# Use a pin to specify metadata related to this client
client = mongoengine.connect('db', alias='master')
Pin.override(client, service="mongo-master")
mysql-connector#
The mysql integration instruments the mysql library to trace MySQL queries.
Enabling#
The mysql integration is enabled automatically when using
ddtrace-run or patch_all()
.
Or use patch()
to manually enable the integration:
from ddtrace import patch
patch(mysql=True)
Global Configuration#
- ddtrace.config.mysql["service"]
The service name reported by default for mysql spans.
This option can also be set with the DD_MYSQL_SERVICE environment variable.
Default: "mysql"
- ddtrace.config.mysql["trace_fetch_methods"]
Whether or not to trace fetch methods.
Can also be configured via the DD_MYSQL_TRACE_FETCH_METHODS environment variable.
Default: False
Instance Configuration#
To configure the mysql integration on a per-connection basis use the Pin API:
from ddtrace import Pin
# Make sure to import mysql.connector and not the 'connect' function,
# otherwise you won't have access to the patched version
import mysql.connector
# This will report a span with the default settings
conn = mysql.connector.connect(user="alice", password="b0b", host="localhost", port=3306, database="test")
# Use a pin to override the service name for this connection.
Pin.override(conn, service='mysql-users')
cursor = conn.cursor()
cursor.execute("SELECT 6*7 AS the_answer;")
Only the default full-Python integration works. The binary C connector, provided by _mysql_connector, is not supported.
Help on mysql.connector can be found on: https://dev.mysql.com/doc/connector-python/en/
mysqlclient#
The mysqldb integration instruments the mysqlclient library to trace MySQL queries.
Enabling#
The integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(mysqldb=True)
Global Configuration#
- ddtrace.config.mysqldb["service"]
The service name reported by default for spans.
This option can also be set with the DD_MYSQLDB_SERVICE environment variable.
Default: "mysql"
- ddtrace.config.mysqldb["trace_fetch_methods"]
Whether or not to trace fetch methods.
Can also be configured via the DD_MYSQLDB_TRACE_FETCH_METHODS environment variable.
Default: False
- ddtrace.config.mysqldb["trace_connect"]
Whether or not to trace connecting.
Can also be configured via the DD_MYSQLDB_TRACE_CONNECT environment variable.
Note that if you are overriding the service name via the Pin on an individual cursor, that will not affect connect traces. The service name must also be overridden on the Pin on the MySQLdb module, as shown in the sketch below.
Default: False
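Following the note above, a sketch of overriding the service name for connect traces by pinning the MySQLdb module itself:
from ddtrace import Pin
import MySQLdb

# pin the module so that connect spans also report the custom service
Pin.override(MySQLdb, service="mysql-users")

conn = MySQLdb.connect(user="alice", passwd="b0b", host="localhost", port=3306, db="test")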
Instance Configuration#
To configure the integration on a per-connection basis use the Pin API:
# Make sure to import MySQLdb and not the 'connect' function,
# otherwise you won't have access to the patched version
from ddtrace import Pin
import MySQLdb
# This will report a span with the default settings
conn = MySQLdb.connect(user="alice", passwd="b0b", host="localhost", port=3306, db="test")
# Use a pin to override the service.
Pin.override(conn, service='mysql-users')
cursor = conn.cursor()
cursor.execute("SELECT 6*7 AS the_answer;")
This package works for mysqlclient. Only the default full-Python integration works. The binary C connector provided by _mysql is not supported.
Help on mysqlclient can be found on: https://mysqlclient.readthedocs.io/
pylibmc#
Instrument pylibmc to report Memcached queries.
patch_all
will automatically patch your pylibmc client to make it work.
# Be sure to import pylibmc and not pylibmc.Client directly,
# otherwise you won't have access to the patched version
from ddtrace import Pin, patch
import pylibmc
# If not patched yet, you can patch pylibmc specifically
patch(pylibmc=True)
# One client instrumented with default configuration
client = pylibmc.Client(["localhost:11211"])
client.set("key1", "value1")
# Use a pin to specify metadata related to this client
Pin.override(client, service="memcached-sessions")
Pylons#
The Pylons integration traces requests and template rendering in a Pylons application.
Enabling#
To enable the Pylons integration, wrap a Pylons application with the provided
PylonsTraceMiddleware
:
from pylons.wsgiapp import PylonsApp
from ddtrace import tracer
from ddtrace.contrib.pylons import PylonsTraceMiddleware
app = PylonsApp(...)
traced_app = PylonsTraceMiddleware(app, tracer, service="my-pylons-app")
Global Configuration#
- ddtrace.config.pylons['distributed_tracing']
Whether to parse distributed tracing headers from requests received by your pylons app.
Can also be enabled with the DD_PYLONS_DISTRIBUTED_TRACING environment variable.
Default: True
Example:
from ddtrace import config

# Enable distributed tracing
config.pylons['distributed_tracing'] = True
- ddtrace.config.pylons["service"]
The service name reported by default for Pylons requests.
This option can also be set with the DD_SERVICE environment variable.
Default: "pylons"
All HTTP tags are supported for this integration.
PynamoDB#
The PynamoDB integration traces all db calls made with the pynamodb library through the connection API.
Enabling#
The PynamoDB integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
import pynamodb
from ddtrace import patch, config
patch(pynamodb=True)
Global Configuration#
- ddtrace.config.pynamodb["service"]
The service name reported by default for the PynamoDB instance.
This option can also be set with the DD_PYNAMODB_SERVICE environment variable.
Default: "pynamodb"
PyODBC#
The pyodbc integration instruments the pyodbc library to trace pyodbc queries.
Enabling#
The integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(pyodbc=True)
Global Configuration#
- ddtrace.config.pyodbc["service"]
The service name reported by default for pyodbc spans.
This option can also be set with the DD_PYODBC_SERVICE environment variable.
Default: "pyodbc"
- ddtrace.config.pyodbc["trace_fetch_methods"]
Whether or not to trace fetch methods.
Can also be configured via the DD_PYODBC_TRACE_FETCH_METHODS environment variable.
Default: False
Instance Configuration#
To configure the integration on a per-connection basis use the Pin API:
from ddtrace import Pin
import pyodbc
# This will report a span with the default settings
db = pyodbc.connect("<connection string>")
# Use a pin to override the service name for the connection.
Pin.override(db, service='pyodbc-users')
cursor = db.cursor()
cursor.execute("select * from users where id = 1")
pymemcache#
Instrument pymemcache to report memcached queries.
patch_all
will automatically patch the pymemcache Client
:
from ddtrace import Pin, patch
# If not patched yet, patch pymemcache specifically
patch(pymemcache=True)
# Import reference to Client AFTER patching
import pymemcache
from pymemcache.client.base import Client
# Use a pin to specify metadata related all clients
Pin.override(pymemcache, service='my-memcached-service')
# This will report a span with the default settings
client = Client(('localhost', 11211))
client.set("my-key", "my-val")
# Use a pin to specify metadata related to this particular client
Pin.override(client, service='my-memcached-service')
Pymemcache HashClient
will also be indirectly patched as it uses Client
under the hood.
Pymongo#
Instrument pymongo to report MongoDB queries.
The pymongo integration works by wrapping pymongo’s MongoClient to trace
network calls. Pymongo 3.0 and greater are the currently supported versions.
patch_all
will automatically patch your MongoClient instance to make it work.
# Be sure to import pymongo and not pymongo.MongoClient directly,
# otherwise you won't have access to the patched version
from ddtrace import Pin, patch
import pymongo
# If not patched yet, you can patch pymongo specifically
patch(pymongo=True)
# At that point, pymongo is instrumented with the default settings
client = pymongo.MongoClient()
# Example of instrumented query
db = client["test-db"]
db.teams.find({"name": "Toronto Maple Leafs"})
# Use a pin to specify metadata related to this client
client = pymongo.MongoClient()
Pin.override(client, service="mongo-master")
Global Configuration#
- ddtrace.config.pymongo["service"]
The service name reported by default for pymongo spans.
The option can also be set with the DD_PYMONGO_SERVICE environment variable.
Default: "pymongo"
pymysql#
The pymysql integration instruments the pymysql library to trace MySQL queries.
Enabling#
The integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(pymysql=True)
Global Configuration#
- ddtrace.config.pymysql["service"]
The service name reported by default for pymysql spans.
This option can also be set with the DD_PYMYSQL_SERVICE environment variable.
Default: "mysql"
- ddtrace.config.pymysql["trace_fetch_methods"]
Whether or not to trace fetch methods.
Can also be configured via the DD_PYMYSQL_TRACE_FETCH_METHODS environment variable.
Default: False
Instance Configuration#
To configure the integration on a per-connection basis use the Pin API:
from ddtrace import Pin
from pymysql import connect
# This will report a span with the default settings
conn = connect(user="alice", password="b0b", host="localhost", port=3306, database="test")
# Use a pin to override the service name for this connection.
Pin.override(conn, service="pymysql-users")
cursor = conn.cursor()
cursor.execute("SELECT 6*7 AS the_answer;")
Pyramid#
To trace requests from a Pyramid application, trace your application config:
from pyramid.config import Configurator
from ddtrace.contrib.pyramid import trace_pyramid
settings = {
    'datadog_trace_service': 'my-web-app-name',
}
config = Configurator(settings=settings)
trace_pyramid(config)
# use your config as normal.
config.add_route('index', '/')
Available settings are:
- datadog_trace_service: change the Pyramid service name
- datadog_trace_enabled: sets whether the Tracer is enabled or not
- datadog_distributed_tracing: set it to False to disable distributed tracing
If you use the pyramid.tweens
settings value to set the tweens for your
application, you need to add ddtrace.contrib.pyramid:trace_tween_factory
explicitly to the list. For example:
settings = {
    'datadog_trace_service': 'my-web-app-name',
    'pyramid.tweens': 'your_tween_no_1\nyour_tween_no_2\nddtrace.contrib.pyramid:trace_tween_factory',
}
config = Configurator(settings=settings)
trace_pyramid(config)
# use your config as normal.
config.add_route('index', '/')
All HTTP tags are supported for this integration.
pytest#
The pytest integration traces test executions.
Enabling#
Enable traced execution of tests using the pytest runner by running pytest --ddtrace or by modifying any configuration file read by pytest (pytest.ini, setup.cfg, ...):
[pytest]
ddtrace = 1
You can enable all integrations by using the --ddtrace-patch-all option alongside --ddtrace or by adding this to your configuration:
[pytest]
ddtrace = 1
ddtrace-patch-all = 1
Note
The ddtrace plugin for pytest has the side effect of importing the ddtrace package and starting a global tracer.
If this is causing issues for your pytest runs where traced execution of tests is not enabled, you can deactivate the plugin:
[pytest]
addopts = -p no:ddtrace
See the pytest documentation for more details.
Global Configuration#
- ddtrace.config.pytest["service"]
The service name reported by default for pytest traces.
This option can also be set with the integration-specific DD_PYTEST_SERVICE environment variable, or more generally with the DD_SERVICE environment variable.
Default: the name of the repository being tested, or "pytest" if the repository name cannot be found.
- ddtrace.config.pytest["operation_name"]
The operation name reported by default for pytest traces.
This option can also be set with the DD_PYTEST_OPERATION_NAME environment variable.
Default: "pytest.test"
pytest-bdd#
The pytest-bdd integration traces executions of scenarios and steps.
Enabling#
Please follow the instructions for enabling pytest integration.
Note
The ddtrace.pytest_bdd plugin for pytest-bdd has the side effect of importing the ddtrace package and starting a global tracer.
If this is causing issues for your pytest-bdd runs where traced execution of tests is not enabled, you can deactivate the plugin:
[pytest]
addopts = -p no:ddtrace.pytest_bdd
See the pytest documentation for more details.
psycopg#
The psycopg integration instruments the psycopg2 library to trace Postgres queries.
Enabling#
The psycopg integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(psycopg=True)
Global Configuration#
- ddtrace.config.psycopg["service"]
The service name reported by default for psycopg spans.
This option can also be set with the DD_PSYCOPG_SERVICE environment variable.
Default: "postgres"
- ddtrace.config.psycopg["trace_fetch_methods"]
Whether or not to trace fetch methods.
Can also be configured via the DD_PSYCOPG_TRACE_FETCH_METHODS environment variable.
Default: False
- ddtrace.config.psycopg["trace_connect"]
Whether or not to trace the psycopg2.connect method.
Can also be configured via the DD_PSYCOPG_TRACE_CONNECT environment variable.
Default: False
Instance Configuration#
To configure the psycopg integration on a per-connection basis use the Pin API:
from ddtrace import Pin
import psycopg2
db = psycopg2.connect(connection_factory=factory)
# Use a pin to override the service name.
Pin.override(db, service="postgres-users")
cursor = db.cursor()
cursor.execute("select * from users where id = 1")
redis#
The redis integration traces redis requests.
Enabling#
The redis integration is enabled automatically when using ddtrace-run or patch_all().
Or use patch() to manually enable the integration:
from ddtrace import patch
patch(redis=True)
Global Configuration#
- ddtrace.config.redis["service"]
The service name reported by default for redis traces.
This option can also be set with the
DD_REDIS_SERVICE
environment variable.Default:
"redis"
Instance Configuration#
To configure particular redis instances use the Pin
API:
import redis
from ddtrace import Pin
client = redis.StrictRedis(host="localhost", port=6379)
# Override service name for this instance
Pin.override(client, service="my-custom-queue")
# Traces reported for this client will now have "my-custom-queue"
# as the service name.
client.get("my-key")
redis-py-cluster#
Instrument rediscluster to report Redis Cluster queries.
patch_all
will automatically patch your Redis Cluster client to make it work.
from ddtrace import Pin, patch
import rediscluster
# If not patched yet, you can patch redis specifically
patch(rediscluster=True)
# This will report a span with the default settings
client = rediscluster.StrictRedisCluster(startup_nodes=[{'host':'localhost', 'port':'7000'}])
client.get('my-key')
# Use a pin to specify metadata related to this client
Pin.override(client, service='redis-queue')
Global Configuration#
- ddtrace.config.rediscluster["service"]
The service name reported by default for rediscluster spans.
This option can also be set with the
DD_REDISCLUSTER_SERVICE
environment variable.Default:
'rediscluster'
Requests#
The requests
integration traces all HTTP requests made with the requests
library.
The default service name is requests, but it can be configured to match the services that the requests are made to.
Enabling#
The requests integration is enabled automatically when using
ddtrace-run or patch_all()
.
Or use patch()
to manually enable the integration:
from ddtrace import patch
patch(requests=True)
# use requests like usual
Global Configuration#
- ddtrace.config.requests['service']
The service name reported by default for requests queries. This value will be overridden by an instance override or if the split_by_domain setting is enabled.
This option can also be set with the
DD_REQUESTS_SERVICE
environment variable.Default:
"requests"
- ddtrace.config.requests['distributed_tracing']
Whether or not to parse distributed tracing headers.
Default:
True
- ddtrace.config.requests['trace_query_string']
Whether or not to include the query string as a tag.
Default:
False
- ddtrace.config.requests['split_by_domain']
Whether or not to use the domain name of requests as the service name. This setting can be overridden with session overrides (described in the Instance Configuration section).
Default:
False
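As a minimal sketch of split_by_domain (the target URL is illustrative), spans for each request report the request's domain as the service name:
from ddtrace import config, patch
import requests

patch(requests=True)
config.requests['split_by_domain'] = True

# This request is reported with service name "api.example.com".
requests.get("https://api.example.com/users")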
Instance Configuration#
To set configuration options for all requests made with a requests.Session
object,
use the config API:
from ddtrace import config
from requests import Session
session = Session()
cfg = config.get_from(session)
cfg['service_name'] = 'auth-api'
cfg['distributed_tracing'] = False
RQ#
The RQ integration will trace your jobs.
Usage#
The rq integration is enabled automatically when using
ddtrace-run or patch_all()
.
Or use patch()
to manually enable the integration:
from ddtrace import patch
patch(rq=True)
Worker Usage#
ddtrace-run
can be used to easily trace your workers:
DD_SERVICE=myworker ddtrace-run rq worker
Instance Configuration#
To override the service name for a queue:
import redis
import rq
from ddtrace import Pin

connection = redis.Redis()
queue = rq.Queue(connection=connection)
Pin.override(queue, service="custom_queue_service")
To override the service name for a particular worker:
worker = rq.SimpleWorker([queue], connection=queue.connection)
Pin.override(worker, service="custom_worker_service")
Global Configuration#
- ddtrace.config.rq['distributed_tracing_enabled']
- ddtrace.config.rq_worker['distributed_tracing_enabled']
If
True
the integration will connect the traces sent between the enqueuer and the RQ worker.This option can also be set with the
DD_RQ_DISTRIBUTED_TRACING_ENABLED
environment variable on either the enqueuer or worker applications.Default:
True
- ddtrace.config.rq['service']
The service name reported by default for RQ spans from the app.
This option can also be set with the
DD_SERVICE
orDD_RQ_SERVICE
environment variables.Default:
rq
- ddtrace.config.rq_worker['service']
The service name reported by default for RQ spans from workers.
This option can also be set with the
DD_SERVICE
environment variable.Default:
rq-worker
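A minimal sketch of the enqueue side (the task path myapp.tasks.my_task is hypothetical):
import redis
import rq
from ddtrace import patch

patch(rq=True)

queue = rq.Queue(connection=redis.Redis())
# With distributed tracing enabled (the default), trace context is
# propagated with the job, so the worker's spans join the enqueuer's trace.
queue.enqueue("myapp.tasks.my_task", 42)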
Sanic#
The Sanic integration will trace requests to and from Sanic.
Enable Sanic tracing automatically via ddtrace-run
:
ddtrace-run python app.py
Sanic tracing can also be enabled manually:
from ddtrace import patch_all
patch_all(sanic=True)
from sanic import Sanic
from sanic.response import text
app = Sanic(__name__)
@app.route('/')
async def index(request):
    return text('hello world')

if __name__ == '__main__':
    app.run()
If using Python 3.6, the legacy AsyncioContextProvider
will have to be
enabled before using the middleware:
from ddtrace.contrib.asyncio.provider import AsyncioContextProvider
from ddtrace import tracer # Or whichever tracer instance you plan to use
tracer.configure(context_provider=AsyncioContextProvider())
Configuration#
- ddtrace.config.sanic['distributed_tracing_enabled']
Whether to parse distributed tracing headers from requests received by your Sanic app.
Default:
True
- ddtrace.config.sanic['service_name']
The service name reported for your Sanic app.
Can also be configured via the
DD_SERVICE
environment variable.Default:
'sanic'
Example:
from ddtrace import config
# Enable distributed tracing
config.sanic['distributed_tracing_enabled'] = True
# Override service name
config.sanic['service_name'] = 'custom-service-name'
Snowflake#
The snowflake integration instruments the snowflake-connector-python
library to trace Snowflake queries.
Note that this integration is in beta.
Enabling#
The integration is not enabled automatically when using
ddtrace-run or patch_all()
.
Use patch()
to manually enable the integration:
from ddtrace import patch, patch_all
patch(snowflake=True)
patch_all(snowflake=True)
or set the DD_TRACE_SNOWFLAKE_ENABLED=true
environment variable to enable it with ddtrace-run
.
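For example (the entry point app.py is illustrative):
DD_TRACE_SNOWFLAKE_ENABLED=true ddtrace-run python app.py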
Global Configuration#
- ddtrace.config.snowflake["service"]
The service name reported by default for snowflake spans.
This option can also be set with the
DD_SNOWFLAKE_SERVICE
environment variable.Default:
"snowflake"
- ddtrace.config.snowflake["trace_fetch_methods"]
Whether or not to trace fetch methods.
Can also be configured via the
DD_SNOWFLAKE_TRACE_FETCH_METHODS
environment variable.Default:
False
Instance Configuration#
To configure the integration on a per-connection basis use the
Pin
API:
from ddtrace import Pin
from snowflake.connector import connect
# This will report a span with the default settings
conn = connect(user="alice", password="b0b", account="dev")
# Use a pin to override the service name for this connection.
Pin.override(conn, service="snowflake-dev")
cursor = conn.cursor()
cursor.execute("SELECT current_version()")
Starlette#
The Starlette integration will trace requests to and from Starlette.
Enabling#
The starlette integration is enabled automatically when using
ddtrace-run or patch_all()
.
Or use patch()
to manually enable the integration:
from ddtrace import patch
from starlette.applications import Starlette
patch(starlette=True)
app = Starlette()
If using Python 3.6, the legacy AsyncioContextProvider
will have to be
enabled before using the middleware:
from ddtrace.contrib.asyncio.provider import AsyncioContextProvider
from ddtrace import tracer # Or whichever tracer instance you plan to use
tracer.configure(context_provider=AsyncioContextProvider())
Configuration#
- ddtrace.config.starlette['distributed_tracing']
Whether to parse distributed tracing headers from requests received by your Starlette app.
Can also be enabled with the
DD_STARLETTE_DISTRIBUTED_TRACING
environment variable.Default:
True
- ddtrace.config.starlette['analytics_enabled']
Whether to analyze spans for starlette in App Analytics.
Can also be enabled with the
DD_STARLETTE_ANALYTICS_ENABLED
environment variable.Default:
None
- ddtrace.config.starlette['service_name']
The service name reported for your starlette app.
Can also be configured via the
DD_SERVICE
environment variable.Default:
'starlette'
- ddtrace.config.starlette['request_span_name']
The span name for a starlette request.
Default:
'starlette.request'
Example:
from ddtrace import config
# Enable distributed tracing
config.starlette['distributed_tracing'] = True
# Override service name
config.starlette['service_name'] = 'custom-service-name'
# Override request span name
config.starlette['request_span_name'] = 'custom-request-span-name'
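Putting it together, a minimal traced app might look like this sketch (the route and handler are illustrative):
from ddtrace import patch
patch(starlette=True)

from starlette.applications import Starlette
from starlette.responses import PlainTextResponse
from starlette.routing import Route

async def homepage(request):
    return PlainTextResponse("hello world")

app = Starlette(routes=[Route("/", homepage)])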
SQLAlchemy#
Enabling the SQLAlchemy integration is only necessary if there is no instrumentation available or enabled for the underlying database engine (e.g. pymysql, psycopg, mysql-connector).
To trace sqlalchemy queries, add instrumentation to the engine class using the patch method, which must be called before importing sqlalchemy:
# patch before importing `create_engine`
from ddtrace import Pin, patch
patch(sqlalchemy=True)
# use SQLAlchemy as usual
from sqlalchemy import create_engine
engine = create_engine('sqlite:///:memory:')
engine.connect().execute("SELECT COUNT(*) FROM users")
# Use a PIN to specify metadata related to this engine
Pin.override(engine, service='replica-db')
SQLite#
The sqlite integration instruments the built-in sqlite3 module to trace SQLite queries.
Enabling#
The integration is enabled automatically when using
ddtrace-run or patch_all()
.
Or use patch()
to manually enable the integration:
from ddtrace import patch
patch(sqlite=True)
Global Configuration#
- ddtrace.config.sqlite["service"]
The service name reported by default for sqlite spans.
This option can also be set with the
DD_SQLITE_SERVICE
environment variable.Default:
"sqlite"
- ddtrace.config.sqlite["trace_fetch_methods"]
Whether or not to trace fetch methods.
Can also be configured via the
DD_SQLITE_TRACE_FETCH_METHODS
environment variable.Default:
False
Instance Configuration#
To configure the integration on a per-connection basis use the
Pin
API:
from ddtrace import Pin
import sqlite3
# This will report a span with the default settings
db = sqlite3.connect(":memory:")
# Use a pin to override the service name for the connection.
Pin.override(db, service='sqlite-users')
cursor = db.cursor()
cursor.execute("select * from users where id = 1")
Tornado#
The Tornado integration traces all RequestHandler
classes defined in a Tornado web application.
Auto instrumentation is available using the patch
function, which must be called before
importing the tornado library.
Note: This integration requires Python 3.7 and above for Tornado 5 and 6.
The following is an example:
# patch before importing tornado and concurrent.futures
from ddtrace import tracer, patch
patch(tornado=True)
import tornado.web
import tornado.gen
import tornado.ioloop
# create your handlers
class MainHandler(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        self.write("Hello, world")

# create your application
app = tornado.web.Application([
    (r'/', MainHandler),
])

# and run it as usual
app.listen(8888)
tornado.ioloop.IOLoop.current().start()
When any type of RequestHandler
is hit, a request root span is automatically created. If
you want to trace more parts of your application, you can use the wrap()
decorator and
the trace()
method as usual:
class MainHandler(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        yield self.notify()
        yield self.blocking_method()
        with tracer.trace('tornado.before_write') as span:
            # trace more work in the handler
            pass

    @tracer.wrap('tornado.executor_handler')
    @tornado.concurrent.run_on_executor
    def blocking_method(self):
        # do something expensive
        pass

    @tracer.wrap('tornado.notify', service='tornado-notification')
    @tornado.gen.coroutine
    def notify(self):
        # do something
        pass
If you are overriding the on_finish
or log_exception
methods on a
RequestHandler
, you will need to call the super method to ensure the
tracer’s patched methods are called:
class MainHandler(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        self.write("Hello, world")

    def on_finish(self):
        super(MainHandler, self).on_finish()
        # do other clean-up

    def log_exception(self, typ, value, tb):
        super(MainHandler, self).log_exception(typ, value, tb)
        # do other logging
Tornado settings can be used to change some tracing configuration, like:
from ddtrace.filters import FilterRequestsOnUrl

settings = {
    'datadog_trace': {
        'default_service': 'my-tornado-app',
        'tags': {'env': 'production'},
        'distributed_tracing': False,
        'settings': {
            'FILTERS': [
                FilterRequestsOnUrl(r'http://test\.example\.com'),
            ],
        },
    },
}

app = tornado.web.Application([
    (r'/', MainHandler),
], **settings)
The available settings are:
- default_service (default: tornado-web): set the service name used by the tracer. Usually this configuration must be updated with a meaningful name. Can also be configured via the DD_SERVICE environment variable.
- tags (default: {}): set global tags that should be applied to all spans.
- enabled (default: True): define if the tracer is enabled or not. If set to false, the code is still instrumented but no spans are sent to the APM agent.
- distributed_tracing (default: None): enable distributed tracing if this is called remotely from an instrumented application. Overrides the integration config, which is configured via the DD_TORNADO_DISTRIBUTED_TRACING environment variable. We suggest enabling it only for internal services where headers are under your control.
- agent_hostname (default: localhost): define the hostname of the APM agent.
- agent_port (default: 8126): define the port of the APM agent.
- settings (default: {}): tracer extra settings used to change, for instance, the filtering behavior.
urllib3#
The urllib3
integration traces HTTP calls made with the urllib3 library, with optional
support for distributed tracing across the services the client communicates with.
Enabling#
The urllib3 integration is not enabled by default. To enable it, use patch_all() with the DD_TRACE_URLLIB3_ENABLED environment variable set, or call patch() with the urllib3 argument set to True, before importing and using urllib3:
from ddtrace import patch
patch(urllib3=True)
# use urllib3 like usual
Global Configuration#
- ddtrace.config.urllib3['service']
The service name reported by default for urllib3 client instances.
This option can also be set with the
DD_URLLIB3_SERVICE
environment variable.Default:
"urllib3"
- ddtrace.config.urllib3['distributed_tracing']
Whether or not to parse distributed tracing headers.
Default:
True
- ddtrace.config.urllib3['trace_query_string']
Whether or not to include the query string as a tag.
Default:
False
- ddtrace.config.urllib3['split_by_domain']
Whether or not to use the domain name of requests as the service name.
Default:
False
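A minimal sketch combining these options (the target URL is illustrative):
from ddtrace import config, patch

patch(urllib3=True)
config.urllib3['split_by_domain'] = True
config.urllib3['trace_query_string'] = True

import urllib3

http = urllib3.PoolManager()
# This request is reported with service name "www.example.com".
http.request("GET", "https://www.example.com/?q=1")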
Vertica#
The Vertica integration will trace queries made using the vertica-python library.
Vertica will be automatically instrumented with patch_all
, or when using
the ddtrace-run
command.
Vertica is instrumented on import. To instrument Vertica manually, use the
patch
function. Note the ordering of the following statements:
from ddtrace import patch
patch(vertica=True)
import vertica_python
# use vertica_python like usual
To configure the Vertica integration globally you can use the Config
API:
from ddtrace import config, patch
patch(vertica=True)
config.vertica['service_name'] = 'my-vertica-database'
To configure the Vertica integration on a per-instance basis use the
Pin
API:
from ddtrace import Pin, patch, Tracer
patch(vertica=True)
import vertica_python
custom_tracer = Tracer()
conn = vertica_python.connect(**YOUR_VERTICA_CONFIG)
# override the service and tracer to be used
Pin.override(conn, service='myverticaservice', tracer=custom_tracer)
yaaredis#
The yaaredis integration traces yaaredis requests.
Enabling#
The yaaredis integration is enabled automatically when using
ddtrace-run or patch_all()
.
Or use patch()
to manually enable the integration:
from ddtrace import patch
patch(yaaredis=True)
Global Configuration#
- ddtrace.config.yaaredis["service"]
The service name reported by default for yaaredis traces.
This option can also be set with the
DD_YAAREDIS_SERVICE
environment variable.Default:
"redis"
Instance Configuration#
To configure particular yaaredis instances use the Pin
API:
import yaaredis
from ddtrace import Pin
client = yaaredis.StrictRedis(host="localhost", port=6379)
# Override service name for this instance
Pin.override(client, service="my-custom-queue")
# Traces reported for this client will now have "my-custom-queue"
# as the service name.
async def example():
    await client.get("my-key")
WSGI#
The Datadog WSGI middleware traces all WSGI requests.
Usage#
The middleware can be applied manually as follows:
from ddtrace.contrib.wsgi import DDWSGIMiddleware
# application is a WSGI application
application = DDWSGIMiddleware(application)
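A self-contained sketch with a trivial WSGI callable (the application itself is illustrative):
from ddtrace.contrib.wsgi import DDWSGIMiddleware

def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello, world"]

# Wrap the app so every request is traced.
application = DDWSGIMiddleware(application)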
Global Configuration#
- ddtrace.config.wsgi["service"]
The service name reported for the WSGI application.
This option can also be set with the
DD_SERVICE
environment variable.Default:
"wsgi"
- ddtrace.config.wsgi["distributed_tracing"]
Whether to parse distributed tracing headers from requests received by the WSGI application.
Default:
True
All HTTP tags are supported for this integration.