Advanced Usage¶
Agent Configuration¶
If the Datadog Agent is on a separate host from your application, you can modify the default ddtrace.tracer object to use another hostname and port. Here is a small example showcasing this:
from ddtrace import tracer
tracer.configure(hostname=<YOUR_HOST>, port=<YOUR_PORT>, https=<True/False>)
By default, these will be set to localhost, 8126, and False, respectively.
You can also use a Unix Domain Socket to connect to the agent:
from ddtrace import tracer
tracer.configure(uds_path="/path/to/socket")
Distributed Tracing¶
To trace requests across hosts, the spans on the secondary hosts must be linked together by setting trace_id and parent_id.
On the server side, this means reading the propagated attributes and setting them on the active tracing context.
On the client side, this means propagating the attributes, commonly as headers/metadata.
ddtrace already provides default propagators but you can also implement your own.
Web Frameworks¶
Some web framework integrations support distributed tracing out of the box.
For each supported web framework integration, distributed tracing is enabled (True) by default.
HTTP Client¶
For distributed tracing to work, necessary tracing information must be passed alongside a request as it flows through the system. When the request is handled on the other side, the metadata is retrieved and the trace can continue.
To propagate the tracing information, HTTP headers are used to transmit the required metadata to piece together the trace.
class ddtrace.propagation.http.HTTPPropagator¶
An HTTP Propagator using HTTP headers as carrier.

static inject(span_context, headers)¶
Inject Context attributes that have to be propagated as HTTP headers.
Here is an example using requests:
import requests

from ddtrace import tracer
from ddtrace.propagation.http import HTTPPropagator


def parent_call():
    with tracer.trace('parent_span') as span:
        headers = {}
        HTTPPropagator.inject(span.context, headers)
        url = '<some RPC endpoint>'
        r = requests.get(url, headers=headers)
- Parameters
span_context (Context) – Span context to propagate.
headers (dict) – HTTP headers to extend with tracing attributes.
static extract(headers: dict[str, str]) → Context¶
Extract a Context from HTTP headers into a new Context.
Here is an example from a web endpoint:
from ddtrace import tracer
from ddtrace.propagation.http import HTTPPropagator


def my_controller(url, headers):
    context = HTTPPropagator.extract(headers)
    if context:
        tracer.context_provider.activate(context)

    with tracer.trace('my_controller') as span:
        span.set_meta('http.url', url)
- Parameters
headers (dict) – HTTP headers to extract tracing attributes.
- Returns
New Context with propagated attributes.
Custom¶
You can manually propagate your tracing context over your RPC protocol. Here is an example assuming that you have an rpc.call function that calls a method and propagates an rpc_metadata dictionary over the wire:
from ddtrace import tracer
from ddtrace.context import Context


# Implement your own context propagator
class MyRPCPropagator(object):
    def inject(self, span_context, rpc_metadata):
        rpc_metadata.update({
            'trace_id': span_context.trace_id,
            'span_id': span_context.span_id,
        })

    def extract(self, rpc_metadata):
        return Context(
            trace_id=rpc_metadata['trace_id'],
            span_id=rpc_metadata['span_id'],
        )

# On the parent side
def parent_rpc_call():
    with tracer.trace("parent_span") as span:
        rpc_metadata = {}
        propagator = MyRPCPropagator()
        propagator.inject(span.context, rpc_metadata)
        method = "<my rpc method>"
        rpc.call(method, rpc_metadata)

# On the child side
def child_rpc_call(method, rpc_metadata):
    propagator = MyRPCPropagator()
    context = propagator.extract(rpc_metadata)
    tracer.context_provider.activate(context)

    with tracer.trace("child_span") as span:
        span.set_meta('my_rpc_method', method)
Sampling¶
Client Sampling¶
Client sampling enables the sampling of traces before they are sent to the Agent. This can provide some performance benefit as the traces will be dropped in the client.
The RateSampler randomly samples a percentage of traces:
from ddtrace.sampler import RateSampler
# Sample rate is between 0 (nothing sampled) and 1 (everything sampled).
# Keep 20% of the traces.
sample_rate = 0.2
tracer.sampler = RateSampler(sample_rate)
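The sampler can also be supplied through the sampler parameter of tracer.configure(), documented in the API section below; a minimal sketch:

from ddtrace import tracer
from ddtrace.sampler import RateSampler

# Keep 20% of the traces, configured through tracer.configure()
tracer.configure(sampler=RateSampler(0.2))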
Resolving deprecation warnings¶
Before upgrading, it’s a good idea to resolve any deprecation warnings raised by your project.
These warnings must be fixed before upgrading, otherwise the ddtrace library will not work as expected. Our deprecation messages include the version where the behavior is altered or removed.
In Python, deprecation warnings are silenced by default. To enable them you may add the following flag or environment variable:
$ python -Wall app.py
# or
$ PYTHONWARNINGS=all python app.py
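If you only want to surface deprecation warnings raised from within ddtrace, the standard library warnings module can be used instead of a global flag; a minimal sketch:

import warnings

# Show DeprecationWarnings issued from modules in the ddtrace package
warnings.filterwarnings("default", category=DeprecationWarning, module=r"ddtrace")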
Trace Filtering¶
It is possible to filter or modify traces before they are sent to the Agent by configuring the tracer with a filters list. For instance, to filter out all traces of incoming requests to a specific url:
from ddtrace import tracer
from ddtrace.filters import FilterRequestsOnUrl

tracer.configure(settings={
    'FILTERS': [
        FilterRequestsOnUrl(r'http://test\.example\.com'),
    ],
})
The filters in the filters list will be applied sequentially to each trace and the resulting trace will either be sent to the Agent or discarded.
Built-in filters
The library comes with a FilterRequestsOnUrl filter that can be used to filter out incoming requests to specific urls:
class ddtrace.filters.FilterRequestsOnUrl(regexps)¶
Filter out traces from incoming http requests based on the request's url.

This class takes as argument a list of regular expression patterns representing the urls to be excluded from tracing. A trace will be excluded if its root span contains a http.url tag and if this tag matches any of the provided regular expressions, using the standard python regexp match semantics (https://docs.python.org/2/library/re.html#re.match).

- Parameters
regexps (list) – a list of regular expressions (or a single string) defining the urls that should be filtered out.
Examples: To filter out http calls to domain api.example.com:
FilterRequestsOnUrl(r'http://api\.example\.com')
To filter out http calls to all first level subdomains from example.com:
FilterRequestsOnUrl(r'http://.*\.example\.com')
To filter out calls to both http://test.example.com and http://example.com/healthcheck:
FilterRequestsOnUrl([r'http://test\.example\.com', r'http://example\.com/healthcheck'])
process_trace(trace: List[Span]) → Optional[List[Span]]¶
When the filter is registered in the tracer, process_trace is called on each trace before it is sent to the agent; the returned value will be fed to the next filter in the list. If process_trace returns None, the whole trace is discarded.
Writing a custom filter
Create a filter by implementing a class with a process_trace method and providing it to the filters parameter of ddtrace.Tracer.configure(). process_trace should either return a trace to be fed to the next step of the pipeline or None if the trace should be discarded:
from ddtrace import Span, tracer
from ddtrace.filters import TraceFilter


class FilterExample(TraceFilter):
    def process_trace(self, trace):
        # type: (List[Span]) -> Optional[List[Span]]
        ...

# And then configure it with
tracer.configure(settings={'FILTERS': [FilterExample()]})
(see filters.py for other example implementations)
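For instance, a sketch of a filter that drops traces whose root span looks like a healthcheck; the class name and the 'healthcheck' resource value are illustrative assumptions:

from ddtrace import Span, tracer
from ddtrace.filters import TraceFilter


class DropHealthchecks(TraceFilter):
    """Drop any trace whose root span is a healthcheck request."""

    def process_trace(self, trace):
        # type: (List[Span]) -> Optional[List[Span]]
        for span in trace:
            # A root span has no parent; 'healthcheck' is an illustrative resource name
            if span.parent_id is None and span.resource == 'healthcheck':
                return None  # discard the whole trace
        return trace  # keep the trace; it is passed to the next filter


tracer.configure(settings={'FILTERS': [DropHealthchecks()]})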
Logs Injection¶
Datadog APM traces can be integrated with the logs product by:
1. Having ddtrace patch the logging module. This will add trace attributes to the log record.
2. Updating the log formatter used by the application. In order to inject tracing information into a log the formatter must be updated to include the tracing attributes from the log record. ddtrace-run will do this automatically for you by specifying a format. For more detail or instructions for how to do this manually see the manual section below.
With these in place the trace information will be injected into a log entry which can be used to correlate the log and trace in Datadog.
ddtrace-run¶
When using ddtrace-run, enable patching by setting the environment variable DD_LOGS_INJECTION=true. The logger by default will have a format that includes trace information:
import logging
from ddtrace import tracer

log = logging.getLogger()
log.level = logging.INFO


@tracer.wrap()
def hello():
    log.info('Hello, World!')

hello()
Manual Instrumentation¶
If you prefer to instrument manually, patch the logging library and then update the log formatter as in the following example. Make sure that your log format exactly matches the following:
from ddtrace import patch_all; patch_all(logging=True)
import logging
from ddtrace import tracer

FORMAT = ('%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] '
          '[dd.service=%(dd.service)s dd.env=%(dd.env)s '
          'dd.version=%(dd.version)s '
          'dd.trace_id=%(dd.trace_id)s dd.span_id=%(dd.span_id)s] '
          '- %(message)s')
logging.basicConfig(format=FORMAT)
log = logging.getLogger()
log.level = logging.INFO


@tracer.wrap()
def hello():
    log.info('Hello, World!')

hello()
HTTP tagging¶
Query String Tracing¶
It is possible to store the query string of the URL — the part after the ?
in your URL — in the url.query.string
tag.
Configuration can be provided both at the global level and at the integration level.
Examples:
from ddtrace import config
# Global config
config.http.trace_query_string = True
# Integration level config, e.g. 'falcon'
config.falcon.http.trace_query_string = True
Headers tracing¶
For a selected set of integrations, it is possible to store http headers from both requests and responses in tags.
Configuration can be provided both at the global level and at the integration level.
Examples:
from ddtrace import config
# Global config
config.trace_headers([
'user-agent',
'transfer-encoding',
])
# Integration level config, e.g. 'falcon'
config.falcon.http.trace_headers([
'user-agent',
'some-other-header',
])
The following rules apply:
- headers configuration is based on a whitelist. If a header does not appear in the whitelist, it won't be traced.
- headers configuration is case-insensitive.
- if you configure a specific integration, e.g. 'requests', then such configuration overrides the default global configuration, only for the specific integration (see the sketch below).
- if you do not configure a specific integration, then the default global configuration applies, if any.
- if no configuration is provided (neither global nor integration-specific), then headers are not traced.
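For example, a short sketch of the override rule, reusing the falcon integration from above (the header names are arbitrary):

from ddtrace import config

# Global whitelist applied to all integrations by default
config.trace_headers(['user-agent'])

# falcon spans will only trace the headers listed here,
# overriding the global configuration for this integration only
config.falcon.http.trace_headers(['content-type'])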
Once you configure your application for tracing, you will have the headers attached to the trace as tags, with a structure like in the following example:
http {
  method  GET
  request {
    headers {
      user_agent  my-app/0.0.1
    }
  }
  response {
    headers {
      transfer_encoding  chunked
    }
  }
  status_code  200
  url  https://api.github.com/events
}
Custom Error Codes¶
It is possible to have a custom mapping of which HTTP status codes are considered errors. By default, 500-599 status codes are considered errors. Configuration is provided at the global level.
Examples:
from ddtrace import config
config.http_server.error_statuses = '500-599'
Certain status codes can be excluded by providing a list of ranges. Valid options:
- 400-400
- 400-403,405-499
- 400,401,403
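For example, a sketch that flags most 4xx and all 5xx responses as errors while leaving 404 untouched (the exact ranges are illustrative):

from ddtrace import config

# Flag 400-403 and 405-599 as errors; 404 stays a non-error status
config.http_server.error_statuses = '400-403,405-599'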
OpenTracing¶
The Datadog opentracer can be configured via the config dictionary parameter to the tracer, which accepts the fields described in the table below. See further down for usage.
Configuration Key | Description | Default Value
---|---|---
enabled | enable or disable the tracer | True
debug | enable debug logging | False
agent_hostname | hostname of the Datadog agent to use | localhost
agent_https | use https to connect to the agent | False
agent_port | port the Datadog agent is listening on | 8126
global_tags | tags that will be applied to each span | {}
sampler | see Sampling | AllSampler
uds_path | unix socket of agent to connect to | None
settings | see Advanced Usage | {}
Usage¶
Manual tracing
To explicitly trace:
import time
import opentracing
from ddtrace.opentracer import Tracer, set_global_tracer


def init_tracer(service_name):
    config = {
        'agent_hostname': 'localhost',
        'agent_port': 8126,
    }
    tracer = Tracer(service_name, config=config)
    set_global_tracer(tracer)
    return tracer


def my_operation():
    span = opentracing.tracer.start_span('my_operation_name')
    span.set_tag('my_interesting_tag', 'my_interesting_value')
    time.sleep(0.05)
    span.finish()


init_tracer('my_service_name')
my_operation()
Context Manager Tracing
To trace a function using the span context manager:
import time
import opentracing
from ddtrace.opentracer import Tracer, set_global_tracer


def init_tracer(service_name):
    config = {
        'agent_hostname': 'localhost',
        'agent_port': 8126,
    }
    tracer = Tracer(service_name, config=config)
    set_global_tracer(tracer)
    return tracer


def my_operation():
    with opentracing.tracer.start_span('my_operation_name') as span:
        span.set_tag('my_interesting_tag', 'my_interesting_value')
        time.sleep(0.05)


init_tracer('my_service_name')
my_operation()
See our tracing trace-examples repository for concrete, runnable examples of the Datadog opentracer.
See also the Python OpenTracing repository for usage of the tracer.
Alongside Datadog tracer
The Datadog OpenTracing tracer can be used alongside the Datadog tracer. This has the advantage of adding the tracing information collected by ddtrace in addition to OpenTracing. The simplest way to do this is to use the ddtrace-run command to invoke your OpenTraced application.
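If you are not using ddtrace-run, a minimal sketch of wiring the two tracers together explicitly via the dd_tracer parameter documented in the Opentracer API below (the service name is a placeholder):

from ddtrace import tracer as dd_tracer
from ddtrace.opentracer import Tracer, set_global_tracer

# Build an OpenTracing-compatible tracer on top of the global ddtrace tracer
ot_tracer = Tracer('my_service_name', dd_tracer=dd_tracer)
set_global_tracer(ot_tracer)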
Examples¶
Celery
Distributed Tracing across celery tasks with OpenTracing.
Install Celery OpenTracing:
pip install Celery-OpenTracing
Replace your Celery app with the version that comes with Celery-OpenTracing:
from celery_opentracing import CeleryTracing
from ddtrace.opentracer import set_global_tracer, Tracer

ddtracer = Tracer()
set_global_tracer(ddtracer)

app = CeleryTracing(app, tracer=ddtracer)
Opentracer API¶
class ddtrace.opentracer.Tracer(service_name=None, config=None, scope_manager=None, dd_tracer=None)¶
A wrapper providing an OpenTracing API for the Datadog tracer.

__init__(service_name=None, config=None, scope_manager=None, dd_tracer=None)¶
Initialize a new Datadog opentracer.

- Parameters
service_name – (optional) the name of the service that this tracer will be used with. Note if not provided, a service name will try to be determined based off of sys.argv. If this fails a ddtrace.settings.ConfigException will be raised.
config – (optional) a configuration object to specify additional options. See the documentation for further information.
scope_manager – (optional) the scope manager for this tracer to use. The available managers are listed in the Python OpenTracing repo here: https://github.com/opentracing/opentracing-python#scope-managers. If None is provided, defaults to opentracing.scope_managers.ThreadLocalScopeManager.
dd_tracer – (optional) the Datadog tracer for this tracer to use. This should only be passed if a custom Datadog tracer is being used. Defaults to the global ddtrace.tracer tracer.
property scope_manager¶
Returns the scope manager being used by this tracer.
start_active_span(operation_name, child_of=None, references=None, tags=None, start_time=None, ignore_active_span=False, finish_on_close=True)¶
Returns a newly started and activated Scope. The returned Scope supports with-statement contexts. For example:

with tracer.start_active_span('...') as scope:
    scope.span.set_tag('http.method', 'GET')
    do_some_work()
# Span.finish() is called as part of Scope deactivation through
# the with statement.

It's also possible to not finish the Span when the Scope context expires:

with tracer.start_active_span('...', finish_on_close=False) as scope:
    scope.span.set_tag('http.method', 'GET')
    do_some_work()
# Span.finish() is not called as part of Scope deactivation as
# `finish_on_close` is `False`.
- Parameters
operation_name – name of the operation represented by the new span from the perspective of the current service.
child_of – (optional) a Span or SpanContext instance representing the parent in a REFERENCE_CHILD_OF Reference. If specified, the references parameter must be omitted.
references – (optional) a list of Reference objects that identify one or more parent SpanContexts. (See the Reference documentation for detail).
tags – an optional dictionary of Span Tags. The caller gives up ownership of that dictionary, because the Tracer may use it as-is to avoid extra data copying.
start_time – an explicit Span start time as a unix timestamp per time.time().
ignore_active_span – (optional) an explicit flag that ignores the current active Scope and creates a root Span.
finish_on_close – whether span should automatically be finished when Scope.close() is called.
- Returns
a Scope, already registered via the ScopeManager.
start_span(operation_name=None, child_of=None, references=None, tags=None, start_time=None, ignore_active_span=False)¶
Starts and returns a new Span representing a unit of work.

Starting a root Span (a Span with no causal references):

tracer.start_span('...')

Starting a child Span (see also start_child_span()):

tracer.start_span('...', child_of=parent_span)

Starting a child Span in a more verbose way:

tracer.start_span('...', references=[opentracing.child_of(parent_span)])

Note: the precedence when defining a relationship is the following, from highest to lowest:
1. child_of
2. references
3. scope_manager.active (unless ignore_active_span is True)
4. None
Currently Datadog only supports child_of references.
- Parameters
operation_name – name of the operation represented by the new span from the perspective of the current service.
child_of – (optional) a Span or SpanContext instance representing the parent in a REFERENCE_CHILD_OF Reference. If specified, the references parameter must be omitted.
references – (optional) a list of Reference objects that identify one or more parent SpanContexts. (See the Reference documentation for detail)
tags – an optional dictionary of Span Tags. The caller gives up ownership of that dictionary, because the Tracer may use it as-is to avoid extra data copying.
start_time – an explicit Span start time as a unix timestamp per time.time()
ignore_active_span – an explicit flag that ignores the current active Scope and creates a root Span.
- Returns
an already-started Span instance.
property active_span¶
Retrieves the active span from the opentracing scope manager.

Falls back to using the datadog active span if one is not found. This allows opentracing users to use datadog instrumentation.
inject(span_context, format, carrier)¶
Injects a span context into a carrier.
- Parameters
span_context – span context to inject.
format – format to encode the span context with.
carrier – the carrier of the encoded span context.
extract(format, carrier)¶
Extracts a span context from a carrier.
- Parameters
format – format that the carrier is encoded with.
carrier – the carrier to extract from.
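As a rough sketch of how inject and extract pair up over HTTP headers, using the carrier format constants from the opentracing package (the span names are placeholders):

import opentracing
from opentracing import Format

# On the sending side: serialize the active span's context into headers
headers = {}
span = opentracing.tracer.start_span('client_request')
opentracing.tracer.inject(span.context, Format.HTTP_HEADERS, headers)

# On the receiving side: rebuild the context and use it as the parent
parent_ctx = opentracing.tracer.extract(Format.HTTP_HEADERS, headers)
child = opentracing.tracer.start_span('server_handler', child_of=parent_ctx)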
ddtrace-run¶
ddtrace-run will trace supported web frameworks and database modules without the need for changing your code:
$ ddtrace-run -h
Execute the given Python program, after configuring it
to emit Datadog traces.
Append command line arguments to your program as usual.
Usage: ddtrace-run <my_program>
The environment variables for ddtrace-run used to configure the tracer are detailed in Configuration.
ddtrace-run respects a variety of common entrypoints for web applications:
ddtrace-run python my_app.py
ddtrace-run python manage.py runserver
ddtrace-run gunicorn myapp.wsgi:application
Pass along command-line arguments as your program would normally expect them:
$ ddtrace-run gunicorn myapp.wsgi:application --max-requests 1000 --statsd-host localhost:8125
If you're running in a Kubernetes cluster and still don't see your traces, make sure your application has a route to the tracing Agent. An easy way to test this is:
$ pip install ipython
$ DATADOG_TRACE_DEBUG=true ddtrace-run ipython
Because IPython uses SQLite, it will be automatically instrumented and your traces should be sent off. If an error occurs, a message will be displayed in the console, and changes can be made as needed.
uWSGI¶
The tracer and profiler support uWSGI when configured with the following:
- Threads must be enabled with enable-threads, or with threads if running uWSGI in multithreaded mode.
- If manual instrumentation and configuration is used, lazy-apps must be used.
To enable tracing with automatic instrumentation and configuration with environment variables, use the import option with the setting ddtrace.bootstrap.sitecustomize. For example, add the following to the uWSGI configuration file:
import=ddtrace.bootstrap.sitecustomize
Note: Automatic instrumentation and configuration using ddtrace-run
is not supported with uWSGI.
To enable tracing with manual instrumentation and configuration, configure uWSGI with the lazy-apps option and apply patch_all() and the agent configuration in your WSGI app:
from ddtrace import patch_all
from ddtrace import tracer

patch_all()
tracer.configure(collect_metrics=True)


def application(env, start_response):
    with tracer.trace("uwsgi-app"):
        start_response('200 OK', [('Content-Type', 'text/html')])
        return [b"Hello World"]
API¶
Tracer¶

class ddtrace.Tracer(url: Optional[str] = None, dogstatsd_url: Optional[str] = None)¶
Tracer is used to create, sample and submit spans that measure the execution time of sections of code.
If you’re running an application that will serve a single trace per thread, you can use the global tracer instance:
from ddtrace import tracer

trace = tracer.trace('app.request', 'web-server').finish()
__init__(url: Optional[str] = None, dogstatsd_url: Optional[str] = None) → None¶
Create a new Tracer instance. A global tracer is already initialized for common usage, so there is no need to initialize your own Tracer.

- Parameters
url – The Datadog agent URL.
dogstatsd_url – The DogStatsD URL.
on_start_span(func: Callable) → Callable¶
Register a function to execute when a span starts.
Can be used as a decorator.
- Parameters
func – The function to call when starting a span. The started span will be passed as argument.
deregister_on_start_span(func: Callable) → Callable¶
Unregister a function registered to execute when a span starts.
Can be used as a decorator.
- Parameters
func – The function to stop calling when starting a span.
global_excepthook(tp, value, traceback)¶
The global tracer except hook.
get_call_context(*args, **kwargs) → ddtrace.context.Context¶
Return the current active Context for this traced execution. This method is automatically called in tracer.trace(), but it can be used in the application code during manual instrumentation like:

from ddtrace import tracer

async def web_handler(request):
    context = tracer.get_call_context()
    # use the context if needed
    # ...

This method makes use of a ContextProvider that is automatically set during the tracer initialization, or while using a library instrumentation.
configure(enabled: Optional[bool] = None, hostname: Optional[str] = None, port: Optional[int] = None, uds_path: Optional[str] = None, https: Optional[bool] = None, sampler: Optional[ddtrace.sampler.BaseSampler] = None, context_provider: Optional[ddtrace.provider.DefaultContextProvider] = None, wrap_executor: Optional[Callable] = None, priority_sampling: Optional[bool] = None, settings: Optional[Dict[str, Any]] = None, collect_metrics: Optional[bool] = None, dogstatsd_url: Optional[str] = None, writer: Optional[ddtrace.internal.writer.TraceWriter] = None) → None¶
Configure an existing Tracer the easy way. Allows you to configure or reconfigure a Tracer instance.
- Parameters
enabled (bool) – If True, finished traces will be submitted to the API. Otherwise they’ll be dropped.
hostname (str) – Hostname running the Trace Agent
port (int) – Port of the Trace Agent
uds_path (str) – The Unix Domain Socket path of the agent.
https (bool) – Whether to use HTTPS or HTTP.
sampler (object) – A custom Sampler instance, locally deciding to totally drop the trace or not.
context_provider (object) – The ContextProvider that will be used to retrieve automatically the current call context. This is an advanced option that usually doesn't need to be changed from the default value.
wrap_executor (object) – callable that is used when a function is decorated with Tracer.wrap(). This is an advanced option that usually doesn't need to be changed from the default value.
priority_sampling – enable priority sampling, this is required for complete distributed tracing support. Enabled by default.
collect_metrics – Whether to enable runtime metrics collection.
dogstatsd_url (str) – URL for UDP or Unix socket connection to DogStatsD
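A short sketch combining several of the parameters above (the hostname is a placeholder):

from ddtrace import tracer
from ddtrace.sampler import RateSampler

tracer.configure(
    hostname='agent.example.com',  # placeholder agent host
    port=8126,
    sampler=RateSampler(0.5),      # keep half of the traces
    collect_metrics=True,          # enable runtime metrics collection
)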
start_span(name: str, child_of: Optional[Union[ddtrace.span.Span, ddtrace.context.Context]] = None, service: Optional[str] = None, resource: Optional[str] = None, span_type: Optional[str] = None) → ddtrace.span.Span¶
Return a span that will trace an operation called name. This method allows parenting using the child_of kwarg. If it's missing, the newly created span is a root span.

- Parameters
name (str) – the name of the operation being traced.
child_of (object) – a Span or a Context instance representing the parent for this span.
service (str) – the name of the service being traced.
resource (str) – an optional name of the resource being tracked.
span_type (str) – an optional operation type.
To start a new root span, simply:
span = tracer.start_span('web.request')
If you want to create a child for a root span, just:
root_span = tracer.start_span('web.request')
span = tracer.start_span('web.decoder', child_of=root_span)

Or if you have a Context object:

context = tracer.get_call_context()
span = tracer.start_span('web.worker', child_of=context)
trace(name: str, service: Optional[str] = None, resource: Optional[str] = None, span_type: Optional[str] = None) → ddtrace.span.Span¶
Return a span that will trace an operation called name. The context that created the span, as well as the span parenting, is automatically handled by the tracing function.
- Parameters
name (str) – the name of the operation being traced
service (str) – the name of the service being traced. If not set, it will inherit the service from its parent.
resource (str) – an optional name of the resource being tracked.
span_type (str) – an optional operation type.
You must call finish on all spans, either directly or with a context manager:
>>> span = tracer.trace('web.request')
>>> try:
...     # do something
... finally:
...     span.finish()

>>> with tracer.trace('web.request') as span:
...     # do something
Trace will store the current active span and subsequent child traces will become its children:
parent = tracer.trace('parent')     # has no parent span
child = tracer.trace('child')       # is a child of a parent
child.finish()
parent.finish()

parent2 = tracer.trace('parent2')   # has no parent span
parent2.finish()
current_root_span() → Optional[ddtrace.span.Span]¶
Returns the root span of the current context.
This is useful for attaching information related to the trace as a whole without needing to add to child spans.
Usage is simple, for example:
# get the root span
root_span = tracer.current_root_span()

# set the host just once on the root span
if root_span:
    root_span.set_tag('host', '127.0.0.1')
current_span() → Optional[ddtrace.span.Span]¶
Return the active span for the current call context or None if no spans are available.
write(spans: Optional[List[ddtrace.span.Span]]) → None¶
Send the trace to the writer to enqueue the spans list in the agent sending queue.
set_service_info(*args, **kwargs)¶
Set the information about the given service.
wrap(name: Optional[str] = None, service: Optional[str] = None, resource: Optional[str] = None, span_type: Optional[str] = None) → Callable[[Callable[[…], Any]], Callable[[…], Any]]¶
A decorator used to trace an entire function. If the traced function is a coroutine, it traces the coroutine execution when it is awaited. If a wrap_executor callable has been provided in the Tracer.configure() method, it will be called instead of the default one when the function decorator is invoked.

- Parameters
name (str) – the name of the operation being traced. If not set, defaults to the fully qualified function name.
service (str) – the name of the service being traced. If not set, it will inherit the service from its parent.
resource (str) – an optional name of the resource being tracked.
span_type (str) – an optional operation type.
>>> @tracer.wrap('my.wrapped.function', service='my.service')
... def run():
...     return 'run'

>>> # name will default to 'execute' if unset
>>> @tracer.wrap()
... def execute():
...     return 'executed'

>>> # or use it in asyncio coroutines
>>> @tracer.wrap()
... async def coroutine():
...     return 'executed'

>>> @tracer.wrap()
... @asyncio.coroutine
... def coroutine():
...     return 'executed'

You can access the current span using tracer.current_span() to set tags:

>>> @tracer.wrap()
... def execute():
...     span = tracer.current_span()
...     span.set_tag('a', 'b')
set_tags(tags)¶
Set some tags at the tracer level. This will append those tags to each span created by the tracer.

- Parameters
tags (dict) – dict of tags to set at tracer level
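For example, a one-line sketch with placeholder tag values:

tracer.set_tags({'env': 'staging', 'region': 'eu-west-1'})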
shutdown(timeout: Optional[float] = None) → None¶
Shutdown the tracer.

This will stop the background writer/worker and flush any finished traces in the buffer.

- Parameters
timeout (int | float | None) – How long in seconds to wait for the background worker to flush traces before exiting, or None to block until flushing has successfully completed (default: None)
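A minimal sketch for a short-lived script that wants buffered traces flushed before the interpreter exits, using the standard library atexit module:

import atexit

from ddtrace import tracer

# Flush any finished traces when the interpreter shuts down
atexit.register(tracer.shutdown)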
Span¶

class ddtrace.Span(tracer: Optional[Tracer], name: str, service: Optional[str] = None, resource: Optional[str] = None, span_type: Optional[str] = None, trace_id: Optional[int] = None, span_id: Optional[int] = None, parent_id: Optional[int] = None, start: Optional[int] = None, context: Optional[Context] = None, on_finish: List[Callable[[Span], None]] = None, _check_pid: bool = True)¶

__init__(tracer: Optional[Tracer], name: str, service: Optional[str] = None, resource: Optional[str] = None, span_type: Optional[str] = None, trace_id: Optional[int] = None, span_id: Optional[int] = None, parent_id: Optional[int] = None, start: Optional[int] = None, context: Optional[Context] = None, on_finish: List[Callable[[Span], None]] = None, _check_pid: bool = True) → None¶
Create a new span. Call finish once the traced operation is over.
- Parameters
tracer (ddtrace.Tracer) – the tracer that will submit this span when finished.
name (str) – the name of the traced operation.
service (str) – the service name
resource (str) – the resource name
span_type (str) – the span type
trace_id (int) – the id of this trace’s root span.
parent_id (int) – the id of this span’s direct parent span.
span_id (int) – the id of this span.
start (int) – the start time of request as a unix epoch in seconds
context (object) – the Context of the span.
on_finish – list of functions called when the span finishes.
property start¶
The start timestamp in Unix epoch seconds.
property duration¶
The span duration in seconds.
finish(finish_time: Optional[int] = None) → None¶
Mark the end time of the span and submit it to the tracer. If the span has already been finished, don't do anything.
- Parameters
finish_time (int) – The end time of the span in seconds. Defaults to now.
set_tag(key: Union[str, bytes], value: Optional[Any] = None) → None¶
Set a tag key/value pair on the span.

Keys must be strings, values must be stringify-able.

- Parameters
key (str) – Key to use for the tag
value (stringify-able value) – Value to assign for the tag
get_tag(key: Union[str, bytes]) → Optional[str]¶
Return the given tag or None if it doesn't exist.

set_tags(tags)¶
Set a dictionary of tags on the given span. Keys and values must be strings (or stringable).
set_traceback(limit: int = 20) → None¶
If the current stack has an exception, tag the span with the relevant error info. If not, set the span to the current python stack.
set_exc_info(exc_type: Any, exc_val: Any, exc_tb: Any) → None¶
Tag the span with an error tuple as from sys.exc_info().
pprint() → str¶
Return a human readable version of the span.
property context¶
Property that provides access to the Context associated with this Span. The Context contains state that propagates from span to span in a larger trace.
Pin¶

class ddtrace.Pin(service: Optional[str] = None, app: Optional[str] = None, app_type=None, tags: Optional[Dict[str, str]] = None, tracer: Optional[Tracer] = None, _config: Optional[Dict[str, Any]] = None)¶
Pin (a.k.a Patch INfo) is a small class which is used to set tracing metadata on a particular traced connection. This is useful if you wanted to, say, trace two different database clusters.

>>> conn = sqlite.connect('/tmp/user.db')
>>> # Override a pin for a specific connection
>>> pin = Pin.override(conn, service='user-db')
>>> conn = sqlite.connect('/tmp/image.db')
property service¶
Backward compatibility: accessing pin.service returns the underlying configuration value.
static get_from(obj: Any) → ddtrace.pin.Pin¶
Return the pin associated with the given object. If a pin is attached to obj but the instance is not the owner of the pin, a new pin is cloned and attached. This ensures that a pin inherited from a class is a copy for the new instance, avoiding that a specific instance overrides other pins values.

>>> pin = Pin.get_from(conn)

- Parameters
obj (object) – The object to look for a ddtrace.pin.Pin on
- Return type
ddtrace.pin.Pin, None
- Returns
ddtrace.pin.Pin associated with the object, or None if none was found
classmethod override(obj: Any, service: Optional[str] = None, app: Optional[str] = None, app_type=None, tags: Optional[Dict[str, str]] = None, tracer: Optional[Tracer] = None) → None¶
Override an object with the given attributes.

That's the recommended way to customize an already instrumented client, without losing existing attributes.

>>> conn = sqlite.connect('/tmp/user.db')
>>> # Override a pin for a specific connection
>>> Pin.override(conn, service='user-db')
enabled() → bool¶
Return true if this pin's tracer is enabled.
onto(obj: Any, send: bool = True) → None¶
Patch this pin onto the given object. If send is true, it will also queue the metadata to be sent to the server.
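A small sketch of attaching a freshly built Pin to a connection object (conn, the service name, and the tag are placeholders):

from ddtrace import Pin

# Attach tracing metadata to a specific connection
Pin(service='user-db', tags={'cluster': 'primary'}).onto(conn)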
patch_all¶
ddtrace.monkey.patch_all(**patch_modules: Dict[str, bool]) → None¶
Automatically patches all available modules.

In addition to patch_modules, an override can be specified via an environment variable, DD_TRACE_<module>_ENABLED, for each module. patch_modules has the highest precedence for overriding.

- Parameters
patch_modules (dict) –
Override whether particular modules are patched or not.
>>> patch_all(redis=False, cassandra=False)