When contributing to this repository, we advise you to discuss the change you wish to make via an issue.
Development happens in the 1.x branch. When all the features for the next milestone are merged, the next version is released and tagged on the 1.x branch as vVERSION.
Your pull request should target the 1.x branch.
Once a new version is released, a VERSION branch might be created to support micro releases of VERSION. Patches should be cherry-picked from the 1.x branch where possible, or otherwise created from scratch.
The ddtrace.internal module contains code that must only be used inside ddtrace itself. Relying on the API of this module is dangerous: it can break at any time. Don't do it.
Python Versions and Implementations Support¶
The following Python implementations are supported:
The supported versions of those implementations are the Python versions currently supported by the community.
The code style is enforced by flake8, its configuration, and possibly extensions. No code style review should be done by a human. All code style enforcement must be automated to avoid bikeshedding and losing time.
How To: Write an Integration¶
An integration should provide concise, insightful data about the library or framework that will aid developers in monitoring their application’s health and performance.
The best way to get started writing a new integration is to refer to existing integrations. Looking at a similarly themed library or framework is a great starting point. To write a new integration for memcached, we might refer to the redis integration as a starting point, since both would generate similar spans.
The development process looks like this:
Research the library or framework that is to be instrumented. Reading through its docs and code examples will reveal what APIs are meaningful to instrument.
Copy the skeleton module provided in templates/integration, replacing foo with the integration name. The integration name typically matches the library or framework being instrumented:
cp -r templates/integration ddtrace/contrib/<integration>
Create a test file for the integration under tests/contrib/.
Write the integration (see more on this below).
Open up a draft PR using the integration checklist.
All integrations live in ddtrace/contrib/ and contain at least two files, an __init__.py and a patch.py. A skeleton integration is available under templates/integration which can be used as a starting point:
cp -r templates/integration ddtrace/contrib/<integration>
It is preferred to keep as much code as possible in
The Pin API is used to configure the instrumentation at run-time. This includes enabling and disabling the instrumentation and overriding the service name.
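The following is a toy model of the Pin idea, showing the shape of run-time configuration: a pin is attached to an object and read back by the instrumentation at call time. The method names mirror ddtrace's public Pin API (Pin.override, Pin.get_from), but this implementation is illustrative only, not ddtrace's internals.

```python
class Pin:
    """Toy pin carrying per-object configuration."""

    def __init__(self, service=None):
        self.service = service
        self.enabled = True

    @staticmethod
    def override(obj, service=None):
        # attach (or replace) the pin carried by this object
        obj._datadog_pin = Pin(service=service)

    @staticmethod
    def get_from(obj):
        # the instrumentation reads the pin at call time
        return getattr(obj, "_datadog_pin", None)


class Client:
    """Stand-in for an instrumented library client."""


client = Client()
Pin.override(client, service="my-memcached")
pin = Pin.get_from(client)
```

Because the pin lives on the instance, two clients of the same library can report under different service names, and the instrumentation can be disabled for one without affecting the other.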
ddtrace tries to support as many active versions of a library as possible.
The general rule is:
If the integration depends on internals of the library then test every minor version going back 2 years.
Else test each major version going back 2 years.
For libraries with many versions, it is recommended to pull out the version of the library to use when instrumenting volatile features. A great example of this is the Flask integration:
pulling out the version of the library
using it to instrument a feature added in a later version
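The version-gating pattern above can be sketched as follows. The parsing and the version numbers are made up for illustration; real integrations use whatever version attribute the library exposes.

```python
def parse_version(version_str):
    # keep only the leading numeric components ("1.1.4" -> (1, 1, 4))
    return tuple(int(p) for p in version_str.split(".")[:3])


def patch(library_version):
    # instrumentation applied to every supported version
    patched = ["base"]
    if parse_version(library_version) >= (1, 1, 0):
        # this (hypothetical) feature only exists from 1.1 onwards,
        # so older installs must not be touched
        patched.append("later-added-feature")
    return patched
```

Comparing tuples rather than raw strings avoids the classic "10" < "9" string-comparison bug when a library reaches double-digit versions.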
Exceptions provide a lot of useful information about errors and the application
as a whole and are fortunately usually quite easy to deal with. Exceptions are
a great place to start instrumenting. There are a couple of considerations when dealing with exceptions:
Re-raising the exception: it is crucial that we do not interfere with the application, so exceptions must be re-raised. See the bottle exception handling instrumentation for an example.
Gathering relevant information:
ddtrace provides a helper for pulling out this information and adding it to a span. See the cassandra exception handling instrumentation for an example.
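Both considerations can be sketched together. The Span class below is a minimal stand-in, not ddtrace's real class, though real ddtrace spans expose a similar set_exc_info(exc_type, exc_val, exc_tb) helper.

```python
import sys


class Span:
    """Minimal stand-in for a tracing span."""

    def __init__(self):
        self.error = 0
        self.meta = {}

    def set_exc_info(self, exc_type, exc_val, exc_tb):
        # record the error details as span tags
        self.error = 1
        self.meta["error.type"] = exc_type.__name__
        self.meta["error.message"] = str(exc_val)


def traced_call(span, func, *args, **kwargs):
    try:
        return func(*args, **kwargs)
    except Exception:
        span.set_exc_info(*sys.exc_info())
        raise  # crucial: never swallow the application's exception
```

The bare `raise` re-raises the original exception with its traceback intact, so the application cannot tell the instrumentation was ever there.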
Cross execution tracing¶
Some integrations can propagate a trace across execution boundaries to other executions where the trace is continued (processes, threads, tasks, etc). Refer to the Context section of the documentation for more information.
A web framework integration must do the following if possible:
Install the WSGI or ASGI trace middlewares already provided by ddtrace.
Trace the duration of the request.
Assign a resource name for a route.
Use trace_utils.set_http_meta to set the standard HTTP tags.
Have an internal service name.
Support distributed tracing (configurable).
Provide insight into middlewares and views.
Use the SpanTypes.WEB span type.
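As a pure-Python sketch of two items from this checklist, here is what a WSGI trace middleware does at its core: time the request and derive a resource name from the route. It creates no real spans, and the attribute names are made up for illustration.

```python
import time


def trace_middleware(app):
    def traced_app(environ, start_response):
        start = time.monotonic()
        try:
            return app(environ, start_response)
        finally:
            # a real middleware would finish a span of the web span
            # type here, with the resource name and measured duration
            traced_app.last_resource = "%s %s" % (
                environ.get("REQUEST_METHOD", "GET"),
                environ.get("PATH_INFO", "/"),
            )
            traced_app.last_duration = time.monotonic() - start

    return traced_app
```

Doing the bookkeeping in a `finally` block means the duration and resource are recorded even when the wrapped application raises.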
ddtrace already provides base instrumentation for the Python database API (PEP 249), which most database client libraries implement, in its dbapi module.
Check out some of our existing database integrations for how to use the dbapi:
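The wrapping idea behind the dbapi support can be sketched as follows: a traced cursor delegates every call to the real PEP 249 cursor and would open a span per execute(). sqlite3 serves as the stand-in PEP 249 driver here; TracedCursor is illustrative, not ddtrace's actual class.

```python
import sqlite3


class TracedCursor:
    def __init__(self, cursor):
        self.__wrapped__ = cursor
        self.queries = []  # stands in for the spans a real tracer keeps

    def execute(self, query, *args):
        # a real integration would trace the query here
        self.queries.append(query)
        self.__wrapped__.execute(query, *args)
        return self

    def __getattr__(self, name):
        # everything else passes straight through to the real cursor
        return getattr(self.__wrapped__, name)


conn = sqlite3.connect(":memory:")
cur = TracedCursor(conn.cursor())
cur.execute("CREATE TABLE t (x INTEGER)")
cur.execute("INSERT INTO t VALUES (1)")
rows = cur.execute("SELECT x FROM t").fetchall()
```

Returning `self` from execute() preserves the common `cursor.execute(...).fetchall()` chaining style, and the `__getattr__` passthrough keeps the wrapper invisible for every method that needs no tracing.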
Tests must be defined in their own module under tests/contrib/.
Testing is the most important part of the integration. We have to be certain that the integration:
works: submits meaningful information to Datadog
is invisible: does not impact the library or application by disturbing state, performance or causing errors
The best way to get started writing tests is to reference existing integration tests. The tests in tests/contrib/mariadb are a good example.
Be sure to make use of the test utilities and fixtures which will make testing
less of a burden.
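As a hedged sketch of an integration test's shape: exercise the library, then assert that the integration both works (produces meaningful spans) and stays invisible (the library's behavior is unchanged). The recorder and traced_query below are stand-ins for the real fixtures and the instrumented library.

```python
class SpanRecorder:
    """Stand-in for the test suite's tracer/span fixtures."""

    def __init__(self):
        self.spans = []


def traced_query(recorder, query):
    """Stand-in for an instrumented library call."""
    recorder.spans.append({"name": "sql.query", "resource": query})
    return [(1,)]


def test_query_is_traced():
    recorder = SpanRecorder()
    result = traced_query(recorder, "SELECT 1")
    # invisible: the library still returns the expected result
    assert result == [(1,)]
    # works: exactly one span, carrying meaningful data
    assert len(recorder.spans) == 1
    assert recorder.spans[0]["resource"] == "SELECT 1"


test_query_is_traced()
```

Asserting on both the library's result and the captured spans in the same test is what catches the two failure modes named above: broken instrumentation and broken application behavior.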
Optional! But it would be great if you have a sample app that you could add to the trace examples repository, along with screenshots of some example traces in the PR description.
These applications make it easy to quickly spin up an example app for testing, and to see what traces look like for the integration you added.