How To: Write an Integration#

An integration should provide concise, insightful data about the library or framework that will aid developers in monitoring their application’s health and performance.

The best way to get started writing a new integration is to refer to existing integrations; a similarly themed library or framework is a great starting point. For example, to write a new integration for memcached, we might refer to the existing redis integration, since both would generate similar spans.

The development process looks like this:

  • Research the library or framework that is to be instrumented. Reading through its docs and code examples will reveal what APIs are meaningful to instrument.

  • Copy the skeleton module provided in templates/integration and replace foo with the integration name. The integration name typically matches the library or framework being instrumented:

    cp -r templates/integration ddtrace/contrib/<integration>
  • Create a test file for the integration under tests/contrib/<integration>/test_<integration>.py.

  • Write the integration (see more on this below).

  • Open up a draft PR using the integration checklist.
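
The heart of most integrations is a patch/unpatch pair that monkey-patches the target library's entry points at import time. A minimal sketch of that pattern, using a hypothetical stand-in `library` object rather than a real third-party package:

```python
import types

# Stand-in for the third-party library being instrumented (hypothetical).
library = types.SimpleNamespace(fetch=lambda url: "body of " + url)

_original_fetch = None
traced_calls = []  # records each traced call; a real integration starts a span instead


def patch():
    """Swap library.fetch for a traced wrapper (idempotent)."""
    global _original_fetch
    if _original_fetch is not None:
        return  # already patched
    _original_fetch = library.fetch

    def traced_fetch(url):
        traced_calls.append(url)  # where a span would be started and finished
        return _original_fetch(url)

    library.fetch = traced_fetch


def unpatch():
    """Restore the original entry point so the library is left untouched."""
    global _original_fetch
    if _original_fetch is not None:
        library.fetch = _original_fetch
        _original_fetch = None
```

Note that `unpatch` restores the original function exactly, which is what lets tests verify the integration can be fully disabled.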

Integration Fundamentals#

Code structure#

All integrations live in ddtrace/contrib/ and contain at least two files. A skeleton integration is available under templates/integration and can be used as a starting point:

cp -r templates/integration ddtrace/contrib/<integration>

It is preferred to keep as much code as possible in the integration's own module under ddtrace/contrib/.

All spans generated by the integration must have the tag component:<integration_name> set. See the Flask integration for an example of the component tag being set.

Pin API#

The Pin API is used to configure the instrumentation at run-time. This includes enabling and disabling the instrumentation and overriding the service name.
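
Conceptually, a Pin is a small configuration object attached to an instrumented object at run-time, which the integration consults before tracing. The following is a simplified stand-in, not ddtrace's actual implementation, illustrating the attach-and-look-up pattern:

```python
class Pin:
    """Simplified stand-in for ddtrace's Pin: run-time config attached to an object."""

    def __init__(self, service=None, enabled=True):
        self.service = service
        self.enabled = enabled

    @staticmethod
    def get_from(obj):
        # The integration looks up the pin before tracing a call.
        return getattr(obj, "_datadog_pin", None)

    def onto(self, obj):
        # Attach this configuration to a specific client instance.
        obj._datadog_pin = self


class FakeClient:
    """Hypothetical library client being instrumented."""


client = FakeClient()
Pin(service="my-cache", enabled=True).onto(client)
```

In the traced code path, the integration would call `Pin.get_from(client)` and skip tracing when no pin is found or `enabled` is false.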

Library support#

ddtrace supports as many active versions of a library as possible, however testing all possible versions of a library combined with all supported Python versions is a heavy maintenance burden and provides limited added value in practice. Testing using the below guidelines helps alleviate that burden.

The ddtrace library’s testing support guidelines are as follows:

  • Test the oldest and latest minor versions of the latest major version, going back 2 years.

  • Test the latest minor version of any previous major version going back 2 years.

  • If there are no new releases in the past 2 years, test the latest released version.

  • For legacy Python versions (2.7, 3.5, 3.6), test the latest minor version known to support that legacy Python version.

For libraries with many versions it is recommended to check the installed version of the library when instrumenting volatile features. A great example of this is the Flask integration.
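
A sketch of that version-gating pattern, with hypothetical hook names and a made-up version cutoff, might look like:

```python
# Hypothetical version gate: parse the installed library's version once at
# patch time and only instrument features that exist in that version.
def parse_version(raw):
    """'1.1.4' -> (1, 1, 4); tolerates short version strings like '2.0'."""
    return tuple(int(p) for p in raw.split(".")[:3])


def instrument(version_str):
    """Return the hooks to install for the given library version."""
    version = parse_version(version_str)
    hooks = ["request"]  # always safe to instrument
    if version >= (1, 1, 0):
        hooks.append("signals")  # assumed to exist only from 1.1.0 onward
    return hooks
```

Parsing the version once and comparing tuples keeps the branching cheap and readable as more version-dependent features accumulate.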

Exceptions#

Exceptions provide a lot of useful information about errors and the application as a whole, and fortunately they are usually quite easy to deal with, which makes them a great place to start instrumenting. There are a couple of considerations when dealing with exceptions in ddtrace:

  • Re-raising the exception: it is crucial that we do not interfere with the application, so exceptions must be re-raised. See the bottle exception handling instrumentation for an example.

  • Gathering relevant information: ddtrace provides a helper for pulling out this information and adding it to a span. See the cassandra exception handling instrumentation for an example.
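
Both considerations can be sketched together with a minimal stand-in span (the `Span` class below is illustrative, not ddtrace's):

```python
import sys
import traceback


class Span:
    """Minimal stand-in for a ddtrace span."""

    def __init__(self):
        self.error = 0
        self.tags = {}

    def set_exc_info(self, exc_type, exc_val, exc_tb):
        """Record error details on the span, mirroring the idea of ddtrace's helper."""
        self.error = 1
        self.tags["error.type"] = exc_type.__name__
        self.tags["error.message"] = str(exc_val)
        self.tags["error.stack"] = "".join(traceback.format_tb(exc_tb))


def traced_call(span, func, *args, **kwargs):
    try:
        return func(*args, **kwargs)
    except Exception:
        # Gather the relevant information, then re-raise so the
        # application's own error handling is not interfered with.
        span.set_exc_info(*sys.exc_info())
        raise
```

The bare `raise` inside the `except` block is the key detail: the original exception propagates unchanged, with its traceback intact.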

Cross execution tracing#

Some integrations can propagate a trace across execution boundaries to other executions where the trace is continued (processes, threads, tasks, etc). Refer to the Context section of the documentation for more information.
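
The mechanics boil down to injecting the active trace identifiers into some carrier on the sending side and extracting them on the receiving side. A simplified sketch using a headers dict as the carrier (the header names follow Datadog's x-datadog-* convention; the real implementation lives in ddtrace's HTTP propagation code):

```python
# Sending side: write the active trace identifiers into outgoing headers.
def inject(context, headers):
    headers["x-datadog-trace-id"] = str(context["trace_id"])
    headers["x-datadog-parent-id"] = str(context["span_id"])


# Receiving side: rebuild the context so the trace can be continued.
def extract(headers):
    if "x-datadog-trace-id" not in headers:
        return None  # no distributed trace to continue
    return {
        "trace_id": int(headers["x-datadog-trace-id"]),
        "span_id": int(headers["x-datadog-parent-id"]),
    }
```

An integration like requests performs the inject step on outgoing requests, while a framework integration like django performs the extract-and-activate step on incoming ones.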

  • Propagating the trace example: requests

  • Receiving and activating a propagated trace example: django

Web frameworks#

A web framework integration should do the following, where possible:

  • Install the WSGI or ASGI trace middlewares already provided by ddtrace.

  • Trace the duration of the request.

  • Assign a resource name for a route.

  • Use trace_utils.set_http_meta to set the standard http tags.

  • Have an internal service name.

  • Support distributed tracing (configurable).

  • Provide insight to middlewares and views.

  • Use the SpanTypes.WEB span type.
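
The "trace the duration of the request" requirement can be illustrated with a bare-bones WSGI middleware (a sketch only; the real WSGI middleware shipped with ddtrace does considerably more, such as creating a SpanTypes.WEB span and setting the standard http tags):

```python
import time


class TimingMiddleware:
    """Minimal WSGI middleware: times each request, the way a web-framework
    integration's middleware would before recording the duration on a span."""

    def __init__(self, app):
        self.app = app
        self.last_duration = None

    def __call__(self, environ, start_response):
        start = time.time()
        try:
            return self.app(environ, start_response)
        finally:
            # A real integration would finish the request span here.
            self.last_duration = time.time() - start


def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]


app = TimingMiddleware(demo_app)
```

Wrapping at the WSGI/ASGI layer is what makes the timing framework-agnostic: the middleware sees every request regardless of which view or route handles it.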

Refer to existing web framework integrations such as django, flask and bottle for examples.

Database libraries#

ddtrace already provides base instrumentation for the Python database API (PEP 249) which most database client libraries implement in the ddtrace.contrib.dbapi module.
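
The essence of the dbapi wrapping pattern is a cursor proxy that intercepts execute() and delegates everything else. A self-contained sketch against stdlib sqlite3 (the `TracedCursor` here is illustrative, not ddtrace's class of the same idea):

```python
import sqlite3


class TracedCursor:
    """Sketch of the dbapi wrapping pattern: intercept execute() to record the
    query (where a real integration would create a span), then delegate."""

    def __init__(self, cursor, queries):
        self._cursor = cursor
        self._queries = queries

    def execute(self, query, params=()):
        self._queries.append(query)  # span creation would happen here
        self._cursor.execute(query, params)
        return self

    def __getattr__(self, name):
        # Delegate everything else (fetchall, description, ...) untouched.
        return getattr(self._cursor, name)


conn = sqlite3.connect(":memory:")
queries = []
cur = TracedCursor(conn.cursor(), queries)
cur.execute("CREATE TABLE t (x INTEGER)")
cur.execute("INSERT INTO t VALUES (1)")
cur.execute("SELECT x FROM t")
rows = cur.fetchall()
```

Because PEP 249 standardizes the cursor interface, one proxy like this covers most database client libraries with little per-library code.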

Check out some of our existing database integrations to see how to use the dbapi module.


Testing#

Tests must be defined in their own module in tests/contrib/<integration>/.

Testing is the most important part of the integration. We have to be certain that the integration:

  1. works: submits meaningful information to Datadog

  2. is invisible: does not impact the library or application by disturbing state, performance or causing errors

The best way to get started writing tests is to reference other integration test suites. tests/contrib/django and tests/contrib/mariadb are good examples. Be sure to make use of the test utilities and fixtures which will make testing less of a burden.
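
Both properties can often be checked in a single test: assert that the expected span data was produced, and assert that the library's return value is unchanged. A schematic example with hypothetical names and a dummy tracer (the real test utilities provide richer equivalents):

```python
class DummyTracer:
    """Collects spans in memory so tests can assert on them."""

    def __init__(self):
        self.spans = []

    def trace(self, name, service=None):
        span = {"name": name, "service": service}
        self.spans.append(span)
        return span


def traced_get(tracer, key):
    """Stand-in for an instrumented library call (hypothetical)."""
    tracer.trace("cache.get", service="cache")
    return "value-for-" + key


def test_get_is_traced_and_invisible():
    tracer = DummyTracer()
    result = traced_get(tracer, "k")
    # works: meaningful span submitted
    assert tracer.spans == [{"name": "cache.get", "service": "cache"}]
    # invisible: the library's return value is unchanged
    assert result == "value-for-k"
```

Structuring tests around these two assertions keeps the "works" and "is invisible" requirements explicit in every test case.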

Snapshot Tests#

Many of the tests are based on “snapshots”: saved copies of actual traces sent to the APM test agent.

To update the snapshots expected by a test, first update the library and test code to generate new traces, then delete the snapshot file corresponding to your test. Use docker-compose up -d testagent to start the APM test agent and re-run the test, passing --pass-env as described here so that your test run can talk to the test agent. Once the run finishes, the snapshot file will have been regenerated.

Trace Examples#

Optional! But it would be great if you could add a sample app to the trace examples repository, along with screenshots of some example traces in the PR description.

These applications make it easy to quickly spin up an example app for testing, and to see what traces look like for the integration you added.