Dynatrace Hub

PostgreSQL

Monitor your Postgres performance via our new EF2.0 extension framework.

Extension

Overview

The PostgreSQL monitoring solution is based on a remote monitoring approach implemented as a Dynatrace ActiveGate extension. The extension queries Postgres databases for key performance and health metrics. Dynatrace's Davis AI then analyzes these metrics to provide anomaly and problem analysis.

Use cases

  • Monitor the health state and performance of your PostgreSQL setup.
  • Detect anomalies and alert on them.
  • Take pre-emptive measures to avoid service degradations.

Monitoring top queries

If Dynatrace log monitoring is enabled and the pg_stat_statements view is available, Dynatrace will ingest the top 100 queries (sorted by total execution time) every 5 minutes and store them as logs. These logs are available either from the database instance screen or in the Databases app, under "Top queries by total execution time".

To filter for these queries on a dashboard or in a notebook, filter by dt.extension.name = com.dynatrace.extension.postgres and event.group = top_queries. For example, with DQL:

fetch logs
| filter dt.extension.name=="com.dynatrace.extension.postgres" and event.group=="top_queries"
| sort total_exec_time desc

Regardless of whether pg_stat_statements is available or not, Dynatrace will still collect queries from pg_stat_activity as part of the Queries feature set, which are similarly ingested as logs with event.group = longest_queries.
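These pg_stat_activity-based queries can be retrieved with the same log-filtering pattern as above; a minimal DQL sketch:

```
fetch logs
| filter dt.extension.name == "com.dynatrace.extension.postgres" and event.group == "longest_queries"
```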

Fetching execution plan details

SaaS users who have access to the Databases app and have top query monitoring enabled (see the previous section) can fetch execution plans for these queries. This can be done from the Databases app, under Statement performance, by clicking Request on the execution plan for a specific query.

For that query, the extension will then attempt to execute the following:

SELECT * from dynatrace.dynatrace_execution_plan({query})

and then ingest into Dynatrace the first row of the column named explain. These execution plans are ingested as logs with event.group = execution_plans.
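Once ingested, these plans can be queried like any other logs produced by the extension; a minimal DQL sketch:

```
fetch logs
| filter dt.extension.name == "com.dynatrace.extension.postgres" and event.group == "execution_plans"
```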

Get started

Activation

Steps to activate remote monitoring (version 1.255+):

  • Activate the extension in the Hub by going to Manage → Dynatrace Hub → PostgreSQL → Add to environment
  • Add a new monitoring configuration

Requirements

  • There must be connectivity between the ActiveGate, where the extension is deployed, and the Postgres database.

  • A database user with the proper permissions must be provided. Example:

    CREATE USER dynatrace WITH PASSWORD '<PASSWORD>' INHERIT;  
    GRANT pg_monitor TO dynatrace;
    

For top query monitoring:

  • The pg_stat_statements view must be enabled (see https://www.postgresql.org/docs/current/pgstatstatements.html).
  • pg_stat_statements.track_planning must be turned on to enable plan fetching.
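Enabling pg_stat_statements follows the standard PostgreSQL procedure (not specific to this extension); a sketch:

```sql
-- In postgresql.conf (requires a server restart):
--   shared_preload_libraries = 'pg_stat_statements'
--   pg_stat_statements.track_planning = on

-- Then, in the database to be monitored:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```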

For execution plan details monitoring:

  • A special dynatrace.dynatrace_execution_plan function must be created in the database to which you will connect and from which the execution plans will be fetched.

    CREATE SCHEMA dynatrace;
    CREATE OR REPLACE FUNCTION dynatrace.dynatrace_execution_plan(
       query text,
       OUT explain JSON
    ) RETURNS SETOF JSON
       LANGUAGE plpgsql
       VOLATILE
       RETURNS NULL ON NULL INPUT
       SECURITY DEFINER
       ROWS 1
       SET plan_cache_mode = force_generic_plan
    AS
    $$DECLARE
       arg_count integer;
       open_paren text;
       close_paren text;
       explain_cmd text;
       json_result json;
    BEGIN
    
       /* reject statements containing a semicolon in the middle */
       IF pg_catalog.strpos(
             pg_catalog.rtrim(dynatrace_execution_plan.query, ';'),
             ';'
          ) OPERATOR(pg_catalog.>) 0 THEN
          RAISE EXCEPTION 'query string must not contain a semicolon';
       END IF;
    
       /* get the parameter count */
       SELECT count(*) INTO arg_count
       FROM pg_catalog.regexp_matches( /* extract the "$n" */
             pg_catalog.regexp_replace( /* remove single quoted strings */
                dynatrace_execution_plan.query,
                '''[^'']*''',
                '',
                'g'
             ),
             '\$\d{1,}',
             'g'
          );
    
       IF arg_count OPERATOR(pg_catalog.=) 0 THEN
          open_paren := '';
          close_paren := '';
       ELSE
          open_paren := '(';
          close_paren := ')';
       END IF;
    
       /* construct a prepared statement */
       EXECUTE
          pg_catalog.concat(
             'PREPARE _stmt_',
             open_paren,
             pg_catalog.rtrim(
                pg_catalog.repeat('unknown,', arg_count),
                ','
             ),
             close_paren,
             ' AS ',
             dynatrace_execution_plan.query
          );
    
       /* construct an EXPLAIN statement */
       explain_cmd :=
          pg_catalog.concat(
             'EXPLAIN (FORMAT JSON, ANALYZE FALSE) EXECUTE _stmt_',
             open_paren,
             pg_catalog.rtrim(
                pg_catalog.repeat('NULL,', arg_count),
                ','
             ),
             close_paren
          );
    
       /* get and return the plan */
       EXECUTE explain_cmd INTO json_result;
       RETURN QUERY SELECT json_result;
    
       /* delete the prepared statement */
       DEALLOCATE _stmt_;
    END;$$;
    

    Notice that the function above is defined with SECURITY DEFINER, meaning it executes with the privileges of the user who defined it, not of the user executing it. This is because the permissions required to EXPLAIN a query are the same as those required to run that query, so the user that defines this function needs sufficient privileges to run PREPARE and EXPLAIN on the queries it will be explaining. For full functionality, ensure that the function is defined by a user with the appropriate privileges.

  • The monitoring user will need to have USAGE grant on the dynatrace schema.

    GRANT USAGE ON SCHEMA dynatrace TO <username>;
    
  • In some PostgreSQL configurations you might also need to set search_path for the monitoring user:

    ALTER USER dynatrace SET search_path to dynatrace, public;
    
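Once the function and grants above are in place, the setup can be verified manually by connecting as the monitoring user and calling the function with a representative statement ('SELECT 1' below is just a placeholder query):

```sql
-- Run as the monitoring user; returns a single JSON row in the "explain" column
SELECT * FROM dynatrace.dynatrace_execution_plan('SELECT 1');
```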

Details

Limitations

Top queries:

  • For security reasons ALTER statements are excluded from top query collection.

Execution plan details:

  • Only available to SaaS users with the Databases app installed.
  • Execution plan can only be fetched from the database where dynatrace.dynatrace_execution_plan function has been created.

Licensing

The metrics collected through this extension consume Dynatrace Davis Data Units (see DDUs for metrics).

A rough estimation of the amount of DDUs consumed by metric ingest can be obtained through the following formula:

( (11 * number of instances)
+ (29 * number of databases)
+ (1 * number of tablespaces)
) * 525.6 DDUs/year
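As a sanity check, the formula can be evaluated directly in SQL. Example counts (1 instance, 2 databases, 1 tablespace) are illustrative only:

```sql
SELECT ((11 * 1) + (29 * 2) + (1 * 1)) * 525.6 AS estimated_ddus_per_year;
-- 70 metric timeseries * 525.6 = 36792.0 DDUs/year
```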

For logs, regular DDU consumption for log monitoring applies. Depending on your licensing model, refer either to DDU consumption for Log Management and Analytics or DDUs for Log Monitoring Classic.

If your license is based on Custom Metrics, each custom metric is equivalent to 525.6 DDUs/year. See the Metric Cost Calculation documentation.

FAQ

How does this Dynatrace extension collect data from my databases?

This extension will run from your Dynatrace ActiveGates and connect to the configured databases. Once the connection has been established, the extension will regularly run queries on the database to gather performance and health metrics, reporting the results back to Dynatrace.

Only SELECT queries are executed to collect data. To see exactly which queries are executed, please download the extension yaml artifact by going to Release notes, opening a release and pressing the Download version button.

How often are these monitoring queries executed?

From version 2.3.0 onwards, query execution frequency is controlled by the configuration variables query-interval and heavy-query-interval. Most of the queries executed by the extension will run every query-interval minutes (with a default of 1 minute), while the queries under the Queries feature set will run every heavy-query-interval minutes (with a default of 5 minutes).

For older versions, most queries run every minute, with exceptions for the heavy queries mentioned above, which run every 5 minutes.

Why are some of the extension's queries failing?

In order to support a wide range of Postgres versions, the extension runs several versions of the same queries at the same time, since Postgres has changed column names in several tables over time. It is therefore expected for some queries to fail; as long as no data is missing, there is no cause for concern.

Compatibility information

The extension is designed to work with PostgreSQL version 11 and later.

Provided by

Dynatrace

Support

By Dynatrace
Dynatrace support center

Extension content

Content type | Number of items included
screen logs cards | 2
screen chart groups | 17
screen layout | 3
screen properties | 2
list screen layout | 3
alerts | 1
screen entities lists | 8
screen actions | 8
metric metadata | 51
screen custom cards | 2
dashboards | 1
generic type | 4
generic relationship | 5
screen dql table | 2

Feature sets

Below is a complete list of the feature sets provided in this version. To ensure a good fit for your needs, individual feature sets can be activated and deactivated by your administrator during configuration.

Feature sets | Number of metrics included

Metric name | Metric key | Description | Unit
Instance uptime | postgres.uptime | Time since the instance has been started | Second

Metric name | Metric key | Description | Unit
Active backend processes | postgres.activity.active | Number of server processes executing a query | Count
Idle backend processes | postgres.activity.idle | Number of server processes waiting for a new client command | Count
Idle in transaction backend processes | postgres.activity.idle_in_transaction | Number of server processes in transaction not currently executing a query | Count
Idle in transaction aborted backend processes | postgres.activity.idle_in_transaction_aborted | Number of server processes in transaction not currently executing a query where one of the statements caused an error | Count
Fast-path function backend processes | postgres.activity.fastpath_function_call | Number of server processes executing a fast-path function call | Count

Metric name | Metric key | Description | Unit
Instance recovery mode | postgres.recovery.state | Indicates whether the instance is in recovery mode: 1 if in recovery, 0 otherwise | State

Metric name | Metric key | Description | Unit
Number of backends | postgres.numbackends | Number of backends currently connected to this database | Count
Committed transactions | postgres.xact_commit.count | Number of transactions in this database that have been committed | Count
Rolled back transactions | postgres.xact_rollback.count | Number of transactions in this database that have been rolled back | Count
Blocks read from disk | postgres.blks_read.count | Number of disk blocks read in this database | Count
Blocks found in buffer cache | postgres.blks_hit.count | Number of times disk blocks were found already in the buffer cache, so that a read was not necessary | Count
Live rows returned | postgres.tup_returned.count | Number of live rows fetched by sequential scans and index entries returned by index scans in this database | Count
Live rows fetched by index scans | postgres.tup_fetched.count | Number of live rows fetched by index scans in this database | Count
Rows inserted | postgres.tup_inserted.count | Number of rows inserted by queries in this database | Count
Rows updated | postgres.tup_updated.count | Number of rows updated by queries in this database | Count
Rows deleted | postgres.tup_deleted.count | Number of rows deleted by queries in this database | Count
Queries canceled due to conflict | postgres.conflicts.count | Number of queries canceled due to conflicts with recovery in this database | Count
Temporary files created | postgres.temp_files.count | Number of temporary files created by queries in this database | Count
Data written to temporary files | postgres.temp_bytes.count | Total amount of data written to temporary files by queries in this database | Byte
Deadlocks | postgres.deadlocks.count | Number of deadlocks detected in this database | Count
Data file block read time | postgres.blk_read_time.count | Time spent reading data file blocks by backends in this database | MilliSecond
Data file block write time | postgres.blk_write_time.count | Time spent writing data file blocks by backends in this database | MilliSecond
Time spent by sessions | postgres.session_time.count | Time spent by database sessions in this database | MilliSecond
Time spent executing SQL statements | postgres.active_time.count | Time spent executing SQL statements in this database | MilliSecond
Time spent idling | postgres.idle_in_transaction_time.count | Time spent idling while in a transaction in this database | MilliSecond
Established sessions | postgres.sessions.count | Total number of sessions established | Count
Abandoned sessions | postgres.sessions_abandoned.count | Number of database sessions to this database that were terminated because connection to the client was lost | Count
Sessions terminated by fatal errors | postgres.sessions_fatal.count | Number of database sessions to this database that were terminated by fatal errors | Count
Killed sessions | postgres.sessions_killed.count | Number of database sessions to this database that were terminated by operator intervention | Count
Data page checksum failures | postgres.checksum_failures.count | Number of data page checksum failures detected in this database | Count

Metric name | Metric key | Description | Unit
Scheduled checkpoints performed | postgres.checkpoints_timed.count | Number of scheduled checkpoints that have been performed | Count
Requested checkpoints performed | postgres.checkpoints_req.count | Number of requested checkpoints that have been performed | Count
Checkpoint write time | postgres.checkpoint_write_time.count | Total amount of time spent in the portion of checkpoint processing where files are written to disk | MilliSecond
Checkpoint sync time | postgres.checkpoint_sync_time.count | Total amount of time spent in the portion of checkpoint processing where files are synchronized to disk | MilliSecond
Buffers written during checkpoints | postgres.buffers_checkpoint.count | Number of buffers written during checkpoints | Count
Buffers written by background writer | postgres.buffers_clean.count | Number of buffers written by the background writer | Count
Cleaning scan stops | postgres.maxwritten_clean.count | Number of times the background writer stopped a cleaning scan because it had written too many buffers | Count
Buffers written by backend | postgres.buffers_backend.count | Number of buffers written directly by a backend | Count
Backend fsync executions | postgres.buffers_backend_fsync.count | Number of times a backend had to execute its own fsync call | Count
Buffers allocated | postgres.buffers_alloc.count | Number of buffers allocated | Count

Metric name | Metric key | Description | Unit
Tablespace size | postgres.tablespace.size | Tablespace size in bytes | Byte

Metric name | Metric key | Description | Unit
WAL diff size | postgres.wal_diff_size | Size of difference between current WAL and last WAL replay | Byte
WAL records per minute | postgres.wal_records.count | Number of WAL records generated per minute | Count
WAL FPI per minute | postgres.wal_fpi.count | Number of WAL full page images generated per minute | Count
WAL bytes | postgres.wal_bytes.count | Total amount of WAL generated in bytes | Count
WAL buffers full | postgres.wal_buffers_full.count | Number of times WAL data was written to disk because WAL buffers became full | Count
WAL writes | postgres.wal_write.count | Number of times WAL buffers were written out to disk via XLogWrite request | Count
WAL syncs | postgres.wal_sync.count | Number of times WAL files were synced to disk via issue_xlog_fsync request | Count
WAL write time | postgres.wal_write_time.count | Total amount of time spent writing WAL buffers to disk via XLogWrite request | MilliSecond
WAL sync time | postgres.wal_sync_time.count | Total amount of time spent syncing WAL files to disk via issue_xlog_fsync request | MilliSecond


Full version history

For more information on how to install the downloaded package, follow the instructions on this page.


Features:

  • When OneAgent monitoring is configured in the database server, a SAME_AS relationship will be established between the sql:postgres_instance and the corresponding PROCESS_GROUP_INSTANCE.

ℹ️ This version requires ActiveGate version 1.301.0 or higher.


Bug fixes:

  • Fixed bug where the WAL diff size metric would not be collected for replication instances. This metric will now have value 0 instead of null when there is no recovery happening.

ℹ️ This version requires ActiveGate version 1.301.0 or higher.


Bug fixes:

  • Fixed bug where the heavy-query-interval variable's default value would not be set correctly.


Bug fixes:

  • Fixed bug where top statements would appear duplicated in the Databases app

Features:

  • There are now two variables that can be defined at the configuration level, query-interval and heavy-query-interval, which control how frequently Dynatrace queries the database and ingests data. See the FAQ section for more details.


Bugfixes:

  • Fixed the top queries not showing for certain non-Grail environments in the Postgres Instance screen.

ℹ️ This version requires ActiveGate version 1.293.0 or higher.


Improvements

  • Top queries will now no longer collect ALTER statements for security reasons.

Bugfixes:

  • Fixed the Top queries by execution time section in the Databases app.
  • Fixed the displayed columns for top statements in the Databases app.

ℹ️ This version requires ActiveGate version 1.293.0 or higher.


WARNING This extension requires minimum ActiveGate version 1.293. Please check your version before upgrading.

Bugfixes:

  • Fixed bug where top queries would not be collected from pg_stat_statements for PostgreSQL 13.


WARNING This extension requires minimum ActiveGate version 1.293. Please check your version before upgrading.

Features:

  • (SaaS only) Added execution plan fetching. Users on SaaS who have access to the Databases app can now request execution plans for queries. See the Get started section on the hub tile for details on how to set it up.


Bug fixes:

  • Query duration as collected by longest_queries is now calculated as clock_timestamp() - query_start. Previously it was computed as now() - query_start which sometimes caused the query duration to be negative.

    The reason is that now() (or CURRENT_TIMESTAMP) represents the start of the transaction, while an individual query within the transaction can start later. clock_timestamp(), however, returns the actual system clock and advances over the course of the transaction, producing a correct value from which to subtract query_start to compute a valid duration.
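The difference between the two functions can be observed directly in psql; a minimal sketch:

```sql
BEGIN;
SELECT now(), clock_timestamp();  -- both close to the transaction start time
SELECT pg_sleep(2);
SELECT now(), clock_timestamp();  -- now() is unchanged; clock_timestamp() has advanced ~2s
COMMIT;
```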


Bug fixes:

  • [BREAKING CHANGE] Renamed postgres.blk_hit.count metric key to postgres.blks_hit.count to match official Postgres statistics collector name.

Features:

  • Added an optional endpoint metadata configuration parameter to each endpoint in the monitoring configuration. This parameter will populate the endpoint_metadata attribute of sql:postgres_instance entities and can be used in entity selectors (to facilitate tagging, for example).
  • The Queries feature set will now collect queries (ingested as logs) from the pg_stat_statements table as well. These new logs can be queried by filtering by event.group = top_queries. Note that for this to work, the pg_stat_statements table needs to be enabled at the database level as per the official docs.
  • Added metric charts to the Postgres instance view in the Databases app.

Improvements:

  • Aggregated activities from pg_stat_activity are now collected for all activities, not just the ones associated with a specific datid. This adds monitoring for activities such as the autovacuum worker.


Improvements:

  • Monitoring for databases was improved to support all versions of Postgres starting with 11. Previously, it only worked for Postgres 14+.
  • All metrics were linked to their corresponding Entity Type to make sure the metrics are displayed correctly when Management Zones are activated and are filtered by entity.

Bug fixes:

  • The client_port dimension was removed from the following metrics, because the field brings no valuable information and is too volatile, causing too many distinct timeseries to be generated.
    • postgres.activity.active
    • postgres.activity.fastpath_function_call
    • postgres.activity.idle_in_transaction_aborted
    • postgres.activity.idle_in_transaction
    • postgres.activity.idle


Breaking changes: Existing monitoring configurations must be recreated through API or manually

New features:

  • New feature sets
  • New topology type: Tablespace
  • WAL diff metric
  • Query level visibility for longest running queries
  • New metrics and charts on entity screens

Bug fixes:

  • Duplicate topology entities
  • Incorrect metric types
  • Incorrect recovery metric
  • Dashboard and entity screen links


Fix for schema change


  • Fix link to configurations in overview dashboard


No release notes
