
Fixed postgresql>=10 secondary server lag always 0, SuperQ proposed a… #977

Open
wants to merge 1 commit into master

Conversation

@ARPABoy commented Nov 29, 2023

Fixed postgresql>=10 secondary server lag always 0. SuperQ proposed a cleaner code solution :). pg_replication_test was modified to test pgReplicationQueryBeforeVersion10 or pgReplicationQueryAfterVersion10 depending on the PostgreSQL version.


Signed-off-by: kr0m <[email protected]>
@SuperQ (Contributor) left a comment


LGTM, Thanks!

@SuperQ (Contributor) commented Nov 29, 2023

To add some more information here: it was discovered that if replication has failed, say blocked by an iptables firewall rule, pg_last_wal_receive_lsn() and pg_last_wal_replay_lsn() will remain the same. This masks the lag value.

IMO, we should probably expose separate metrics for each of these things rather than compute it in the exporter.
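
For illustration, a minimal check (a sketch; the column aliases are mine) that shows why the masking happens: while the WAL receiver is blocked, both functions keep returning the last values they saw, so the equality holds and any lag derived from it reads 0:

    -- Run on a standby (PostgreSQL >= 10). With streaming blocked, both
    -- LSNs freeze at the same value, so lsns_equal stays true and any
    -- lag computed behind this equality check is forced to 0.
    SELECT pg_last_wal_receive_lsn() AS received_lsn,
           pg_last_wal_replay_lsn()  AS replayed_lsn,
           pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn() AS lsns_equal;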

@SuperQ requested a review from sysadmind on November 29, 2023
@SuperQ (Contributor) commented Nov 29, 2023

This is a fix for the bug introduced by #895.

CC @IamLuksha.

@IamLuksha (Contributor) commented:

Maybe I'm wrong; I have no way to check and test right now. But in versions above 9.3, pg_last_xact_replay_timestamp does not work correctly, and in the proposed commit it is used for versions above 10.

The case of the channel being blocked must be considered separately: check the connection between the devices.

@SuperQ (Contributor) commented Nov 29, 2023

@IamLuksha Are there recommended metrics/alerts for monitoring replication? Server side? Client side?

I think I am beginning to understand some of how this is being done. I'm going to propose an additional metric that will help with this.

Maybe we should add some example alerts to the mixin.

@IamLuksha (Contributor) commented Nov 29, 2023

@SuperQ

Are there recommended metrics/alerts for monitoring replication? Server side? Client side?

Maybe reply_time. I need some time to test, maybe next week; I don't have a working cluster right now.

An additional metric is a great idea, because it will clearly indicate the problem. As it stands, this commit would bring back issue #895.

@ARPABoy (Author) commented Nov 30, 2023

Are there recommended metrics/alerts for monitoring replication? Server side? Client side?

If you want to monitor lag by looking at the Primary server, I think it would be better to monitor replay_lag, but if there is more than one synchronized Secondary server you have to know the IP/FQDN of each Secondary to assign an identifying label to the metric.

You can detect broken synchronization on Secondary servers if you know their IP/FQDN:
SELECT COUNT(*) FROM pg_stat_replication WHERE client_addr='SLAVE_IP' AND state = 'streaming';

And you can get the lag from the replay_lag column like this:
SELECT COALESCE(EXTRACT(EPOCH FROM replay_lag)::bigint, 0) AS replay_lag FROM pg_stat_replication WHERE client_addr='SLAVE_IP';
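
As a sketch of how those two checks could be combined into a single per-standby row (the 0/1 streaming flag and the column aliases are my own choice):

    -- One row per connected standby: a streaming flag plus the replay
    -- lag in seconds, identified by client_addr.
    SELECT client_addr,
           (state = 'streaming')::int AS streaming,
           COALESCE(EXTRACT(EPOCH FROM replay_lag)::bigint, 0) AS replay_lag_seconds
    FROM pg_stat_replication;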

Anyway, with the current Secondary metric monitoring we can monitor lag if we apply the proposed patch: there are two queries, one for PostgreSQL>=10 and another for PostgreSQL<10.

The old versions will execute pgReplicationQueryBeforeVersion10:

SELECT
    CASE
        WHEN NOT pg_is_in_recovery() THEN 0
        WHEN pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn() THEN 0
        ELSE GREATEST(0, EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())))
    END AS lag,
    CASE
        WHEN pg_is_in_recovery() THEN 1
        ELSE 0
    END AS is_replica

And the new versions will execute pgReplicationQueryAfterVersion10:

SELECT
    CASE
        WHEN NOT pg_is_in_recovery() THEN 0
        ELSE GREATEST(0, EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())))
    END AS lag,
    CASE
        WHEN pg_is_in_recovery() THEN 1
        ELSE 0
    END AS is_replica
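
Choosing between the two at runtime only needs the server version; a minimal way to probe it from SQL (the 100000 threshold corresponds to PostgreSQL 10):

    -- server_version_num is e.g. 90624 for 9.6.24 and 100023 for 10.23,
    -- so this boolean selects which of the two queries to run.
    SELECT current_setting('server_version_num')::int >= 100000 AS use_after_version10_query;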

I think that by monitoring the pg_up metric (Primary/Secondary) and the pg_replication_lag_seconds metric (Secondary) we have all the failure cases covered.

@IamLuksha (Contributor) commented:

Anyway, with the current Secondary metric monitoring we can monitor lag if we apply the proposed patch: there are two queries, one for PostgreSQL>=10 and another for PostgreSQL<10.

This method does not work and was abandoned earlier. If there are no changes in the database for a long time, and this is normal, we will still get a replication error.

@ARPABoy (Author) commented Nov 30, 2023

@IamLuksha Does this method not work in postgresql>=10, in postgresql<10, or in both?

@IamLuksha (Contributor) commented:

Does this method not work in postgresql>=10, in postgresql<10, or in both?

Both.

    WHEN NOT pg_is_in_recovery() THEN 0
    WHEN pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn() THEN 0
    ELSE GREATEST(0, EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())))

This code is for versions above 9.3, but you propose to use it for versions below 10.

And your postgresql>=10 query will report a replication error with an empty database, or one that is rarely written to. In the end, having solved your problem, you bring back the old one.

In your case you need to check the connection between the servers.

@ARPABoy (Author) commented Nov 30, 2023

@IamLuksha Correct me if I am wrong.

There can be two kinds of lag:

  • Local Secondary-server lag: the information was received by the Secondary server and written to disk, but has not yet been applied to the database.
  • Network lag between the Primary and Secondary servers.

https://www.postgresql.org/docs/current/functions-admin.html

For local lag we can check:
pg_last_wal_receive_lsn(): returns the last write-ahead log location that has been received and synced to disk by streaming replication.
pg_last_wal_replay_lsn(): returns the last write-ahead log location that has been replayed during recovery.

For network lag:
pg_last_xact_replay_timestamp(): returns the timestamp of the last transaction replayed during recovery. This is the time at which the commit or abort WAL record for that transaction was generated on the primary.

The problem with using pg_last_xact_replay_timestamp() is that it keeps returning the same value when there is no activity on the Primary server.
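
To make the two signals concrete, here is a sketch that surfaces both side by side on a standby (column aliases are mine); note how the timestamp-based value also grows when the Primary is merely idle, which is exactly the ambiguity above:

    -- Local (apply) lag as a byte delta, plus the age of the last
    -- replayed transaction. The second value grows on an idle primary
    -- even when replication is perfectly healthy.
    SELECT pg_wal_lsn_diff(pg_last_wal_receive_lsn(),
                           pg_last_wal_replay_lsn()) AS receive_replay_delta_bytes,
           EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())) AS last_replay_age_seconds;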

Have I understood the whole problem correctly?

@IamLuksha (Contributor) commented:

Have I understood the whole problem correctly?

Yes!

So we need to check pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn().

@ARPABoy (Author) commented Nov 30, 2023

When you say "In your case you need to check the connection between the servers":

Do you mean checking the connection using node_exporter or some other external monitoring system, or by querying some PostgreSQL data?

@IamLuksha (Contributor) commented:

Do you mean checking the connection using node_exporter or some other external monitoring system, or by querying some PostgreSQL data?

Any of those methods can be used.

I don't remember whether Postgres has a method to check the connection. I won't be able to check until next week.

@ARPABoy (Author) commented Dec 4, 2023

Hello @IamLuksha, what do you think about monitoring lag and availability from the Primary server?

You can detect broken synchronization if you know the Secondary's IP/FQDN:
SELECT COUNT(*) FROM pg_stat_replication WHERE client_addr='SECONDARY_IP' AND state = 'streaming';

And you can get the lag from the replay_lag column like this:
SELECT COALESCE(EXTRACT(EPOCH FROM replay_lag)::bigint, 0) AS replay_lag FROM pg_stat_replication WHERE client_addr='SECONDARY_IP';

The only problem I can see with this approach is that when the Secondary server is unreachable from the Primary, pg_stat_replication returns no rows for it, so the metric simply disappears. The only solution I have thought of is saving the previously seen Secondary servers in a list file and, if one of them disappears, triggering an alarm.
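
For a single known standby, one possible variant that avoids the list file (a sketch; SECONDARY_IP is a placeholder) is to query the expected address explicitly, so an empty result collapses to 0 instead of disappearing:

    -- Returns 1 while the known standby is streaming, and 0 once it
    -- vanishes from pg_stat_replication (MAX over zero rows is NULL).
    SELECT COALESCE(MAX((state = 'streaming')::int), 0) AS standby_streaming
    FROM pg_stat_replication
    WHERE client_addr = 'SECONDARY_IP';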

Do you think this is a worthwhile approach? Any suggestions or solutions?

bitfehler added a commit to bitfehler/postgres_exporter that referenced this pull request Nov 6, 2024 (and again Nov 12, 2024):

The exported replication lag does not handle all failure modes, and can
report 0 for replicas that are out of sync and incapable of recovery.

A proper replacement for that metric would require a different approach
(see e.g. prometheus-community#1007), but for a lot of folks, simply exporting the age of
the last replay can provide a pretty strong signal for something being
amiss.

I think this solution might be preferable to prometheus-community#977, though the lag
metric needs to be fixed or abandoned eventually.

Signed-off-by: Conrad Hoffmann <[email protected]>