From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jeremy Schneider <schnjere(at)amazon(dot)com>
Cc: neeraj kumar <neeru(dot)cse(at)gmail(dot)com>, pgsql-admin(at)postgresql(dot)org, pgsql-general(at)postgresql(dot)org
Subject: Re: Query on pg_stat_activity table got stuck
Date: 2019-05-09 20:00:52
Message-ID: 30687.1557432052@sss.pgh.pa.us
Lists: pgsql-admin pgsql-general

Jeremy Schneider <schnjere(at)amazon(dot)com> writes:
> Seems to me that at a minimum, this loop shouldn't go on forever. Even
> having an arbitrary, crazy high, hard-coded number of attempts before
> failure (like a million) would be better than spinning on the CPU
> forever - which is what we are seeing.

I don't think it's the readers' fault. The problem is that the
writer is violating the protocol. If we put an upper limit on
the number of spin cycles on the reader side, we'll just be creating
a new failure mode when a writer gets swapped out at the wrong moment.
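
For anyone following along, here is a minimal sketch of the changecount
protocol at issue (simplified, illustrative names only; the real macros of
this era are pgstat_increment_changecount_before/after and friends in
src/include/pgstat.h):

	/* Illustrative sketch only -- simplified from the real protocol. */
	#include "postgres.h"
	#include "miscadmin.h"		/* CHECK_FOR_INTERRUPTS() */
	#include "port/atomics.h"	/* pg_read_barrier(), pg_write_barrier() */

	typedef struct SketchEntry
	{
		volatile int changecount;	/* even = stable, odd = write in progress */
		int			data;			/* stand-in for the shmem status fields */
	} SketchEntry;

	/* Writer side: bracket every shmem update with two increments. */
	static void
	sketch_write(SketchEntry *entry, int newval)
	{
		entry->changecount++;		/* count now odd: "write in progress" */
		pg_write_barrier();

		/*
		 * If anything in here can elog(ERROR) -- say, an out-of-memory or
		 * encoding-conversion failure -- control never reaches the closing
		 * increment, the count is left odd forever, and every reader below
		 * spins on the CPU indefinitely.  That is the protocol violation.
		 */
		entry->data = newval;

		pg_write_barrier();
		entry->changecount++;		/* count even again: snapshot consistent */
	}

	/* Reader side: retry until the same even count is seen before and after. */
	static int
	sketch_read(volatile SketchEntry *entry)
	{
		for (;;)
		{
			int			before = entry->changecount;
			int			copy;

			pg_read_barrier();
			copy = entry->data;
			pg_read_barrier();

			if (before == entry->changecount && (before & 1) == 0)
				return copy;		/* got a consistent snapshot */

			CHECK_FOR_INTERRUPTS();	/* at least allow query cancel */
		}
	}

Note that the reader has no way to distinguish a writer that died between
the two increments from one that is merely descheduled there, which is why
capping the retries just trades one failure mode for another.
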
IMO we need to (a) get the failure-prone code out of the critical
section, and then (b) fix the pgstat_increment_changecount macros
so that the critical sections around these shmem changes really are
critical sections (ie bump CritSectionCount). That way, if somebody
makes the same mistake again, at least there'll be a pretty obvious
failure rather than a lot of stuck readers.
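
Here is a sketch of what (b) might look like (hypothetical macro names;
START_CRIT_SECTION() bumps CritSectionCount, which makes any elog(ERROR)
inside the bracketed region escalate to PANIC):

	/* Hypothetical names; sketch of suggestion (b) only. */
	#define SKETCH_BEGIN_WRITE(entry) \
		do { \
			START_CRIT_SECTION();	/* bumps CritSectionCount */ \
			(entry)->changecount++; \
			pg_write_barrier(); \
		} while (0)

	#define SKETCH_END_WRITE(entry) \
		do { \
			pg_write_barrier(); \
			(entry)->changecount++; \
			Assert(((entry)->changecount & 1) == 0); \
			END_CRIT_SECTION(); \
		} while (0)

With that in place, a writer that errors out mid-update dies with an
obvious PANIC instead of silently leaving an odd changecount behind for
readers to spin on; step (a), moving the failure-prone code out of the
bracketed region, is what keeps that PANIC from ever being reached.
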
regards, tom lane