pg_dump on very large table fails to lock the table

I have defined a LOG table. Then I created lot of child tables, one per minute, in the form of:

create table LOG_20231209_1500 ( .. ) inherits (LOG);
create table LOG_20231209_1501 ( .. ) inherits (LOG);
create table LOG_20231209_1502 ( .. ) inherits (LOG);
...
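
For context, the layout is roughly the following; the parent's column list here is only a hypothetical stand-in for the one elided above:

-- hypothetical parent definition; the real LOG table has a different column list
create table LOG (
    ts  timestamptz not null,
    msg text
);

-- one child per minute, inheriting every column from the parent
create table LOG_20231209_1500 () inherits (LOG);
create table LOG_20231209_1501 () inherits (LOG);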

After many years, a count(*) on LOG returns several billion rows. The number of child tables is 357299.

But now it's time to take a backup, and I'd like each table to be backed up in its own file.

Unfortunately, using the following command:

pg_dump logdb -t LOG_20231209_1501 > LOG_20231209_1501.sql

raises the following error:

pg_dump: error: query failed: ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
pg_dump: error: query was: LOCK TABLE public.LOG_20231209_1501 IN ACCESS SHARE MODE

even for a small table with 3000 records. Note that a direct select * from LOG_20231209_1501 returns results immediately. Also note that I've already configured max_locks_per_transaction = 1024 in postgresql.conf, with no success.
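
For what it's worth, this is roughly how I check the value the running server is actually using; the parameter is postmaster-level, so editing the file alone is not enough and a full restart is required:

show max_locks_per_transaction;

-- where the effective value comes from and whether a restart is still pending
select name, setting, source, pending_restart
from pg_settings
where name = 'max_locks_per_transaction';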

While I'm dumping, no one is using the database: can I make pg_dump skip locking the table and simply take it for granted that no modifications will happen?

PostgreSQL version is: 14.8

1 Answer

Answer by Jack:

As suggested in the comments, a partial answer to the problem is to increase max_locks_per_transaction to (at least) the number of objects involved in the query. In my case, since the parent table has 359299 child tables, I set max_locks_per_transaction = 360000 and pg_dump worked.
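
For reference, this is roughly how I applied it; ALTER SYSTEM is just an alternative to editing postgresql.conf by hand, and the new value only takes effect after a full server restart:

-- run as a superuser; a server restart is required for this parameter
alter system set max_locks_per_transaction = 360000;

Keep in mind that the shared lock table is sized as max_locks_per_transaction * (max_connections + max_prepared_transactions), so a value this large noticeably increases shared memory usage.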

It is still a partial answer because pg_dump still tries to lock all the tables even when dumping the structure or data of a single one; because of this, each invocation takes up to a minute, while a plain COPY tab TO 'file' CSV returns immediately. This makes it impractical to dump all the tables with pg_dump.
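
A rough sketch of the COPY-based workaround (database and table names as in the question; \copy writes files on the client side, while a plain COPY ... TO would write them on the server):

# list every child of LOG via pg_inherits, then export each one to its own CSV
psql logdb -Atc "select inhrelid::regclass from pg_inherits where inhparent = 'log'::regclass" |
while read -r tab; do
    psql logdb -c "\copy ${tab} to '${tab}.csv' csv"
done

This only exports the data, of course; the table definitions, which pg_dump would normally include, still have to be dumped separately.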

Still, I believe the correct way, if it were feasible, would be to make pg_dump avoid locking at all.