PostgreSQL - random_page_cost = 1.1 and NVMe disk: slower query


My SysAdmin colleague told me that the Postgres hosts use NVMe disks.

  • How can I check that with a Linux command?
  • Why does the planner/optimizer seem to get it wrong when I set random_page_cost = 1.1?
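(For the first question: on Linux, sysfs exposes a per-device `rotational` flag that is 0 for SSD/NVMe and 1 for spinning disks. A minimal sketch, assuming standard sysfs paths; `lsblk -d -o NAME,ROTA,TRAN` and `nvme list` from the nvme-cli package give the same information.)

```shell
#!/bin/sh
# Sketch: report whether each block device is rotational (HDD) or not (SSD/NVMe).
# Relies only on the standard Linux sysfs layout /sys/block/<dev>/queue/rotational.

classify() {
    # rotational flag: 0 => SSD/NVMe, 1 => spinning disk
    if [ "$1" = "0" ]; then
        echo "non-rotational (SSD/NVMe)"
    else
        echo "rotational (HDD)"
    fi
}

for dev in /sys/block/*/queue/rotational; do
    [ -e "$dev" ] || continue
    name=$(echo "$dev" | cut -d/ -f4)   # extract device name from the path
    printf '%s: %s\n' "$name" "$(classify "$(cat "$dev")")"
done
```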

To bench it, I created table t_1(id int):

CREATE TABLE t_1 (id int);

In the id column, I inserted random numbers between 1 and 10,000,000. I inserted 10 million rows.
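(For reproducibility, a load along these lines, using the standard generate_series and random() functions, produces a comparable table; the exact method I used may have differed.)

    -- ~10 million rows with ids uniformly distributed in [1, 10,000,000]
    INSERT INTO t_1 (id)
    SELECT (random() * 9999999)::int + 1
    FROM generate_series(1, 10000000);

    ANALYZE t_1;  -- refresh planner statistics after the bulk load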

Then:

CREATE INDEX ON t_1(id);
SET random_page_cost =1.1;

The query, using a Seq Scan, looks like this:

EXPLAIN (ANALYZE,BUFFERS) SELECT * from t_1 where id > 1600000;

And the result like this:

 Seq Scan on t_1  (cost=0.00..169248.00 rows=8361406 width=4) (actual time=0.010..917.250 rows=8399172 loops=1)
   Filter: (id > 1600000)
   Rows Removed by Filter: 1600828
   Buffers: shared hit=44248
Planning Time: 0.079 ms
Execution Time: 1301.160 ms
(6 rows)

The query, using an Index Only Scan, looks like this:

EXPLAIN (ANALYZE,BUFFERS) SELECT * from t_1 where id > 1700000;

And the result like this:

 Index Only Scan using t_1_id_idx on t_1  (cost=0.43..166380.65 rows=8258658 width=4) (actual time=0.027..1309.272 rows=8300257 loops=1)
  Index Cond: (id > 1700000)
  Heap Fetches: 0
  Buffers: shared hit=3176619
Planning Time: 0.070 ms
Execution Time: 1698.532 ms
(6 rows)

On the internet, I read that it is recommended to set random_page_cost ~= seq_page_cost because random accesses are as fast as sequential accesses on NVMe disks. Using an index requires reading random pages from disk, whereas reading the full table reads pages sequentially. By decreasing the random_page_cost parameter, I notice that the planner favors indexes as expected, but to my surprise the execution time gets worse.
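(One way to compare both plans for the exact same query, independent of cost settings, is to toggle the planner switch enable_seqscan, which is a real PostgreSQL GUC; a sketch:)

    SET enable_seqscan = off;   -- discourage the sequential-scan plan
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t_1 WHERE id > 1600000;

    RESET enable_seqscan;       -- back to the default planner behavior
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t_1 WHERE id > 1600000;

Note that in both plans above the buffer counts are all "shared hit", i.e. every page came from shared_buffers, so no disk I/O happened at all; the index plan simply touched far more buffers (3,176,619 vs 44,248).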

  • My manager asked me to benchmark the random_page_cost parameter. How can I benchmark random_page_cost?
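(A simple benchmarking sketch: run the same query under several random_page_cost values and compare the chosen plans and execution times. This assumes a psql session, where the \timing meta-command is available; run each statement several times so the cache is equally warm for every setting.)

    \timing on

    SET random_page_cost = 4;     -- PostgreSQL default, tuned for spinning disks
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t_1 WHERE id > 1600000;

    SET random_page_cost = 1.1;   -- recommended value for SSD/NVMe
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t_1 WHERE id > 1600000;

    RESET random_page_cost;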

There are 0 answers