By default, WAL space is allocated before WAL records are inserted. This will slow down WAL operations on COW filesystems. Disabling this parameter will disable this feature, helping VMs to perform better. If set to off, only the final byte is written when the file is created so that it has the expected size.

If there is not enough work_mem for a sort operation, Postgres will spill to disk. Since RAM is much faster than disks (even SSDs), this can be a cause of slow queries.

demo=# insert into t select generate_series(1, 1000000);
demo=# explain (analyze on, costs off) table t order by c;
...
  Sort Method: external merge  Disk: 17696kB
  ->  Seq Scan on t (actual time=0.011..56.573 rows=1000000 loops=1)
...

Consider increasing work_mem if you see this.

Another indicator that work_mem is set too low is a hashing operation being done in batches. In this next example, we set work_mem to its lowest possible setting before running the query. Then we reset it and run the query again to compare plans.

demo=# create table t1 (c) as select generate_series(1, 1000000);
demo=# create table t2 (c) as select generate_series(1, 1000000, 100);
demo=# explain (analyze on, costs off, timing off)
demo-# select * from t1 join t2 using (c);
...
  Buckets: 2048  Batches: 16  Memory Usage: 40kB
  ->  Parallel Seq Scan on t1 (actual rows=333333 loops=3)
...

And after resetting work_mem:

...
  Buckets: 16384  Batches: 1  Memory Usage: 480kB
  ->  Seq Scan on t2 (actual rows=10000 loops=3)
...

With the default setting, Postgres builds the hash in a single batch. That is, if it has enough work_mem to do so.
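To build intuition for the batch counts above, here is a rough model (my own sketch, not the actual Postgres source) of why a hash join splits into a power-of-two number of batches when the build side exceeds work_mem. The function name and sizes are illustrative; the real `ExecChooseHashTableSize` also accounts for tuple overhead, bucket arrays, and skew.

```python
import math

def hash_batches(build_side_kb: float, work_mem_kb: float) -> int:
    """Simplified sketch: if the build side fits in work_mem, hash it
    in one batch; otherwise split it into the smallest power-of-two
    number of batches so that each batch fits."""
    if build_side_kb <= work_mem_kb:
        return 1
    return 2 ** math.ceil(math.log2(build_side_kb / work_mem_kb))

# With the default work_mem of 4MB, a ~480kB hash table fits in memory.
print(hash_batches(480, 4096))  # -> 1
# With work_mem forced down to its 64kB minimum, it must be split up.
print(hash_batches(480, 64))    # -> 8
```

This simplified model will not reproduce the exact Batches: 16 figure from the plan above (real builds carry per-tuple and bucket overhead), but it shows the mechanism: each halving of available memory roughly doubles the batch count, and each extra batch means more temp-file traffic to and from disk.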