Introduction
pgconf.dev 2025 just wrapped up in Montreal, Canada, following its successful debut in Vancouver last year—and once again, it delivered a fantastic mix of deep technical content and strong community social activities.
As always, the focus was on both the current state and future direction of PostgreSQL, with over 40 thoughtfully curated technical talks covering everything from performance and storage to extensions and new features. The week wasn’t just about technical talks though—there were plenty of chances to connect through community events like Meet & Eat, the Social Run, and group dinners, making the experience as social as it was informative.
Montreal brought its own unique charm to the event. With its French-speaking culture, beautiful Old Town, and scenic waterfront, the city felt a little like Europe—laid-back, stylish, and totally different from the west coast vibe of Vancouver. Oh, and the food? Absolutely amazing!
WARNING: long blog post
Conference Highlights
Here are some personal highlights from pgconf.dev 2025, based on my own experience and participation throughout the week. I’ve made an effort to capture key takeaways from the talks I attended and included photos from the conference to give you a feel for the energy, community, and atmosphere of the event.
Sponsor Swag
At the conference sign-in desk, a colorful array of sponsor swag was neatly displayed alongside the official pgconf.dev T-shirts. From stickers and pens to notebooks, socks, and other branded goodies, the table was a treasure trove for attendees. Everyone was welcome to help themselves and take as many items as they needed — a small but thoughtful way for sponsors to share their appreciation and for participants to bring home a piece of the event. The generous assortment added a lively and welcoming touch to the registration area, setting a positive tone from the moment attendees walked in.
Have you got the keychains sponsored by Highgo?
If so, you’re in luck — those were handmade by me just days before the conference!
Social
This year’s social event was held at TimeOut Market, a vibrant and upscale food hall located inside a downtown shopping mall. The venue featured a wide variety of food stalls and a full-service bar, offering something for everyone’s taste. Each attendee received a CAD $65 gift card, giving us the freedom to explore and enjoy whatever we liked—from food to desserts and drinks. The American BBQ stall was especially popular.
Compared to last year’s social at Rogue in Vancouver, I personally enjoyed this one much more. The open and spacious layout made it easy to mingle and move around, creating a relaxed and social atmosphere. The diverse food choices were fantastic, and having the mall right there added an extra layer of fun—some of us even squeezed in a bit of shopping between bites and conversations.
Poster Session
The Poster Session is an exciting new—though unofficial—addition to pgconf.dev 2025, organized by Andrew Borodin. It offers a creative and informal platform for participants to showcase their work through A2-sized posters highlighting patches, projects, or community initiatives. This is a great opportunity to visually share ideas and spark conversations with fellow attendees. Example topics include VACUUM statistics, the extended TOAST pointer, IvorySQL’s Oracle-compatible packages, and PostgreSQL Women. One standout poster promoting the HOW2025 conference is strategically placed right in front of the drink station—perfectly positioned to catch attention and encourage engagement.
Lightning Talks
The Lightning Talk session is a beloved pgconf.dev tradition, held this year on Thursday, May 15th, where speakers each take the stage for just 5 minutes to deliver fast-paced, insightful presentations on various PostgreSQL-related topics — hence the name “lightning talk.”
This year, our colleague Grant Zhou had the honor of presenting. His talk highlighted the power of the open-source ecosystem in connecting the world, and he also introduced the upcoming HOW2025 conference to be held in Jinan, China. Grant shared that the Call for Speakers is still open, encouraging contributors from around the globe to participate and help shape this exciting event.

Hallway Conversations
One of the most valuable parts of pgconf.dev happens outside the scheduled talks—in the hallways. Hallway conversations are where spontaneous discussions take place, allowing attendees to connect, share ideas, ask questions, and form new collaborations. Whether it’s diving deeper into a talk topic, chatting with a contributor you admire, or simply exchanging experiences with fellow PostgreSQL users, these informal moments often leave a lasting impact.
In addition to hallway chats, there was a dedicated community booth—a welcoming space to meet members of the PostgreSQL community, ask questions, grab stickers or swag, and snap some fun photos. It served as a central spot for newcomers and veterans alike to connect and celebrate the spirit of the community.
Community Booth
The community booth was a charming and welcoming addition to the conference space, offering a cozy spot where attendees could relax on comfy couches and chat with fellow community members. A giant PostgreSQL sign made the perfect backdrop for fun photos.
Adding to the energy of the booth, Andrew Borodin was on hand giving live demos on how to build PostgreSQL from source and how to create your first PostgreSQL patch. His hands-on guidance drew in curious newcomers and seasoned developers alike. Bravo, Andrew!

The Opening
The conference once again kicked off with a warm welcome from Jonathan Katz and Melanie Plageman, who greeted attendees with enthusiasm and set the tone for an engaging week ahead. They also took a moment to thank the generous sponsors who made the event possible.
Keynote: From RAP to Snowflake – A Look at 50 Years of SQL DB Scalability – David DeWitt
- Traced the evolution of database scalability over the past 50 years.
- Three major eras of data warehousing breakthroughs:
- the database machine and appliance (1975 – 1990)
- host your own database software (1985 – 2005)
- the cloud database (2006+)
- The RAP (Rotating Associative Processor) – one of the oldest database machines.
- Joins achieved with a form of nested loop
- not the cleanest working prototype
- took minutes to complete a join over 2MB of data, but that was still acceptable back then.
- Based on a shared-nothing architecture where CPU, memory, and storage were bundled together
- customers had to buy the hardware first, so it incurred higher hardware costs but required less setup effort.
- hosting your own database grew in popularity during 1985 – 2005 as PCs and servers became more common. Hardware costs were lower than a dedicated database machine, but the initial setup could be painful.
- The cloud – popular since 2006, with the lowest hardware and setup costs. Three cloud databases were highlighted in particular:
- Redshift
- Snowflake
- Databricks Spark
- with Snowflake considered the most innovative.
- talked about Snowflake’s separation of compute and storage architecture and how it could scale 2TB tables effectively.
Wednesday – May 14th
Investigating Multithreaded PostgreSQL – Thomas Munro
- shared the design proposal and current status of the ongoing multi-threaded PostgreSQL work.
- PostgreSQL has been a multi-process database since the beginning, so a lot of work is needed to turn it into a multi-threaded model.
- threads are preferred because context switching and time slicing among threads is much faster for the CPU than among processes.
- proposed a single process with multiple threads running inside, responsible for serving connected clients.
- One of the biggest problems is converting 2000+ global variables to thread-local storage (see the sketch after this list).
- the dynamic shared memory (DSM) module, which is responsible for creating and attaching shared memory segments, would have to be removed or enhanced with multi-thread support
- Still keep postmaster running to manage auxiliary backend processes such as autovacuum worker and walwriter.
- signal handlers will no longer be needed → so there needs to be another means to wake up latches and do inter-thread communication.
- the plan is to keep the original multi-process architecture and add multi-threading behind a switch, allowing extensions to adapt to the multi-threaded architecture gradually.
- extensions could have trouble adapting to multi-threading
- modules such as the parser and buffer manager will be converted to be thread-safe gradually
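To illustrate what this conversion means in practice, here is a minimal, hypothetical C11 sketch of my own (not actual PostgreSQL code): a variable that today exists once per backend process is instead declared thread-local, so each connection-serving thread keeps its own copy.

#include <stdio.h>
#include <threads.h>   /* C11: thread_local expands to _Thread_local */

/* Before (per-process): "int MyCounter = 0;" — each backend is its own process. */
/* After (per-thread): each thread inside the single server process gets its own copy. */
static thread_local int MyCounter = 0;

int bump_counter(void)
{
    return ++MyCounter;   /* only the calling thread sees this increment */
}

int main(void)
{
    printf("%d\n", bump_counter());
    return 0;
}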
What is new in C and POSIX? – Peter Eisentraut
- summarized what is new in C and POSIX, both of which PostgreSQL relies on today.
- POSIX.1-2008 is what postgresql requires today.
- POSIX.1-2024 new utilities:
- all C17 library functions
- ppoll(), bindtextdomain(), gettext(), ngettext(), strlcat(), strlcpy(), memmem(), getlocalename_l()
- posix_close(), asprintf(), mkostemp(), dup3(), accept4(), pipe2(), beNNtoh(), htobeNN()
- getresuid(), getresgid(), secure_getenv(), str2sig(), sig2str()
- … and many more
- POSIX.1-2024 removed functions:
- gettimeofday() → use clock_gettime() instead
- isascii()
- getitimer() / setitimer() → use timer_gettime() and timer_settime() instead
- ulimit() → use getrlimit() / setrlimit() instead
- C11:
- alignment, noreturn, static assertions, benign typedef redefinitions, anonymous structures and unions, generic selection, atomic types.
- improved Unicode, floating-point, and complex value support
- bounds checking
- remove gets()
- … and many more
- C23:
- #embed, #elifdef, #elifndef
- __has_include()
- nullptr → new unambiguous null pointer constant
- typeof()
- constexpr, binary literals, digit separator
- new keywords: true, false, alignas, alignof, bool, static_assert, thread_local
- empty initialization {}
- [[attributes]]
- PostgreSQL is slowly making its way toward C23 support; perhaps PG30 will require C23 (a few of these features appear in the sketch below).
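To make the C23 list a little more concrete, here is a small sampler of my own (not from the talk) exercising several of the features above; it assumes a C23-capable compiler such as a recent GCC or Clang with -std=c23.

#include <stdio.h>

/* constexpr objects, binary literals, and digit separators are new in C23 */
constexpr int max_slots = 0b0001'0000;          /* 16 */

/* [[attributes]] plus bool/true/false as real keywords (no <stdbool.h> needed) */
[[nodiscard]] static bool slot_in_range(int slot)
{
    return slot >= 0 && slot < max_slots;
}

int main(void)
{
    int counts[max_slots] = {};        /* empty initialization */
    typeof(counts[0]) *p = nullptr;    /* typeof() and nullptr are now standard */

    static_assert(max_slots == 16, "unexpected slot count");

    if (slot_in_range(3))
        p = &counts[3];

    printf("%d\n", p ? *p : -1);
    return 0;
}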
postgresql.org: The hidden parts – Magnus Hagander
- introduced “PGInfra”, the 7-member team that hosts and maintains postgresql.org along with many apps and services in the PostgreSQL community. Services include:
- downloads
- postgresql.org
- yum and deb repo
- mailing list and archive
- commitfest
- planetpg blog site
- wiki
- buildfarms
- conference system
- internal list archives
- email delivery
- DNS
- 1 IaaS cloud
- Used to have about 67 VMs running among 13 servers, but now down to 9 servers
- internally use PostgreSQL 15
- one big pain point is having to reboot a Cisco ASA enterprise appliance every week to keep it running properly → to be replaced.
- postgresql.org gets about 80-100 hits per second across approximately 275,000 unique URLs
- the buildfarm consumes the most resources due to uploads of large build log files → 33 Mbit/s
- PGInfra is looking for new hosting services with dedicated hardware, unmetered bandwidth, and native IPv4+IPv6.
ChatGPT Ain’t Got $%@& On Me! Next Generation Automated Database Tuning – William Zhang
- Introduced database tuning techniques in the:
- past (DBAs)
- present (automatic tuning tools)
- frontier (Holistic tuner)
- future (Large Language Models LLMs)
- tuning is normally based on the system, the user workload, or both, and makes specific changes to the database system to increase performance.
- in the past, database tuning was traditionally done by DBAs tuning different “knobs” supported by PostgreSQL:
- buffer pool size
- optimizer flags
- indexes
- plan hints (query knobs)
- in the present, database tuning can be done automatically via different tuner tools:
- PGTune, OtterTune → for system knobs tuning
- Dester → database tuning advisor
- Bao, Auto-Steer → query option tuners
- these tools normally target a specific aspect within the database system
- Using the individual tuning tools mentioned above can increase performance, but may still fall far short of their true potential.
- Using sequential tuning is one way to optimize the tuning further by re-using existing tools and determining the best way to schedule and run them:
- round robin scheduling policy
- benefit-based policy
- combine each tool’s local optima to a holistic configuration → system-wide approach to optimizing database performance by considering all aspects of the environment—not just isolated configuration parameters or query plans
- define similarities for better tuning:
- shared buffer size ranges
- indexes with common prefixes are similar
- approximate executions
- tuning makes expensive changes (create index)
- runtime plan estimate with different index type
- use of extensions to prevent intrusive changes to planner
- LLMs and AI are coming to optimize holistic tuning even more
- LLMs can be put in a training mode to learn from results, not just from query behavior like many tuning tools do → actively exploring and adapting to new environments and workloads
Scaling Postgres to the next level at OpenAI – Bohan Zhang
- shared how OpenAI scales a single-primary + 40-replica setup without sharding to cope with increasing demand.
- PostgreSQL is not ideal for write-heavy workloads → only a single primary can handle writes, but reads can scale pretty well across replicas.
- Reduce load on primary
- migrate write-heavy workloads to other systems
- use lazy writes where possible
- set a rate limit
- reduce number of writes in application level
- offload read queries to replicas rather than the primary
- Query optimization
- avoid long-running idle transactions by using an idle transaction timeout
- set statement_timeout and idle_in_transaction_session_timeout
- set client-side timeouts as well (see the sketch after this list)
- avoid multi-way joins (such as 12 table joins) at the application level
- remove inefficient queries at the application level where possible
- Prioritize requests
- categorize requests by their impacts when unavailable.
- only single writer → single point of failure
- dedicated read replica for high priority requests
- Rate limit
- set a rate limit on long running queries at application level functions
- rate limit on creation of new connections to prevent pool exhaustion
- rate limit on query digests to control impact of expensive queries
- Connection pooling
- use PgBouncer as a PostgreSQL proxy
- reduces connection latency significantly
- reduces number of connections
- auto-reroute to available replicas should one fail.
- Schema management
- only lightweight schema changes permitted. No create table query allowed
- table rewrite not allowed
- these practices and techniques allow the PostgreSQL instance to serve millions of QPS, powering OpenAI’s critical services with only one SEV-0 incident in the past 9 months, and to remain sustainable for future growth
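As a small illustration of the client-side timeout advice above, here is a minimal libpq sketch of my own (not OpenAI’s code; the connection string and timeout values are placeholders) that sets statement_timeout and idle_in_transaction_session_timeout for a session.

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* placeholder connection string */
    PGconn *conn = PQconnectdb("host=localhost dbname=app user=app");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* cap statement runtime and idle-in-transaction time for this session */
    PGresult *res = PQexec(conn,
                           "SET statement_timeout = '30s'; "
                           "SET idle_in_transaction_session_timeout = '60s';");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "SET failed: %s", PQerrorMessage(conn));
    PQclear(res);

    /* ... run the application's queries here ... */

    PQfinish(conn);
    return 0;
}

Something like gcc app.c -lpq (with -I$(pg_config --includedir) if libpq headers are not on the default path) should build it.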
Tracking plan shapes over time with Plan IDs, and a new pg_stat_plans – Lukas Fittl
- The goal is to enable aggregate analysis of which queries use which plan structures and the ability to track plan changes efficiently.
- proposes a new pg_stat_plans extension that leverages PostgreSQL’s extensible cumulative statistics system
- tracks plan texts and statistics in shared memory
- plan ID differentiates plan shape (the textual representation of a plan) while query ID differentiates query structure internally.
- plan IDs allow us to track plan usage over time and detect possible changes
- AWS has aurora_pam_stats for Aurora while Microsoft has plan IDs in Query Store for Azure PostgreSQL with similar purposes.
- Plan ID calculation must be fast → if put in core, it is easy to maintain “what is significant” → can reuse tree walk code to avoid re-implementation of a plan tree walk function
- propose pg_stat_plans that tracks per-plan call counts, execution times and EXPLAIN text in postgresql
- pg_stat_plans can be used to track currently running query’s plan shape
- next: pg_stat_plans 2.0 → plan text compression, stabilize extension, partial support for older releases.
Reproducible Postgres – Alvaro Hernandez, Javier Maestro
- Reproducible Postgres – an Apache-licensed, secure distribution of vanilla upstream PostgreSQL.
- aims to produce the same binaries, bit-for-bit, across all environments.
- without reproducible builds, there is little guarantee of how a binary was built, you cannot troubleshoot dev/test environments with the very same binary, and provisioning gets harder.
- introduces hermetic builds, which are fully isolated from the host system to prevent tainting.
- strong defense against supply chain attacks and improves build and caching efficiency
- builds PostgreSQL and its extensions from a unified monorepo
- open source
- both a binary and source distribution platform
- can re-use and re-package
- support 5 major release versions and 1000+ extensions with multiple versions (a lot of version combinations supported) → totaling over 1 million packages
- based on Bazel, an open-source build and test tool created by Google and the community.
- no vendor lock-in + industry support + extensible
- next steps:
- publish as open source
- monobot → automatic crawler that will generate repository meta data file
- add more extensions
- support multiple glibc, and multiple forks (Babelfish, IvorySQL, OrioleDB, OpenHalo, PgEdge)
Thursday – May 15th
Changing shared_buffers on the fly – Ashutosh Bapat
- Problem:
- shared_buffers is a key PostgreSQL performance parameter, but changing it requires a database restart.
- This limits adaptability to dynamic workloads.
- Leads users to settle for non-optimal values, causing performance or cost issues.
- The challenge: resizing a shared memory structure changes its start address, so every backend needs to be notified of the new address. The existing data needs to be moved from the old address to the new one. Extensions may also need to cope with the change of shared memory address.
- proposed solution:
- use separate shared memory segments for buffer pool, buffer id and buffer description arrays…etc
- avoid moving shared memory structures
- maintains pointer stability
- pure mmap approach
- mmap is a unix system call that maps files or anonymous memory into process address space.
- can be used to shrink or expand an already created shared memory segment
- mmap with PROT_WRITE | PROT_READ flags during initialization.
- Resizing: unmap reserved memory → remap allocated memory→ map reserved memory
- mmap + anonymous file (see the sketch after this list)
- memory-mapped file I/O → can create large contiguous memory regions
- the size of the memory region is the size of the memory-mapped file
- resizing: ftruncate() → fallocate(), no changes to the mapping needed
- problem: fallocate() is Linux-only, and posix_fallocate() does not work with shm fds.
- madvise:
- lazy in releasing memory
- linux only
- freed pages can still be written.
- shrinking buffer:
- evict all buffers in the area to be shrunk → flush dirty buffers, wait if a buffer is pinned, remove entries from the buffer lookup table
- compact buffer lookup table
- shrink shared memory segments
- publish the new NBuffers
- expanding buffers:
- expand shared buffer memory segments
- initialize elements in new expanded memory
- expand buffer lookup table
- publish new NBuffers
- Synchronization
- postmaster initiates resizing task and signals all backends to enter resizing process
- new backend creation is blocked until the resizing operation has completed
- backend exit signals are ignored until resizing is done
- Failure handling:
- backend with a pinned buffer may delay entering resizing process
- bgwriter and checkpointer both may be delayed to enter resizing
- backends have to wait forever until ready to participate
- queries are aborted in the backend after the waiting
- remapping failures: still an open question: restart? roll back the resize? backend exits?
- can use the existing pg_reload_conf() to trigger resizing
- also proposes a new SQL function, pg_update_shared_buffers, to perform the actual resizing
- or as a DDL command: ALTER SYSTEM UPDATE shared_buffers…
- if multi-thread becomes possible:
- shared memory is not required
- memory mapping may still be required
- process synchronization still required
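To make the mmap-plus-anonymous-file idea above a bit more concrete, here is a rough Linux-only sketch of my own (not the actual patch): reserve a large PROT_NONE address range up front for pointer stability, back the active portion with a memfd, and grow it with ftruncate() followed by another mmap(MAP_FIXED) into the reserved range.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    size_t reserved = 1UL << 30;            /* 1 GB of reserved address space  */
    size_t active   = 128UL << 20;          /* start with 128 MB of "buffers"  */

    /* Reserve a fixed range so pointers into it stay stable across resizes. */
    char *base = mmap(NULL, reserved, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED)
        return 1;

    /* An anonymous in-memory file backs whatever portion is currently allocated. */
    int fd = memfd_create("fake_shared_buffers", 0);
    ftruncate(fd, active);
    mmap(base, active, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);
    memset(base, 0, active);

    /* "Resize": grow the file, then map the new tail into the reserved range. */
    size_t new_active = 256UL << 20;
    ftruncate(fd, new_active);
    mmap(base + active, new_active - active, PROT_READ | PROT_WRITE,
         MAP_SHARED | MAP_FIXED, fd, active);

    printf("grew from %zu MB to %zu MB, base stays at %p\n",
           active >> 20, new_active >> 20, (void *) base);

    munmap(base, reserved);
    close(fd);
    return 0;
}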
Rethinking PostgreSQL Performance in the Age of Monster Hardware – Lætitia AVROT
- Modern servers with terabytes of RAM, fast SSDs, and hundreds of CPU threads challenge the traditional need for horizontal scaling.
- Core question: Why scale out when one powerful machine can handle the workload?
- rethinking vertical scaling as a viable, often superior strategy for PostgreSQL workloads
- Explained the Non-Uniform Memory Access (NUMA) architecture and suggested that NUMA awareness is coming!
- each CPU socket has its own local memory and can access local or remote memory
- modern servers can have 2 NUMA nodes
- NUMA nodes communicate via Infinity Fabric and remote memory access
- How others do it?
- Oracle: NUMA-aware memory and process
- SQL Server: soft NUMA
- DB2: full NUMA support
- MariaDB and PostgreSQL: None
- workarounds on how PostgreSQL can be made NUMA-aware (see the sketch after this list):
- numactl --cpunodebind=0 --membind=0 -- pg_ctl start (bound to NUMA node 0)
- numactl --interleave=all -- pg_ctl start (balancing NUMA node usage across all nodes)
- explained process vs threads
- processes have higher overhead, make shared memory harder to use, and make a connection pooler mandatory
- thread model – in theory more performant and can better utilize NUMA architecture
- challenges with threads:
- state management across threads
- turn global to thread-local
- synchronization and locking
- signal handling
- shared memory management
- The true bottleneck is still storage I/O, and vertical scaling on monster hardware is much simpler.
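For a rough idea of what NUMA-aware allocation looks like at the C level, here is a tiny libnuma sketch of my own (not from the talk, and not how PostgreSQL works today): memory is explicitly placed on a chosen node.

/* compile with: gcc numa_demo.c -lnuma */
#include <stdio.h>
#include <string.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0)
    {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    printf("highest NUMA node: %d\n", numa_max_node());

    /* Allocate 64 MB with its pages bound to node 0. */
    size_t size = 64UL << 20;
    void *buf = numa_alloc_onnode(size, 0);
    if (buf == NULL)
        return 1;

    memset(buf, 0, size);   /* touch the pages so they are actually placed */
    numa_free(buf, size);
    return 0;
}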
What went wrong with AIO – Andres Freund
- AIO has been in and out of development for the past 5+ years, and its core components have finally arrived in PostgreSQL 18
- The aim of AIO is to bring PostgreSQL’s I/O operations close to the storage hardware limits, as AIO removes most I/O overhead and waiting. It can use the modern io_uring Linux kernel interface, introduced in Linux 5.1 (2019), which handles disk, file, socket, and other I/O efficiently (see the io_uring sketch after this list).
- the talk discussed why the project took such a long time and shared the pain points.
- project challenges:
- AIO interacts with some of the least tested code in postgresql
- basic assumptions need to be changed
- hard to get an architectural view of how many components are affected without actually working on AIO
- postgresql community is not known for a lot of core development iterations
- prototype
- based initially on io_uring per worker, and then a posix_aio interface created by Thomas
- AIO is for reads, checkpointer, bgwriter, backend buffer flushes, synch request, WAL writes
- a lot of unknowns, so more experiments have been carried out, causing more redesign work
- WAL doing a lot of fsync → high lock overhead
- observed weird perf issues with AIO
- prototype is crucial → code quality went down because not all things have been optimized or refactored as much
- architecture mistakes
- limited number of io_uring rings
- arbitrary number of IOs can be reserved
- IO merging purely inside AIO layer, not user level
- work related to AIO in community
- replace buffer I/O locks with condition variables
- bulk relation extension
- read streams
- aligned allocations
- AIO in pg18
- core AIO infrastructure added
- sync, worker, and io_uring API methods
- buffered reads via streaming read interface use AIO → seqscans, pg_prewarm, vacuum..etc
- no writes
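For context on the kernel interface mentioned above, here is a tiny standalone liburing read example of my own (it has nothing to do with PostgreSQL’s actual AIO implementation; the file name is just a placeholder).

/* compile with: gcc uring_read.c -luring */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <liburing.h>

int main(void)
{
    struct io_uring ring;
    char buf[4096];

    int fd = open("/etc/hostname", O_RDONLY);          /* placeholder file */
    if (fd < 0 || io_uring_queue_init(8, &ring, 0) < 0)
        return 1;

    /* Queue one asynchronous read and hand it to the kernel. */
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
    io_uring_submit(&ring);

    /* Wait for the completion and report how many bytes were read. */
    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);
    printf("read %d bytes\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    close(fd);
    return 0;
}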
Advanced testing with Injection Points – Michael Paquier
- Injection points allow backend and extension developers to perform advanced testing and fault injection in PostgreSQL’s core and extensions.
- Think of them as hooks or programmable checkpoints inside PostgreSQL code where custom behavior or test logic can be triggered.
- Designed to simulate edge cases, failures, and rare race conditions during testing.
- PostgreSQL’s multi-process architecture makes testing hard:
- cannot synchronize actions
- hooks are limited by code paths and states
- extensions limited in backend and external libraries
- cross-process interactions → tricky with wait and wake + monitoring
- corner cases, state manipulation, and reproducibility may be hard to achieve
- enable injection points by:
- configure --enable-injection-points
- meson -Dinjection_points=true
- an injection point is declared with the macro below → a few injection points are already declared by default at strategic places.
- INJECTION_POINT(name)
- A callback function needs to be created and attached to an already-declared injection point with the function below (see the attach sketch after this list):
- InjectionPointAttach(name, library, function, private_data, private_data_size)
- the callback function has the prototype void (*func_name)(name, private_data)
- callback is available across all backends
- Stack manipulation
- injection point can be used to modify stack variable for testing edge cases
if (IS_INJECTION_POINT_ATTACHED("name"))
{
    someval = 999;                          // modify a local value for the test
    INJECTION_POINT_CACHED("name", NULL);   // invoke the injection point
}
- wait and wakeup in isolation, cross process tests:
- SELECT injection_points_run('name') → can be used to invoke an injection point that waits
- SELECT injection_points_wakeup('name') → can be used to wake up a given injection point
- SELECT injection_points_detach('name') → detach after wait and wakeup
- next:
- make injection points persistent
- integrate with injection_points, flush via SQL
- early startup conditions
- backpatch to stable branches if needed
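Putting the pieces above together, a test extension could attach a callback roughly like the sketch below. This is only my illustration based on the attach function and callback prototype listed above; the point name, library name, and function names are all hypothetical.

#include "postgres.h"
#include "utils/injection_point.h"

/* Hypothetical callback, looked up by name in this shared library. */
void
my_injection_callback(const char *name, const void *private_data)
{
    elog(LOG, "injection point \"%s\" reached", name);
}

/* Hypothetical setup routine called from the test extension. */
void
my_test_setup(void)
{
    InjectionPointAttach("my-test-point",           /* already-declared point    */
                         "my_test_lib",             /* library with the callback */
                         "my_injection_callback",   /* callback function name    */
                         NULL,                      /* private_data              */
                         0);                        /* private_data_size         */
}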
What can Postgres learn from DuckDB? – Jelte Fennema-Nio
- Unlike PostgreSQL, DuckDB is an analytical database that runs everywhere, is easy to install, supports multiple input and output formats, and utilizes columnar storage for fast data lookups
- can be installed easily in multiple ways
- curl
- pip install duckdb
- install.packages(‘duckdb’)
- cargo add duckdb
- npm install duckdb
- ODBC, ADBC …etc
- regarded as the world’s fastest CSV parser; it can take several input and output formats
- CSV, JSON, Parquet, Iceberg…etc
- supports various sources and destinations:
- mysql, postgresql, sqlite
- https://, s3://, gcs://, azure://
- …etc
- generic data type, lightweight compression, NUMA-aware parallel query mechanism
- runtime added native functions
- DuckDB is quite different from PostgreSQL, but some of its ease-of-use features, design choices, and type support are things PostgreSQL can learn from.
Writing fast C code for a modern CPU (and applying it to PostgreSQL) – David Rowley
- Performance gains now come primarily from doing more work per clock cycle (IPC – Instructions Per Cycle).
- How efficiently a CPU runs depends heavily on the structure and behavior of the code it executes.
- Branch Prediction: How CPUs guess future control flow and how mispredictions hurt performance.
- Cache Prefetching: How CPUs anticipate memory accesses and the impact of memory access patterns.
- not much study has been done into branch prediction in postgresql
- Other tricks like instruction pipelining, out-of-order execution, and speculative execution may also be touched on.
- the branch predictor can be tested with a simple for loop that assigns a value to an array element either sequentially or using randomly generated numbers → it turns out the randomly generated numbers had better branch prediction results (a sketch of this kind of microbenchmark follows after this list)
- CPU stats about branch prediction can be collected with perf (make sure the program is compiled with -O0):
perf stat ./program 1000
- modern CPUs have lower and lower branch predictor misses as array size increases
- branch misprediction could slow down CPU’s instruction handling efficiency
- how to find branch mispredictions?
perf list
perf record -e branch-misses ./program 100000
perf report
- cache prefetching is another technique to speed up code execution
- demonstrated how small code changes can significantly improve execution speed.
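Here is a minimal reconstruction of the kind of microbenchmark described (my own sketch, not the talk’s code): the same loop branches on array contents that are either predictable or random, and the difference shows up under perf stat as branch-misses.

/* compile with: gcc -O0 branch_demo.c -o branch_demo && perf stat ./branch_demo 1 */
#include <stdio.h>
#include <stdlib.h>

#define N (64 * 1024 * 1024)

int main(int argc, char **argv)
{
    int random_mode = (argc > 1) ? atoi(argv[1]) : 0;
    unsigned char *data = malloc(N);
    long sum = 0;

    /* Fill the array with either a predictable or a random pattern. */
    for (int i = 0; i < N; i++)
        data[i] = random_mode ? (rand() & 0xFF) : (i & 0xFF);

    /* This branch is what the CPU's branch predictor has to guess. */
    for (int i = 0; i < N; i++)
    {
        if (data[i] >= 128)
            sum += data[i];
    }

    printf("sum = %ld\n", sum);
    free(data);
    return 0;
}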
Friday – May 16th – The Unconference
The Unconference is a long-standing tradition at pgconf.dev and a unique, participant-driven session that thrives on community engagement. Led by Robert Haas and Nathan Bossart, this interactive format empowers attendees to shape the agenda. This year, over 20 discussion topics were submitted by participants, with the top 12 selected through community voting. It’s a dynamic and collaborative space where developers dive into emerging ideas, unsolved challenges, and innovative directions for PostgreSQL.
Can the Community Support an Additional Batch Executor? – Amit Langote, David Rowley
- current executor is designed to interface with the heap access method to process one row at a time. One “Tuple Table Slot” (TTS) structure is normally passed between executor and access method during SELECT, INSERT, UPDATE, DELETE.
- In some cases, it may be more efficient to support an executor that can process in “batches”
- instead of one TTS, it would support an array of TTSs with a batch size limit and a ramp-up mechanism (see the sketch after this list).
- debate about:
- add a new batch executor → easier to add and has minimal impact on current executor, but hard to maintain because there are now 2 executors.
- modify the existing executor to support batching → requires invasive changes to the current executor, which is hard to progress in the community, but it is neater and easier to maintain since there is still only one executor.
- Planner would also need to support a batch executor plan, so this work would affect current planner and we do not want 2 planners in postgresql. Again, an invasive change also on planner
- a push-based executor was also mentioned, should multi-threading become available.
- perhaps executor has extensible hooks that can be used by an extension to add batch support? Current executor hooks may not be sufficient though.
- can this batch work be moved to the heap access method instead? perhaps to make sequential scan to be done more efficiently rather than calling “heapgetnext()” repeatedly with extra call overhead.
- the work probably needs to start at the planner stage with a custom plan, which is then passed to the existing executor via a custom executor hook.
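A purely hypothetical sketch of what such a batch container might look like (none of these names exist in PostgreSQL today; it only illustrates the array-of-slots idea with a capacity limit and a ramp-up counter):

#include "postgres.h"
#include "executor/tuptable.h"

/* Hypothetical: a batch of tuple slots passed between executor and access method. */
typedef struct TupleBatch
{
    int             max_slots;    /* hard batch size limit                        */
    int             ramp_up;      /* current target size, grown gradually         */
    int             nused;        /* number of slots filled by the access method  */
    TupleTableSlot *slots[FLEXIBLE_ARRAY_MEMBER];
} TupleBatch;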
Multithreaded PostgreSQL (2025 edition) – Peter Eisentraut
- there are about 2000+ global variables that need to be converted to a thread-local struct in a multi-threaded architecture. Whether there should be one big struct per session or several smaller per-module structs is a point of debate
- it is still not clear how to proceed with variables that are backed by GUCs. Some GUCs are per-session, some are global; how multiple threads should share or keep their own copies of such GUC variables remains unknown.
- pg_duckdb is an example of an extension that currently uses threads on its own. We should keep that working.
- the current process model has a shutdown hook responsible for resource cleanup; threads also need a similar mechanism on exit, but how is still an open question.
- file descriptors are not shared among processes currently, but in a multi-threaded environment, file descriptors are shared among threads. A synchronization mechanism and protection are needed on these file descriptors → nothing can happen on the same descriptor at the same time.
- extensions also need their own thread-local session struct in a multi-threaded environment
- how about auxiliary processes? are they staying as processes or as threads? still not known.
Commit Order on Standbys/CSN Snapshots – Sergey Melnik, Peter Geoghegan
- visibility order does not equal commit order, which can cause a replica to momentarily deviate from the primary and produce different results → a typical problem in distributed database setups.
- visibility order is determined by a lock on ProcArray
- commit order is determined by WAL
- concurrent transaction commits could cause replica to obtain a wrong snapshot and produce a wrong result.
- Commit Sequence Number (CSN) can be used to correct the snapshot anomaly at replica.
- XID-to-CSN mapping management
- make the snapshot a single number (see the sketch after this list)
- what about synchronous_commit = off? meaning the primary will not wait for the replica to process the WAL → transaction abort
- proposes a read-wait for asynchronous transactions based on commit LSN, transaction ID, and CSN.
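To illustrate the “snapshot as a single number” idea, here is a toy sketch of my own (not from the session): once every committed transaction carries a CSN, visibility becomes a single comparison against the snapshot’s CSN.

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t CSN;

#define InvalidCSN ((CSN) 0)            /* transaction not committed yet */

/* Hypothetical lookup: the CSN assigned to xid at commit time, or InvalidCSN. */
extern CSN csn_for_xid(uint32_t xid);

/*
 * With CSN snapshots, the whole snapshot is one number: everything committed
 * at or before snapshot_csn is visible, everything else is not.
 */
bool
xid_visible_in_snapshot(uint32_t xid, CSN snapshot_csn)
{
    CSN commit_csn = csn_for_xid(xid);

    return commit_csn != InvalidCSN && commit_csn <= snapshot_csn;
}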
Global Index – Mark Dilger
- the main benefit of a global index is a uniqueness check on a non-partition-key column that spans multiple partitions rather than being limited to one partition. It is called a global index because Oracle also supports this type of index under that name.
- Dilip Kumar was originally going to present his global index work but unfortunately he could not attend the conference.
- Idea is to propose the changes in both heap access method and index access method to store more information on index and heap tuples that are relevant to global index.
- TID
- offset
- zone ID
- the zone ID indicates which partition the tuple resides on.
- both access methods will store extra information and have new mechanisms to ensure global uniqueness, such as a global struct that tells where a value may be located → a cache mechanism.
- the access methods may need to add new API functions for this feature and it needs to support adding and removing a zone.
- it does not intend to change how current partitioned table works.
- the global index work is not likely to be committed as one big patch. Rather, smaller pieces will be committed bit by bit, so progress will be made slowly.

End of Conference
The conference concluded on a warm and appreciative note, with Jonathan Katz and Melanie Plageman expressing heartfelt thanks to all the attendees, speakers, volunteers, and sponsors who made pgconf.dev 2025 a success. They also shared the exciting news that pgconf.dev 2026 will once again be held in Vancouver, continuing the tradition in this vibrant city. To keep the momentum going, they presented a list of upcoming PostgreSQL events for the remainder of the year — including the HOW2025 conference — encouraging the community to stay engaged and connected.

Cary is a Senior Software Developer at Hornetlabs with over 10 years of industry experience in C/C++ software development. Before Hornetlabs, he worked in the PostgreSQL and smart metering energy sectors, designing innovative and scalable solutions for energy utilities. He holds a Bachelor’s degree in Electrical Engineering from the University of British Columbia (UBC), earned in 2012, and has extensive expertise in networking, database internals, data security, smart metering innovations, infrastructure, PostgreSQL databases, deployment and containerization with Docker and Kubernetes, as well as embedded systems engineering. With a strong passion for open-source technologies and high-performance database systems, Cary continues to contribute to PostgreSQL development and innovative designs.

