
by Jonathan A. Handler, MD, FACEP, FAMIA
WARNINGS AND CAVEATS
- If you do any of this, you do so entirely at your own risk.
- These instructions were specifically based on an upgrade involving PostgreSQL from the Mac Postgres app (postgres.app).
- This was based on a migration from a PostgreSQL major version 17 database to a PostgreSQL major version 18 database on my personal Apple M1 Max laptop with 64 GB of RAM running macOS Tahoe v26.1. It may not apply to other operating systems, hardware, PostgreSQL database version upgrades, other types of server moves, or anything else. It may not even work properly for its intended use.
- These instructions seemed to work for me on early testing. I haven’t fully tested the resulting database to make sure nothing has gone wrong.
- These instructions may not work for you. They may go catastrophically badly. You could lose all of your data, or worse. This is not a complete listing of all the bad things that could happen, or even necessarily the worst things that could happen. I tried hard to make a safe upgrade approach, but I provide no warranties or promises or guarantees of any kind. For all I know, this content may be grossly negligent, or worse (if there is anything worse than that).
- Even if the instructions sound like I’m telling you what to do, I am not. Regardless of the “person” (first, second, third) used in my verbiage, it’s really just a record of the steps I took, kept to remind myself of what I did in case I want to repeat it, or something like it, in the future.
- I may have forgotten or incorrectly recorded some steps or even included some steps I failed to take. Literally, anything here could be wrong and/or incomplete. I tried hard to make it correct and complete, but I might have failed at either or both.
- ChatGPT helped me a lot with this. Take that into consideration if you are thinking about replicating these steps, since ChatGPT may have hallucinated or provided incorrect information, and other resources I used (I have provided links to some, perhaps not all) also may have provided incorrect information or I might have misunderstood the information.
- I may use variants of the words “upgrade” and “migrate” interchangeably.
- Again, if you do any of this, you do so entirely at your own risk.
This post was mostly just for me, to remember what I did when it’s time for the next major version upgrade or server move. I’m just sharing my experiences here. However, if it ends up helping someone else, great!
My Requirements
I took the approach of dumping the server with pg_dumpall and then restoring from the resulting file, rather than using the pg_upgrade command line utility, because I was using the Postgres app. Some of my decision-making on the approach was based on the instructions I found here:
https://postgresapp.com/documentation/migrating-data.html
Relative Ease of Execution and Safety
My requirements included relative ease of execution and safety. It appeared I would have to take some special steps to use pg_upgrade since my database runs through Postgres.app, and those steps seemed more complex. Using pg_upgrade also sounded risky, at least based on my reading of the description on the web page above.
Indication of Progress
I also desired an indication of progress, since I was upgrading a reasonably large (> 500 GB) database. I couldn’t see that pg_dumpall gave any indication of progress, though I could at least see the destination file growing, and even that was often very delayed when dumping to an archive file. According to ChatGPT, restoring from pg_dumpall via psql provides no progress reporting even though it may be a very long process (a day or more in my case). According to ChatGPT, pg_upgrade displays on the command line when it completes major steps, and then when it finally completes. However, I figured (as did ChatGPT) that the step of creating the contents of the file would likely be, by far, the longest step and, per ChatGPT, that step reportedly gives no indication of progress. Although there are extra steps that can be taken to see progress during this step (e.g., queries, file sizes), I wanted something right there on the command line (or in a GUI) if possible.
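For example, when all I had to go on was a growing destination file, one thing I could have done (a minimal sketch, assuming the dump file path from my example further below) is watch the file size from a second Terminal window:
# Print the compressed dump file's size once a minute until interrupted with Ctrl-C
while true; do du -h ~/Desktop/postgresapp.sql.gz; sleep 60; done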
Database Cleanup and Performance Enhancement
I had a large number of dead tuples, and a reportedly (per ChatGPT) nice side effect of this route (pg_dumpall, then restore from the resulting file) is that the dead tuples no longer remain and everything is re-ordered. Dead tuples consume hard drive space and may affect database performance, even after a standard vacuum. So I took the pg_dumpall route to upgrade, which is also described on the page linked above, but here I add some tips, tricks, and tweaks.
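If you want a rough sense of how many dead tuples have piled up before deciding whether this matters to you, here is a sketch of a query run through psql (the binaries path, host, port, and user mirror my dump example further below, and “mydb” is a placeholder database name; this is not a step I recorded taking):
# List the 20 tables with the most dead tuples in one database ("mydb" is a placeholder)
/Applications/Postgres.app/Contents/Versions/17/bin/psql -h localhost -p 5434 -U postgres -d mydb -c "SELECT relname, n_live_tup, n_dead_tup FROM pg_stat_user_tables ORDER BY n_dead_tup DESC LIMIT 20;"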
Completeness
I read that this pg_dumpall process gets “everything” I wanted, including databases, indexes, roles, passwords, etc… In essence, supposedly everything I need to just get started right away with the new database (except, perhaps, for making available to the server some extensions, as described below). There is reportedly a method to back up each individual database as a separate step, then back up the “globals” (I think that’s what they’re called) to get the other items like roles, passwords, etc. However, I wanted to do the backup all at once, then the restore all at once.
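I did not take that per-database route, but for reference, a rough sketch of what I believe it might look like with the Postgres.app binaries (the paths, port, and the database name “mydb” are placeholders based on my examples below):
# Globals only (roles, tablespaces, etc.)
/Applications/Postgres.app/Contents/Versions/17/bin/pg_dumpall --globals-only -h localhost -p 5434 -U postgres > ~/Desktop/globals.sql
# Then each database separately, in pg_dump's custom format
/Applications/Postgres.app/Contents/Versions/17/bin/pg_dump -Fc -h localhost -p 5434 -U postgres -f ~/Desktop/mydb.dump mydb
Restoring those would then involve psql for the globals file and pg_restore for each custom-format dump, which is a different workflow from the single-file restore I describe below.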
I Had No Requirement to Avoid Database or Server Downtime
If needed, taking my database offline or making it unavailable for users for a day or two was not a problem or concern for me. I did not have a requirement to reduce downtime. Although I would guess that this set of instructions would work while the original server remains online and available for other users, I don’t know that for sure. I simply don’t know how these instructions would affect database performance, or what other issues might occur, if the steps here are performed while the database is in active use.
Also… FYI…
“Backup Server…” in pgAdmin gives you a GUI to do a pg_dumpall, but it apparently only creates plain (not gzipped) files. My plain SQL file was over 500 GB, but it was about 1/3 the size when gzipped. I don’t recall that the Backup Server function provides a method to see progress, other than looking at the destination file’s size growth (although that seemed to work pretty well to allow me to ensure that the process was still progressing). Also, I was unable to find a pgAdmin GUI to do the full restore, and I think ChatGPT “confirmed” that none exists.
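If you end up with a plain SQL file from pgAdmin’s Backup Server and want it compressed after the fact, here is a sketch (the file names are placeholders, and pv just adds a progress readout; this is not something I actually did):
# Compress an existing plain dump, showing progress as the file is read
pv ~/Desktop/server_backup.sql | gzip -c > ~/Desktop/server_backup.sql.gz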
The Upgrade Process I Followed
Dump the Server to be Updated
- Theoretically, I understood that this process should reportedly (per ChatGPT) be safe. However, since I tend to be paranoid, it seems safest to do a backup of the old server. There may be other caveats of which I am unaware. This assumes that the new server is empty of anything you actually care about. If not, these instructions may not apply.
- To enable seeing progress, I used Homebrew to install the “pv” package. If you don’t have Homebrew:
- Homebrew is available here, along with instructions on how to install it: https://brew.sh
- Then, if pv is not already installed, go to the Terminal app and type:
brew install pv
- I’m pretty sure that it won’t hurt anything if you run it and it’s already installed, but since I’m not absolutely positive, I’m not saying that for sure.
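- If you’re not sure whether pv is already present, one check I believe works (an assumption on my part, not a step I recorded taking) is:
command -v pv || brew install pv
command -v just reports whether pv is on your PATH, and only runs the install if it is not found.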
- Now, you must run pg_dumpall from the proper binaries location.
- Open the Postgres app and click on the server from which you will do the dump.
- Click on Server settings, and copy the directory holding the binary and paste it into a text editor.
- To the directory holding the binary, add a forward slash and then: pg_dumpall
- Then after that, paste: --quote-all-identifiers
- Then add the IP address or hostname of the postgres server, like: -h localhost
- Then add the port of the postgres server (also found in Server Settings in the Postgres app), like: -p 5432
- Then add the Postgres user under which this should run, which I believe must be a superuser, and I think is often the postgres user, like: -U postgres
- Then add the pv piece for the progress bar (note the space before the vertical bar), like: | pv -f -pterab
- Then add that you want this file gzipped (note the space before the vertical bar), like: | gzip
- Then put -c > plus a destination file path for the dump file to be created, like: -c > ~/Desktop/postgresapp.sql.gz
- It might look something like this when done:
/Applications/Postgres.app/Contents/Versions/17/bin/pg_dumpall --quote-all-identifiers -h localhost -p 5434 -U postgres | pv -f -pterab | gzip -c > ~/Desktop/postgresapp.sql.gz
- Paste a line like the one above into the Terminal, and then hit Enter to run.
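- One optional sanity check I can imagine running after the dump finishes (not a step from my original notes) is asking gzip to test the archive’s integrity; note that this re-reads the entire, possibly very large, file:
gzip -t ~/Desktop/postgresapp.sql.gz && echo "archive passed gzip's integrity test"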
Restore to the New Server Instance
- BEFORE restoring, make sure all extensions that may need to be available are in fact available.
- Run the following query on the old server. It generates another query that can be run on the new server to see which extensions existed on the old server but are missing on the new server:
SELECT
'-- Step 2: Run this on the new server to check missing extensions' || E'\n' ||
'SELECT name AS missing_extension FROM (' || E'\n' ||
'VALUES ' || E'\n' ||
string_agg(' (' || quote_literal(name) || ')', ',' || E'\n') || E'\n' ||
') AS old_exts(name)' || E'\n' ||
'WHERE name NOT IN (SELECT name FROM pg_available_extensions);' AS check_query_text
FROM pg_available_extensions
WHERE name <> 'plpgsql';
- The above query should generate a new query string as a query result, and if you copy/paste that resulting query string into the new server’s query window and run it, you should get a list of all (if any) extensions that were available on the old server but not the new.
- You might not need any of the extensions that were available on the old server but not the new, or you might need one or more of them. If possible, it might be best to make sure they are available (in order to avoid upgrade failures), or else ensure that no database you are migrating from your old server has actually installed them. All installed extensions must be available on the new server when you do the restore (not necessarily installed, just available), or your restore will fail, according to ChatGPT.
- To see what each database has installed, I was unable to find a way to do this across the entire server in one simple query. However, ChatGPT made me this query to tell which extensions are actually installed in each database. You would have to run it for every database in your server, though (a shell loop that does this is sketched at the end of this block):
SELECT * FROM pg_extension;
(NOTE: THIS QUERY WAS CHANGED ON 12/1/2025 @ about 12:53 PM Central Time FROM “SELECT name, default_version, installed_version FROM pg_available_extensions”. I changed it because that old one reported all the extensions available on a server, not what’s installed in a particular database.)
- The above query is intended to tell you what extensions are actually installed for a database (not just available). I can’t really provide instructions on how to make an extension available if it’s not already available for you, since an extension could theoretically come from anywhere. In my case, Postgres.app had already provided all the necessary extensions.
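- Here is a sketch of a shell loop that I believe would run that per-database query against every database in one pass (the binaries path, host, port, and user mirror my dump example above and would need adjusting for your setup; this is not a step I actually recorded taking):
# Loop over every connectable database on the old server and list its installed extensions
# (assumes database names without spaces)
for db in $(/Applications/Postgres.app/Contents/Versions/17/bin/psql -h localhost -p 5434 -U postgres -At -c "SELECT datname FROM pg_database WHERE datallowconn;"); do
  echo "=== $db ==="
  /Applications/Postgres.app/Contents/Versions/17/bin/psql -h localhost -p 5434 -U postgres -d "$db" -c "SELECT extname, extversion FROM pg_extension;"
done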
- Then, to restore into the next version (e.g., PostgreSQL version 18 running on port 5432):
- The new server to which the data will be restored has to be started and running for this to work.
- Run Terminal and enter something like the example that follows on the command line, but first, make the substitutions that follow the example.
- Example:
pv -f -pterab ~/Desktop/postgresapp.sql.gz | gunzip | /Applications/Postgres.app/Contents/Versions/18/bin/psql -h localhost -p 5432 -U postgres
- Substitutions to be made into the example:
- After “-pterab “, substitute in the correct file location of the dump file you created above.
- After ” | gunzip | ” substitute in the correct binaries location for the new version, followed by a forward slash and then “psql” as in the example above.
- To find the binaries, go to Postgres.app, open it, click on the new server to which you will be restoring the data from the old server, click on Server Settings, and copy the directory holding the binaries. Then add a forward slash and then psql.
- Substitute in the correct host (after the “-h “) and port (after the “-p “) for the new server.
- Substitute in the correct database user (after the “-U “). The database user must (I think) have superuser privileges, and often that is the “postgres” user (I believe), as in the example above.
- Hit Enter and it should run.
- While this is running, you should see progress information on the command line in Terminal.
- At any time during the process (or really any time), you should also be able to go to a query tool in the new server and see all currently running queries.
- This is the SQL query I used to see the running queries:
SELECT pid, usename, datname, state, query, query_start
FROM pg_stat_activity
WHERE state = 'active';
- During the restore process, in all likelihood, you will see a COPY query in there, though other queries may be happening too (for example, AUTOVACUUM at just the right instant, CREATE TABLE, etc.).
- For some gigantic indexes, when I looked at the progress displayed in the command line output (in Terminal), it appeared to be stalled for hours. <sigh> However, I opened the data directory, which I could get to by going to the new (“restore to”) server in Postgres.app, clicking on the Server Settings button, and then clicking the “Show” button next to Data Directory. Then I clicked “Get Info…” and I could see the Size continuing to grow, so I knew data was getting written.
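- For those stalled-looking index builds specifically, PostgreSQL also has a progress view I could have checked from a query tool or psql on the new server instead of watching the data directory size (a sketch only; I did not actually do this, and the connection details mirror my restore example above):
# Show progress of any CREATE INDEX currently running on the new server
/Applications/Postgres.app/Contents/Versions/18/bin/psql -h localhost -p 5432 -U postgres -c "SELECT pid, datname, phase, blocks_done, blocks_total FROM pg_stat_progress_create_index;"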
Summary
Looking back on it now, it probably looks like a lot of complicated work to those reading this. However, the hard part for me was figuring out what to do to meet my needs. After that, I found actually executing the process simple. I think it looks a lot more complex here because I tried to break out every single step to try to make it easy. Ironically, that may have made it appear more complex. Also, what looks easy for one person may look hard to another, and vice versa. Hopefully this proves helpful to others, and if not, at least I think it will prove helpful to me in the future.
If, in testing and using the database going forward, I find any gotchas or bugs or issues, I hope to remember to report them here and to provide any suggested fixes as able. But, as always, no promises! 😀
All opinions expressed here are entirely those of the author(s) and do not necessarily represent the opinions or positions of their employers (if any), affiliates (if any), or anyone else. The author(s) reserve the right to change his/her/their minds at any time.