Nice overview of pg-osc. I'd like to also mention pgroll, which has some similarities, but does this at column level rather than table level and takes things further: it can expose the old and new schema simultaneously (using views), which means you don't need to maintain backwards compatibility code in your app.
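To make the views idea concrete, here's a rough sketch of what a versioned-views setup can look like (illustrative schema/column names, not pgroll's actual generated DDL):

    -- Each schema version gets its own set of views over the same physical table,
    -- so old and new application code can run side by side during the migration.
    CREATE SCHEMA app_v1;
    CREATE SCHEMA app_v2;

    CREATE VIEW app_v1.users AS
      SELECT id, full_name FROM public.users;                  -- old column name

    CREATE VIEW app_v2.users AS
      SELECT id, full_name AS display_name FROM public.users;  -- new column name

    -- Old code sets search_path to app_v1, new code to app_v2;
    -- both end up reading and writing the same public.users rows.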
Plug aside, pgroll looks pretty good. I do a lot of copy/renames to update fields on very large tables with a lot of indexes; if it could automate those scripts, dependencies included, I would use it for big bespoke migrations in a heartbeat.
On the smaller side, I can see this being useful to avoid migration bugs, but being its own migration tool isn't a great choice, since ecosystem-specific migration tools have a lot of useful options and can be used programmatically. I'd make a pgroll plugin for alembic and other common ecosystem-specific, feature-rich migration tools that hooks into the DDL emission to transform a "dumb" migration into a juiced-up one. That'd make it an instant win for me.
This is a nice tool to add to one’s Postgres toolbelt. However, when I saw the heading “Benefits of pg_osc” I hoped for a corresponding “drawbacks” or “tradeoffs” section, but there doesn’t appear to be one, at least on this page. The big one seems to be that this tool creates a copy of your original table (like MySQL’s Large Hadron Migrator, IIRC), which can be problematic if you are disk constrained and/or your table is very large, which is more likely to be the case if you are reaching for specialized tools like this, I think. It would be great if instead some of these operations were supported safely at almost zero cost, the same way e.g. Postgres 11 added support for safely adding new columns with default values. That may be beyond the capacity of an extension, however.
Online schema changes (and upgrades) are absolutely critical for high-availability applications. In CockroachDB, all schema changes are online and work more or less like this. One thing you give up with more or less all of these approaches is the ability to perform the migration in a transaction. Postgres is quite advanced relative to other databases in terms of its ability to perform DDL in transactions. Cockroach still isn't there either, though it's making real progress, at least for transactions that consist exclusively of DDLs (i.e. that don't mix DDLs with DMLs).
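To illustrate the transactional-DDL point (table and column names here are made up):

    -- Postgres lets you run DDL inside a transaction and roll it all back:
    BEGIN;
    ALTER TABLE accounts ADD COLUMN notes text;
    CREATE INDEX accounts_notes_idx ON accounts (notes);
    ROLLBACK;  -- both the column and the index are gone, atomically

    -- (CREATE INDEX CONCURRENTLY is the notable exception: it can't run inside a transaction.)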
A good way to think about it is that any operation for which Postgres needs to inspect the existing data can block for a long time if there's a lot of data. For example, adding a unique constraint has to scan the whole table; the usual workaround is to build the index with CREATE INDEX CONCURRENTLY first and then attach the constraint to it. For check and foreign-key constraints there's the "NOT VALID" workaround: add the constraint without checking existing rows, then validate it later under a weaker lock.
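Roughly, the two patterns look like this (hypothetical table/constraint names):

    -- Unique constraint: build the index without blocking writes, then attach it.
    CREATE UNIQUE INDEX CONCURRENTLY users_email_key ON users (email);
    ALTER TABLE users ADD CONSTRAINT users_email_unique UNIQUE USING INDEX users_email_key;

    -- Check / foreign-key constraints: skip the scan at ALTER time, validate later.
    ALTER TABLE orders ADD CONSTRAINT orders_user_fk
      FOREIGN KEY (user_id) REFERENCES users (id) NOT VALID;
    ALTER TABLE orders VALIDATE CONSTRAINT orders_user_fk;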
Same with adding a NOT NULL column without a default value. If there is a (constant) default value, then Postgres (11+) can do it without blocking, which is pretty cool. That works because it only needs to modify metadata.
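For example (Postgres 11+, illustrative names):

    -- Constant default: stored in the catalog, no table rewrite, effectively instant.
    ALTER TABLE events ADD COLUMN source text NOT NULL DEFAULT 'unknown';

    -- Volatile default: still forces a full table rewrite.
    ALTER TABLE events ADD COLUMN ingested_at timestamptz NOT NULL DEFAULT clock_timestamp();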
Same with changing column types: Postgres has to go over (and usually rewrite) the existing data, except for the handful of binary-compatible changes like varchar(n) to text.
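E.g. (made-up table/columns):

    -- int -> bigint rewrites the whole table (and its indexes):
    ALTER TABLE measurements ALTER COLUMN reading TYPE bigint;

    -- varchar(n) -> text is binary-compatible, so it's metadata-only and fast:
    ALTER TABLE measurements ALTER COLUMN label TYPE text;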
Disclaimer: I work at Xata, we maintain pgroll.