To be honest this looks half-baked. A couple of points:
- the up and down migration queries should be persisted in the database. Here migrations are always read from files, so if someone edits a down migration it may no longer match the up migration that was actually applied.
- migration content should be tracked (CRC, hash, whatever) to prevent any changes to a migration once it has been applied (see the sketch after this list)
- in my opinion each migration should have a strict order rather than relying on file sorting, to prevent migrations from being added in the middle of existing ones. This can be accomplished by enforcing that the id/migration number in each file is strictly incremental
- a 1-to-1 mapping should be enforced, and errors should be produced if there are migrations in the database that are not present in files
- one nitpick: two tables for migrations seems like overkill
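A minimal sketch of what the first two points could look like, assuming a single hypothetical schema_migrations tracking table (all names are made up):

    -- Hypothetical tracking table: persists the exact up/down SQL that was applied
    -- plus a checksum, so later edits to the migration files can be detected and rejected.
    CREATE TABLE schema_migrations (
        id         INTEGER PRIMARY KEY,   -- strictly incremental, enforced by the tool
        name       TEXT NOT NULL,
        up_sql     TEXT NOT NULL,         -- the SQL as it was when applied
        down_sql   TEXT NOT NULL,         -- the rollback as it was when applied
        checksum   TEXT NOT NULL,         -- e.g. SHA-256 of up_sql
        applied_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
    );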
This is mostly coming from a Java background, where having the above things was a real lifesaver in some situations. I think the Play Framework's way of managing migrations was really good. Flyway is good too for Spring Boot (not sure if anything else exists besides it), but it lacks down migrations.
https://flywaydb.org/ is another contender in this space, which also allows migrations to be written in Java if needed.
We do exactly that at work - Liquibase with SQL for about 7 years now. It's wonderful and you don't have to learn anything on top of your SQL dialect. Also, it makes database-first a breeze since you can export your changes to SQL from any IDE these days and drop it right into a migration.
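For reference, a Liquibase "formatted SQL" changelog is just a plain .sql file with comment directives, something along these lines if I remember the format right (table and author names are made up):

    --liquibase formatted sql

    --changeset jane:1
    CREATE TABLE customer (
        id   BIGINT PRIMARY KEY,
        name VARCHAR(255) NOT NULL
    );
    --rollback DROP TABLE customer;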
https://docs.liquibase.com/workflows/liquibase-community/mig...
Have you seen Sqitch (https://sqitch.org/)? It does exactly this, it's a battle-tested system with a decent number of users, and it supports many databases. I didn't dig deep into this new system, but it looks very much like Sqitch at a glance.
With SQL Server Data Tools (SSDT) we get desired-state schema management, which seems much better than writing any kind of migrations. In practice it works well, and the code you're writing is plain DDL. I wonder why it's not more commonly seen in other ecosystems.
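A rough illustration of the desired-state idea (a hypothetical table, not SSDT-specific syntax): you only ever maintain the target definition, and the deployment tool diffs it against the live database and generates the ALTERs for you.

    -- The definition kept in source control; adding a column is just editing this file.
    CREATE TABLE dbo.Orders (
        OrderId INT           NOT NULL PRIMARY KEY,
        Placed  DATETIME2     NOT NULL,
        Total   DECIMAL(10,2) NOT NULL
    );

    -- The tool compares this to the live schema and emits something like:
    --   ALTER TABLE dbo.Orders ADD Total DECIMAL(10,2) NOT NULL DEFAULT 0;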
You might want to look at https://flywaydb.org/download. 1. This is a Redgate product. 2. They do have a community edition (no-cost option).
Besides raw performance, why would I ever want to write SQL instead of using some type of ORM or query builder??
If this interests you, both dbmate [https://github.com/amacneil/dbmate] and golang-migrate [https://github.com/golang-migrate/migrate] are in very similar spaces, and both can ship as a single executable.
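Both take the plain-SQL-files approach; as far as I remember, a dbmate migration is a single file with the up and down sections marked by comments (the table here is just an example):

    -- migrate:up
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE
    );

    -- migrate:down
    DROP TABLE users;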
The reason people use ORM mappers with their migration functionality is that you get rid of a lot of repetitive stuff and get an okay result 95% of the time.
I have the feeling that if you don't have a lot of different tables and migrations, writing them by hand probably saves you time compared to learning the intricacies of an ORM mapper (they are usually very leaky abstractions, so you have to read their source code to understand what's going on). If you already know an ORM mapper then writing them by hand is usually not worth it. The real trap seems to be believing that ORM mappers will abstract the database away and that you don't need to understand your database anymore.
I remember using something like this for a system based in ColdFusion and Microsoft SQL server circa 2005.
The missing magic in tools like Liquibase is a low-maintenance parser framework. If we had composable grammars it wouldn't be that hard for a tool to track the SQL syntax of various databases and apply some real intelligence to SQL definitions.
The trouble is that nothing good ever happens in parsing frameworks, because everybody thinks they have the same problem as Go and worries too much about parsing speed.
Several years ago, I wrote something similar for Clojure, as part of a way to make it super simple to put a SQL database behind a Clojure app. The essential goal was to get close to single-jar-file deployment.
https://github.com/mschaef/sql-file
Doesn’t get a lot of use aside from a few small things I use it for, but has been nice to have around.
(This was before I knew of Flyway… These days I might just link to that for the migration part.)
Ohhh perfect, I was just looking for a simple way to do migrations in PostgreSQL without having to build an app and write Python code. Just pure SQL is perfect!
Question: do you know if it will work with Azure DevOps? Where does it store the state of what scripts were executed so that it doesn't have to redo those the next time?
This seems to assume SQL-first development instead of using a DBMS just as a storage back-end.
I'd prefer something like a UTC datetime prefix over the dependence on numeric sorting, but...
We do something like this with SQLite.
Fun fact - you don't even need to maintain a special unicorn "_migrations" table or any other external state to keep track of things with SQLite migrations. You can simply use the user_version pragma:
https://sqlite.org/pragma.html#pragma_user_version
We have a DatabaseVersion constant in the classes that own each type of SQLite database, and all they have to do at ctor time is query the database for its current version and run a for loop over the difference to execute the required migration scripts.
Our migrators are one-way (we don't define a matching 'down.sql'). We would push new code to back something out of a SQL schema and then increment our counter just as if we had added something new. Decrementing/skipping is disallowed since it would cause information loss on the migration path. Having a monotonic version number per type of database makes it super easy to keep everything on rails for us.
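A minimal sketch of the pattern in plain SQL (the version numbers and table are just examples): each migration script does its DDL and then bumps the pragma, and the owning class only runs the scripts between the stored version and its DatabaseVersion constant.

    -- On startup: read the version currently stored in the database file.
    PRAGMA user_version;                             -- returns e.g. 1

    -- Migration script for going from version 1 to 2:
    ALTER TABLE orders ADD COLUMN shipped_at TEXT;
    PRAGMA user_version = 2;                         -- record that version 2 is now applied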
For database access, we also use raw SQL via Dapper.