6 Commits
0.2.7 ... 0.4.0

Author SHA1 Message Date
f5943a69f0 Allow going down while in an inconsistent state.
Previously we prevented the user from migrating the database in either
direction if there were missing or out-of-order migrations. However,
going down should always be safe (assuming proper down scripts were
written), and going down is often the proper way to resolve
out-of-order migrations. Going down should still be done thoughtfully
(it is highly likely to lead to data loss) and really should not be
done in a production setting.

However, in a development environment, it is not unusual to get into
states where you have missing scripts. For example, if there are test
scripts that apply test data to a development environment on top of the
formal application schema, you may need to back out the test migrations
before creating new application migrations to avoid getting into an
"inconsistent state." Consider the following migrations:

- 1-initial-schema
- 2-add-feature-x
- T1-test-user-data

If you then add *3-add-feature-y*, you will be in an inconsistent
state, as the expected ordering from db_migrate's point of view will be:

- 1-initial-schema
- 2-add-feature-x
- 3-add-feature-y
- T1-test-user-data

With the previous behavior db_migrate refused to do anything and you
had to manually back out the test migrations before applying the new
migration. With the new behavior you can simply go down to back out the
test migrations before going back up with the new migrations.
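
For example, with the migrations above applied and *3-add-feature-y*
newly created, a recovery sequence might look like the following sketch
(assuming a valid configuration file in the working directory):

```
# back out the test migration (T1-test-user-data)
db_migrate down 1

# apply 3-add-feature-y, then re-apply T1-test-user-data
db_migrate up
```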

As an aside:

Another way to avoid this is to name test migrations in such a way that
they stay in line with the changes they are built on top of:

- 1-initial-schema
- 1.1-test-user-data
- 2-add-feature-x
- 2.1-feature-x-test-data

But sometimes the previous pattern is preferable, as it allows you to
have test migrations that evolve to match the "current state of test
data" rather than requiring a developer to layer the contents of
multiple migrations to get a clear picture of the test data. The same
issue applies to non-test migrations, but db_migrate exists to serve
the use-case where you generally prefer to have traversable, immutable
layers that are used to manage production database schema migrations.
2025-08-13 08:12:35 -05:00
2dbe3ea07c Update for Nim 2.x 2024-12-28 11:07:50 -06:00
9acbc27710 Add debug logging for migration diff. 2021-08-07 23:54:10 -05:00
7cf53a4702 Allow multiple SQL directories (to support, for example, test configurations). 2021-07-19 22:57:47 -05:00
6837e5448b Update for Nim 1.4.x+ 2021-07-03 22:00:14 -05:00
daf3a8dad0 Rename migrationsDir sqlDir. 2020-09-01 16:30:50 -05:00
4 changed files with 163 additions and 87 deletions

View File

@@ -1,4 +1,36 @@
DB Migrate # DB Migrate
==========
Small tool(s) to manage database migrations in various languages. Small tool(s) to manage database migrations in various languages.
## Usage
```
Usage:
db_migrate [options] create <migration-name>
db_migrate [options] up [<count>]
db_migrate [options] down [<count>]
db_migrate [options] init <schema-name>
db_migrate (-V | --version)
db_migrate (-h | --help)
Options:
-c --config <config-file> Use the given configuration file (defaults to
"database.properties").
-q --quiet Suppress log information.
-v --verbose Print detailed log information.
--very-verbose Print very detailed log information.
-V --version Print the tools version information.
-h --help Print this usage information.
```
## Database Config Format
The database config is formatted as JSON. The following keys are supported by
all of the implementations:
* `sqlDir` -- Directory to store SQL files.
The following keys are supported by the Nim implementation:
* `connectionString` --
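
A minimal example config might look like the following (a sketch only;
the values are hypothetical and should be replaced with your own):

```
{
  "sqlDir": "migrations",
  "connectionString": "host=localhost port=5432 dbname=example_db user=example_user"
}
```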

View File

@@ -1,7 +1,7 @@
# Package # Package
bin = @["db_migrate"] bin = @["db_migrate"]
version = "0.2.7" version = "0.4.0"
author = "Jonathan Bernard" author = "Jonathan Bernard"
description = "Simple tool to handle database migrations." description = "Simple tool to handle database migrations."
license = "BSD" license = "BSD"
@@ -9,5 +9,4 @@ srcDir = "src/main/nim"
# Dependencies # Dependencies
requires: @["nim >= 0.13.0", "docopt >= 0.1.0"] requires: @["nim >= 2.0.0", "docopt >= 0.1.0", "db_connector"]

View File

@@ -42,7 +42,7 @@ Options:
private static Logger LOGGER = LoggerFactory.getLogger(DbMigrate) private static Logger LOGGER = LoggerFactory.getLogger(DbMigrate)
Sql sql Sql sql
File migrationsDir File sqlDir
public static void main(String[] args) { public static void main(String[] args) {
@@ -90,14 +90,14 @@ Options:
givenCfg.clear() } } givenCfg.clear() } }
// Check for migrations directory // Check for migrations directory
File migrationsDir = new File(givenCfg["migrations.dir"] ?: 'migrations') File sqlDir = new File(givenCfg["sqlDir"] ?: 'migrations')
if (!migrationsDir.exists() || !migrationsDir.isDirectory()) { if (!sqlDir.exists() || !sqlDir.isDirectory()) {
clilog.error("'{}' does not exist or is not a directory.", clilog.error("'{}' does not exist or is not a directory.",
migrationsDir.canonicalPath) sqlDir.canonicalPath)
System.exit(1) } System.exit(1) }
// Instantiate the DbMigrate instance // Instantiate the DbMigrate instance
DbMigrate dbmigrate = new DbMigrate(migrationsDir: migrationsDir) DbMigrate dbmigrate = new DbMigrate(sqlDir: sqlDir)
// If we've only been asked to create a new migration, we don't need to // If we've only been asked to create a new migration, we don't need to
// setup the DB connection. // setup the DB connection.
@@ -112,7 +112,7 @@ Options:
// Create the datasource. // Create the datasource.
Properties dsProps = new Properties() Properties dsProps = new Properties()
dsProps.putAll(givenCfg.findAll { it.key != 'migrations.dir' }) dsProps.putAll(givenCfg.findAll { it.key != 'sqlDir' })
HikariDataSource hds = new HikariDataSource(new HikariConfig(dsProps)) HikariDataSource hds = new HikariDataSource(new HikariConfig(dsProps))
@@ -125,8 +125,8 @@ Options:
public List<File> createMigration(String migrationName) { public List<File> createMigration(String migrationName) {
String timestamp = sdf.format(new Date()) String timestamp = sdf.format(new Date())
File upFile = new File(migrationsDir, "$timestamp-$migrationName-up.sql") File upFile = new File(sqlDir, "$timestamp-$migrationName-up.sql")
File downFile = new File(migrationsDir, "$timestamp-$migrationName-down.sql") File downFile = new File(sqlDir, "$timestamp-$migrationName-down.sql")
upFile.text = "-- UP script for $migrationName ($timestamp)" upFile.text = "-- UP script for $migrationName ($timestamp)"
downFile.text = "-- DOWN script for $migrationName ($timestamp)" downFile.text = "-- DOWN script for $migrationName ($timestamp)"
@@ -140,7 +140,7 @@ Options:
CREATE TABLE IF NOT EXISTS migrations ( CREATE TABLE IF NOT EXISTS migrations (
id SERIAL PRIMARY KEY, id SERIAL PRIMARY KEY,
name VARCHAR NOT NULL, name VARCHAR NOT NULL,
run_at TIMESTAMP NOT NULL DEFAULT NOW())''') } run_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW())''') }
public def diffMigrations() { public def diffMigrations() {
def results = [notRun: [], missing: []] def results = [notRun: [], missing: []]
@@ -150,7 +150,7 @@ CREATE TABLE IF NOT EXISTS migrations (
.collect { it.name }.sort() .collect { it.name }.sort()
SortedSet<String> available = new TreeSet<>() SortedSet<String> available = new TreeSet<>()
available.addAll(migrationsDir available.addAll(sqlDir
.listFiles({ d, n -> n ==~ /.+-(up|down).sql$/ } as FilenameFilter) .listFiles({ d, n -> n ==~ /.+-(up|down).sql$/ } as FilenameFilter)
.collect { f -> f.name.replaceAll(/-(up|down).sql$/, '') }) .collect { f -> f.name.replaceAll(/-(up|down).sql$/, '') })
@@ -215,7 +215,7 @@ CREATE TABLE IF NOT EXISTS migrations (
toRun.each { migrationName -> toRun.each { migrationName ->
LOGGER.info(migrationName) LOGGER.info(migrationName)
File migrationFile = new File(migrationsDir, File migrationFile = new File(sqlDir,
"$migrationName-${up ? 'up' : 'down'}.sql") "$migrationName-${up ? 'up' : 'down'}.sql")
if (!migrationFile.exists() || !migrationFile.isFile()) if (!migrationFile.exists() || !migrationFile.isFile())

View File

@@ -3,11 +3,18 @@
## ##
## Simple tool to manage database migrations. ## Simple tool to manage database migrations.
import algorithm, json, times, os, strutils, docopt, db_postgres, sets, import std/[algorithm, json, logging, os, sequtils, sets, strutils, tables,
sequtils, logging times]
import db_connector/db_postgres
import docopt
type type
DbMigrateConfig* = tuple[ driver, sqlDir, connectionString: string, logLevel: Level ] DbMigrateConfig* = object
driver, connectionString: string
sqlDirs: seq[string]
logLevel: Level
MigrationEntry* = tuple[name, upPath, downPath: string]
proc ensureMigrationsTableExists(conn: DbConn): void = proc ensureMigrationsTableExists(conn: DbConn): void =
let tableCount = conn.getValue(sql""" let tableCount = conn.getValue(sql"""
@@ -38,33 +45,35 @@ proc loadConfig*(filename: string): DbMigrateConfig =
let idx = find(LevelNames, cfg["logLevel"].getStr.toUpper) let idx = find(LevelNames, cfg["logLevel"].getStr.toUpper)
logLevel = if idx == -1: lvlInfo else: (Level)(idx) logLevel = if idx == -1: lvlInfo else: (Level)(idx)
return ( return DbMigrateConfig(
driver: driver:
if existsEnv("DATABASE_DRIVER"): $getEnv("DATABASE_DRIVER") if existsEnv("DATABASE_DRIVER"): $getEnv("DATABASE_DRIVER")
elif cfg.hasKey("driver"): cfg["driver"].getStr elif cfg.hasKey("driver"): cfg["driver"].getStr
else: "postres", else: "postres",
sqlDir:
if existsEnv("MIGRATIONS_DIR"): $getEnv("MIGRATIONS_DIR")
elif cfg.hasKey("sqlDir"): cfg["sqlDir"].getStr
else: "migrations",
connectionString: connectionString:
if existsEnv("DATABASE_URL"): $getEnv("DATABASE_URL") if existsEnv("DATABASE_URL"): $getEnv("DATABASE_URL")
elif cfg.hasKey("connectionString"): cfg["connectionString"].getStr elif cfg.hasKey("connectionString"): cfg["connectionString"].getStr
else: "", else: "",
sqlDirs:
if existsEnv("MIGRATIONS_DIRS"): getEnv("MIGRATIONS_DIRS").split(';')
elif cfg.hasKey("sqlDirs"): cfg["sqlDirs"].getElems.mapIt(it.getStr)
else: @["migrations"],
logLevel: logLevel) logLevel: logLevel)
proc createMigration*(config: DbMigrateConfig, migrationName: string): seq[string] = proc createMigration*(config: DbMigrateConfig, migrationName: string): MigrationEntry =
## Create a new set of database migration files. ## Create a new set of database migration files.
let timestamp = getTime().getLocalTime().format("yyyyMMddHHmmss") let timestamp = now().format("yyyyMMddHHmmss")
let filenamePrefix = timestamp & "-" & migrationName let filenamePrefix = timestamp & "-" & migrationName
let upFilename = joinPath(config.sqlDir, filenamePrefix & "-up.sql") let migration = (
let downFilename = joinPath(config.sqlDir, filenamePrefix & "-down.sql") name: filenamePrefix & "-up.sql",
upPath: joinPath(config.sqlDirs[0], filenamePrefix & "-up.sql"),
downPath: joinPath(config.sqlDirs[0], filenamePrefix & "-down.sql"))
let scriptDesc = migrationName & " (" & timestamp & ")" let scriptDesc = migrationName & " (" & timestamp & ")"
let upFile = open(upFilename, fmWrite) let upFile = open(migration.upPath, fmWrite)
let downFile = open(downFilename, fmWrite) let downFile = open(migration.downPath, fmWrite)
upFile.writeLine "-- UP script for " & scriptDesc upFile.writeLine "-- UP script for " & scriptDesc
downFile.writeLine "-- DOWN script for " & scriptDesc downFile.writeLine "-- DOWN script for " & scriptDesc
@@ -72,47 +81,69 @@ proc createMigration*(config: DbMigrateConfig, migrationName: string): seq[strin
upFile.close() upFile.close()
downFile.close() downFile.close()
return @[upFilename, downFilename] return migration
proc diffMigrations*(pgConn: DbConn, config: DbMigrateConfig): proc diffMigrations*(
tuple[ run, notRun, missing: seq[string] ] = pgConn: DbConn,
config: DbMigrateConfig
): tuple[
available: TableRef[string, MigrationEntry],
run: seq[string],
notRun, missing: seq[MigrationEntry] ] =
debug "diffMigrations: inspecting database and configured directories " &
"for migrations"
# Query the database to find out what migrations have been run. # Query the database to find out what migrations have been run.
var migrationsRun = initSet[string]() var migrationsRun = initHashSet[string]()
for row in pgConn.fastRows(sql"SELECT * FROM migrations ORDER BY name", @[]): for row in pgConn.fastRows(sql"SELECT * FROM migrations ORDER BY name", @[]):
migrationsRun.incl(row[1]) migrationsRun.incl(row[1])
# Inspect the filesystem to see what migrations are available. # Inspect the filesystem to see what migrations are available.
var migrationsAvailable = initSet[string]() var migrationsAvailable = newTable[string, MigrationEntry]()
for filePath in walkFiles(joinPath(config.sqlDir, "*.sql")): for sqlDir in config.sqlDirs:
debug "Looking in " & sqlDir
for filePath in walkFiles(joinPath(sqlDir, "*.sql")):
debug "Saw migration file: " & filePath
var migrationName = filePath.extractFilename var migrationName = filePath.extractFilename
migrationName.removeSuffix("-up.sql") migrationName.removeSuffix("-up.sql")
migrationName.removeSuffix("-down.sql") migrationName.removeSuffix("-down.sql")
migrationsAvailable.incl(migrationName) migrationsAvailable[migrationName] = (
name: migrationName,
upPath: joinPath(sqlDir, migrationName) & "-up.sql",
downPath: joinPath(sqlDir, migrationName) & "-down.sql")
# Diff with the list of migrations that we have in our migrations # Diff with the list of migrations that we have in our migrations
# directory. # directory.
let migrationsInOrder = let migrationsInOrder =
toSeq(migrationsAvailable.items).sorted(system.cmp) toSeq(migrationsAvailable.keys).sorted(system.cmp)
var migrationsNotRun = newSeq[string]() var migrationsNotRun = newSeq[MigrationEntry]()
var missingMigrations = newSeq[string]() var missingMigrations = newSeq[MigrationEntry]()
for migration in migrationsInOrder: for migName in migrationsInOrder:
if not migrationsRun.contains(migration): if not migrationsRun.contains(migName):
migrationsNotRun.add(migration) migrationsNotRun.add(migrationsAvailable[migName])
# if we've already seen some migrations that have not been run, but this # if we've already seen some migrations that have not been run, but this
# one has been, that means we have a gap and are missing migrations # one has been, that means we have a gap and are missing migrations
elif migrationsNotRun.len > 0: elif migrationsNotRun.len > 0:
missingMigrations.add(migrationsNotRun) missingMigrations.add(migrationsNotRun)
migrationsNotRun = newSeq[string]() migrationsNotRun = newSeq[MigrationEntry]()
return (run: toSeq(migrationsRun.items).sorted(system.cmp), result = (available: migrationsAvailable,
run: toSeq(migrationsRun.items).sorted(system.cmp),
notRun: migrationsNotRun, notRun: migrationsNotRun,
missing: missingMigrations) missing: missingMigrations)
debug "diffMigration: Results" &
"\n\tavailable: " & $toSeq(result[0].keys) &
"\n\trun: " & $result[1] &
"\n\tnotRun: " & $(result[2].mapIt(it.name)) &
"\n\tmissing: " & $(result[3].mapIt(it.name))
proc readStatements*(filename: string): seq[SqlQuery] = proc readStatements*(filename: string): seq[SqlQuery] =
result = @[] result = @[]
var stmt: string = "" var stmt: string = ""
@@ -130,28 +161,30 @@ proc readStatements*(filename: string): seq[SqlQuery] =
if stmt.strip.len > 0: result.add(sql(stmt)) if stmt.strip.len > 0: result.add(sql(stmt))
proc up*(pgConn: DbConn, config: DbMigrateConfig, toRun: seq[string]): seq[string] = proc up*(
var migrationsRun = newSeq[string]() pgConn: DbConn,
config: DbMigrateConfig,
toRun: seq[MigrationEntry]): seq[MigrationEntry] =
var migrationsRun = newSeq[MigrationEntry]()
# Begin a transaction. # Begin a transaction.
pgConn.exec(sql"BEGIN") pgConn.exec(sql"BEGIN")
# Apply each of the migrations. # Apply each of the migrations.
for migration in toRun: for migration in toRun:
info migration info migration.name
let filename = joinPath(config.sqlDir, migration & "-up.sql")
if not filename.fileExists: if not migration.upPath.fileExists:
pgConn.rollbackWithErr "Can not find UP file for " & migration & pgConn.rollbackWithErr "Can not find UP file for " & migration.name &
". Expected '" & filename & "'." ". Expected '" & migration.upPath & "'."
let statements = filename.readStatements let statements = migration.upPath.readStatements
try: try:
for statement in statements: for statement in statements:
pgConn.exec(statement) pgConn.exec(statement)
pgConn.exec(sql"INSERT INTO migrations (name) VALUES (?);", migration) pgConn.exec(sql"INSERT INTO migrations (name) VALUES (?);", migration.name)
except DbError: except DbError:
pgConn.rollbackWithErr "Migration '" & migration & "' failed:\n\t" & pgConn.rollbackWithErr "Migration '" & migration.name & "' failed:\n\t" &
getCurrentExceptionMsg() getCurrentExceptionMsg()
migrationsRun.add(migration) migrationsRun.add(migration)
@@ -160,27 +193,28 @@ proc up*(pgConn: DbConn, config: DbMigrateConfig, toRun: seq[string]): seq[strin
return migrationsRun return migrationsRun
proc down*(pgConn: DbConn, config: DbMigrateConfig, migrationsToDown: seq[string]): seq[string] = proc down*(
var migrationsDowned = newSeq[string]() pgConn: DbConn,
config: DbMigrateConfig,
migrationsToDown: seq[MigrationEntry]): seq[MigrationEntry] =
var migrationsDowned = newSeq[MigrationEntry]()
pgConn.exec(sql"BEGIN") pgConn.exec(sql"BEGIN")
for migration in migrationsToDown: for migration in migrationsToDown:
info migration info migration.name
let filename = joinPath(config.sqlDir, migration & "-down.sql") if not migration.downPath.fileExists:
pgConn.rollbackWithErr "Can not find DOWN file for " & migration.name &
". Expected '" & migration.downPath & "'."
if not filename.fileExists: let statements = migration.downPath.readStatements
pgConn.rollbackWithErr "Can not find DOWN file for " & migration &
". Expected '" & filename & "'."
let statements = filename.readStatements
try: try:
for statement in statements: pgConn.exec(statement) for statement in statements: pgConn.exec(statement)
pgConn.exec(sql"DELETE FROM migrations WHERE name = ?;", migration) pgConn.exec(sql"DELETE FROM migrations WHERE name = ?;", migration.name)
except DbError: except DbError:
pgConn.rollbackWithErr "Migration '" & migration & "' failed:\n\t" & pgConn.rollbackWithErr "Migration '" & migration.name & "' failed:\n\t" &
getCurrentExceptionMsg() getCurrentExceptionMsg()
migrationsDowned.add(migration) migrationsDowned.add(migration)
@@ -196,12 +230,15 @@ Usage:
db_migrate [options] down [<count>] db_migrate [options] down [<count>]
db_migrate [options] init <schema-name> db_migrate [options] init <schema-name>
db_migrate (-V | --version) db_migrate (-V | --version)
db_migrate (-h | --help)
Options: Options:
-c --config <config-file> Use the given configuration file (defaults to -c --config <config-file> Use the given configuration file (defaults to
"database.json"). "database.json").
-h --help Show this usage information.
-q --quiet Suppress log information. -q --quiet Suppress log information.
-v --verbose Print detailed log information. -v --verbose Print detailed log information.
@@ -212,7 +249,7 @@ Options:
""" """
# Parse arguments # Parse arguments
let args = docopt(doc, version = "db-migrate (Nim) 0.2.7\nhttps://git.jdb-labs.com/jdb/db-migrate") let args = docopt(doc, version = "db-migrate (Nim) 0.4.0\nhttps://git.jdb-software.com/jdb/db-migrate")
let exitErr = proc(msg: string): void = let exitErr = proc(msg: string): void =
fatal("db_migrate: " & msg) fatal("db_migrate: " & msg)
@@ -240,20 +277,22 @@ Options:
else: logging.setLogFilter(config.logLevel) else: logging.setLogFilter(config.logLevel)
# Check for migrations directory # Check for migrations directory
if not existsDir config.sqlDir: for sqlDir in config.sqlDirs:
if not dirExists sqlDir:
try: try:
warn "SQL directory '" & config.sqlDir & warn "SQL directory '" & sqlDir &
"' does not exist and will be created." "' does not exist and will be created."
createDir config.sqlDir createDir sqlDir
except IOError: except IOError:
exitErr "Unable to create directory: " & config.sqlDir & ":\L\T" & getCurrentExceptionMsg() exitErr "Unable to create directory: " & sqlDir & ":\L\T" & getCurrentExceptionMsg()
# Execute commands # Execute commands
if args["create"]: if args["create"]:
try: try:
let filesCreated = createMigration(config, $args["<migration-name>"]) let newMigration = createMigration(config, $args["<migration-name>"])
info "Created new migration files:" info "Created new migration files:"
for filename in filesCreated: info "\t" & filename info "\t" & newMigration.upPath
info "\t" & newMigration.downPath
except IOError: except IOError:
exitErr "Unable to create migration scripts: " & getCurrentExceptionMsg() exitErr "Unable to create migration scripts: " & getCurrentExceptionMsg()
@@ -266,14 +305,14 @@ Options:
pgConn.ensureMigrationsTableExists pgConn.ensureMigrationsTableExists
let (run, notRun, missing) = diffMigrations(pgConn, config) let (available, run, notRun, missing) = diffMigrations(pgConn, config)
if args["up"]:
# Make sure we have no gaps (database is in an unknown state) # Make sure we have no gaps (database is in an unknown state)
if missing.len > 0: if missing.len > 0:
exitErr "Database is in an inconsistent state. Migrations have been " & exitErr "Database is in an inconsistent state. Migrations have been " &
"run that are not sequential." "run that are not sequential."
if args["up"]:
try: try:
let count = if args["<count>"]: parseInt($args["<count>"]) else: high(int) let count = if args["<count>"]: parseInt($args["<count>"]) else: high(int)
let toRun = if count < notRun.len: notRun[0..<count] else: notRun let toRun = if count < notRun.len: notRun[0..<count] else: notRun
@@ -285,7 +324,8 @@ Options:
elif args["down"]: elif args["down"]:
try: try:
let count = if args["<count>"]: parseInt($args["<count>"]) else: 1 let count = if args["<count>"]: parseInt($args["<count>"]) else: 1
let toRun = if count < run.len: run.reversed[0..<count] else: run.reversed let toRunNames = if count < run.len: run.reversed[0..<count] else: run.reversed
let toRun = toRunNames.mapIt(available[it])
let migrationsRun = pgConn.down(config, toRun) let migrationsRun = pgConn.down(config, toRun)
info "Went down " & $(migrationsRun.len) & "." info "Went down " & $(migrationsRun.len) & "."
except DbError: except DbError:
@@ -294,6 +334,11 @@ Options:
elif args["init"]: discard elif args["init"]: discard
let newResults = diffMigrations(pgConn, config) let newResults = diffMigrations(pgConn, config)
if newResults.missing.len > 0:
exitErr "Database is in an inconsistent state. Migrations have been " &
"run that are not sequential."
if newResults.notRun.len > 0: if newResults.notRun.len > 0:
info "Database is behind by " & $(newResults.notRun.len) & " migrations." info "Database is behind by " & $(newResults.notRun.len) & " migrations."
else: info "Database is up to date." else: info "Database is up to date."