How to handle database migrations with Liquibase on Ktor
- Part 1: Structuring a Ktor project
- Part 2: How to persist Ktor logs
- Part 3: How to use an in-memory database for testing on Ktor
- Part 4: How to handle database migrations with Liquibase on Ktor
- Part 5: Generate API documentation from Swagger on Ktor
- Part 6: How to schedule jobs with Quartz on Ktor
Databases are an important and critical part of backend infrastructure. They are where all the information is stored, and that data must not be compromised or lost. That’s why it is important to properly manage the evolution of the database: it must be possible to modify the schema, migrate the data, or roll back to a previous schema version if something unexpected happens.
In this article, I will cover how to set up Liquibase in a Ktor project and how to create two Gradle tasks responsible for migrating a test and a production MySQL database. There is also a pro version of Liquibase, but the free community version was enough for me.
This post is part of a series of posts dedicated to Ktor where I cover all the topics that made me struggle during development and that were not easy to achieve out of the box. You can check out the other instances of the series in the index above or follow me on Twitter to keep up to date.
The first thing to do is to add all the required dependencies. The starting point is the Gradle plugin in the `build.gradle.kts` file:
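A minimal sketch of that plugin declaration (the version number here is illustrative; check the Gradle plugin portal for the current one):

```kotlin
// build.gradle.kts
plugins {
    // Liquibase Gradle plugin; the version shown is illustrative.
    id("org.liquibase.gradle") version "2.2.0"
}
```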
After syncing the project, it is now possible to add the required dependencies for the Liquibase runtime:
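A sketch of those dependencies; the version numbers are illustrative, and the exact set may vary with your JDK and Liquibase version:

```kotlin
// build.gradle.kts
dependencies {
    // Liquibase itself, resolved only for the Liquibase Gradle tasks.
    liquibaseRuntime("org.liquibase:liquibase-core:4.9.1")
    // JDBC driver to connect to the MySQL databases.
    liquibaseRuntime("mysql:mysql-connector-java:8.0.28")
    // Logging backend for the migration output.
    liquibaseRuntime("ch.qos.logback:logback-classic:1.2.11")
    // XML binding, needed to parse the XML changelog on newer JDKs.
    liquibaseRuntime("javax.xml.bind:jaxb-api:2.3.1")
}
```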
Note that here the `liquibaseRuntime` configuration is used instead of the usual `implementation`: these dependencies are needed only when running the Liquibase Gradle tasks, not by the application itself.
Besides the core functionality, the other dependencies are necessary for the database connection, for logging, and for parsing XML, since all the data about the migrations will be saved in an XML file (as shown later on).
Configuring the migration task
To perform the database migrations, it is necessary to connect to the database, and to do so, some access information, like the database URL, the user, and the password, need to be stored somewhere and retrieved.
The access information can be saved, for example, in `local.properties` or in the environment variables:
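For instance, a `local.properties` file could contain entries like these (the key names and values are placeholders):

```properties
database.url=jdbc:mysql://localhost:3306/mydatabase
database.username=dbuser
database.password=dbpassword
```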
and can be retrieved in the `build.gradle.kts` file:
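A sketch of the retrieval, assuming the key names above, with environment variables as a fallback (useful on CI, where `local.properties` is usually absent):

```kotlin
// build.gradle.kts (the import goes at the top of the file)
import java.util.Properties

// Load local.properties if it exists; otherwise the Properties object stays empty.
val localProperties = Properties().apply {
    val file = rootProject.file("local.properties")
    if (file.exists()) file.inputStream().use { load(it) }
}

// Prefer the file, fall back to environment variables.
val databaseUrl = localProperties.getProperty("database.url")
    ?: System.getenv("DATABASE_URL").orEmpty()
val databaseUsername = localProperties.getProperty("database.username")
    ?: System.getenv("DATABASE_USERNAME").orEmpty()
val databasePassword = localProperties.getProperty("database.password")
    ?: System.getenv("DATABASE_PASSWORD").orEmpty()
```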
The migration task can be configured and customized by providing some parameters in the `activities.register` block, inside the `liquibase` block of the `build.gradle.kts` file.
The ones that I’ve provided are the following, but you can find more parameters in the Liquibase documentation:
- `logLevel` -> execution log level (e.g. `info`);
- `changeLogFile` -> the path of the changelog XML file to use;
- `url` -> the database JDBC URL;
- `username` -> the database username;
- `password` -> the database password.
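Putting those parameters together, the block could look like this sketch (the activity name and the variables holding the credentials are assumptions):

```kotlin
// build.gradle.kts
liquibase {
    activities.register("main") {
        this.arguments = mapOf(
            "logLevel" to "info",
            "changeLogFile" to "src/main/resources/db/migrations.xml",
            "url" to databaseUrl,
            "username" to databaseUsername,
            "password" to databasePassword,
        )
    }
}
```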
The location of the changelog XML file and the SQL files can be freely chosen depending on the project. I’ve decided to put them in the `resources` folder of the project, with the following structure:
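For example (the file names are illustrative):

```
resources
└── db
    ├── migrations.xml
    └── changesets
        ├── 202203151020_create_user_table.sql
        └── 202204021545_add_email_column.sql
```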
The SQL files are contained in the `changesets` subfolder and are named with the following pattern to make each file name unique:
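The pattern prefixes a timestamp to a short free-form description, for example:

```
<YearMonthDayHourMinute>_<short_description>.sql
202203151020_create_user_table.sql
```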
The `migrations.xml` file contains the definition of every migration:
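A minimal changelog could look like this sketch (the schema version, IDs, author, and file names are assumptions):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.9.xsd">

    <changeSet id="202203151020" author="backend">
        <comment>Create the user table</comment>
        <sqlFile path="changesets/202203151020_create_user_table.sql"
                 relativeToChangelogFile="true"/>
    </changeSet>
</databaseChangeLog>
```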
Every migration is represented by a `changeSet`, which has a unique ID. The ID could be, for example, the YearMonthDayHourMinute timestamp used in the file name. Inside the `changeSet`, it is necessary to provide the path of the SQL file for the migration, and a comment can also be added.
Finally, at this point, it is possible to run the Gradle task that will perform the database migration.
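With the Liquibase Gradle plugin, that task is `update`:

```shell
./gradlew update
```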
Migrating multiple databases
As shown above, every `activity` registered in the `liquibase` block corresponds to a different database instance to connect to. Therefore, to connect to and migrate different database instances, it is necessary to register different activities with different names.
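For example, a `test` activity could be registered next to the `main` one (the activity names and the variables holding the test credentials are assumptions):

```kotlin
// build.gradle.kts
liquibase {
    activities.register("main") {
        this.arguments = mapOf(
            "changeLogFile" to "src/main/resources/db/migrations.xml",
            "url" to databaseUrl,
            "username" to databaseUsername,
            "password" to databasePassword,
        )
    }
    activities.register("test") {
        this.arguments = mapOf(
            "changeLogFile" to "src/main/resources/db/migrations.xml",
            "url" to testDatabaseUrl,
            "username" to testDatabaseUsername,
            "password" to testDatabasePassword,
        )
    }
}
```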
By default, the Liquibase plugin will run every activity. However, it is possible to set the `runList` parameter with the names of the activities to run:
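For example, to run only the `test` activity:

```kotlin
// build.gradle.kts
liquibase {
    // ...activities registered as above...
    runList = "test"
}
```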
The value of the parameter can also be provided from the command line when running the Gradle task. To do that, it is necessary to first define an empty variable in the `gradle.properties` file:
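A sketch, assuming the property is called `runList`:

```properties
# gradle.properties
runList=
```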
Then the variable is retrieved in the `build.gradle.kts` file and assigned to the `runList` parameter:
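A sketch of the assignment, assuming the property name above:

```kotlin
// build.gradle.kts
liquibase {
    // ...activities registered as above...
    // Comes from gradle.properties, or from -PrunList=... on the command line.
    runList = project.properties["runList"] as String? ?: ""
}
```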
The value of the variable can then be injected from the command line with the following argument:
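For example, to migrate only the database of the `test` activity:

```shell
./gradlew update -PrunList=test
```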
And that’s it for today. You can find the code mentioned in the article on GitHub.
In the next episode, I’ll cover how to show the API documentation from a Swagger specification. You can follow me on Twitter to know when I’ll publish the next episodes.