How long would it take to freshly install an Ubuntu host, deploy a combo of a PostgreSQL server and Relativity Server, and let it accept connections on port 80 using nginx? Now do this twice, to create both development and production environments, and remember to sync any config changes between them. Sounds like a nightmare and a very tedious, error-prone piece of work?
Thanks to Docker and Docker Compose, this can be done in literally minutes.
What is Docker? Well, it is a tool that can take your app and put it into a container (think of it as a slim Virtual Machine image with a lot of configurable options). Later you can take this container and deploy it on another host (either a physical computer or a cloud-based one like AWS, Azure, Heroku, etc.).
Docker Compose is a tool that allows you to configure several Docker containers to work together.
This all might sound a bit scary if you have not used these tools before. But no worries. Let's take a closer look at the Docker tools and develop some scripts for Relativity Server, and you will see how easy these tools are to configure and use.
Also, please note that the approach described below works for custom Data Abstract or Remoting SDK servers as well.
First you need to install the Docker toolset itself. The installation process is very straightforward; the only thing to remember is that on Linux, Docker Compose needs to be installed separately, after the Docker engine.
On macOS and Windows you need to install the Docker Desktop package, which includes the Docker engine, command-line tools, Docker Compose and other useful tools. Detailed installation instructions for Docker Desktop can be found at docs.docker.com/desktop/.
The Linux environment requires a bit more effort. First you need to install Docker Engine (see docs.docker.com/engine/install/ for detailed instructions for different Linux distros). Then Docker Compose can be installed (again, see docs.docker.com/compose/install/ for detailed instructions).
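On many distros the steps boil down to something like the following sketch. It uses Docker's convenience script, which is fine for development machines; for production, follow the distro-specific instructions linked above, and note that the Compose version number below is only an example:

```shell
# Install Docker Engine via the convenience script (development machines only)
curl -fsSL https://get.docker.com | sh

# Install Docker Compose as a standalone binary
# (the version number is an example; check docs.docker.com/compose/install/ for the current release)
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Verify that both tools are available
docker --version
docker-compose --version
```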
At this point you should have a working Docker environment.
The first step is to put Relativity Server into a Docker image. The
Dockerfile (the configuration file describing how to build an image) for this is a very simple one:
# Start from the Mono 6.10 image (based on Debian 10)
FROM mono:6.10
# Work from /usr/src/app so the files land where the ENTRYPOINT expects them
WORKDIR /usr/src/app
# Copy the Relativity Server files into the image
COPY . .
# Use this command to start Relativity Server
ENTRYPOINT [ "mono", "/usr/src/app/Relativity.exe", "--console" ]
All it says is: "take the image of Mono 6.10, copy the Relativity Server files into it, then use the given command to start it".
Now you can build the Docker image using a simple docker build command. After that, regardless of your OS (be it Windows, Linux or macOS) and the version of Mono installed there (if any at all), you will always get a Relativity Server instance running on Mono 6.10 on Debian 10.
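For instance, the image can be built and started like this (the image tag and the published port number are placeholders I chose for illustration, not values from the original setup):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t relativity-server .

# Run it, publishing a host port to the container (port number is a placeholder)
docker run --rm -p 8099:8099 relativity-server
```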
This makes your app behavior predictable, as you can control its environment.
Of course, Relativity Server alone is not that useful. It would be cool to connect it to a database server. Also, we need to make sure that our data does not vanish when the Docker container is rebuilt.
This is where Docker Compose comes to the rescue. It is a magic tool that allows you to configure and start several Docker containers at once.
A sample Docker Compose configuration file (docker-compose.yml) is very simple. What it does is:
- define three containers (as you remember, a "container" is a very slim VM created from a source "image"):
  - a Relativity Server container (note that I use an unofficial Relativity Server image here as the source)
  - a PostgreSQL 13 container
  - an nginx container with a custom configuration
- define a custom network that these three containers use to communicate with each other
- define a set of so-called volumes (persistent storage areas) used to store the database and the Relativity Server configuration
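Put together, such a docker-compose.yml might look roughly like the following sketch. The image names, ports, paths and credentials here are assumptions for illustration; the exception is the docker-database service name, which is the host name Relativity Server uses to reach PostgreSQL:

```yaml
version: "3.8"

services:
  relativity:
    # Unofficial Relativity Server image; the name is a placeholder
    image: example/relativity-server
    volumes:
      - relativity-config:/root/.config   # persist the Relativity configuration (path is an assumption)
    networks:
      - backend

  docker-database:
    # The service name doubles as the host name in the connection string
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: changeme         # placeholder credentials
    volumes:
      - db-data:/var/lib/postgresql/data  # persist the database between rebuilds
    networks:
      - backend

  proxy:
    image: nginx:1.19
    ports:
      - "80:80"                           # accept connections on port 80
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro   # custom nginx configuration
    networks:
      - backend

networks:
  backend:

volumes:
  db-data:
  relativity-config:
```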
All these containers can now be started with just a single command: docker-compose up. This single command starts a complex process that does exactly what was described above: a combo of PostgreSQL 13 + Relativity Server hidden behind an nginx reverse proxy.
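A few related standard Docker Compose commands are handy in day-to-day use:

```shell
docker-compose up -d      # start the whole stack in the background
docker-compose ps         # check the state of the containers
docker-compose logs -f    # follow the combined log output
docker-compose down       # stop and remove the containers (named volumes survive)
```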
What's way more important is that this same configuration file can now be used on any other computer. Docker magic will fetch images and configure containers, so literally within minutes the same configuration will be up and running.
The only thing you need to remember is that the Relativity Server instance should not try to access PostgreSQL via localhost. This won't work because, for Relativity, localhost means the container it is running in (not the host). Instead, the database server should be accessed by its container name, docker-database (the one defined in the docker-compose configuration). Docker will resolve this name to an IP address in the virtual network used by the running containers.
So the connection string should use docker-database as the server host.
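For instance, an Npgsql-style PostgreSQL connection string using the docker-database host name might look like this (the database name and credentials are placeholders):

```
Server=docker-database;Port=5432;Database=mydb;User Id=postgres;Password=changeme;
```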
Schema Modeler will route all database access operations (like fetching the list of database tables, previewing data, etc.) through the Relativity data access services.
And that is it. A complex configuration that would take (for me, at least) hours to install and configure is up and accepting connections within minutes.
P.S.: This article is not a Docker tutorial. We have just scratched the surface, as it is simply not possible for a short article to go deeper. I really suggest investing some time into learning Docker, Docker Compose and the other tools. Every hour spent reading the docs or watching courses will pay for itself dozens of times over.
We have great plans for Relativity, Data Abstract and Remoting SDK.
Relativity Server, for instance, will get .NET 5 support with all its performance and compatibility boosts. We are also looking at implementing a set of plugin APIs that will allow you to create your own authentication providers (e.g. one could authenticate Relativity users using Amazon Cognito or Firebase). Improvements to the existing Debug and Administration APIs are also on the table, along with a new web-based administration tool.
Share your thoughts with us on what you would like to see in the next generations of Relativity, Data Abstract and Remoting SDK!