Containers are everywhere — whether you deploy your app to a Kubernetes cluster or another cloud environment, or you use containers as a local runtime and dev environment, it’s hard to ignore them! I wrote the following detailed description as part of a post on the Web API and its use with a JavaScript-based app, which will be published shortly. But it made sense to keep the Docker-related part of the walkthrough separate, since it applies independently.
If you’d like to check out the sample setup on your own machine, you can follow the first part of the walkthrough to set up your environment, and then use the prepared repository without having to go through the steps yourself. On the other hand, the instructions below include all required steps to build a Docker-based development environment, so you could create your own sample and follow along.
Why use Docker?
Many devs will be content to use their Cross-Platform .NET App UI (XAF)-based application, or their separate Web API Service project, in the usual Windows and Visual Studio based environment. However, being able to run such a setup in containers has several advantages:
- It abstracts the runtime and clearly defines project requirements
- Development can occur on any machine, either with a local container setup or with access to a remote one. The structure I’ll describe in this post allows developers to use any development machine running Windows, macOS or Linux.
- Other devs can reproduce the required setup with a single command
- You take some important steps towards deployment, and of course towards Continuous Integration and Deployment
Some of these considerations are recognized all over the developer world — for instance in the success of the Development Containers concept, which is used with Visual Studio Code Dev Containers or GitHub Codespaces.
Note: you can substitute other container tooling for Docker, in case you like Podman/Buildah better, or (depending on your platform) perhaps LXD. Some of these tools are fully compatible with Docker, others need different instructions but follow the same ideas.
The decision to use Docker does not mean that you can never use Visual Studio on this project again! For instance, the Model Editor in XAF is only available in Visual Studio, so if you want to make non-trivial model changes, it’s easiest to open the solution in Visual Studio.
Goals for this post
The setup described below is meant for developers! This is important, since I won’t explain how to target deployment scenarios related to containers: Kubernetes clusters, monitoring, scaling, and so on. These are complicated topics in their own right, but the requirements are quite different from those at development time. Structuring your application system in a container-compatible way is an advantage in any case, but developer concerns are the focus of this post.
If you are interested in a deployment setup for XAF in Docker and Kubernetes, please read this blog post: Deploy and scale an XAF Blazor Server app: use Azure Kubernetes Service to serve hundreds of users (we’ve recently updated the XAF Docker Hub images to .NET 7 and DevExpress v22.2 - check it out).
As a developer, you’ll be able to spin up the entire orchestrated container structure in a single command at the end of this walkthrough. The source code folders will be mounted into the running containers and watched for changes, so that you can use any IDE or editor on the host machine to work on the sources, and the containers will — most of the time — restart and rebuild as needed. Occasionally it will be necessary to restart containers manually to accommodate changes, but you won’t need Visual Studio or even .NET installed on the host machine. You won’t need a local SQL Server either!
Note that a local .NET installation can be useful to work with features like Intellisense in your IDE. But it is not a technical requirement!
Install Docker
In order to follow along, or to run the sample project in Docker, you will need a working Docker installation. If you are new to Docker, I recommend checking out some of the hands-on guides on the Docker website. If your machine does not have a Docker setup yet, you can find the “Docker Desktop” download for your OS and install it.
Create a solution
For purposes of the following instructions, the details of creating the solution are not relevant! You can use any solution you already have, or one you create with the wizard in Visual Studio. The instructions assume that you have a solution with the following structure:
The standard XAF app used here has a frontend project called XAFApp.Blazor.Server, and a second frontend project called XAFApp.WebApi. Both of these projects share the XAFApp.Module. The walkthrough will create two separate containers for the two frontends.
Note that the walkthrough assumes that the test project uses EF Core for its data model. This makes only a minor difference to some code samples; the process would be largely the same for an XPO-based application.
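To keep the later steps easier to follow, here is a sketch of the repository layout this walkthrough works towards. The folder name XAFApp matches my sample; yours may differ, and the Docker-related files are created in the steps below:
repository root
  .env                       (created later, never committed)
  .gitignore
  docker-compose.yml
  XAFApp/                    (the XAF solution folder)
    .dockerignore
    Dockerfile
    start-blazor.sh
    start-webapi.sh
    XAFApp.Blazor.Server/
    XAFApp.WebApi/
    XAFApp.Module/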
Create Docker images
Now it’s time to get started! Docker images are specific runtime environments “at rest”, i.e. before you start them up. If you “run a container”, an image is used as a template for that container. Creating images is therefore the first important step, since this is where you define what your runtime environment will include and how it will work.
General instructions for running .NET apps in Docker containers are available from Microsoft. Here’s an entry point, in case you’re interested. There are also some relevant pages in the DevExpress documentation, for instance this information on required packages for Linux, from the Reporting team. Reporting is not used in the sample app, but I include the required packages in the Dockerfile anyway, to illustrate how it works.
Here’s the Dockerfile at the level of the solution folder. A lot of the time, you will see one Dockerfile for each image, but in this case the file is reused since it is very similar for the Blazor and WebApi projects.
FROM mcr.microsoft.com/dotnet/sdk:7.0
RUN apt-get update
RUN apt-get install -y libc6 libicu-dev libfontconfig1 libgdiplus wait-for-it
WORKDIR /src
COPY . ./
ARG DXNUGETKEY
RUN dotnet nuget add source https://nuget.devexpress.com/api -n DXFeed --store-password-in-clear-text -u DevExpress -p $DXNUGETKEY
ARG STARTSCRIPT
ENV STARTSCRIPT $STARTSCRIPT
CMD wait-for-it -t 0 sql:1433 -- $STARTSCRIPT
Starting with the standard .NET 7 SDK image, the apt-get lines install the extra packages that may be required later by DevExpress Reporting. The project files are then copied into the image. I will also set things up in a moment so that the live project folder is mounted into the running container. Strictly speaking, only one of the two approaches is required, but it doesn’t hurt to do both.
The lines dealing with the DXNUGETKEY install the NuGet feed for DevExpress packages in the image. The value for the key itself must be supplied as an argument to the build process, as you will see when I set up “docker compose”.
Note that this approach leaves the secret feed key in the image! This is not the best approach to use when you deploy an application, but this image is meant as a development environment that does not leave your machine, and it’s convenient to have the feed key available.
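By the way, if you would like to test the image build on its own before the orchestration is in place, you can run docker build manually. This is just a quick sketch, assuming your solution folder is called XAFApp and your feed key is available in the shell variable DXNUGETKEY; the start script referenced here is created in the next section:
docker build \
  --build-arg DXNUGETKEY=$DXNUGETKEY \
  --build-arg STARTSCRIPT=/src/start-webapi.sh \
  -t xafapp-webapi-dev \
  ./XAFApp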
Orchestrate the required services
The STARTSCRIPT is passed to the Dockerfile externally. As I mentioned above, this Dockerfile will be reused with two different start scripts. Both of these also go into the solution folder. Here is start-blazor.sh:
#!/bin/sh
dotnet watch run --project XAFApp.Blazor.Server
And this is start-webapi.sh:
#!/bin/sh
dotnet watch run --project XAFApp.WebApi
Again, these scripts are made to work as a development environment. In a deployment image, you would run dotnet restore only as part of the build process, while it is an implicit part of the startup process in this setup. For the development scenario, this approach is convenient since the images will not need to be rebuilt if extra packages are needed. By not calling dotnet restore explicitly, the run command can decide for itself which projects need to be restored in each case.
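For contrast, here is a rough sketch of what a deployment-oriented Dockerfile might look like instead. It is not part of this walkthrough and makes assumptions about your project names; the point is simply that restore and publish happen at build time, and that a multi-stage build keeps the feed key out of the final image:
# deployment-style sketch, not used in this walkthrough
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY . ./
ARG DXNUGETKEY
RUN dotnet nuget add source https://nuget.devexpress.com/api -n DXFeed --store-password-in-clear-text -u DevExpress -p $DXNUGETKEY
RUN dotnet publish XAFApp.WebApi -c Release -o /app

FROM mcr.microsoft.com/dotnet/aspnet:7.0
# install any native packages your app needs here, as in the development image
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "XAFApp.WebApi.dll"]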
Back to the development Dockerfile above: the final line uses the command wait-for-it to make sure that the real start script is not activated before the SQL Server on port 1433 is ready. Don’t worry that we don’t have a SQL Server yet — you will add it to the configuration soon!
One more quick step is required before we can orchestrate the services for the demo solution. Add a file called .dockerignore in the XAF solution folder, with the following content:
**/bin
**/obj
This file excludes the bin and obj folders from the context that is copied to the image during the build. This saves a bit of time, since these folders can be substantial, and it also helps avoid confusion if your local development platform is not similar to the Linux environment inside the container.
Back to the topic of orchestration: now you can create the file docker-compose.yml at the top level of the repository (i.e. outside the XAF solution folder). Here is its initial content:
version: '3.8'

services:
  sql:
    image: mcr.microsoft.com/azure-sql-edge
    environment:
      ACCEPT_EULA: 1
      MSSQL_SA_PASSWORD: ${SQL_SA_PASSWD}
      MSSQL_TELEMETRY_ENABLED: 'FALSE'
    cap_add:
      - SYS_PTRACE
    volumes:
      - sql-data:/var/opt/mssql

  webapi:
    build:
      context: ./XAFApp
      args:
        DXNUGETKEY: ${DXNUGETKEY}
        STARTSCRIPT: '/src/start-webapi.sh'
    depends_on:
      - sql
    environment:
      SQL_DBNAME: ${SQL_DBNAME}
      SQL_SA_PASSWD: ${SQL_SA_PASSWD}
      DOTNET_WATCH_RESTART_ON_RUDE_EDIT: 1
    ports:
      - '5273:5273'
    volumes:
      - ./XAFApp:/src

  blazor:
    build:
      context: ./XAFApp
      args:
        DXNUGETKEY: ${DXNUGETKEY}
        STARTSCRIPT: '/src/start-blazor.sh'
    depends_on:
      - sql
    environment:
      SQL_DBNAME: ${SQL_DBNAME}
      SQL_SA_PASSWD: ${SQL_SA_PASSWD}
      DOTNET_WATCH_RESTART_ON_RUDE_EDIT: 1
    ports:
      - '5274:5274'
    volumes:
      - ./XAFApp:/src

volumes:
  sql-data:
In case you’re not familiar with it, note that the YAML file format requires whitespace to be lined up correctly! I use Visual Studio Code to edit these files, like all other sources, and it handles YAML correctly without any special tricks.
Of course the file format is documented in detail on the Docker website, but a quick summary of the setup should be useful. The first service is called sql and it uses Microsoft’s image azure-sql-edge to run SQL Server. Using an environment variable, the EULA of this image is accepted, and you should of course read it before you do so. Documentation of the Docker image is here, and the EULA itself is linked from this page. At the time of writing, the EULA text includes the following provision:
You may install and use copies of the software on any device, including third party shared devices, to design, develop, test, and demonstrate your programs. You may not use the software in a production environment.
Please make sure you adhere to the licensing terms if you use this image.
Note that I use the image mcr.microsoft.com/azure-sql-edge in this configuration instead of mcr.microsoft.com/mssql/server, because it supports processor architectures other than amd64 and can therefore run natively on Apple Silicon Macs and other non-Intel/AMD machines.
In the docker-compose.yml configuration, you can also see that an extra capability is configured (the SYS_PTRACE line), and a Docker volume is connected to store the data for the container. I’ll get back to the variable SQL_SA_PASSWD in a moment.
The two services webapi and blazor are very similar, and they both use the Dockerfile you created in the previous step. They differ in the STARTSCRIPT and they use different ports — that’s it. This setup will build the two images directly when the orchestrated setup is started, and the source folder is mounted into the running containers so that the dotnet watch command in each start script can rebuild the projects as needed.
The three environment variables DXNUGETKEY, SQL_DBNAME and SQL_SA_PASSWD must be configured and, in the case of the database name and the sa password, utilized in the projects. Providing the values is easy, since Docker will read them from a local .env file automatically. Create this file on the top level of your repository and add three lines like these:
SQL_DBNAME=XAFDatabase
SQL_SA_PASSWD=Super7%Pwd
DXNUGETKEY=MYKEY
The SQL sa password shown is just an example. Azure SQL Edge has certain password complexity requirements, such as lower- and upper-case characters, numeric digits and special characters. You can use the example password directly if you like, since it will only be used locally in your development environment.
For the NuGet key, you need to substitute your own key for the MYKEY placeholder. In case you have never worked with DevExpress NuGet packages, please see this documentation page on the DevExpress site for instructions to obtain your feed key.
Important: Please add a .gitignore file to the top level of your repo at this point, to make sure that .env is not checked into version control! Just create the file called .gitignore, edit it and add .env. For obvious reasons, you should never add passwords or keys to version control, and the .env file will need to be recreated by other devs who work on the same repo, for their own local setups.
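The .gitignore file can be as small as this. The bin and obj lines are optional and only matter if you ever build on the host, for instance to support IntelliSense; adding them is my own habit rather than a requirement of this walkthrough:
.env
**/bin
**/obj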
Configure connection strings dynamically from the environment variables
What remains is to use the SQL_DBNAME and SQL_SA_PASSWD variables in the running .NET projects, as part of the connection string. There is certainly more than one way to do this, but the approach I chose is to include a placeholder in the connection string. This general purpose approach can be used for other similar scenarios, where an app in a Docker container receives certain parameters through environment variables.
In each of the two projects XAFApp.Blazor.Server and XAFApp.WebApi, edit the file appsettings.json. In the block ConnectionStrings (usually near the start of the file), change the ConnectionString so it resembles this:
Pooling=false;Data Source=sql;Initial Catalog=<SQL_DBNAME>;MultipleActiveResultSets=true;User ID=sa;Password=<SQL_SA_PASSWD>;
- The Data Source is set to sql. This is the name of the SQL Server service as configured in docker-compose.yml. The orchestration makes the virtual server available by name within the .NET container, so that this name resolves to the IP address required to connect to the service.
- The Initial Catalog is set from the outside, using the environment variable SQL_DBNAME.
- The User ID is fixed to sa. Once more, this is a development setup — in a deployment you should consider using a separate access account instead.
- The Password is again set from the outside, using the environment variable you already included in docker-compose.yml.
To replace the environment variables in the connection string, add a package reference to each project file. Insert this line near the existing PackageReference entries:
<PackageReference Include="StringTemplate4" Version="4.0.8" />
Now edit the file Startup.cs, again once for the WebApi and once for the Blazor project. Find the block where the connection string is retrieved from the configuration file and passed to the UseSqlServer call. In the WebApi project, there are only two relevant lines:
string connectionString = Configuration.GetConnectionString("ConnectionString");
options.UseSqlServer(connectionString);
Replace these two lines with the following code:
var connectionStringTemplate = new Template(Configuration.GetConnectionString("ConnectionString"));
connectionStringTemplate.Add("SQL_DBNAME", System.Environment.GetEnvironmentVariable("SQL_DBNAME"));
connectionStringTemplate.Add("SQL_SA_PASSWD", System.Environment.GetEnvironmentVariable("SQL_SA_PASSWD"));
options.UseSqlServer(connectionStringTemplate.Render());
Make sure to resolve the type Template by adding the line using Antlr4.StringTemplate; at the start of the file.
In the Blazor project, the code is very similar, but it is interrupted by an extra block that takes care of the special EASYTEST connection string. For purposes of this sample, you can replace that part and use the same code as for the WebApi — if you need the EasyTest configuration, you can obviously edit the sample code as required.
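For reference, here is a sketch of the replacement in the Blazor project’s Startup.cs, under the assumption that your generated code also ends in an options.UseSqlServer(...) call; adjust the surrounding lines to match your project. The using Antlr4.StringTemplate; line is needed here as well.
// replaces the EASYTEST block and the original connection string lines
var connectionStringTemplate = new Template(Configuration.GetConnectionString("ConnectionString"));
connectionStringTemplate.Add("SQL_DBNAME", System.Environment.GetEnvironmentVariable("SQL_DBNAME"));
connectionStringTemplate.Add("SQL_SA_PASSWD", System.Environment.GetEnvironmentVariable("SQL_SA_PASSWD"));
options.UseSqlServer(connectionStringTemplate.Render());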
Simplify the network setup
To run the apps in containers, you can simplify the network setup of the standard startup process. In an orchestrated Docker scenario, it is unnecessary to run each container separately with an HTTPS endpoint. Typically a deployed container-based setup will use a shared proxy that runs in its own container, perhaps nginx or traefik, or maybe YARP. The HTTPS requirement will then be satisfied by the proxy and is of no concern to individual apps — just like it should be at development time, and that’s another great advantage of thinking in terms of containers!
Edit the Startup.cs files for both projects again and find the lines that call app.UseHttpsRedirection();. Comment them out — you won’t need them in this setup. Then edit Properties/launchSettings.json for each project. Remove all the extra blocks, leaving only one per project. This is what the result should be for the WebApi project:
{
  "$schema": "http://json.schemastore.org/launchsettings.json",
  "profiles": {
    "XAFApp.WebApi": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": false,
      "launchUrl": "swagger",
      "applicationUrl": "http://*:5273",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}
And here is the corresponding content for the Blazor app project:
{
  "profiles": {
    "XAFApp.Blazor.Server": {
      "commandName": "Project",
      "launchBrowser": false,
      "applicationUrl": "http://*:5274",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}
Both of these entries use the respective port numbers that are also configured in docker-compose.yml (quite a random choice, by the way, so feel free to choose your own!), and the applicationUrl values use the * placeholder instead of a fixed server name. This is an important change from the standard setup: by default the URL uses localhost, which inside a container means the app only listens on the container’s own loopback interface and cannot be reached through the published port. The * placeholder makes the app listen on all interfaces instead.
Define an EF Core data model and add some test data
To test the orchestrated setup and see some output from the test app, you need to add a business object and some test data. Of course you can use existing data if you have it! If you’re walking through this description with a new project, find the BusinessObjects folder in the Module project and add the file SaleProduct.cs:
using DevExpress.Persistent.Base;
using DevExpress.Persistent.BaseImpl.EF;

namespace XAFApp.Module.BusinessObjects {
    [DefaultClassOptions]
    public class SaleProduct : BaseObject {
        public SaleProduct() {
        }

        public virtual string Name { get; set; }
        public virtual decimal? Price { get; set; }
    }
}
Assuming your test project uses EF Core, open the file XAFAppDbContext.cs in the same folder and add a line to the class XAFAppEFCoreDbContext to register the entity:
public class XAFAppEFCoreDbContext : DbContext {
    ...
    public DbSet<SaleProduct> SaleProducts { get; set; }
    ...
}
Still in the Module project, edit DatabaseUpdate/Updater.cs and add some test data code to the method UpdateDatabaseAfterUpdateSchema, like this:
public override void UpdateDatabaseAfterUpdateSchema() {
    base.UpdateDatabaseAfterUpdateSchema();

    var rubberChicken = ObjectSpace.FirstOrDefault<SaleProduct>(p => p.Name == "Rubber Chicken");
    if (rubberChicken == null) {
        // we assume that the demo data doesn't exist yet
        rubberChicken = ObjectSpace.CreateObject<SaleProduct>();
        rubberChicken.Name = "Rubber Chicken";
        rubberChicken.Price = 13.99m;

        var pulley = ObjectSpace.CreateObject<SaleProduct>();
        pulley.Name = "Pulley";
        pulley.Price = 3.99m;

        var enterprise = ObjectSpace.CreateObject<SaleProduct>();
        enterprise.Name = "Starship Enterprise";
        enterprise.Price = 149999999.99m;

        var lostArk = ObjectSpace.CreateObject<SaleProduct>();
        lostArk.Name = "The Lost Ark";
        lostArk.Price = 1000000000000m;
    }

    ObjectSpace.CommitChanges();
}
The Web API project uses its own mechanism to determine which data types can be accessed through the service. To make the test data type available, find the block with the call AddXafWebApi in the file Startup.cs. There is a comment with an example line there, and you need to add your type with a line like this:
options.BusinessObject<XAFApp.Module.BusinessObjects.SaleProduct>();
What remains is to make sure that the updater will be called to create the sample data, and for this demo scenario it is sufficient to do this from the Web API service. The Blazor app may be used every now and then, but it works with the same version of all data structures, so it doesn’t need to run its own updates. Edit the file Services/WebApiApplicationSetup.cs in the WebApi project and remove the comments in front of the last few lines. This establishes an event handler for the event application.DatabaseVersionMismatch, which runs the updater.
public class WebApiApplicationSetup : IWebApiApplicationSetup {
    public void SetupApplication(AspNetCoreApplication application) {
        ...
        application.DatabaseVersionMismatch += (s, e) => {
            e.Updater.Update();
            e.Handled = true;
        };
    }
}
Run the application system
The time has come to start the sample setup! Using Docker, this is now possible with a single command:
> docker compose up --build -d
The command loads the configuration from the file docker-compose.yml in the current directory automatically. The parameter --build is not strictly required at this point, but if you repeat the command in the future, it will often be needed to make sure the images are rebuilt and the containers regenerated with any major changes you’ve made — minor changes don’t require restarts at all, so if you do restart, you’ll usually want a rebuild anyway.
The parameter -d detaches the running containers from the console. If you are using Docker Desktop, you can easily access the logs for each container there. On the console, you’ll need a command like docker compose logs -f blazor to follow the log output of a container. Generally this will only be necessary if something goes wrong.
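A few other standard Docker Compose commands tend to be useful during development. These are shown here with the service names from this walkthrough:
# show the status of the services
docker compose ps
# follow the log output of a single service
docker compose logs -f webapi
# restart one service manually, e.g. after a change that dotnet watch could not apply
docker compose restart blazor
# stop and remove the containers (the named sql-data volume is kept)
docker compose down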
If the application has started up correctly, you can now access the Blazor app in a browser at http://localhost:5274. It will display the demo data for the SaleProduct type as expected.
The Web API service runs on port 5273, and it publishes the Swagger UI at http://localhost:5273/swagger. You can see and test the supported APIs there, as you would expect from a Swagger interface. Endpoint entries for the SaleProduct type are included, since you added this type using the BusinessObject<>() call.
If you test the sample call to the OData endpoint at /api/odata/SaleProduct — just click the Try it out button! — then you will see the sample data available in the Web API service as well.
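You can also call the endpoint from the command line if you prefer. Here is a quick sketch using curl; note that if your Web API project was created with authentication enabled, the request will return 401 unless you first obtain a token (for instance through the authentication endpoint visible in Swagger) and pass it as a bearer token:
curl http://localhost:5273/api/odata/SaleProduct
# with authentication enabled, assuming the token is stored in $TOKEN:
curl -H "Authorization: Bearer $TOKEN" http://localhost:5273/api/odata/SaleProduct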
Test the reload behavior when source code changes
Since both containers use dotnet watch to run their services, and the source code folders are mounted into the containers, any changes to source code made on the host side will be recognized by the running processes. They will then do their best to reload the relevant files, or even restart the processes, since we set DOTNET_WATCH_RESTART_ON_RUDE_EDIT.
With the application system running, you can test this behavior by making a small change. For instance, change the SaleProduct class by adding a DisplayName attribute to the Name property:
using System.ComponentModel;
...

[DefaultClassOptions]
public class SaleProduct : BaseObject {
    public SaleProduct() {
    }

    [DisplayName("Product Name")]
    public virtual string Name { get; set; }
    public virtual decimal? Price { get; set; }
}
Save your change, and watch the Docker logs for the Blazor app container if you like — you’ll see the reload happening. You may need to reload the Blazor app in the browser, but then the change to the column header for the Name field will be recognized immediately.
Conclusion
Please let us know what you think! We hope that the sample is useful as a starting point and a demonstration of bringing modern development approaches to XAF and related technologies.