Alica's dev blog
Running ASP.NET API in Kubernetes

Introduction

When I was trying to run an ASP.NET API with a database in Kubernetes, it took me more time than I expected. Creating a correct Dockerfile, exposing the right ports, configuring the database parameters correctly – all of this requires you either to carefully read the documentation, or to take a shortcut and just read this post :-)

Run the API

Build connection string from environment variables

Configure your API to build its database connection string from values read from environment variables (I have another post about that topic).
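
As a minimal sketch (assuming Npgsql-style connection string keys, and using the same DB_* variable names that appear later in the deployment manifest), the connection string could be assembled like this:

```csharp
// Minimal sketch: read the connection string parts from environment
// variables (the DB_* names match the deployment manifest in this post).
using System;

public static class ConnectionStringBuilder
{
    public static string Build()
    {
        string server = Environment.GetEnvironmentVariable("DB_SERVER");
        string port = Environment.GetEnvironmentVariable("DB_PORT");
        string database = Environment.GetEnvironmentVariable("DB_NAME");
        string user = Environment.GetEnvironmentVariable("DB_USER");
        string password = Environment.GetEnvironmentVariable("DB_PASSWORD");
        bool disableSsl = Environment.GetEnvironmentVariable("DB_DISABLE_SSL") == "true";

        string sslMode = disableSsl ? "Disable" : "Require";
        return $"Host={server};Port={port};Database={database};" +
               $"Username={user};Password={password};SSL Mode={sslMode}";
    }
}
```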

Create a Docker image

You can use this Dockerfile as a template to build a Docker image of your API.

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /app

COPY ./net-api-in-kubernetes.csproj .
RUN dotnet restore net-api-in-kubernetes.csproj

COPY . .
RUN dotnet build net-api-in-kubernetes.csproj -c Release -o /out

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS runtime
WORKDIR /app

EXPOSE 80
COPY --from=build /out ./
ENTRYPOINT ["dotnet", "net-api-in-kubernetes.dll"]

There is one thing that you need to pay attention to, and that is the port number.

If you use .NET Core (in that case you also need to use the corresponding base images in the FROM lines of the Dockerfile) and run your API locally, it will by default listen on port 5000. With newer .NET versions, it will listen on a port that was chosen randomly when the project was created (it is stored in Properties/launchSettings.json).

However, this doesn’t apply when you run the API in Docker – there, the default is port 80, because the official aspnet base images set the ASPNETCORE_URLS environment variable to http://+:80 (see this stackoverflow thread for explanation). I spent quite a lot of time troubleshooting why my container didn’t send the right response before I discovered this default port.

If you set the URL and port explicitly in your API (there are at least 5 ways to do it), then you don’t need to worry about the defaults, as your setting will be used.
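
One common way to pin the port explicitly, rather than relying on the base image default, is to set the ASPNETCORE_URLS environment variable directly in the Dockerfile:

```dockerfile
# Make the listening port explicit instead of relying on the base image default
ENV ASPNETCORE_URLS=http://+:80
```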

In the end, don’t forget to build and push your image to a registry which your Kubernetes cluster has access to:

docker build -t [tag] .
docker push [tag]

Create a deployment

Now let’s create a deployment using our API image.

Connect to the database

For the API to be able to connect to the database, we need to set values for the environment variables that are used in building the connection string.

For test purposes, we could set the values directly, but it is better practice to use configmaps and secrets.

The configmap will store not-so-sensitive parts of the connection string:

apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
data:
  database: postgres
  server: db-service # the Kubernetes service name of the database (created later in this post)
  port: "5432"
  disableSsl: "true"

And the secret will store username and password:

apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  username: dGVzdC11c2Vy  # "test-user" in base64
  password: dGVzdC1wYXNzd29yZA==  # "test-password" in base64
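
The base64 values can be generated on the command line. Watch out for the trailing newline: plain echo appends one, and it would end up inside the decoded username or password, so use printf (or echo -n) instead:

```shell
# Encode the values without a trailing newline:
printf '%s' 'test-user' | base64      # dGVzdC11c2Vy
printf '%s' 'test-password' | base64  # dGVzdC1wYXNzd29yZA==

# To check what a stored secret value decodes to:
printf '%s' 'dGVzdC11c2Vy' | base64 --decode  # test-user
```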

Note: Although secrets are more secure than configmaps, they are not even close to being completely secure. The values in manifests like the one above are only base64-encoded, not encrypted, so whoever can read the secret can decode the values there. There are strategies to protect secrets further, such as encryption at rest or an external secret store.

We then configure the pods in the deployment to use this configmap and secret.

Expose the port

The last thing is to expose container port 80 (the same port 80 we mentioned when creating the Docker image).

Final yaml

So our deployment will look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: api-deploy
  name: api-deploy
spec:
  replicas: 1 # specify any number you want
  selector:
    matchLabels:
      app: api-deploy
  template:
    metadata:
      labels:
        app: api-deploy
    spec:
      containers:
      - name: api
        image: alica/net-api-kubernetes:1
        ports:
        - containerPort: 80
        env:
        - name: DB_NAME
          valueFrom:
            configMapKeyRef:
              name: db-config
              key: database
        - name: DB_SERVER
          valueFrom:
            configMapKeyRef:
              name: db-config
              key: server
        - name: DB_PORT
          valueFrom:
            configMapKeyRef:
              name: db-config
              key: port
        - name: DB_DISABLE_SSL
          valueFrom:
            configMapKeyRef:
              name: db-config
              key: disableSsl
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: password
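
As an alternative to listing every key individually, Kubernetes can also inject all keys of a configmap or secret at once with envFrom. Note that the environment variable names then match the keys themselves (e.g. database instead of DB_NAME), so the API would have to read those names instead:

```yaml
        envFrom:
        - configMapRef:
            name: db-config
        - secretRef:
            name: db-secret
```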

Create the resources

Don’t forget to create the resources:

kubectl apply -f [yaml file with the configmap]
kubectl apply -f [yaml file with the secret]
kubectl apply -f [yaml file with the deployment]

Verify that it works

We can verify that the API sends back responses by using port forwarding and then sending a request from our local computer:

kubectl port-forward [pod name] [local port]:80

Which in my case was:

kubectl port-forward api-deploy-84cf9f8786-q96m7 1234:80

(I got the pod name from running kubectl get pods.)

Create a service (optional)

Optionally, you can also create a service to make your API deployment easily available for other applications in the cluster.

apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api-deploy
  ports:
    - protocol: TCP
      port: 4321 # or any other value you need
      targetPort: 80

You can then check whether the service works, again using port forwarding:

kubectl port-forward service/api-service [local port]:4321

Run the database (also in Kubernetes)

This assumes that you only need some kind of database for testing purposes, not a persistent one (for that, you would need a proper database server with persistent storage).

If you are OK with this semi-persistent database (it will live until you kill the pod where it’s running), you then have two ways to initialize it:

  1. Manually run a script that will create the tables, set up the constraints, seed the data etc.
  2. Run migrations automatically during the start of your .NET API, e.g. with DbUp.
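
For option 2, a sketch of running the migrations on startup with DbUp could look like this (it assumes the dbup-postgresql NuGet package, with the SQL scripts embedded in the API assembly; the class and method names here are just illustrative):

```csharp
// Sketch of option 2: run migrations on API startup with DbUp
// (requires the dbup-postgresql NuGet package).
using System;
using System.Reflection;
using DbUp;

public static class Migrator
{
    public static void Run(string connectionString)
    {
        var upgrader = DeployChanges.To
            .PostgresqlDatabase(connectionString)
            .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
            .LogToConsole()
            .Build();

        var result = upgrader.PerformUpgrade();
        if (!result.Successful)
        {
            throw new Exception("Database migration failed", result.Error);
        }
    }
}
```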

Create the deployment

We create a one-pod deployment with a postgres container:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: db-deploy
  name: db-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db-deploy
  template:
    metadata:
      labels:
        app: db-deploy
    spec:
      containers:
      - name: db
        image: postgres # consider pinning a specific version tag, e.g. postgres:14
        env:
        - name: POSTGRES_HOST_AUTH_METHOD
          value: trust # this is generally not recommended (we only use it here for testing purposes)
        ports:
        - containerPort: 5432

Create the service

Now we create a service to be able to access the database from the API:

apiVersion: v1
kind: Service
metadata:
  name: db-service
spec:
  selector:
    app: db-deploy
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
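
With this service in place, the database is reachable from other pods in the cluster under the DNS name db-service, which is why that is the value we want as the server part of the connection string. The API then ends up with a connection string along these lines (Npgsql-style keys assumed):

```text
Host=db-service;Port=5432;Database=postgres;Username=test-user;Password=test-password;SSL Mode=Disable
```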

Put it all together

When you now make a call to the API that writes something to the database, and then another one that reads something from the database, you should get the expected results.


Last modified on 2022-02-20