
Development Environment

When developing in the Suite, we need to run multiple microservices to get a working UI.

Any UI requires the following to be running:

  1. The Backend for Frontend (BFF)
  2. The Application Service
  3. All Backend Services which the Application service uses
  4. The broker, the database, and any other infrastructure services we add later on, such as caches.

Before starting

Please make sure to follow this checklist before starting, to prevent common issues:

  1. Ensure ports 80 and 443 are free on your machine, e.g. stop IIS.
    1. There have also been issues with VMWare Workstation using port 443.
    2. IIS may start up after restarting your laptop.
  2. Ensure you have followed the setup guide, particularly the setupEnvironment script.
  3. Ensure you have no docker containers running that may interfere.
  4. Ensure you have installed the Suite's local development certificates, which are located in dotnet/docker. Install the ca.crt as a Trusted Root CA.
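To check the first item quickly, here is a minimal pure-bash sketch (it relies on bash's /dev/tcp support; a successful connect means something is already listening on the port):

```shell
# Report whether anything is listening on ports 80/443 locally
# before activating the development environment.
for port in 80 443; do
  # Opening /dev/tcp succeeds only if a listener accepts the connection.
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port is in use"
  else
    echo "port $port is free"
  fi
done
```

If either port reports "in use", stop IIS (or whatever holds the port) before continuing.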

Activating the deployment environment

To deploy and manage a Suite Deployment, we need several dependencies installed. Instead of installing everything on our local machine, we use a Docker container that has all of the dependencies preinstalled, and we run all scripts inside that "activated container".

Bash
cd deployments

./activate.sh

Note

DO NOT prefix ./activate.sh with a . (i.e. do not source it) like we do for dotnet

This will take a while the first time as the image is pulled from the registry; then a zsh shell will open where you can start typing commands. Old-timers may notice we did not run the az login step; that's because we are using a Service Principal and a secret to access the registry. Nice!

Inside the activated container you have tools like kubectl, k3d, helm, kustomize, etc.

Things to note:

  1. The shell is zsh: you have autocompletion using TAB, and can use CTRL+BACKSPACE/DEL, CTRL+ARROW keys, etc.
    1. oh-my-zsh is also installed so we can install plugins and other things later on, just ask!
  2. kubectl has autocomplete configured for names like pods, deployments, etc.

Installing the ITsynch Suite

To install the Suite, we run the Suite Installer. Makes sense, huh? Inside the activated container you can run the command below to get a deployment going.

The installer will deploy and configure a k3s cluster, apply the infrastructure for local development, like SQL Server and RabbitMQ, and once that's ready, it will apply the applications you have selected.

Bash
suite-installer localdev/hub --apps=./k8s/environments/localdev/hub/teams/landings

Note

If you want to deploy all apps, not just the "landings" team, remove the --apps portion

You should start seeing the k3s cluster spin up, and eventually the infrastructure should start to be applied. Once the infrastructure pods start to spin up, their progress will be logged there. The SQL Server image takes a while to pull.

Run a service using local source code

The Suite Installer creates a local container registry at cr.localdev.suite.itsynch.com:5000.

To replace a service image with a local one, an image needs to be built and pushed to the local registry, and the local-dev kustomization needs to be changed to replace the default image with the new one.

All this can be done using the suite-replace script.

Bash
suite-replace <path-to-service> <kustomization>

For example:

Bash
suite-replace ../dotnet/src/services/Currencies/ITsynch.Suite.Currencies.Application \
    ./k8s/environments/localdev/hub/teams/landings
  1. <path-to-service> is the path to the dotnet service.

    1. The activated container's working directory is /deployment, hence the path should start with ../dotnet/src/services/[..]
  2. <kustomization> is the kustomization file that you deployed using the installer.

    1. This is optional and defaults to localdev.

After you execute suite-replace, you can check the image being used with kubectl describe pod <name> | grep Image
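As a rough illustration of what ends up in the registry (the actual naming is defined by the suite-replace script, so treat this as an assumption), an image reference combines the local registry host with a lowercased service name, since Docker repository names must be lowercase:

```shell
# Illustrative only: suite-replace defines the real image name.
# Docker repository names must be lowercase, hence the tr.
registry='cr.localdev.suite.itsynch.com:5000'
service_path='../dotnet/src/services/Currencies/ITsynch.Suite.Currencies.Application'
name=$(basename "$service_path" | tr '[:upper:]' '[:lower:]')
echo "${registry}/${name}"
# → cr.localdev.suite.itsynch.com:5000/itsynch.suite.currencies.application
```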

Adding new services or UI applications to the deployment

Convenience scripts are available in the activated container to generate applications and jobs. They generate the basic files, which you must review and deploy on localdev to make sure they work as expected.

Bash
suite-gen-app <kind> <name>

<kind> can be any of the following:

  1. backend-service
  2. bff-service
  3. web-ui

<name> will be used for the Deployment name, docker image name, etc.
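Since <name> becomes the metadata.name of Kubernetes objects, it must be a valid DNS-1123 label: lowercase alphanumerics and hyphens, starting and ending with an alphanumeric. A quick bash sketch to sanity-check a candidate name:

```shell
# Returns success if the name is a valid DNS-1123 label,
# which is what Kubernetes requires for Service names.
is_valid_name() {
  [[ "$1" =~ ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$ ]]
}

is_valid_name "admin-center-ui" && echo "admin-center-ui: ok"
is_valid_name "AdminCenter" || echo "AdminCenter: invalid (uppercase)"
```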

Note

Do not change the suffixes added to the metadata.name of the kubernetes resources. These end up defining cluster infrastructure (e.g. the Service name creates DNS records).

For example, to add a new Angular UI one would do:

Bash
suite-gen-app web-ui admin-center-ui

This will generate:

  1. A Deployment to deploy the pods to run the application
  2. A Service to expose your application to the cluster
  3. For bff-service and web-ui, an Ingress to expose your application outside of the cluster.
  4. A Kustomization file that references all the k8s files of your service.

Once the kustomization for your app is generated, you need to reference it in the environments you want to deploy it to. We recommend you reference it in localdev and/or your team's kustomization, and then reach out to the Suite Team to deploy to other environments.

Changing Ingress HostName

The Ingress Hostname will be the URL prefix used to access your application. For example, if your Ingress Hostname is defined as admin-center-ui, your app will be accessible as admin-center-ui.<suite-environment>.<suite-suffix>, which on local development translates to admin-center-ui.localdev.suite.itsynch.com.

UI applications will most likely need to change this to suit the public-facing name of the app. At the very least, remove the -ui suffix.
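The composition above can be sketched as plain string concatenation (the environment and suffix values are the localdev defaults from this page):

```shell
# Compose the public URL from the Ingress hostname prefix
# plus the environment's domain.
host_prefix='admin-center-ui'
suite_environment='localdev'
suite_suffix='suite.itsynch.com'
echo "https://${host_prefix}.${suite_environment}.${suite_suffix}"
# → https://admin-center-ui.localdev.suite.itsynch.com
```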

Configuring backend services

To apply configurations to a service, we use ConfigMaps to define the environment variables that dotnet's IConfiguration expects. To support rolling updates of your config map, we use Kustomize's ConfigMapGenerator.

Note

To configure your service differently for each environment, keep reading the section below.

Create a file named config.env next to your service's Deployment and Kustomization.

Env
MyServiceModuleOptions__MyServiceKey=The Value
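The double underscore is dotnet's environment-variable separator: the configuration provider maps "__" to the ":" hierarchy separator, so the line above sets the key MyServiceModuleOptions:MyServiceKey. A one-line bash illustration of that mapping:

```shell
# dotnet's environment variable configuration provider
# treats "__" as the ":" section separator.
key='MyServiceModuleOptions__MyServiceKey'
echo "${key//__/:}"
# → MyServiceModuleOptions:MyServiceKey
```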

Reference the file in your service's kustomization configMapGenerator section.

YAML
# [...]

configMapGenerator:
    - name: my-service-config # This is the metadata.name of your config
      envs:
          - config.env # This is the path to the env file

Configuring Backend Services for a particular environment

If you need to do this we recommend creating a ticket with the Suite's DevOps team. Just kidding, reach out to me on Teams (lchaia@itsynch.com)

To apply different configuration depending on the environment, we repeat the steps from the previous section for each environment we want to customize.

Note

If your service doesn't have any config shared between all environments, you don't need a config file at your service level; you can just create the config files per environment as described below:

  1. Create the config file named <service-name>-config.env in the deployments/k8s/environments/<env name>/_config directory
  2. Edit the Kustomization file inside that same directory to add your ConfigMapGenerator
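For example, assuming a hypothetical service named my-service, the per-environment kustomization entry might look like this (names and paths are illustrative):

```yaml
# deployments/k8s/environments/<env name>/_config/kustomization.yaml
# ("my-service" is a placeholder name)
configMapGenerator:
    - name: my-service-config
      envs:
          - my-service-config.env
```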

Making Changes to non-prod / prod environments

We recommend applying changes to the localdev and/or your teams kustomization environments only. Once that is confirmed to work reach out to the Suite's team regarding the rest of the environments.

Note

The environment kustomization files represent the current state of the environment being deployed. CD pipelines are being created to apply those files to the target environment automatically. Until that is in place, we recommend reaching out before modifying any environment other than localdev.

Developing Angular UI against Docker

For the UI to point to the docker containers instead of localhost, we need to run it like so:

Bash
cd angular
yarn install
yarn start admin-center --configuration=docker

You can then access http://localhost:4200 to see the AdminCenter UI.

Forwarding internal services to your workstation

kubectl port-forward can be used to forward traffic from a service to your local workstation. However, since we run kubectl from a container, we need to adjust the command a bit to make it work.

Activate the dev environment mapping the container's port:

Bash
./activate.sh -p 8080:8080

Forward traffic from spares-service's http named endpoint to port 8080 on the container, which is mapped to port 8080 on your workstation by the activate command.

Bash
kubectl port-forward --address="0.0.0.0" services/spares-service 8080:http

Then, if we access http://localhost:8080/graphql/ on our workstation we should see the spares-service graphql editor.

Known Issues

Core DNS localdev configuration

On localdev, there's a CoreDNS customization to properly resolve calls inside a pod to the identity service, since the public DNS points to 127.0.0.1. There is a known issue where the config map customization we use is not applied after a cluster restart.

The issue manifests itself as the backend for frontend not being able to authenticate requests against identity.

To fix it, we can run the below:

Bash
kubectl -n kube-system rollout restart deployment coredns

Cleaning up the local cluster

When you are not developing, or need to free up resources, you can stop or remove the cluster:

  1. Stop the cluster using k3d cluster stop <name>
    1. The name is the name of the environment like localdev.
    2. You can see a complete list with k3d cluster list
    3. You can start the cluster later on using k3d cluster start <name>
  2. Nuke the cluster: on an activated container run k3d cluster delete <name>
    1. You can recreate the cluster later on using the suite-installer
  3. Delete all kubernetes resources but keep the cluster alive:
    1. Delete the namespace with kubectl delete namespace suite-localdev-hub
    2. This will keep docker image cache so that when you deploy again it will be faster
    3. This is useful for local dev loop
  4. Delete only the apps, keeping the infrastructure: kubectl kustomize --enable-alpha-plugins ../k8s/environments/localdev/hub/applications | kubectl delete -f -
    1. Change the application layer accordingly