# Development Environment
When developing in the Suite, we need to run multiple microservices in order to have a working UI.
Any UI requires the following to be running:
- The Backend for Frontend (BFF)
- The Application Service
- All Backend Services which the Application service uses
- The broker, the database, and any other infrastructure services we add later on, such as caches.
## Before starting
Please go through this checklist before starting to prevent common issues:
- Ensure ports 80 and 443 are free on your machine (i.e. stop IIS).
    - There have also been issues with VMware Workstation using port 443.
    - IIS may start up again after restarting your laptop.
- Ensure you have followed the setup guide, particularly the `setupEnvironment` script.
- Ensure you have no Docker containers running that may interfere.
- Ensure you have installed the Suite's local development certificates, which are located in `dotnet/docker`. Install the `ca.crt` as a Trusted Root CA.
## Activating the deployment environment
To deploy and manage a Suite deployment we need several dependencies installed. Instead of installing everything on our local machine, we use a Docker container that has all of those dependencies installed, and we run all scripts inside that "activated container".
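A sketch of how activation typically looks, assuming the script is run from the repository's `deployment` directory (adjust the path to wherever `activate.sh` lives):

```bash
cd deployment
./activate.sh
```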
Note

DO NOT prefix `./activate.sh` with a `.` like we do for dotnet.
This will take a while as the image is pulled from the registry, and then a zsh shell will open where you can start typing commands. Old timers may notice we did not do the `az login` thingy; that's because we are using a Service Principal and a secret to access the registry. Nice!
Inside the activated container you have tools like `kubectl`, `k3d`, `helm`, `kustomize`, etc.
Things to note:
- The shell is zsh: you have autocompletion using TAB and can use CTRL+BACKSPACE/DEL, CTRL+ARROW keys, etc.
- oh-my-zsh is also installed, so we can add plugins and other things later on, just ask!
- kubectl has autocompletion configured for resource names like pods, deployments, etc.
## Installing the ITsynch Suite
To install the Suite, we run the Suite Installer. Makes sense, huh? Inside the activated container you can run the command below to get a deployment going.
The installer will deploy and configure a k3s cluster, apply the infrastructure needed for local development (such as SQL Server and RabbitMQ), and once that's ready, it will apply the applications you have selected.
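A minimal sketch of the invocation, assuming the installer is exposed as `suite-installer` inside the activated container and that `--app` filters the deployment to a team's applications (the exact flags may differ):

```bash
# Deploy the localdev environment with only the "landings" team's applications;
# drop the --app flag to deploy everything
suite-installer --app landings
```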
Note

If you want to deploy all apps, not just the "landings" team's, remove the `--app` portion of the command.
You should see the k3s cluster spinning up, and eventually the infrastructure will start to be applied. Once the infrastructure pods start to spin up, their progress is logged in the same output. The SQL Server image takes a while to pull.
## Run a service using local source code
The Suite Installer creates a local container registry at `cr.localdev.suite.itsynch.com:5000`.
To replace a service image with a local one, an image needs to be built and pushed to the local registry, and the local-dev kustomization needs to be changed to replace the default image with the new one.
All of this can be done using the `suite-replace` script.
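A sketch of its usage, based on the arguments described below (the actual script may accept additional options):

```bash
suite-replace <path-to-service> [<kustomization>]
```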
For example:
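A hypothetical invocation, assuming a service located at `../dotnet/src/services/spares-service` and the default localdev kustomization:

```bash
# Build and push the local image, then patch the localdev kustomization to use it
suite-replace ../dotnet/src/services/spares-service
```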
- `<path-to-service>` is the path to the dotnet service.
    - The activated container's working directory is `/deployment`, hence the path should start with `../dotnet/src/services/[..]`
- `<kustomization>` is the kustomization file that you deployed using the installer.
    - This is optional and defaults to `localdev`.
After you execute `suite-replace`, you can check the image being used with `kubectl describe pod <name> | grep Image`.
## Adding new services or UI applications to the deployment
Convenience scripts are available in the activated container to generate applications and jobs. They generate the basic files, which you must review and deploy on localdev to make sure they work as expected.
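A sketch of the invocation, using a hypothetical `suite-generate` script name (check the activated container for the actual generator script):

```bash
suite-generate <kind> <name>
```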
- `<kind>` can be any of the following:
    - `backend-service`
    - `bff-service`
    - `web-ui`
- `<name>` will be used for the `Deployment` name, the Docker image name, etc.
Note

Do not change the suffixes added to the `metadata.name` of the Kubernetes resources. Changing them ends up modifying the cluster infrastructure (e.g. the Service name creates DNS records).
For example, to add a new Angular UI one would do:
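A hypothetical invocation (using the same assumed `suite-generate` script name; the application name is illustrative):

```bash
suite-generate web-ui admin-center
```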
This will generate:
- A `Deployment` to deploy the pods that run the application
- A `Service` to expose your application to the cluster
- For `bff-service` and `web-ui`, an `Ingress` to expose your application outside of the cluster
- A `Kustomization` file that references all the k8s files of your service
Once the kustomization for your app is generated, you need to reference it in the environments you want to deploy it to. We recommend referencing it in localdev and/or your team's kustomization, and then reaching out to the Suite Team to deploy to other environments.
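Referencing it is a matter of adding the generated kustomization to the environment's `resources` list; a rough sketch, with illustrative paths:

```yaml
# environment kustomization.yaml (path and entry are illustrative)
resources:
  - ../../applications/admin-center-ui
```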
## Changing the Ingress Hostname
The Ingress hostname will be the URL prefix used to access your application. For example, if your Ingress hostname is defined as `admin-center-ui`, your app will be accessible at `admin-center-ui.<suite-environment>.<suite-suffix>`, which on local development translates to `admin-center-ui.localdev.suite.itsynch.com`.
UI applications will most likely need to change this to suit the public-facing name of the app. At the very least, remove the `-ui` suffix.
## Configuring backend services
To apply configuration to a service, we use `ConfigMaps` to define the environment variables that dotnet's `IConfiguration` expects. To support rolling updates when your ConfigMap changes, we use Kustomize's `configMapGenerator`.
Note

To configure your service differently for each environment, see the section below.
Create a file named `config.env` next to your service's `Deployment` and `Kustomization`.
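For example, a `config.env` might look like the following (the keys are hypothetical; double underscores map to nested `IConfiguration` sections):

```
Logging__LogLevel__Default=Information
Spares__SomeSetting=some-value
```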
Reference the file in the `configMapGenerator` section of your service's kustomization.
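A sketch of what that section might look like (the ConfigMap name is illustrative):

```yaml
configMapGenerator:
  - name: spares-service-config
    envs:
      - config.env
```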
## Configuring Backend Services for a particular environment
If you need to do this, we recommend creating a ticket with the Suite's DevOps team. Just kidding, reach out to me on Teams (lchaia@itsynch.com).
To apply different configuration depending on the environment, we do the same steps as in the previous section for each environment we want to Kustomize:
Note

If your service doesn't have any config shared between all environments, you don't need a config file next to your service; you can just create the config files per environment as described below.
- Create the config file named `<service-name>-config.env` at `deployments/k8s/environments/<env name>/_config`
- Edit the Kustomization file inside that same directory to add your `configMapGenerator` entry
## Making Changes to non-prod / prod environments
We recommend applying changes to the localdev and/or your team's kustomization environments only. Once those are confirmed to work, reach out to the Suite team regarding the rest of the environments.
Note

The environment kustomization files represent the current state of the environment being deployed. CD pipelines are being created to apply those files to the target environment automatically. Until that is in place, we recommend reaching out before modifying any environment other than localdev.
## Developing Angular UI against Docker
For the UI to point to the Docker containers instead of localhost, we need to run it like so:
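A sketch of what that typically looks like for the Angular workspace; the `docker` configuration name and the npm script are assumptions and may differ in your project:

```bash
# Serve the UI using a build configuration that targets the docker-hosted backends
npm start -- --configuration docker
```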
You can then access http://localhost:4200 to see the AdminCenter UI.
## Forwarding internal services to your workstation
`kubectl port-forward` can be used to forward traffic from a service to your local workstation. However, since we run kubectl from a container, we need to tweak the command a bit to make it work.
Activate the dev environment, mapping the container's port:
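How this looks depends on how `activate.sh` exposes ports; assuming it passes extra arguments through to `docker run`, it might look like this:

```bash
./activate.sh -p 8080:8080
```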
Forward traffic from `spares-service`'s `http` named port to port 8080 on the container, which is mapped to port 8080 on your workstation by the activate command.
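A sketch of the forward, assuming the service lives in the `suite-localdev-hub` namespace and that binding to all addresses is needed so the container's port mapping can reach it:

```bash
kubectl port-forward service/spares-service 8080:http \
  --address 0.0.0.0 --namespace suite-localdev-hub
```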
Then, if we access http://localhost:8080/graphql/ on our workstation, we should see the `spares-service` GraphQL editor.
## Known Issues
### Core DNS localdev configuration
On localdev there's a CoreDNS customization to properly resolve calls made from inside a pod to the identity service, since the public DNS points to 127.0.0.1. There is a known issue where the ConfigMap customization we use is not re-applied after a cluster restart.
The issue manifests itself as the Backend for Frontend not being able to authenticate requests against identity.
To fix it, we can run the following:
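A sketch of one possible fix, assuming the customization is part of the localdev infrastructure layer (the path and resource names are assumptions):

```bash
# Re-apply the localdev infrastructure, including the CoreDNS ConfigMap customization
kubectl kustomize --enable-alpha-plugins ../k8s/environments/localdev/hub/infrastructure \
  | kubectl apply -f -
# Restart CoreDNS so it reloads the ConfigMap
kubectl --namespace kube-system rollout restart deployment coredns
```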
## Cleaning up the local cluster
When you are not developing, or need to free up resources, you can stop or remove the cluster:
- Stop the cluster using `k3d cluster stop <name>`
    - The name is the name of the environment, like `localdev`.
    - You can see a complete list with `k3d cluster list`
    - You can start the cluster later on using `k3d cluster start <name>`
- Nuke the cluster: on an activated container run `k3d cluster delete <name>`
    - You can recreate the cluster later on using the `suite-installer`
- Delete all Kubernetes resources but keep the cluster alive:
    - Delete the namespace with `kubectl delete namespace suite-localdev-hub`
    - This keeps the Docker image cache, so the next deployment will be faster
    - This is useful for the local dev loop
- Delete only the apps, keeping the infrastructure:
    - `kubectl kustomize --enable-alpha-plugins ../k8s/environments/localdev/hub/applications | kubectl delete -f -`
    - Change the application layer path according to the environment you deployed