
How to deploy open-appsec on MicroK8s


MicroK8s is a lightweight Kubernetes distribution designed to run on local systems as well as in cloud and edge environments. Canonical, the open-source company behind MicroK8s, describes the platform as a lightweight “zero-ops, pure-upstream Kubernetes” distribution. MicroK8s offers the minimal scope of functionality necessary to run a production-grade cluster in a lightweight way. In addition, because MicroK8s can run on basic Linux, Windows, or macOS systems, it is much easier to set up and host than most other Kubernetes variants.

Although MicroK8s can help you deploy a web application quickly, it cannot protect your application or API gateway from zero-day attacks. Zero-day attacks pose a significant threat to web applications: they exploit vulnerabilities unknown to developers and security professionals, leaving organizations exposed to malicious activity. The danger stems from the absence of available patches or security measures, which gives attackers an opportunity to infiltrate systems undetected. Zero-day attacks can compromise sensitive data and user privacy, and can lead to financial loss and reputational damage.

Because signatures for new attacks can, by design, only be created after those attacks have been published, a traditional WAF that relies solely on signatures can never protect preemptively against zero-day attacks. open-appsec does not rely on signatures; instead it is based on machine learning, so it can provide true preemptive zero-day protection.

open-appsec for Kubernetes protects web applications and APIs running in Kubernetes environments. It integrates with the popular NGINX Ingress Controller, serving as a secure HTTP/S load balancer for one or more services inside a Kubernetes cluster. It also integrates with Kong Gateway (API Gateway), securing distributed, exposed APIs at the API Gateway level.

In this blog, we will describe step by step how to create a MicroK8s Kubernetes cluster on an Ubuntu machine, deploy an NGINX web service on it, and secure that service with open-appsec integrated with the NGINX Ingress Controller.

Please be aware that the commands provided here are tailored for Ubuntu (we tested on 20.04 and 22.04) and might need modifications when utilized on other Linux distributions.


  • Make sure you have a Linux machine (preferably Ubuntu 20.04 or 22.04) with the wget command-line tool installed.

  • Make sure you have a basic understanding of Kubernetes Ingress.
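
Before starting, you can sanity-check the tooling with a tiny shell sketch of our own (check_tool is a hypothetical helper, not part of the official docs):

```shell
#!/bin/sh
# Hypothetical helper: print a warning for each required tool that is missing.
check_tool() {
    command -v "$1" >/dev/null 2>&1 || echo "missing: $1"
}

# Tools this walkthrough relies on:
check_tool wget
check_tool snap
check_tool curl
```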


1. MicroK8s Installation and Configuration

Install MicroK8s using the following commands:

sudo apt update
sudo snap install microk8s --classic 
sudo microk8s.status --wait-ready 

The installed MicroK8s package includes a command-line utility, microk8s kubectl, for interacting with the cluster. You can simplify this by adding an alias to your shell initialization script so that the familiar kubectl command works:

echo "alias kubectl='microk8s kubectl'" >> ~/.bash_aliases
source ~/.bash_aliases 

Note: we recommend removing the alias once you have completed this blog.

Next, allow the “ubuntu” user to execute commands against the MicroK8s cluster without sudo by adding it to the “microk8s” group. Note that “ubuntu” is the default username on AWS EC2 Ubuntu instances; if you use a different username, change it accordingly in the commands below.

sudo usermod -a -G microk8s ubuntu
sudo chown -f -R ubuntu ~/.kube
newgrp microk8s 

Confirm your installation with:

kubectl get nodes
kubectl get services 

To enable Role-Based Access Control (RBAC) on the MicroK8s cluster, run:

microk8s enable rbac 

2. Helm Installation and Configuration with MicroK8s

Run the following commands to install the Helm package manager and prepare a directory for the MicroK8s cluster's kubeconfig file:

sudo snap install helm --classic
cd $HOME 
mkdir .kube
cd .kube

Only if there is already an existing kubeconfig file, back it up now so you can restore it later:

cp config config.original
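
If you prefer a re-runnable version of this backup step, here is a small sketch of our own (backup_kubeconfig is a hypothetical helper, not an open-appsec or MicroK8s command):

```shell
#!/bin/sh
# Hypothetical helper: back up $dir/config to $dir/config.original,
# but only if a config file actually exists.
backup_kubeconfig() {
    dir="$1"
    mkdir -p "$dir"
    if [ -f "$dir/config" ]; then
        cp "$dir/config" "$dir/config.original"
        echo "backed up"
    else
        echo "nothing to back up"
    fi
}

# Typical use on the MicroK8s host:
# backup_kubeconfig "$HOME/.kube"
```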

Create a kubeconfig file for the MicroK8s cluster:

microk8s config > config
cd - 

3. Enabling MetalLB for LoadBalancer Support

In environments where LoadBalancer services aren't natively supported, MetalLB should be enabled with the following command:

microk8s enable metallb 

Upon enabling MetalLB, you will be prompted for a range of IP addresses for MetalLB to use. Provide this range based on the IP addresses available in your Linux machine's network subnet, and note that for this lab you need a range of at least 2 IP addresses.
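
The range is entered as two addresses separated by a dash, e.g. 192.168.1.240-192.168.1.243. A tiny sketch can confirm the range holds at least 2 addresses (range_size is our own hypothetical helper, and it assumes both ends sit in the same /24 subnet):

```shell
#!/bin/sh
# Hypothetical helper: count the addresses in a MetalLB range such as
# "192.168.1.240-192.168.1.243". Assumes both ends share the first 3 octets.
range_size() {
    start="${1%-*}"                 # part before the dash
    end="${1#*-}"                   # part after the dash
    echo $(( ${end##*.} - ${start##*.} + 1 ))
}

range_size "192.168.1.240-192.168.1.243"   # prints 4, so >= 2 is satisfied
```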

4. NGINX Web App and Service Deployment

Begin by creating a deployment for the nginx web app, and then expose the deployment as a Service:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer 

Monitor your services using kubectl get services until the EXTERNAL-IP of the load balancer appears (it should be one of the IPs you provided when enabling MetalLB earlier).
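
If you'd rather script the wait than re-run kubectl get services manually, a sketch along these lines works (wait_for_external_ip is our own hypothetical helper; the jsonpath expression reads the service's LoadBalancer status):

```shell
#!/bin/sh
# Hypothetical helper: poll until the service has an EXTERNAL-IP assigned,
# then print it. Gives up after a configurable number of attempts.
wait_for_external_ip() {
    svc="$1"
    attempts="${2:-30}"
    i=0
    while [ "$i" -lt "$attempts" ]; do
        ip=$(kubectl get svc "$svc" \
            -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null)
        if [ -n "$ip" ]; then
            echo "$ip"
            return 0
        fi
        i=$((i + 1))
        sleep 2
    done
    return 1
}

# Typical use: wait_for_external_ip nginx
```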

5. NGINX Service Accessibility

The NGINX service can be accessed via the EXTERNAL-IP from any machine within your network (through the load balancer). Use the curl command to reach http://<EXTERNAL-IP>. This lets you reach the web application directly from outside the MicroK8s cluster.

6. Installing open-appsec Ingress NGINX using Helm

Install open-appsec Ingress NGINX as per the following steps. This assumes the open-appsec-k8s-nginx-ingress-latest.tgz Helm chart has already been downloaded (e.g. with wget from the open-appsec downloads site) into your current directory:

helm install open-appsec-k8s-nginx-ingress-latest.tgz \
    --name-template=open-appsec \
    --set appsec.mode=standalone \
    --set controller.ingressClass=appsec-nginx \
    --set controller.ingressClassResource.controllerValue="" \
    --set appsec.persistence.enabled=false \
    --set appsec.userEmail="<your-email-address>" \
  -n appsec --create-namespace 

Make sure to replace <your-email-address> with your own email address in the above command.

Validate that open-appsec is installed and all pods are running by using the following command:

kubectl get pods -n appsec
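
To script this check, here is a small sketch of our own (not_running is a hypothetical helper) that counts pods whose STATUS column is not Running:

```shell
#!/bin/sh
# Hypothetical helper: read "kubectl get pods" output on stdin and print how
# many pods are not in the Running state (0 means everything is up).
not_running() {
    awk 'NR > 1 && $3 != "Running" { n++ } END { print n + 0 }'
}

# Typical use:
# kubectl get pods -n appsec | not_running
```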

7. Setting up an Ingress Rule

To divert specific host traffic to your NGINX service, create a Kubernetes Ingress resource by copying the following content to a new nginx-ingress.yaml file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
spec:
  rules:
  - host: ""
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: nginx
            port:
              number: 80

Apply the ingress file using:

kubectl apply -f nginx-ingress.yaml 

8. Adding Protection to the Ingress

At this stage, we will create a Kubernetes policy resource for open-appsec. Save the policy definition (see the open-appsec documentation for the best-practice policy) to a file, for example policy.yaml, and apply it:

kubectl apply -f policy.yaml -n appsec

Find out the name of your relevant ingress resource:

kubectl get ing -A

Edit the ingress resource:

kubectl edit ing/nginx-ingress -n default

Adjust the ingressClassName to use the open-appsec deployment:

spec:
  ingressClassName: appsec-nginx

Also add the annotation that activates open-appsec by referencing the open-appsec policy resource created earlier; you can read more about the policy in the open-appsec-best-practice-policy documentation.

9. Validating the open-appsec Deployment

Note the name of the ingress nginx pod by running:

kubectl get pods -n appsec

Show the logs of the open-appsec agent container by running:

kubectl logs [ingress nginx pod name] -c open-appsec -n appsec

open-appsec is now successfully installed and can already be tested, but we recommend first connecting to the SaaS central management to use its monitoring features. You can check connectivity to the web app by running the following command:

curl -s -v http://[INGRESS-EXTERNAL-IP]

To get the INGRESS-EXTERNAL-IP, run “kubectl get svc -A” and copy the external IP of the Ingress Controller service. If you want to test open-appsec before connecting to SaaS central management, you can run the following command; please make sure to generate some benign traffic before running the attack.

curl -s -v "http://[INGRESS-EXTERNAL-IP]/?q=../../../etc/passwd"

(Insert the same EXTERNAL-IP you identified further above)
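
To generate that benign traffic before running the attack command above, a sketch like the following works (send_benign_traffic is our own hypothetical helper; replace the placeholder IP with yours):

```shell
#!/bin/sh
# Hypothetical helper: send a few harmless requests so open-appsec's machine
# learning engine sees normal traffic before you simulate an attack.
send_benign_traffic() {
    ip="$1"
    for path in "/" "/index.html" "/?q=hello"; do
        curl -s -o /dev/null --max-time 5 "http://$ip$path" || true
    done
}

# Typical use: send_benign_traffic "[INGRESS-EXTERNAL-IP]"
```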

The default policy is set to ‘Learn-Detect’. You can see the attack in the logs using the following command, or change the policy mode.

kubectl logs [ingress nginx pod name] -c open-appsec -n appsec

Note that you might have to run this a couple of times until all open-appsec services are up and enforcing security.

10. Connecting Helm to the Management

First, register and/or sign in to the open-appsec web portal and follow the steps described here to get a token. Then identify the current name of the deployed open-appsec Helm release by using:

helm list -A

Then you can connect to the management using the following helm upgrade command:

helm upgrade {open-appsec-helm-release-name} open-appsec-k8s-nginx-ingress-latest.tgz \
-n appsec \
--reuse-values \
--set appsec.mode="managed" \
--set appsec.agentToken={token}

Replace {token} with the token received from the web portal.

Please note that it could take up to 2 minutes for the agent to connect and get the policy, and an additional few minutes for logs to be displayed.

11. Testing the deployment

Validate the successful deployment by running a curl command that simulates an attack (a path traversal attempt):

curl -s -v "http://[INGRESS-EXTERNAL-IP]/?q=../../../etc/passwd"

If your policy (defined in step 8) is set to 'Prevent', this request will be blocked; otherwise, it will only be detected.

You’ll find the request in the open-appsec portal under the 'Monitoring' tab, under 'Logs'.

Once the lab has been completed, we recommend removing the alias. To remove the alias:

1. Edit the ~/.bash_aliases file - nano ~/.bash_aliases

2. Find the line that defines the alias for kubectl (e.g., alias kubectl='microk8s kubectl').

3. Delete or comment out the line by adding # at the beginning.

4. Save and exit the file (Ctrl + O, Enter, Ctrl + X).

5. Apply changes in the current terminal: source ~/.bash_aliases


open-appsec is an open-source project that uses machine learning to provide preemptive web app & API threat protection against OWASP Top 10 and zero-day attacks. It simplifies maintenance, as there is no threat-signature upkeep or exception handling, as is common in many WAF solutions. To learn more about how open-appsec works, see this White Paper and the in-depth Video Tutorial. You can also experiment with deployment in the free Playground.


Experiment with open-appsec for Linux, Kubernetes or Kong using a free virtual lab
