
Secure Kong API Gateway With a Web Application Firewall (WAF)


Before Application Programming Interfaces (APIs) were developed, software developers relied on a variety of methods to connect software systems and applications. One of the most common was point-to-point integration.


This involved creating custom code to connect one application to another. This approach was time-consuming, expensive, and difficult to maintain because each integration had to be created from scratch. Any changes or updates to either application could break the integration.


With the development of APIs, building and maintaining complex software was simplified. They became the de facto standard for integrating software systems and applications, providing a standardized, platform-agnostic way for applications to communicate and share data.


In this article, we'll briefly discuss the function of the Kong API gateway in Kubernetes and why you should use a WAF with Kong, and then walk through a detailed step-by-step guide to installing the open-appsec WAF in the Kong Ingress Controller.


Understanding Kong and Its Function in Kubernetes

Kong is an open-source API gateway that provides a centralized point of control for managing and securing APIs. It acts as a reverse proxy and sits between the client and the backend API servers, providing features such as the following:

  • Traffic Management

  • Authentication

  • Authorization

  • Rate Limiting

  • Caching

  • Logging
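
Many of these capabilities are delivered through Kong's plugin system, which in Kubernetes is configured declaratively via CRDs. As a minimal sketch (the resource name, namespace, and Service below are hypothetical; the KongPlugin CRD, the rate-limiting plugin, and the konghq.com/plugins annotation are standard Kong Ingress Controller features), a rate-limiting plugin could be declared and attached to a Service like this:

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5-per-minute
  namespace: demo
plugin: rate-limiting
config:
  minute: 5
  policy: local

The plugin is then attached by annotating the Service it should apply to:

kubectl annotate service my-api -n demo konghq.com/plugins=rate-limit-5-per-minute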

As noted earlier, prior to the introduction of API gateways, software developers managed and secured APIs by building security and traffic management directly into their applications with custom code or third-party libraries. This meant handling authentication, rate limiting, and caching themselves. The approach was often complicated and hard to maintain, because each application had to implement its own security and management features independently, resulting in inconsistencies and potential security weaknesses.


In recent years, API gateways like Kong have become popular due to their ability to provide a centralized point of control for managing and securing APIs.


In Kubernetes, the Kong API gateway simplifies API management and reduces the complexity of microservice architecture. Additionally, Kong's flexible architecture and extensibility make it a popular choice for organizations to customize and control complex API environments.


Why Should You Use a WAF in Kong?


One major reason application engineers hesitate to use a WAF in Kong Ingress Controller is that it can introduce additional complexity and overhead, particularly if the WAF is not well-tuned or configured for the application's specific needs. This can lead to performance issues, false positives or negatives, and other problems.


Another challenge is that WAFs may not be effective against all types of attacks and may not provide complete protection against sophisticated or advanced threats. Also, some companies feel that they do not have the expertise to install a WAF.


Finally, there is the issue of cost. Implementing a WAF can be expensive, particularly if a commercial solution is used, and may require ongoing maintenance and support. This can make it challenging for smaller organizations or those with limited resources to justify the expense of implementing a WAF.


Despite these challenges, using a web application firewall like open-appsec in a Kong Kubernetes Ingress cluster can provide several benefits, including:

  1. Improved Security (against both known and unknown attacks)

  2. Simplified Deployment

  3. Easy Integration

  4. Faster Threat Response

  5. Reduced False Positives

  6. Cost-effective (if using an open-source WAF like open-appsec)

  7. Easy and Simple Installation (like open-appsec)


Are you looking for a way to block attacks on your web application before they happen? Look no further, as open-appsec uses machine learning to continuously detect and preemptively block threats before they can do any damage. The open-appsec code has also been published on GitHub, and the effectiveness of its WAF has been successfully proven in numerous tests by third parties. Try open-appsec in the Playground today.


How to Install open-appsec WAF in Kubernetes Kong Ingress Controller

For this open-appsec Kong Gateway installation, we’ll use the Helm chart. This installation method will allow you to access built-in Kubernetes resources and create custom resources via Custom Resource Definitions (CRDs).


Requirements


To follow this guide, you should have the following:

  • A general understanding of Kubernetes, Ingress controllers, and the Kong Ingress Controller

  • Kubernetes version 1.16.0 or later

  • The Helm 3 package manager

  • The wget command-line tool, used to download the Helm chart

  • The kubectl command-line tool, used to manage Kubernetes clusters and run commands against them

  • Role-based access control (RBAC) with admin permissions enabled


Installation


Step 1: Download Helm chart:

Run the following command to obtain the latest helm chart:

wget https://downloads.openappsec.io/helm/open-appsec-k8s-kong-latest.tgz


Step 2: Install open-appsec Helm Chart and CRDs.

Run the following command to install open-appsec together with Kong and create the open-appsec CRDs, which add new K8s resource types that will be used later for defining protection policies, log settings, exceptions, user response, and more.


Note: If persistent storage is available in your cluster, remove the "--set appsec.persistence.enabled=false" parameter from the following command to allow open-appsec to use persistent storage for learning. The parameter is included below only for maximum compatibility.


helm install open-appsec-k8s-kong-latest.tgz \
--name-template=open-appsec \
--set appsec.mode=standalone \
--set ingressController.ingressClass=appsec-kong \
--set appsec.persistence.enabled=false \
--set appsec.userEmail="<your-email-address>" \
-n appsec --create-namespace



This installs Kong with open-appsec into a new namespace "appsec" in local management mode (stand-alone).
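
To confirm that the CRDs were registered, you can list them with kubectl (the grep pattern is an assumption; the exact CRD names may vary by chart version):

kubectl get crd | grep -i appsec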


Optional open-appsec Helm install parameters:

  1. -n <namespace>: selects the namespace that will contain the open-appsec and Kong resources.

  2. --create-namespace: creates the namespace if it doesn't exist.

  3. --name-template: name of your deployment used for pod naming (optional).

  4. --set appsec.persistence.enabled: the persistent volume stores machine-learning information; if this is set to false, that information is lost when the appsec container is stopped or restarted. Possible values: true (default) or false.

  5. --set appsec.persistence.learning.storageClass: if appsec.persistence.enabled is set to true (the default when not overridden with false), you must also specify the storage class to be used for the learning pod. Note: the storage class specified here must support ReadWriteMany (like AWS EFS or Azure Files). See the example command after this list.

  6. --set appsec.mode: determines whether the deployment is centrally managed. standalone configures stand-alone mode (locally managed via CRDs); managed configures centrally managed mode (using the WebUI SaaS), in which case appsec.Token must be provided as well.

  7. --set appsec.Token: must be provided when appsec.mode is set to managed.

  8. --set kind: selects the deployment type. AppSec installs open-appsec and Kong as a K8s Deployment (default, recommended for most scenarios); if required, you can switch to a DaemonSet in this mode by additionally setting deployment.daemonset to true. AppSecStateful installs open-appsec and Kong as a K8s StatefulSet. Vanilla (for debugging purposes only) installs regular Kong based on the Helm chart without open-appsec, which can be useful for checking whether a potential issue with the Kong deployment is caused by open-appsec. Note: if Vanilla mode is used, the Kong/Kong Gateway image specified under image.repository/image.tag is used instead of the open-appsec-specific Kong/Kong Gateway image specified under appsec.kong.image.repository / appsec.kong.image.tag.

  9. --set ingressController.ingressClass: specify desired Ingress class name.

For additional available configuration values, please check the values.yaml within the downloaded Helm chart and the Kong documentation.
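
For example, if your cluster offers a storage class that supports ReadWriteMany, an install that keeps persistence enabled might look like the following sketch (the storage class name "efs-sc" is a placeholder; replace it with one available in your cluster):

helm install open-appsec-k8s-kong-latest.tgz \
--name-template=open-appsec \
--set appsec.mode=standalone \
--set ingressController.ingressClass=appsec-kong \
--set appsec.persistence.learning.storageClass=efs-sc \
--set appsec.userEmail="<your-email-address>" \
-n appsec --create-namespace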


Step 3: Validate that open-appsec is installed and running.

kubectl get pods -n appsec


The READY column typically shows 3/3 (or 2/2 if, for example, Kong is deployed without the Kong ingress controller) for the Kong pod and 1/1 for the learning and shared storage deployment pods.
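
The output should look roughly like this (pod names, hash suffixes, and ages are illustrative and will differ in your cluster):

NAME                                         READY   STATUS    RESTARTS   AGE
open-appsec-kong-7d9c5b6f4-x2k8p             3/3     Running   0          2m
open-appsec-shared-storage-5f6d7c8b9-q4wjr   1/1     Running   0          2m
open-appsec-learning-6c7d8e9f4-m9zlt         1/1     Running   0          2m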


Step 4: Choose an appropriate setup option.

Here are the available options:


Add Protection to Existing Ingress Resource


You will typically have either Kong Controller or another Ingress Controller like NGINX deployed, which will proxy traffic to the Kong Gateway.


open-appsec secures traffic by integrating directly with the Kong Gateway container (not the Kong Controller), which allows open-appsec to inspect HTTPS traffic terminated at the Kong Gateway.


For traffic to reach your API Gateway, you can use the Kong Controller as an Ingress Controller alongside Kong API Gateway (Kong Controller will be deployed by default within the same pod as Kong Gateway as an additional container, but it is an optional component). Alternatively, you can use another Ingress controller of your choice.


If you use Ingress for proxying traffic to your Kong Gateway, you can easily update your existing K8S ingress resource to secure its traffic with open-appsec. Once you apply the change, the Ingress will reload, and traffic will be protected.


Note: Having an Ingress resource defined for traffic to the Kong Gateway is mandatory for protecting the traffic with open-appsec. The open-appsec policy resource has to be linked to an Ingress resource via an annotation (see the steps below). Additional options will be provided in the future.


a. Create an open-appsec policy resource

First, you must create a K8s open-appsec policy resource. There are multiple alternative ways to create a policy:

  • As explained here, use the available configuration tool to create a policy resource easily.

  • Run the following command to create the "open-appsec-best-practice-policy" in K8s: kubectl apply -f https://downloads.openappsec.io/resources/open-appsec-policy.yaml -n appsec

  • Create your custom policy; you can find all of the details here.
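
Optionally, you can verify that the policy resource was created. The API group and resource name below are assumptions inferred from the openappsec.io annotation used in the following steps; check the kubectl api-resources output for the exact names in your cluster:

kubectl api-resources --api-group=openappsec.io
kubectl get policies.openappsec.io -n appsec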

b. Find out the name of your relevant Ingress resource:

kubectl get ing -A


c. Edit the Ingress resource:

kubectl edit ing/<ingress name> -n <ingress namespace>


d. Add this annotation to activate open-appsec: openappsec.io/policy: open-appsec-best-practice-policy
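
After the edit, the relevant parts of the Ingress manifest should look similar to this sketch (the resource name, namespace, host, and backend service are placeholders; the ingressClassName matches the appsec-kong class set during installation, so adjust it if you proxy traffic to the Kong Gateway through a different Ingress controller):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: default
  annotations:
    openappsec.io/policy: open-appsec-best-practice-policy
spec:
  ingressClassName: appsec-kong
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80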


Note: The default mode of this policy is detect-learn. It will not block any traffic unless you change the policy mode to prevent-learn, either for a specific Ingress rule or the whole policy.


open-appsec will read and enforce the open-appsec policy specified in the Ingress resource by this annotation even though the actual enforcement is done in the Kong Gateway and not in the Ingress Controller (this is similar to how Kong implements its declarative policy).


Step 5: Validate that open-appsec works

Your existing or new Ingress is running, and you can try it out!

  1. Generate some traffic to one of the services defined in your Ingress.

  2. Run the following commands to see the logs:

Note the name of the Kong pod by running:


kubectl get pods -n appsec

Show the logs of the open-appsec agent container by running the following:

kubectl logs [kong pod name] -c open-appsec -n appsec


Note: With the default policy, logging is done to stdout, so you can easily direct it with fluentd/fluentbit or a similar tool to a log collector (ELK or other). It is also possible to configure open-appsec to log to syslog.


open-appsec automatically logs the first 10 HTTP requests and then, by default, will only log malicious requests. You can change this setting.
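
For a quick end-to-end check, you can send an obviously suspicious request to one of the services exposed by your Ingress and then look for a matching entry in the agent log (the host and query string below are placeholders used for illustration):

curl "http://<your-ingress-host>/?q=<script>alert(1)</script>"

kubectl logs [kong pod name] -c open-appsec -n appsec | tail -n 50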

Step 6: Point your DNS to the new Ingress (skip this step if you chose an existing Ingress in Step 4)

After testing that your services are reachable, you can point your public DNS record to the new Ingress.


If a problem occurs, you can at any time either switch open-appsec off while keeping the same Ingress configuration or change your DNS back.


Note: For Production usage, you might want to switch from using the Basic to the more accurate Advanced Machine Learning model, as described here.


Conclusion


Understandably, using a WAF with Kong can seem unnecessary given the security features Kong already offers as a Kubernetes Ingress Controller. However, we recommend deploying a web application firewall (like open-appsec) to add an extra security layer to your application.


FAQs


What is the Kong gateway used for?


Kong Gateway is an open-source tool for managing APIs and microservices. It can improve your applications' reliability, security, and performance.


Is Kong free?


Yes. An open-source package is available for free, and paid plans that offer more advanced features are also available.


Is NGINX more secure than Apache?


NGINX and Apache are secure web servers with a good track record of application security. However, some architectural and design differences can impact their security level. Here are some of them:


  • NGINX has a smaller and more streamlined codebase than Apache, making it less vulnerable to security issues caused by complex code.

  • NGINX can handle large traffic volumes without using as many system resources as Apache. This can help reduce the risk of denial of service (DoS) attacks.

  • Apache has a long history and a larger user community, which means that vulnerabilities are more likely to be discovered and fixed quickly. On the flip side, its older and larger codebase may also contain undiscovered vulnerabilities that could be used as zero-day exploits.


Experiment with open-appsec for Linux, Kubernetes or Kong using a free virtual lab
