
Using Gamification to demystify the AI black-box in a Web Application Firewall (WAF) product

AI is becoming increasingly ingrained in our daily lives. From voice assistants like Siri and Alexa to ChatGPT, AI systems constantly learn, evolve, and adapt to serve us better. But there's a challenge lurking beneath these advancements: making the opaque process of AI learning transparent to users. How can we visualize the learning progress of an AI and involve users in a way that's engaging and comprehensible? Learn how we did it with open-appsec, a machine-learning-based Web Application & API Security product.



The Black Box Dilemma

One of the most pervasive challenges with modern AI is its "black box" nature. For many, AI algorithms remain a mystery. We feed data into this box, and out comes a decision, a recommendation, or an action. But what happens inside?


Traditionally, most users are content with the results without diving into the intricacies. However, as reliance on AI grows, so does the need for transparency. Users want (and often need) to understand how AI comes to its decisions, especially when those decisions can have significant real-world consequences.


Gamification: Making Learning Tangible

Enter gamification. At its core, gamification involves applying game-design elements and game principles in non-game contexts. When it comes to AI, gamification can transform the esoteric learning process into something tangible, relatable, and engaging.

Imagine a progress bar that fills up as the AI learns, or a tree that grows taller and sprouts more branches and leaves as the AI gains more knowledge. These visual metaphors can help users relate to the learning process, providing a sense of progression and achievement.
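To make this concrete, here is a minimal sketch of the progress-bar metaphor in Python. The 0-to-1 "learning score" is an invented stand-in for whatever internal metric the model exposes:

```python
# Minimal sketch: map an abstract learning metric (a hypothetical
# score between 0.0 and 1.0) onto a visual users can read at a glance.

def render_progress_bar(learning_score: float, width: int = 30) -> str:
    """Render a text progress bar for a score in [0.0, 1.0]."""
    score = max(0.0, min(1.0, learning_score))  # clamp defensively
    filled = round(score * width)
    return f"[{'#' * filled}{'-' * (width - filled)}] {score:.0%} learned"

print(render_progress_bar(0.42))
# [#############-----------------] 42% learned
```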


Graphical Metaphors: Bridging the Understanding Gap

Beyond gamified elements, graphical metaphors are potent tools for representing abstract processes. Here's how they can help in visualizing AI learning:

  1. Simplification: Graphical metaphors can reduce the complexity of AI's learning process, offering users a snapshot of its progress.

  2. Relatability: Using familiar symbols and imagery (like the growing tree) can make the abstract concept of machine learning more tangible.

  3. Engagement: Visual metaphors can be aesthetically pleasing, drawing users in and encouraging them to explore more about how the AI works.

  4. Feedback: They can provide immediate feedback, allowing users to see how their interactions or input data influence the AI's learning.


Potential Implementations

  1. Learning Landscapes: As AI processes new data, visualize it as a character traversing a landscape, climbing mountains (challenging concepts) and crossing rivers (linking different ideas).

  2. Puzzle Completion: Every piece of data or experience can be seen as a puzzle piece. As the AI learns, the puzzle becomes more complete, giving a visual representation of its progress.

  3. Building Blocks: Represent each learning phase or dataset as a block. As the AI progresses, it adds more blocks, constructing a tower or structure, indicating its growing knowledge base.




Gamification and engagement in open-appsec

In the evolving landscape of security, machine learning models have emerged as formidable sentinels. They adapt, evolve, and refine their protective measures based on continuous traffic observation.


open-appsec is an open-source Web Application & API security product powered by a fully automatic machine learning engine that continuously analyzes HTTP/S requests to websites or APIs. Incoming HTTP requests are evaluated against two machine learning models (see the sketch after this list):

  • a supervised model that was trained offline with millions of malicious and benign requests

  • an unsupervised model that is built in real time in the protected environment and is specific to its traffic patterns
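For intuition, here is a hedged sketch of how the verdicts of the two models might be combined per request. The class, scores, threshold, and the "both models must agree" rule are illustrative assumptions, not open-appsec's actual API or decision logic:

```python
# Illustrative only: fusing a supervised and an unsupervised verdict.
from dataclasses import dataclass

@dataclass
class RequestVerdict:
    supervised_score: float    # from the offline-trained model
    unsupervised_score: float  # from the per-environment model
    threshold: float = 0.8     # invented cutoff for illustration

    @property
    def is_malicious(self) -> bool:
        # One plausible fusion rule: block only when both models agree,
        # trading some recall for a lower false-positive rate.
        return (self.supervised_score >= self.threshold
                and self.unsupervised_score >= self.threshold)

print(RequestVerdict(supervised_score=0.93, unsupervised_score=0.88).is_malicious)  # True
```

Requiring agreement between a general model and an environment-specific one is one way to keep false positives down, which, as the next paragraph explains, is critical.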


For open-appsec to truly shine, it must distinguish between threats and genuine requests with impeccable accuracy. Avoiding false positives is critical, because unnecessary blocks can lead to operational downtime and pile up administrative work.


Explaining the Technology


To explain the technology to users, we created a short video that uses an analogy of "good" and "bad" fish that try to reach an island protected by open-appsec.



In the Product Itself


The most important gamification happens within the product itself. To communicate the model's current readiness and precision to users, we've instituted a system of learning levels, reminiscent of academic progression stages.


open-appsec's machine learning model advances methodically through these learning stages, with each level offering a snapshot of its current maturity. Upon reaching the Graduate level, the model exhibits a heightened capability to thwart potential threats. The zenith, the PhD level, signifies the model's optimal performance stage; beyond this point, the returns on further learning are incremental.
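The progression itself can be modeled as a simple threshold ladder. In the sketch below, only Kindergarten, Graduate, and PhD appear in this article; the intermediate level names and all request thresholds are invented for illustration:

```python
# Sketch of the "academic levels" ladder: traffic observed so far
# determines the model's maturity level. Thresholds are made up.

LEVELS = [
    ("Kindergarten", 0),
    ("Elementary School", 1_000),
    ("High School", 10_000),
    ("Graduate", 100_000),
    ("PhD", 1_000_000),
]

def learning_level(requests_observed: int) -> str:
    """Return the highest level whose threshold has been reached."""
    current = LEVELS[0][0]
    for name, threshold in LEVELS:
        if requests_observed >= threshold:
            current = name
    return current

print(learning_level(250_000))  # Graduate
```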



To facilitate smooth transitions between these levels, we provide users with actionable insights and guidance on what needs to happen for the ML model to progress between learning levels (see the sketch after this list). This can involve:

  • Observing more traffic - the volume and diversity of incoming traffic enrich the model's learning.

  • Observing more users - the more user interactions it observes, the better it understands typical behaviors and can discern anomalies.

  • Configuration adjustments - this might entail steps like setting up a baseline of trusted users or refining other specific parameters to boost accuracy.
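Here is a sketch of how such guidance could be derived from raw learning statistics; the field names and thresholds below are assumptions for illustration, not open-appsec internals:

```python
# Turn learning statistics into the actionable hints listed above.
from dataclasses import dataclass

@dataclass
class LearningStats:
    requests_observed: int         # total traffic seen so far
    distinct_sources: int          # e.g. distinct client identities
    trusted_sources_defined: bool  # has a trusted-user baseline been set?

def progression_hints(stats: LearningStats) -> list[str]:
    hints = []
    if stats.requests_observed < 10_000:
        hints.append("Observe more traffic: keep learning on more requests.")
    if stats.distinct_sources < 50:
        hints.append("Observe more users: route a wider population through the gateway.")
    if not stats.trusted_sources_defined:
        hints.append("Adjust configuration: define a baseline of trusted users.")
    return hints

for hint in progression_hints(LearningStats(4_200, 12, False)):
    print("-", hint)
```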



Finally, we also provide a clear recommendation. Each recommendation is paired with the action required:

  • Keep Learning: No action required. The machine learning model requires additional HTTP requests (and additional time).

  • Review Tuning Suggestions: The learning mechanism has generated tuning suggestions. Review the suggestions and decide whether the events are malicious or benign.

  • Prevent Critical Severity Events: The system is ready to prevent critical severity events. Navigate to the Threat Prevention tab and change the Web Attacks practice Mode to Prevent for Critical Severity events.

  • Prevent High Severity And Above Events: The system is ready to prevent high severity (and above) events. Navigate to the Threat Prevention tab and change the Web Attacks practice Mode to Prevent for High and above Severity events.

The learning engine may ask the user to review certain events; these are called Tuning Suggestions. Providing feedback on these suggestions is not mandatory, as the engine is capable of learning by itself. However, doing so lets the machine learning engine reach a higher maturity level, and therefore better accuracy, faster, guided by human input.
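Putting the pieces together, the recommendation shown to the user can be derived from the learning level and any open tuning suggestions. The selection conditions below are guesses for illustration; only the recommendation texts come from the product:

```python
# Hypothetical mapping from learning state to the recommendations above.
def recommendation(level: str, open_tuning_suggestions: int) -> str:
    if open_tuning_suggestions > 0:
        return "Review Tuning Suggestions"
    if level == "PhD":
        return "Prevent High Severity And Above Events"
    if level == "Graduate":
        return "Prevent Critical Severity Events"
    return "Keep Learning"

print(recommendation("Graduate", 0))  # Prevent Critical Severity Events
```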


What our users tell us


From the feedback we've received, it's evident that users highly appreciate the clarity and transparency the system offers, citing the following:

  1. Comprehensible Learning Levels: The analogy to academic stages, ranging from Kindergarten to PhD levels, offers a tangible sense of the model’s progression and current capabilities. Users can easily relate to these stages, making it more straightforward to gauge where the model stands in its learning journey.

  2. Actionable Insights: The system not only indicates its current learning level but also suggests actionable steps to advance to the next stage. This clear guidance ensures that users aren't just passive observers but active participants in the model's evolution.

  3. Transparent Decision-making Process: Users have reported a high level of trust in the model, thanks to insights into why specific decisions are made. This peek into the decision-making process demystifies the 'black box', offering a clearer picture of how open-appsec operates.


Facilitating Broader Communication

One of the standout points from the feedback is the ease with which users can communicate the model’s status and actions to their peers and superiors:

  • Peer Discussions: Understanding the system's workings allows users to have informed discussions with their peers, promoting collective decision-making and fostering a sense of ownership over security protocols.

  • Managerial Briefings: Being able to articulate the model's status and decision logic aids in presenting reports and updates to higher-ups. It ensures that management is always in the loop, reinforcing trust in the system.

  • Prompt Action: A clear understanding of the model's recommendations means users can act swiftly. Whether it's adjusting configurations, designating trusted users, or making other optimizations, users can confidently make decisions to bolster security.


Conclusion


As AI continues to permeate our lives, bridging the gap between its intricate operations and user understanding becomes essential. Through gamification and graphical metaphors, we can make the AI's learning journey more transparent, engaging, and relatable. By doing so, we not only demystify the black box but also foster trust and collaboration between humans and their digital counterparts.


In our example, open-appsec's commitment to transparency is redefining user interaction with machine learning tools. By ensuring users can not only understand but also explain and act upon the model's workings, we're turning the enigmatic 'black box' into a clear, actionable, and collaborative tool.


open-appsec is an open-source project that builds on machine learning to provide preemptive web app & API threat protection against OWASP-Top-10 and zero-day attacks. It simplifies maintenance, as there is no need for the threat-signature upkeep and exception handling common in many WAF solutions.


To learn more about how open-appsec works, see this White Paper and the in-depth Video Tutorial. You can also experiment with deployment in the free Playground.

Experiment with open-appsec for Linux, Kubernetes or Kong using a free virtual lab
