Author: Eshwar Cherkuri
The ISC2 East Bay Chapter brought cybersecurity professionals and students together to address the security challenges that come with widespread AI adoption. The conference featured keynote sessions, live labs, panel discussions, and exhibitors, all centered on how organizations can build and run AI systems without creating new security gaps in the process. The term “Secure AI Revolution” refers to the shift happening right now, where companies are deploying generative AI at scale and security teams are racing to put controls in place before attackers find the openings first. Two major themes at the event were the risk of sensitive data being exposed through AI interactions and the challenge of preventing prompt-based abuse, where bad actors manipulate AI systems into doing things they were never supposed to do. The featured speaker, Amit Kharat, Co-Founder of Mavs AI, presented a session on GenAI Runtime Security that showed how real-time controls and adaptive guardrails give organizations a way to scale AI usage without handing attackers a new surface to exploit. The biggest takeaway was that the security problem with AI is not theoretical. Organizations are already deploying generative AI in production, and the gap between how fast they are moving and how slowly safety controls are being built is exactly where the risk lives.

This session focused on how to secure generative AI systems while they are actively running and being used by real people inside an organization. Amit walked through how enterprises face exposure when employees or applications send sensitive data, personally identifiable information, or policy-violating prompts into AI models without any controls watching what goes in or comes out. The session demonstrated how runtime visibility, meaning the ability to monitor every AI interaction as it happens, gives security teams the awareness they need to catch problems before they become incidents.
The most important point Amit made was that most organizations have strong controls around how data is stored and transmitted, but they have almost nothing watching what gets typed into an AI prompt. A user sending a customer’s medical record or a confidential contract into a GenAI tool creates a data exposure event that the organization may never know happened, unless there is a layer sitting between the user and the model that enforces policy in real time. Mavs AI addresses this by applying adaptive guardrails that flag or block interactions based on the content of the prompt, not just who sent it.
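Mavs AI’s actual guardrail engine was not shown in code, but the shape of the control is easy to sketch. The following Python sketch is my own illustration rather than anything from the session: the regex patterns and the stand-in model_call function are assumptions, standing in for the trained detectors and real model endpoints a production system would use. What it shows is the placement of the layer between the user and the model, enforcing policy on prompt content in real time:

```python
import re

# Hypothetical patterns standing in for a real PII detector; a production
# guardrail would use trained classifiers, not three regexes.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.\w{2,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the PII categories detected in the prompt text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def guarded_completion(prompt: str, model_call) -> str:
    """Enforce policy between the user and the model: block or forward."""
    findings = check_prompt(prompt)
    if findings:
        # Block before the data leaves the organization, and record the
        # event so the security team has visibility into what happened.
        print(f"[guardrail] blocked prompt, detected: {', '.join(findings)}")
        return "Blocked by policy: the prompt appears to contain sensitive data."
    return model_call(prompt)

if __name__ == "__main__":
    fake_model = lambda p: f"(model response to {p!r})"
    print(guarded_completion("Client SSN is 123-45-6789, draft the letter", fake_model))
    print(guarded_completion("Explain zero trust in one paragraph", fake_model))
```

A real deployment would route blocked events to a monitoring pipeline rather than printing them, but the placement is the point: the policy fires before the sensitive data ever reaches the model, and the organization gets a record either way.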
This stood out because the attack surface being described is one that most people using AI tools every day are not thinking about. The risk is not that someone hacks the AI model itself. The risk is that trusted users inside an organization accidentally or intentionally send information they should not, and without runtime controls, there is no record and no response. For someone studying cybersecurity and working toward a career in this field, understanding that the new perimeter includes the prompt interface is something worth building skills around now.
Prompt injection came up as part of Amit’s broader session: attackers use crafted input text to manipulate a generative AI model into ignoring its own instructions, bypassing filters, or revealing information it was configured to protect. The session explained that prompt injection sits at the top of the OWASP Top 10 for LLM Applications (LLM01), and that defending against it requires more than tuning the model. It requires controls at the infrastructure level that sit outside the model and enforce policy regardless of what the prompt says.
Amit explained that rogue application users, meaning people who interact with a company’s AI-powered product and try to manipulate it through the input field, represent a threat that most security teams are not yet staffed to handle. A traditional security operations center is built to watch network traffic and endpoint behavior, but it is not set up to parse the semantic meaning of AI prompts at scale. Mavs AI fills that gap by applying intelligent policy controls that look at the intent and content of every interaction, not just its metadata.
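As a minimal sketch of what an out-of-model control layer looks like, here is my own illustration, not Mavs AI’s implementation: the marker phrases are assumptions chosen for the demo, where a real system would use trained classifiers and layered output filtering. The key property is that the checks inspect the content of the input and the output, not just metadata, and they run regardless of what the prompt instructs the model to do:

```python
# Marker phrases are assumed for illustration; production systems use
# trained classifiers and layered output filtering, not a deny-list alone.
INJECTION_MARKERS = [
    "ignore previous instructions",
    "ignore your instructions",
    "you are now",
    "reveal your system prompt",
]

SENSITIVE_MARKERS = ["api_key", "internal use only"]  # assumed for the demo

def screen_input(user_text: str) -> bool:
    """Flag input that looks like an attempt to override the model's instructions."""
    lowered = user_text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def screen_output(model_text: str) -> bool:
    """Enforce policy on what comes back, regardless of what the prompt said."""
    lowered = model_text.lower()
    return any(marker in lowered for marker in SENSITIVE_MARKERS)

def protected_call(user_text: str, model_call) -> str:
    if screen_input(user_text):
        return "Request refused: input matched an injection pattern."
    response = model_call(user_text)
    if screen_output(response):
        return "Response withheld: output matched a data-protection rule."
    return response
```

Because the output screen runs after the model responds, even a prompt that successfully tricks the model still has to get its payload past a control the attacker cannot talk to.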
This connected directly to what I have been studying about web application security and input validation, because prompt injection follows the same logic as SQL injection. You trust the input field more than you should, and an attacker uses that trust to make the system do something unintended. Seeing the same class of vulnerability show up in AI systems told me that the fundamentals I am learning right now transfer directly into AI security work.
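The analogy is easy to show in code. Using Python’s standard sqlite3 module, the classic injection comes from splicing untrusted text into a command, and the classic fix, parameterization, keeps data and instructions separate. That separation is exactly the boundary LLM prompts currently lack:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: untrusted text is spliced into the command, so the attacker's
# input becomes part of the instructions, just as injected text becomes part
# of an LLM prompt.
unsafe_query = f"SELECT * FROM users WHERE name = '{attacker_input}'"
print(conn.execute(unsafe_query).fetchall())  # every row comes back

# Safe: parameterization keeps data and instructions separate. LLM prompts
# have no equivalent boundary, which is why the problem is so hard.
safe_rows = conn.execute("SELECT * FROM users WHERE name = ?", (attacker_input,))
print(safe_rows.fetchall())  # no rows: the input is treated purely as data
```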
This part of the discussion helped me see that cybersecurity work in AI systems is not purely technical. It requires thinking about governance, employee rights, and organizational policy at the same time as you are building the controls. For someone studying both computer science and data ethics, this is a space where those two areas of knowledge directly meet, and having both gives you an edge. The discussion also clarified that logging and monitoring in AI security need to be designed with privacy as a constraint, which connects to compliance frameworks I have read about, including GDPR and HIPAA, and reminds me that understanding those frameworks is a skill worth developing alongside the technical tools.
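As a sketch of what privacy-as-a-constraint can mean in practice, here is my own illustration, with a single email pattern standing in for a fuller PII detector: an audit log can record that an event happened while storing only a redacted, hashed form of the identifier, so analysts can correlate events without ever holding the raw data:

```python
import hashlib
import re

# A single email pattern stands in for a fuller PII detector (an assumption
# for illustration); real redaction covers names, IDs, record numbers, etc.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.\w{2,}\b")

def redact(text: str) -> str:
    """Replace each email with a stable short hash, so the same address maps
    to the same token across logs without the raw identifier being stored."""
    return EMAIL.sub(
        lambda m: "<email:" + hashlib.sha256(m.group().encode()).hexdigest()[:8] + ">",
        text,
    )

def audit_record(user_id: str, prompt: str) -> dict:
    # Record who and when as usual, but only a redacted form of the content.
    return {"user": user_id, "prompt": redact(prompt)}

print(audit_record("u42", "Email the report to jane.doe@example.com today"))
```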
This conference increased my interest in AI security in a specific way that general cybersecurity content never has. Before attending, I understood that AI systems had risks, but I thought of those risks as mostly model-level problems, like bias or hallucination, that researchers handled before deployment. What Amit’s session made clear is that the security problems in production AI are infrastructure problems, and they require the same skills that core cybersecurity work requires: understanding how data moves, where controls can be placed, what an attacker’s entry points are, and how to build systems that enforce policy without breaking the thing they are protecting. That framing made the field feel much more concrete and much more connected to what I am already learning. I also want to look seriously at the ISC2 Certified in Cybersecurity credential as a near-term goal, because the conference made it clear that having a recognized credential early signals seriousness to employers in a field where many candidates have similar coursework.