It’s been a year since ChatGPT launched into a household name, paving the way for mainstreaming the discussion on Large Language Models (LLMs) and the significant shift in AI technology. And it definitely didn’t take that long for tech executives of companies large and small to recognize the potential for greatness or misuse; the complexity of securing these powerful models has become a major concern.
So the Hetz team got together to discuss the topic of LLM Security with advisors from the Hetz Executive Network - including CTOs, CISOs, heads of engineering and data. This summary features our panelists Bhawna Singh, CTO, Okta; Mandy Andress, CISO, Elastic; and Itamar Golan, CEO & co-founder of Prompt Security and OWASP LLM Top10 core team. The following are some highlighted takeaways from our discussion.
1. Let’s hear from a CTO, a CISO, and an OWASP expert
“Regarding LLMs, the initial hype is big, maybe too big in the short term, but in the long term, it’s too small - and not many people comprehend it yet. It’s going to change everything we know.”
Itamar Golan, CEO & Co-founder, Prompt Security; OWASP LLM Top10 core team
LLMs are increasingly used across various applications, raising significant security challenges. Key among these are the need for proactive defense against emerging attack vectors, including sophisticated bot detection, and enhancing internal productivity without compromising data security. The rapid adoption pace of LLMs necessitates urgent development of robust security measures to address data privacy, content misuse, and unauthorized access.
“It’s a good scare,” says Mandy Andress, CISO at Elastic, who is optimistic about what can be achieved in the near term and long term. She sees a lot of excitement around the usage of LLMs and AIs - whether for coding, internal chatbots for support teams, knowledge bases, etc. Since it’s part of their overall infrastructure, Mandy says they have to make sure they are employing top levels of security within their product and to enable their customers to bring their own LLMs. Her team works closely with legal and other relevant departments to make sure it all happens securely, with peace of mind for the company, for employees, for customers.
The current risks are tangible - they’re already here, says Itamar Golan, CEO & Co-founder, Prompt Security and OWASP LLM Top10 core team. For the average enterprise, it’s important to realize that your employees are already using more than 50 GenAI apps - from creating presentations, using marketing and content tools, through to developers building your own Gen AI application and with LLMs integrated with your own internal assets. It’s at the point where a mediocre programmer with OpenAI access can build incredible things within a week; it’s something tech leaders must acknowledge in order to stay ahead of it by constructing a strategy for building and securing Gen AI at the company.
Adapting is critical, even for the most risk-averse company, because being left behind is worse. Employees (and competitors) inevitably leverage it.
For a deeper view of the LLM challenges and risks discussed by Itamar, see the latest version of the OWASP Top 10 for LLM Applications.
2. Do we really need startups to develop LLM solutions?
“There will be more vectors that we learn about over time, that we don’t have on our radar, where we might not be as well-positioned to pivot and respond to as quickly, so startups can fit into the picture there.”
Mandy Andress, CISO, Elastic
It’s a spicy question for a startup investor to ask, but… is there actually a need for startups doing the guardrails? Are newcomer startups actually better positioned to address the latest threats?
It is likely that startups have an edge in LLM security due to their agility, and frankly, newness. Put in the ‘build vs buy’ context, as Bhawna Singh, CTO of Okta does, maybe it’s a little less obvious. After all, if you use a SaaS solution - ChatGPT and the like - you’ll have to follow your legal and compliance teams’ guidelines. Leaders may choose - and are choosing - to build based on their sensitivity to the data they are working on, which is certainly a shift from the usual build-vs-buy. But the classic build-vs-buy evaluation is very applicable here too.
Looking at it from the data angle, Bhawna says organizations have all talked about data for many years - AI and data-driven decision-making are not new - but the quality of data, access to it, and its usage have not always been there. With the LLM trend, organizations are investing in improving their data controls as well as capturing data that will help improve their team’s productivity.
“We have to rethink the control aspect, how we use it, and who has access to allow more data and types of data to be leveraged for better outcomes when we use GenAI in that space. A startup can think about all this without that baggage and can bring unique solutions forward faster,” says Bhawna.
3. Who’s the buyer? And whose budget?
“The question is - who should be responsible, and who tends to be? It depends where the data sits and who is invested in solving for that use case.”
Bhawna Singh, CTO, Okta
Budgeting for LLM security varies, often depending on organizational structure. The thing about LLM solutions is that the use cases to apply LLM are uncovered bottoms-up, by the data and AI teams or product uncovering customer use cases.
It does make it easier if the budget sits close to the data. If the data/AI teams sit under the CTO, then that’s where the budget will come from, because they are involved in building the solution. Often the enterprise data team sits under the CIO, and then it is their responsibility. But in any case, the CISO should absolutely be involved, says Bhawna. This highlights the importance of clear roles and budgetary understanding between various departments regarding LLM security.
4. The times & threats are changing & challenging
“You have to adapt even if you don’t prefer to take these risks, otherwise you’ll be left behind. It’s already being used, and needs to be addressed.”
Itamar Golan, CEO & Co-founder, Prompt Security; OWASP LLM Top10 core team
The ease of use of LLMs by non-technical individuals introduces unique security challenges. The rise of "LLM hackers" who exploit these models using natural language calls for a reevaluation of traditional security measures. This includes developing robust monitoring systems and real-time response mechanisms to address the non-deterministic nature of LLM outputs.
To make it more difficult, the threats can affect internally or externally; employees or customers/users. After all, LLMs are used both internally for operational efficiency and externally for customer-facing applications. Internally, LLMs can enhance data processing but raise concerns about data privacy and access control. Externally, customer-facing applications like chatbots necessitate strict security measures to protect sensitive data.
The integration of LLMs with existing security tools presents specific technical challenges. Traditional tools such as DLP and Cloud Access Security Brokers (CASB) might not be adequate for handling the scale and intricacies of LLM interactions. The risks associated with LLM outputs, which may elude traditional data control mechanisms, are a particular concern.
Understanding the evolving landscape of LLMs - and their security - is critical for all of us, from multinational corporate executives to startup founders to AI investors. The proliferation of LLM integration into infrastructure calls for watchful yet innovative and adaptable security solutions, plus a nuanced grasp of the technical and operational challenges.
There’s no doubt that after speaking to the advisors in the Hetz Executive Network that we here at Hetz Ventures are in good company. Top tech executives - CTOs, VP R&D, CISOs, data engineering leaders - are taking proactive and informed approaches to LLM security, keeping their companies innovative while safe.