The AI Deluge – Maintaining Security Amidst a Wave of New Tools
The large language model (LLM) movement has reached critical mass. From ChatGPT’s somewhat humble beginnings to generative AI advances and the agentic AI progression, the promise of LLM tools seems limitless. As the pace of AI evolution accelerates, organizations are pressed to stay in lockstep with innovation. But doing so today means evaluating, then discarding or embracing, an ever-changing roster of newly available AI tools. Some are created with data security as a top priority; most are not. That puts the onus on companies to evaluate AI platforms not just for operational benefits, but also from a digital security standpoint.

To help unpack the implications for organizations, we turned to Mark Viglione, recognized cybersecurity leader and Co-Founder of a stealth startup. Mark explains, “Sure, newer might be better from a capability standpoint, but that doesn’t always translate to tested and proven for risk prevention.”

Here are four concrete steps you can take to support better AI security within your enterprise.
4 Ways to Stay on Top of AI Security
When it comes to AI tools, there are so many “shiny new things” available to try and buy. According to Mark, before investing in the latest and greatest (or even the tried and true), it’s crucial to take a step back and consider one vital question: how do I weigh Personally Identifiable Information (PII) security in determining what to adopt?
These steps can help you find your answer.
1. Start with Your Use Case
Before you can determine what tools to vet and what role security plays, you need to understand why you’re considering AI in the first place, especially if you’ll be using LLMs.
Are you using it to enhance your recruiting process? Are you looking to streamline customer service? Is better data analytics your goal? Do you want to integrate new capabilities into existing AI workflows?
There are almost as many reasons to use or upgrade AI as there are tools to choose from. Identify your “why” at the outset, and use this as your guardrail in assessing security implications. Here are sample use cases as reference:
- Content creation. If using Gen AI applications for internal- or external-facing content creation, you’ll need to consider both the security of the tool and the stringency of the rules you put in place around what employees can and cannot enter into the system.
- Customer data handling. PII protection is particularly crucial any time customer-specific data enters the mix. The risk of exposure or breach is high, and the repercussions can be catastrophic to your organization and your clients. If you’re in a highly regulated industry (think financial services, insurance, education), the pressure is amplified and the need for secure tools more urgent.
- Research/analytics. Any endeavor that depends on the absolute accuracy of data (and the insights derived from it) is vulnerable to manipulation and error. When AI tools are layered into research protocols, that danger intensifies. It only takes one instance of compromised data, unencrypted access, or erroneous AI hallucinations to corrupt a dataset and lead to life-altering consequences.
These use case examples illustrate why knowing how you intend to use AI is the key to ensuring the right security considerations take center stage. But this is just the start.
Once you know your why, take the time to look “under the security hood” of the tools you’re considering for support.
2. Take Your Time Vetting
You quickly realize there’s a lot out there. Some tools have been in use for years. Some just launched last week.
You’ve taken the time to outline your use cases, but now it’s time to do the legwork. With your “why” in hand, narrow down a shortlist of AI technologies that might work. They check all the boxes from a usability, capability, and interoperability perspective. But do they pass the security test?
As Mark explains, “Most AI tools today use their own agent and LLM copilot. But having their own agent and copilot takes on a very different meaning when you’re comparing an established software from the likes of a Microsoft against something that is new and unproven in the market.”
So be unapologetically methodical in your evaluation of potential enterprise AI investments:
- Thoroughly research the company. Are they a well-known entity that you would trust with your data? Is technology their primary focus, or were they once in a completely different industry? Are there any security red flags in their past (lawsuits, data breaches, controversial news headlines)?
- Carefully review all terms of service. If a company is newer or a tool hasn’t been proven at scale, this step is non-negotiable. But it’s important in every case: a provider can claim to use proprietary LLMs while, behind the scenes, relying on a highly exposed open-source model. For this reason:
- Comb their website: a reputable provider should explicitly state how it uses your data, how that data is stored, and what data privacy protocols are in place.
- Talk to an account rep. Ask them straight out what models they use. Depending on their answer, you may be able to tell whether the tool is built on open-source models, even if it runs in a self-hosted environment.
In lockstep with vetting and selecting your next AI investment, you’ll want to lay the groundwork for employee training. Ignoring this step, or putting it off until the last minute, can cancel out the risk prevention efforts you’ve already put in.
3. Prioritize Training
Today’s workforce is varied in generational knowledge, practical skillsets, and areas of specialization.
Depending on their role, some employees will be advanced AI practitioners while some will be novices, at best. While the specific training approach will vary depending on AI use case, industry compliance requirements, and end-user familiarity, you’ll want to have a plan and process in place that covers at least the following:
- Role-based access permissions: a project manager and a CEO should not have equal ability to enter or retrieve sensitive data.
- Data entry guardrails: make it clear what type of information can and cannot be shared with the AI tool, and for what workflows it is allowed vs. restricted.
- Zero Shadow AI tolerance: spell out that employees are only allowed to use approved AI tools and are not permitted to download “any old tool” to try and circumvent the approval process.
- Compliance regulations: make sure everyone understands the regulations your organization must adhere to and how AI can be used without violating them.
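Two of the items above, role-based access permissions and data entry guardrails, can be enforced in software as well as in policy. Here is a minimal sketch in Python; the role names, regex patterns, and `prepare_prompt` function are all hypothetical stand-ins for your IAM system and a vetted PII-detection library.

```python
import re

# Hypothetical role map; in practice this comes from your identity provider.
ALLOWED_ROLES = {"ceo", "compliance_officer"}  # roles cleared for sensitive workflows

# Illustrative PII patterns; a real deployment should use a dedicated scanner.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def prepare_prompt(role: str, prompt: str) -> str:
    """Gate sensitive prompts by role, then scrub obvious PII before
    the text ever leaves your network."""
    if role not in ALLOWED_ROLES:
        raise PermissionError(f"role '{role}' may not submit sensitive data")
    prompt = EMAIL.sub("[EMAIL REDACTED]", prompt)
    return SSN.sub("[SSN REDACTED]", prompt)

print(prepare_prompt("ceo", "Email jane.doe@example.com about SSN 123-45-6789."))
```

A project manager calling `prepare_prompt("project_manager", ...)` would be stopped at the permission check, which mirrors the policy point above: not everyone should have equal ability to enter or retrieve sensitive data.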
4. Recognize Third Parties as Weak Links
Many security breaches can be traced back to internal user error. But just as many can be linked to the inherent risk that comes when third-party vendors are involved.
Depending on the size of your organization, you could be working with dozens or even hundreds of vendors. And, unfortunately, you cannot control the security protocols they embrace or dismiss.
Any one of them could get compromised, creating an open back door into your digital environment.
Mark shares, “The best defense in this era of AI is a strategic offense. That means taking a security-first approach to AI technology adoption at every opportunity and getting clarity from your AI partners around what third-party providers they use and what security restrictions they impose on them.”
Final Advice for Your AI Security Strategy
If there’s one key takeaway you get from this blog, Mark wants it to be this: “There’s a lot of exciting and groundbreaking new technology out there. It’s tempting to want to try it out before competitors get their hands on it. But before you dive in, do your research. Understand how the AI tool can help, but also how it can hurt your organization. And determine if it’s worth the risk.”
If you do proceed with the AI investment, Mark outlines three additional tips to support security success:
- Prioritize prompt engineering. Make sure users understand how to ask the right questions and ensure queries are robust enough to elicit accurate responses. Quality prompts are critical to data accuracy and also help you recognize when a model has gone rogue or is hallucinating (which can also lead to security issues).
- Use only quality data. Make sure your data is clean, accurate, and complete. If it’s not, any actions taken as a result of data derived from the model could create a security risk.
- Hire for the right skills. If hiring an AI engineer or AI developer, look for talent that has data science experience and has worked with both machine learning models and back-end APIs. This combination of experience, paired with platform engineering skills, will give you the best chance of capitalizing on modern AI platforms.
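The “use only quality data” tip can be backed by an automated gate before anything reaches a model. The sketch below is one simple way to do that in Python; the field names and the `validate_records` helper are illustrative, not a prescribed implementation.

```python
# A minimal data-quality gate: reject incomplete or duplicated records
# before they are used for AI-driven analysis. Field names are hypothetical.
REQUIRED_FIELDS = ("customer_id", "amount", "date")

def validate_records(records):
    """Return (clean, rejected). Rejected rows are missing required
    fields or repeat a customer_id already seen."""
    clean, rejected, seen_ids = [], [], set()
    for row in records:
        if any(row.get(f) in (None, "") for f in REQUIRED_FIELDS):
            rejected.append(row)   # incomplete: never feed it to the model
        elif row["customer_id"] in seen_ids:
            rejected.append(row)   # duplicate: would skew model-driven insights
        else:
            seen_ids.add(row["customer_id"])
            clean.append(row)
    return clean, rejected
```

For example, a batch containing one complete record, one duplicate ID, and one row with a missing amount would yield a single clean record and two rejected ones, giving you an audit trail of what was kept out of the model and why.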
How We Can Help
Finding the right talent to spearhead the security of your AI initiatives takes precision and an understanding of emerging AI technologies.
IntePros understands what it takes to ensure that AI tools and processes are designed to minimize risk, and we have access to uniquely skilled AI professionals trained on the latest security protocols. Call or contact us anytime to start building your AI security talent stream.