AI and Cybersecurity – A New Era

Sep 04, 2024

Almost everyone will have heard of Artificial Intelligence (AI), and many organisations are starting to question whether they should allow employees to use AI in corporate environments.

Let’s step back a moment to better understand AI, which is a branch of computer science that aims to create intelligent machines that will think like humans.

AI involves creating algorithms and language models (known as Large Language Models, or LLMs) that allow machines to perform tasks that typically require human intelligence, such as learning, problem solving, and decision making. AI systems are designed to process vast amounts of data, recognise patterns, and draw conclusions, either deciding on their own or assisting humans.

AI is already in use all around us: virtual assistants on our smartphones (Siri, for example), fraud detection in finance, retail chatbots, cybersecurity tools that analyse patterns to identify anomalies, and even autonomous vehicles, just to name a few. Currently one of the best-known LLMs is the Generative Pre-trained Transformer (GPT), made famous by OpenAI via ChatGPT.

In many cases, those who have started using AI recently will have done so on a platform like ChatGPT, and this is where many of the questions from organisations are coming from: how should they handle these platforms? One of the first things to consider is that if you ban them, employees will find other creative ways to use them, just ways the organisation has no control over.

Let’s face it, AI is growing rapidly and organisations will need to start embracing it, and part of that is having the controls in place to protect the organisation and its people. An employee who uses AI to help with their job is not trying to cheat the system or compromise company data by entering it into a chatbot, but let’s acknowledge that it can happen (as it already has for Samsung) and put protections in place ahead of what would likely be considered a breach.

As with any new platform introduced into an organisation, an assessment of the platform needs to be completed. This isn’t always easy when the only software involved is a browser, which is the case for many AI platforms. Yes, the IT or SecOps team can play whack-a-mole with internet domains, or move to an approved-domain-only list, but this is not an effective use of their time, so alternatives need to be put in place to ensure the safety of organisational data and staff. Remember, organisations shouldn’t be stopping users from using AI, just making sure they do it safely, which is where controls come in.


One of the first controls organisations should change is their acceptable use policy, updating it to cover AI. Key areas to cover are what sort of data is and isn’t allowed to be used within AI platforms.

Which platforms are supported, which should be avoided, and why? It may be that one platform processes data within the organisation’s own data environment, or that one has had a breach while another hasn’t. The important element for an organisation to consider is which categories of data it wants to allow in AI platforms, which in most cases will, or should, only be public data.

A key point to note is that whatever data is sent to an AI engine as a prompt (the statement, question, or scenario used as a starting point or follow-up to generate a response from an AI platform) will be used and combined with existing data to produce the answer. It is safe to assume this data may remain within the AI platform; even if it is never folded into the LLM for future use, it is no longer in the organisation’s control. This is no different from the internet search engines everyone already uses day to day, and how many organisational secrets are accidentally being put on the internet already?

There are other policies that will need to be reviewed to cater for AI, such as data privacy, security, and ethics, just to name a few. Ethics is a big one: depending on the model used, there could be inherent bias because of the data the model has been trained on. Also on the ethics and data privacy side is whether your organisation is permitted to use its own data, or customer data, with an AI platform or LLM.

Now that the policies have been reviewed and drafted, the next step is getting the controls in place to back up the policy and make the environment safe for the organisation and its employees. Employees will need to be trained on the risks of using AI platforms, including ethics, privacy, and, most importantly, trust.

AI platforms should not be relied upon blindly for accuracy, as they are only as good as the data they are trained on. At no point should a result from AI be trusted 100%. The best option is trust through verification: seek out other means to confirm the validity of an answer if you don’t have expertise in the area being researched, though there are exceptions to this rule. AI awareness training will be an ongoing activity, much like cybersecurity awareness.


Organisations need to ensure their Data Loss Prevention (DLP) systems are up to date with their policies and that their digital assets are correctly classified.

DLP solutions should not be relied on alone to protect organisational data, since many AI solutions are delivered as web applications. A corporate proxy solution capable of inspecting and classifying data in transit is important to prevent accidental exposure of sensitive organisational data, ensuring it does not end up being submitted as a prompt to AI solutions. Organisations may already have these solutions in place to prevent the loss of organisational data to web search engines, but they should be tested to confirm they also work with the AI platforms.
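As a rough illustration of the kind of check such a proxy layer performs, the sketch below (Python, with hypothetical function, host, and label names, not any particular vendor’s product) blocks an outbound prompt when it is bound for an unapproved platform or carries a classification marking:

```python
import re

# Hypothetical classification markings an organisation might stamp on documents.
CLASSIFICATION_MARKERS = re.compile(
    r"\b(CONFIDENTIAL|RESTRICTED|INTERNAL ONLY)\b", re.IGNORECASE
)

# Hypothetical allow-list of AI platforms the organisation has assessed.
APPROVED_AI_HOSTS = {"approved-ai.example.com"}

def allow_outbound_prompt(host: str, prompt: str) -> bool:
    """Return True only if this prompt may leave the corporate network."""
    if host not in APPROVED_AI_HOSTS:
        return False  # unapproved AI platform: block regardless of content
    if CLASSIFICATION_MARKERS.search(prompt):
        return False  # labelled material must never be submitted as a prompt
    return True

# A labelled document pasted into a chatbot is stopped at the proxy.
print(allow_outbound_prompt(
    "approved-ai.example.com",
    "CONFIDENTIAL: Q3 revenue forecast attached below"))  # False
```

A real deployment would sit inline with TLS inspection and draw its markings from the organisation’s own classification scheme.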

Organisational data is not the only data that needs careful consideration before being submitted to AI; special consideration also needs to be given to data that may not be classified but is still sensitive. Examples include personally identifiable information (PII), financial data such as credit card numbers (PCI), health information (HIPAA), biometrics (fingerprints, iris scans, etc.), geolocation, and government-issued IDs such as tax or social security numbers. Although this list is long, some of these may be covered by existing organisational solutions; others won’t be, often because of the small quantities that may be submitted through a prompt.
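To make the small-quantities point concrete, here is a minimal sketch of how pattern-based detectors can catch a single card number or ID inside one prompt; the patterns and the Luhn checksum below are illustrative only, not production-grade detection:

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum: distinguishes plausible card numbers from random digits."""
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 13:
        return False
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

# Illustrative patterns only; real detectors need far more variants.
CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")  # 13-19 digits
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")      # US SSN layout

def find_sensitive_values(prompt: str) -> list[str]:
    """Return card numbers and SSN-shaped values found in a single prompt."""
    hits = [m.group() for m in CARD_PATTERN.finditer(prompt)
            if luhn_valid(m.group())]
    hits += SSN_PATTERN.findall(prompt)
    return hits

print(find_sensitive_values(
    "Please refund card 4111 1111 1111 1111, SSN 078-05-1120"))
```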

There are, however, ways to let people use these platforms safely within the corporate environment, although they require a little work. One option is to implement a chatbot under organisational control, where data can be scrubbed before being sent on to the AI platforms. DEFEND has been working with tools supplied by Microsoft to identify these categories of data and sanitise them before they are sent to the AI platforms, with great results.

Some of the development DEFEND has been working on allows Security Operations teams to work smarter and faster through interactions with our internal bot, which is connected to AI. Operations that would normally take anywhere from minutes to hours can be completed in a few prompts, with results provided in seconds. The results can then be used to verify other activities within operations and confirm the outcome of a potential incident. Because AI can draw on vast amounts of data to reach a conclusion, and can be augmented with live data, operations teams can conclude quickly, potentially heading off a situation that would otherwise grow while the same background analysis was carried out manually.

This is not currently available straight out of the box, but we have been able to use the technology to help the operational teams become more effective. The important element is ensuring we do this in a safe way that doesn’t compromise organisational data, especially considering that the organisational data could be client data. We have put safety mechanisms in place to ensure no private data is accidentally sent to AI as a prompt. For that reason, our teams communicate with AI for operational purposes only through the DEFEND bot, where sensitive data is redacted before anything is sent.

A basic example: “My name is Phil and I have lost my phone, what should I do?” would be changed to “My name is **** and I have lost my phone, what should I do?” before being sent to AI. Note that the name was redacted, and the redacted text has no impact on the answer the AI platform supplies back. By doing this we have allowed our team to use the powerful nature of AI while keeping sensitive data safe, improving the operational experience.
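The article doesn’t name the specific Microsoft tooling used, but as one plausible sketch, Microsoft’s open-source Presidio libraries (`presidio-analyzer` and `presidio-anonymizer`) can perform exactly this kind of detect-and-replace step on the example above:

```python
# A minimal sketch, assuming presidio-analyzer and presidio-anonymizer are
# installed, along with a spaCy English model (e.g. en_core_web_lg).
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine
from presidio_anonymizer.entities import OperatorConfig

analyzer = AnalyzerEngine()      # detects PII entities (names, phones, etc.)
anonymizer = AnonymizerEngine()  # rewrites the text based on detections

prompt = "My name is Phil and I have lost my phone, what should I do?"

# Find PII spans in the prompt (here, the PERSON entity "Phil").
results = analyzer.analyze(text=prompt, language="en")

# Replace every detected entity with "****" before the prompt is sent on.
redacted = anonymizer.anonymize(
    text=prompt,
    analyzer_results=results,
    operators={"DEFAULT": OperatorConfig("replace", {"new_value": "****"})},
)

print(redacted.text)
# "My name is **** and I have lost my phone, what should I do?"
```

The key design point is that redaction happens inside the organisation’s boundary, so the AI platform only ever sees the sanitised prompt.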


With the fast pace at which AI is developing, many more solutions will emerge to help organisations and users complete, at pace, the tasks that take real time today.

Many industries are already seeing productivity gains through the use of AI. For example, GitHub Copilot is helping developers write code faster; the code Copilot produces still needs to be verified, but it is speeding up application development all the same. Microsoft has been working on many AI initiatives to speed up general day-to-day tasks, allowing more focus on the important things; one of these is Security Copilot. Many people in the security space want to know more about it, as it was discussed at a Microsoft security event in early 2023, but there is still very little information about what it might do and how it will help security operations teams.
