
How AI Is Changing the Cloud Security and Risk Equation


The AI boom is amplifying risks across enterprise data estates and cloud environments, according to cybersecurity expert Liat Hayun.

In an interview with TechRepublic, Hayun, VP of product management and research of cloud security at Tenable, advised organisations to understand their risk exposure and tolerance, while prioritising key problems like cloud misconfigurations and protecting sensitive data.

Liat Hayun, VP of product management and research of cloud security at Tenable

She noted that while enterprises remain cautious, AI's accessibility is accentuating certain risks. However, she explained that CISOs today are evolving into business enablers, and AI could ultimately serve as a powerful tool for bolstering security.

How AI is affecting cybersecurity and data storage

TechRepublic: What's changing in the cybersecurity environment due to AI?

Liat: First of all, AI has become much more accessible to organisations. If you look back 10 years, the only organisations creating AI had to have a specialised data science team with PhDs in data science and statistics to be able to create machine learning and AI algorithms. AI has become much easier for organisations to create; it's almost like introducing a new programming language or new library into their environment. So many more organisations, not just large organisations like Tenable and others, but also any start-up, can now leverage AI and introduce it into their products.

SEE: Gartner Tells Australian IT Leaders To Adopt AI At Their Own Pace

The second thing: AI requires a lot of data. So many more organisations need to collect and store larger volumes of data, which also sometimes has higher levels of sensitivity. Before, my streaming service would have stored only a few details about me. Now, maybe my geography matters, because they can create more specific recommendations based on that, or my age and my gender, and so on. Because they can now use this data for their business purposes, to generate more business, they're now much more motivated to store that data in larger volumes and with increasing levels of sensitivity.

TechRepublic: Is that feeding into growing usage of the cloud?

Liat: If you want to store a lot of data, it's much easier to do that in the cloud. Every time you decide to store a new type of data, it increases the volume of data you're storing. You don't have to go inside your data centre and order new volumes of data to install. You just click, and bam, you have a new data storage location. So the cloud has made it much easier to store data.

These three components form a kind of circle that feeds itself. Because if it's easier to store data, you can upgrade more AI capabilities, and then you're motivated to store even more data, and so on. So that's what has happened in the world in the last few years, since LLMs have become a much more accessible, common capability for organisations, introducing challenges across all these three verticals.

Understanding the security risks of AI

TechRepublic: Are you seeing specific cybersecurity risks rise with AI?

Liat: The use of AI in organisations, unlike the use of AI by individual people across the world, is still in its early phases. Organisations want to make sure that they're introducing it in a way that, I would say, doesn't create any unnecessary risk or any extreme risk. So in terms of statistics, we still only have a few examples, and they are not necessarily a good representation because they're more experimental.

One example of a risk is AI being trained on sensitive data. That's something we're seeing. It's not because organisations are not being careful; it's because it's very difficult to separate sensitive data from non-sensitive data and still have an effective AI mechanism that's trained on the right data set.

The second thing we're seeing is what we call data poisoning. So, even if you have an AI agent that's being trained on non-sensitive data, if that non-sensitive data is publicly exposed, as an adversary, as an attacker, I can insert my own data into that publicly exposed, publicly accessible data storage and have your AI say things that you didn't intend it to say. It's not this all-knowing entity. It knows what it's seen.
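To make the poisoning scenario concrete, here is a toy sketch (not Tenable's tooling; the names and "model" are invented for illustration). The bot simply parrots the most common answer in its training data, so an attacker who can write to the public data store can flood it until their answer wins:

```python
from collections import Counter

def train_answer(records):
    """Toy 'model': replies with the most frequent answer seen in training data."""
    return Counter(records).most_common(1)[0][0]

# Clean, publicly sourced training data
public_data = ["reset via settings page"] * 5 + ["contact support"] * 2
assert train_answer(public_data) == "reset via settings page"

# Attacker inserts their own records into the publicly writable data store
poisoned = public_data + ["visit evil.example to reset"] * 10
assert train_answer(poisoned) == "visit evil.example to reset"
```

Real poisoning attacks against LLM training sets are subtler, but the mechanism is the same: the model knows only what it has seen.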

TechRepublic: How should organisations weigh the security risks of AI?

Liat: First, I would ask how organisations can understand the level of exposure they have, which includes the cloud, AI, and data … and everything related to how they use third-party vendors, and how they leverage different software in their organisation, and so on.

SEE: Australia Proposes Mandatory Guardrails for AI

The second part is, how do you identify the critical exposures? So if we know it's a publicly accessible asset with a high-severity vulnerability on it, that's something you probably want to address first. But it's also a combination of the impact, right? If you have two issues that are very similar, and one can compromise sensitive data and one cannot, you want to address that first [issue] first.

You also need to know which steps to take to address those exposures with minimal business impact.
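That triage logic (severity, boosted by public exposure and by sensitive-data impact) could be sketched as a hypothetical scoring function; the weights here are invented for illustration, not a real product's formula:

```python
def exposure_score(public: bool, severity: float, touches_sensitive_data: bool) -> float:
    """Illustrative risk score: CVSS-style severity (0-10), boosted when the
    asset is publicly accessible and when sensitive data is reachable."""
    score = severity
    if public:
        score *= 2.0        # internet-facing assets jump the queue
    if touches_sensitive_data:
        score *= 1.5        # impact breaks ties between similar issues
    return score

findings = [
    {"name": "internal VM, high CVE", "public": False, "severity": 8.0, "sensitive": False},
    {"name": "public bucket, same CVE", "public": True, "severity": 8.0, "sensitive": True},
]
ranked = sorted(
    findings,
    key=lambda f: exposure_score(f["public"], f["severity"], f["sensitive"]),
    reverse=True,
)
assert ranked[0]["name"] == "public bucket, same CVE"
```

Two findings with identical severity rank differently once exposure and data impact are factored in, which is the point Hayun makes above.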

TechRepublic: What are some big cloud security risks you warn against?

Liat: There are three things we usually advise our customers.

The first one is on misconfigurations. Just because of the complexity of the infrastructure, the complexity of the cloud, and all the technologies it provides, even if you're in a single cloud environment, but especially if you're going multi-cloud, the chances of something becoming an issue just because it wasn't configured correctly are still very high. So that's definitely one thing I would focus on, especially when introducing new technologies like AI.
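The kind of check a configuration scanner runs can be sketched in a few lines. This is a simplified illustration, not a real cloud API; the dictionary shape and field names are invented:

```python
def find_misconfigurations(buckets):
    """Flag storage buckets that are publicly readable or unencrypted.
    `buckets` maps bucket name -> config dict (shape invented for the sketch)."""
    issues = []
    for name, cfg in buckets.items():
        if cfg.get("public_read", False):
            issues.append((name, "publicly readable"))
        if not cfg.get("encryption_at_rest", False):
            issues.append((name, "no encryption at rest"))
    return issues

buckets = {
    "ml-training-data": {"public_read": True, "encryption_at_rest": False},
    "internal-logs": {"public_read": False, "encryption_at_rest": True},
}
assert find_misconfigurations(buckets) == [
    ("ml-training-data", "publicly readable"),
    ("ml-training-data", "no encryption at rest"),
]
```

Note the publicly readable, unencrypted bucket is exactly the asset AI pipelines tend to create in a hurry, which is why misconfiguration checks matter more as AI is introduced.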

The second is over-privileged access. Many people think their organisation is super secure. But if your home is a castle, and you're giving your keys out to everyone around you, that's still an issue. So excessive access to sensitive data, to critical infrastructure, is another area of focus. Even if everything is configured perfectly and you don't have any hackers in your environment, it introduces additional risk.
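A first-pass audit for over-privileged access often just looks for wildcard grants. The sketch below loosely mirrors cloud IAM policy JSON, simplified and with invented statement names:

```python
def overly_broad(policy):
    """Flag Allow statements that grant wildcard actions or apply to every
    resource: classic over-privileged access."""
    flagged = []
    for stmt in policy["statements"]:
        if stmt.get("effect") != "Allow":
            continue
        if "*" in stmt.get("actions", []) or stmt.get("resource") == "*":
            flagged.append(stmt["sid"])
    return flagged

policy = {
    "statements": [
        {"sid": "ReadOwnBucket", "effect": "Allow",
         "actions": ["s3:GetObject"], "resource": "arn:aws:s3:::team-bucket/*"},
        {"sid": "GodMode", "effect": "Allow",
         "actions": ["*"], "resource": "*"},
    ]
}
assert overly_broad(policy) == ["GodMode"]
```

Keys handed out to everyone, in Hayun's castle metaphor, show up in policy form as `"actions": ["*"]` on `"resource": "*"`.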

The aspect people think about the most is identifying malicious or suspicious activity as early as it happens. This is where AI can be taken advantage of, because if we leverage AI tools within our security tools, within our infrastructure, we can use the fact that they can look at a lot of data, and they can do it really fast, to identify suspicious or malicious behaviours in an environment. So we can address those behaviours, those activities, as early as possible, before anything critical is compromised.
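Production detection uses far richer models, but the core idea of spotting suspicious activity in volumes of data can be sketched with a simple statistical baseline; the event counts below are fabricated sample data:

```python
import statistics

def anomalous_hours(event_counts, threshold=3.0):
    """Return indices of hours whose event volume sits more than `threshold`
    population standard deviations above the mean: a crude stand-in for
    ML-based behavioural detection."""
    mean = statistics.mean(event_counts)
    stdev = statistics.pstdev(event_counts)
    if stdev == 0:
        return []
    return [i for i, c in enumerate(event_counts) if (c - mean) / stdev > threshold]

# 23 quiet hours of failed-login counts, then a burst in the final hour
counts = [12, 15, 11, 14, 13, 12, 15, 14, 13, 12, 11, 14,
          13, 15, 12, 14, 11, 13, 12, 15, 14, 13, 12, 160]
assert anomalous_hours(counts) == [23]
```

The advantage of automating this is exactly the one Hayun describes: a machine can score every hour of every log stream faster than any analyst, surfacing the burst before anything critical is compromised.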

Implementing AI ‘too good of an opportunity to miss out on’

TechRepublic: How are CISOs approaching the risks you're seeing with AI?

Liat: I've been in the cybersecurity industry for 15 years now. What I love seeing is that most security experts, most CISOs, are unlike what they used to be a decade ago. As opposed to being a gatekeeper, as opposed to saying, "No, we can't use this because it's risky," they're asking themselves, "How can we use this and make it less risky?" Which is an awesome trend to see. They're becoming more of an enabler.

TechRepublic: Are you seeing the good side of AI, as well as the risks?

Liat: Organisations need to think more about how they're going to introduce AI, rather than thinking "AI is too risky right now". You can't do that.

Organisations that don't introduce AI in the next couple of years will just stay behind. It's an amazing tool that can benefit so many business use cases, internally for collaboration and analysis and insights, and externally, for the tools we can provide our customers. There's just too good of an opportunity to miss out on. If I can help organisations achieve that mindset where they say, "OK, we can use AI, but we just need to take these risks into consideration," I've done my job.
