
Nick Kathmann, CISO/CIO at LogicGate – Interview Series


Nicholas Kathmann is the Chief Information Security Officer (CISO) at LogicGate, where he leads the company’s information security program, oversees platform security improvements, and engages with customers on managing cybersecurity risk. With over 20 years of experience in IT and 18+ years in cybersecurity, Kathmann has built and led security operations across small businesses and Fortune 100 enterprises.

LogicGate is a risk and compliance platform that helps organizations automate and scale their governance, risk, and compliance (GRC) programs. Through its flagship product, Risk Cloud®, LogicGate enables teams to identify, assess, and manage risk across the enterprise with customizable workflows, real-time insights, and integrations. The platform supports a wide range of use cases, including third-party risk, cybersecurity compliance, and internal audit management, helping companies build more agile and resilient risk strategies.

You serve as both CISO and CIO at LogicGate. How do you see AI transforming the responsibilities of these roles in the next 2–3 years?

AI is already transforming both of these roles, but in the next 2-3 years, I think we’ll see a major rise in Agentic AI that has the power to reimagine how we handle business processes on a day-to-day basis. Anything that would normally go to an IT help desk, like resetting passwords, installing applications, and more, can be handled by an AI agent. Another significant use case will be leveraging AI agents to handle tedious audit assessments, allowing CISOs and CIOs to prioritize more strategic requests.

With federal cyber layoffs and deregulation trends, how should enterprises approach AI deployment while maintaining a strong security posture?

While we’re seeing a deregulation trend in the U.S., regulations are actually strengthening in the EU. So, if you’re a multinational enterprise, expect to have to comply with global regulatory requirements around responsible use of AI. For companies operating only in the U.S., I see there being a learning period in terms of AI adoption. I think it’s important for these enterprises to form strong AI governance policies and maintain some human oversight in the deployment process, making sure nothing goes rogue.

What are the biggest blind spots you see today when it comes to integrating AI into existing cybersecurity frameworks?

While there are a couple of areas I can think of, the most impactful blind spot would be knowing where your data is located and where it’s traversing. The introduction of AI is only going to make oversight in that area more of a challenge. Vendors are enabling AI features in their products, but that data doesn’t always go directly to the AI model/vendor. That renders traditional security tools like DLP and web monitoring effectively blind.
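
One way to claw back some of that visibility is to flag outbound destinations that look AI-related but have not been reviewed. The Python sketch below is a minimal illustration of that idea; the hostnames, hint keywords, and proxy log format are assumptions made for the example, not features of any particular product.

    # Minimal sketch: audit outbound destinations against an inventory of
    # sanctioned AI endpoints. All hostnames and the log format are illustrative.

    # Destinations the organization has reviewed and approved for AI workloads.
    SANCTIONED_AI_HOSTS = {
        "api.openai.com",
        "api.anthropic.com",
        "approved-vendor.example.com",   # hypothetical embedded-AI vendor
    }

    # Keywords that suggest a destination is AI-related even if it is unknown.
    AI_HINTS = ("openai", "anthropic", "llm", "inference", "copilot")

    def flag_unreviewed_ai_egress(proxy_log_lines):
        """Yield destinations that look AI-related but are not in the inventory."""
        for line in proxy_log_lines:
            # Assume each proxy log line is "<timestamp> <user> <dest_host> <bytes>".
            parts = line.split()
            if len(parts) < 4:
                continue
            dest = parts[2].lower()
            if dest in SANCTIONED_AI_HOSTS:
                continue
            if any(hint in dest for hint in AI_HINTS):
                yield dest

    if __name__ == "__main__":
        sample = [
            "2025-05-22T10:01:02 alice api.openai.com 5120",
            "2025-05-22T10:01:09 bob llm-gateway.shadow-vendor.example 20480",
        ]
        for dest in flag_unreviewed_ai_egress(sample):
            print(f"Unreviewed AI-related destination: {dest}")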

You’ve said most AI governance strategies are “paper tigers.” What are the core components of a governance framework that actually works?

When I say “paper tigers,” I’m referring specifically to governance strategies where only a small team knows the processes and standards, and they aren’t enforced or even understood throughout the organization. AI is very pervasive, meaning it impacts every team and every organization. “One size fits all” strategies aren’t going to work. A finance team implementing AI features in its ERP is different from a product team implementing an AI feature in a specific product, and the list goes on. The core components of a strong governance framework vary, but IAPP, OWASP, NIST, and other advisory bodies have pretty good frameworks for determining what to evaluate. The hardest part is figuring out when the requirements apply to each use case.

How can companies avoid AI model drift and ensure responsible use over time without over-engineering their policies?

Drift and degradation are just part of using technology, but AI can significantly accelerate the process. And if the drift becomes too great, corrective measures will be needed. A comprehensive testing strategy that looks for and measures accuracy, bias, and other red flags is necessary over time. If companies want to avoid bias and drift, they need to start by ensuring they have the tools in place to identify and measure it.
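
As a rough illustration of that kind of ongoing measurement, the Python sketch below compares a model's recent accuracy and approval rate against an agreed baseline and flags drift past a tolerance. The metrics, thresholds, and sample data are illustrative assumptions; a real program would also break results down by customer segment to look for bias.

    # Minimal drift check: compare recent behavior against a baseline and warn
    # when the gap exceeds a tolerance. Thresholds and field names are illustrative.

    def rate(values):
        return sum(values) / len(values) if values else 0.0

    def check_drift(baseline, recent_predictions, recent_labels, tolerance=0.05):
        """Return a list of drift warnings comparing recent behavior to baseline."""
        warnings = []
        accuracy = rate([p == y for p, y in zip(recent_predictions, recent_labels)])
        approval_rate = rate([p == 1 for p in recent_predictions])

        if abs(accuracy - baseline["accuracy"]) > tolerance:
            warnings.append(f"accuracy drifted: {accuracy:.2f} vs baseline {baseline['accuracy']:.2f}")
        if abs(approval_rate - baseline["approval_rate"]) > tolerance:
            warnings.append(f"approval rate drifted: {approval_rate:.2f} vs baseline {baseline['approval_rate']:.2f}")
        return warnings

    if __name__ == "__main__":
        baseline = {"accuracy": 0.90, "approval_rate": 0.60}
        preds  = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # recent decisions (1 = approve)
        labels = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]   # what actually happened
        for w in check_drift(baseline, preds, labels):
            print("DRIFT:", w)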

What role should changelogs, limited policy updates, and real-time feedback loops play in maintaining agile AI governance?

While they play a role right now in reducing risk and liability for the provider, real-time feedback loops hamper the ability of customers and users to perform AI governance, especially if changes in communication mechanisms happen too frequently.

What concerns do you have around AI bias and discrimination in underwriting or credit scoring, particularly with “Buy Now, Pay Later” (BNPL) services?

Last year, I spoke to an AI/ML researcher at a large, multinational bank who had been experimenting with AI/LLMs across their risk models. The models, even when trained on large and accurate data sets, would make really surprising, unsupported decisions to either approve or deny underwriting. For example, if the words “great credit” were mentioned in a chat transcript or in communications with customers, the models would, by default, deny the loan, regardless of whether the customer said it or the bank employee said it. If AI is going to be relied upon, banks need better oversight and accountability, and those “surprises” need to be minimized.
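
One simple way to surface that kind of unsupported behavior is to probe the model with and without the suspect phrase and flag any decision flip. The Python sketch below does this against a toy stand-in for the bank's model; the stand-in function, field names, and threshold are assumptions made to show the probe, not the actual system.

    # Probe a decision function for spurious sensitivity to an irrelevant phrase.

    def underwriting_decision(application, transcript):
        """Toy stand-in that reproduces the unwanted behavior for demonstration."""
        if "great credit" in transcript.lower():
            return "deny"            # spurious rule the real model appeared to learn
        return "approve" if application["credit_score"] >= 680 else "deny"

    def probe_phrase_sensitivity(application, transcript, phrase="great credit"):
        """Return both decisions so reviewers can see whether the phrase alone flips them."""
        with_phrase = underwriting_decision(application, transcript + f" Customer has {phrase}.")
        without_phrase = underwriting_decision(application, transcript)
        return {"with_phrase": with_phrase,
                "without_phrase": without_phrase,
                "flipped": with_phrase != without_phrase}

    if __name__ == "__main__":
        app = {"credit_score": 720}
        result = probe_phrase_sensitivity(app, "Customer asked about repayment terms.")
        print(result)   # a flip here is exactly the kind of unsupported decision to surface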

What’s your take on how we should audit or assess algorithms that make high-stakes decisions, and who should be held accountable?

This goes back to the comprehensive testing model, where it’s essential to continuously test and benchmark the algorithms/models in as close to real time as possible. This can be difficult, since the model output may show outcomes that look desirable on paper, and humans will be needed to identify the outliers. As a banking example, a model that denies all loans outright could have a great risk rating, since zero loans it underwrites will ever default. In that case, the organization that implements the model/algorithm should be responsible for the outcome of the model, just as they would be if humans were making the decision.
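
That “deny everything” trap is easy to show with numbers. In the hypothetical Python sketch below, a model that denies every application scores a perfect default rate on the loans it approves while approving nothing at all, which is why benchmarks need complementary metrics and human review of outliers. The data and metrics are illustrative assumptions.

    # Worked example: a degenerate model looks perfect on one metric and useless on another.

    def evaluate(decisions, would_default):
        """decisions: 1 = approve, 0 = deny; would_default: 1 if the loan would default."""
        approved = [d for d, dec in zip(would_default, decisions) if dec == 1]
        approval_rate = sum(decisions) / len(decisions)
        default_rate = (sum(approved) / len(approved)) if approved else 0.0
        return {"approval_rate": approval_rate, "default_rate_on_approved": default_rate}

    would_default   = [0, 0, 1, 0, 0, 1, 0, 0]     # ground truth for 8 applicants
    deny_everything = [0] * 8                      # degenerate model
    reasonable      = [1, 1, 0, 1, 1, 0, 1, 1]     # approves the non-defaulters

    print("deny everything:", evaluate(deny_everything, would_default))
    print("reasonable:     ", evaluate(reasonable, would_default))
    # deny everything: 0% defaults but 0% approvals; reasonable: 0% defaults, 75% approvals.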

With more enterprises requiring cyber insurance, how are AI tools reshaping both the risk landscape and insurance underwriting itself?

AI tools are great at digesting large amounts of data and finding patterns or trends. On the customer side, these tools will be instrumental in understanding the organization’s actual risk and managing that risk. On the underwriter’s side, these tools will be helpful in finding inconsistencies and organizations that are becoming immature over time.

How can companies leverage AI to proactively reduce cyber risk and negotiate better terms in today’s insurance market?

Today, the best way to leverage AI for reducing risk and negotiating better insurance terms is to filter out noise and distractions, helping you focus on the most important risks. If you reduce those risks in a comprehensive way, your cyber insurance rates should go down. It’s too easy to get overwhelmed by the sheer volume of risks. Don’t get bogged down trying to address every single issue when focusing on the most critical ones can have a much larger impact.

What are a few tactical steps you recommend for companies that want to implement AI responsibly but don’t know where to start?

First, you need to understand what your use cases are and document the desired outcomes. Everyone wants to implement AI, but it’s important to consider your goals first and work backwards from there, something I think a lot of organizations struggle with today. Once you have a good understanding of your use cases, you can research the different AI frameworks and understand which of the applicable controls matter to your use cases and implementation. Strong AI governance is also business critical, both for risk mitigation and for efficiency, since automation is only as useful as its data input. Organizations leveraging AI must do so responsibly, as partners and customers are asking tough questions around AI sprawl and usage. Not knowing the answer can mean missing out on business deals, directly impacting the bottom line.
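
As a starting point for that first step, documenting use cases and desired outcomes, a lightweight register can live in something as simple as the hypothetical Python structure below. The fields and the NIST AI RMF function references are illustrative assumptions, not a prescribed schema.

    # Minimal sketch of an AI use-case register; field names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class AIUseCase:
        name: str
        owner: str
        desired_outcome: str
        data_involved: list = field(default_factory=list)
        human_oversight: str = "review before action"
        framework_refs: list = field(default_factory=list)   # e.g. NIST AI RMF functions

    register = [
        AIUseCase(
            name="Help desk password reset agent",
            owner="IT",
            desired_outcome="Reduce ticket resolution time without unauthorized resets",
            data_involved=["employee directory"],
            framework_refs=["NIST AI RMF: Govern", "NIST AI RMF: Map"],
        ),
    ]

    for uc in register:
        print(f"{uc.name} ({uc.owner}): {uc.desired_outcome} | refs: {', '.join(uc.framework_refs)}")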

If you had to predict the biggest AI-related security risk five years from now, what would it be, and how can we prepare today?

My prediction is that as Agentic AI is built into more business processes and applications, attackers will engage in fraud and misuse to manipulate these agents into delivering malicious outcomes. We have already seen this with the manipulation of customer service agents, resulting in unauthorized deals and refunds. Threat actors used language tricks to bypass policies and interfere with the agent’s decision-making.
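
One way to prepare for that scenario is to enforce hard policy checks outside the agent, so an attacker who talks the agent into an action still cannot get it executed. The Python sketch below is a minimal illustration under that assumption; the action types, refund limit, and fields are made up for the example.

    # Validate an agent's proposed action against hard limits before execution.

    REFUND_LIMIT = 100.00          # anything above this requires a human approver
    ALLOWED_ACTIONS = {"refund", "resend_invoice", "escalate"}

    def enforce_policy(proposed_action):
        """Return (allowed, reason). The agent cannot talk its way past this check."""
        kind = proposed_action.get("type")
        if kind not in ALLOWED_ACTIONS:
            return False, f"action '{kind}' is not permitted for this agent"
        if kind == "refund" and proposed_action.get("amount", 0) > REFUND_LIMIT:
            return False, "refund exceeds limit; route to human approver"
        return True, "within policy"

    if __name__ == "__main__":
        # An attacker-manipulated agent proposes an oversized refund.
        action = {"type": "refund", "amount": 1999.99, "customer_id": "C-123"}
        allowed, reason = enforce_policy(action)
        print(allowed, reason)   # False, routed to a human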

Thank you for the great interview. Readers who wish to learn more should visit LogicGate.
