How generative AI is set to make cybersecurity stronger in a world where trust is hard to come by.

CISOs and CIOs still grapple with whether it is better to use generative AI as a real-time learning engine, which records information regarding behavior, telemetry, intrusion, and breach, than to consider the associated risks. It aims at establishing new muscle memory on threat intelligence to predict, prevent, and enhance SecOps activities.

However, faith in GAI is divided. During an interview conducted by VentureBeat with several CISOs in different manufacturing and service companies, it discovered that although there is much opportunity for productivity improvement on the side of marketing, as well as the operations and particularly security departments, board members frequently wonder about the risk of leakage and loss of Deep Instinct’s recent survey, Generative AI and Cybersecurity: Bright Future of Business Battleground? it calculates the directions in the voice of CISOs interviewed by VentureBeat. The study revealed that out of all organizations that adopted generative AI tools, 69%, of cybersecurity professionals believe their companies are highly vulnerable to any attack. CISOs and security leaders believe that these weaponized AI attacks will happen, at 88%.

Eighty-five percent indicate that gen AI probably fueled current attacks, noting the revival of WormGPT which is a new generative AI advertised to attackers interested in executing phishing and BEC attacks. Gen AI “weapons”, available on the Darknet and Telegram, get to be top hits. For instance, it took FraudGPT only five months to reach its third-thousand subscribers in July.

As such, many cases and cases continue to consider how this type of technology can be deployed as a continuous learning engine that is always recording behavioral, telemetry, intrusion, and breach data over the costs or risks it might create. It should create the muscle memory of threat intelligence that assists in anticipating the breach as well as making the SecOps tasks seamless.

On the other hand, trust in gen AI is mixed. In an interview with VentureBeat earlier this year, several CISOs within different industry sectors including manufacturing and service admitted that such productivity enhancers as marketing automation, or marketing can lead to breaches of intellectual property (IP) and sensitive information confidentiality which is still a great concern Deep Instinct’s recent survey, Generative AI and Cybersecurity: Bright Future of Business Battleground? measure the trends that Venture Beats hears from CISO discussions. Research revealed that approximately two-thirds of organizations use generative artificial intelligence tools, and half of security experts regard such technology as an additional target for hackers. According to a report, eighty-eight percent of CISOs and security leaders believe that weaponized AI attacks cannot be avoided.

Eighty-five percent feel these attacks have been fueled by gen AI with WormGPT resurfacing as a new generative AI advertised to hackers keen on undertaking phishing and e-mail compromise attacks on targets. Bestsellers become weapons-gen AI for sale on the dark web or via Telegram. For instance, it took FraudGPT three months to have over 3,000 subscriptions by July.

According to Sven Krasser, the chief scientist and senior vice president at CrowdStrike, cyber attackers are quickly adapting to use large language models (LLMs) and generative AI for malicious purposes. Krasser highlighted that cybercriminals are incorporating LLM technology into activities like phishing and malware. He pointed out, though, that while this increases the speed and volume of attacks, it doesn’t significantly change the quality of the attacks.

Krasser added that cloud-based security, utilizing AI to analyze signals globally, is an effective defense against these new threats. He noted that generative AI doesn’t necessarily raise the bar for malicious techniques, but it does make it easier for less skilled adversaries to be more successful.

Max Heinemeyer, the director of threat hunting at Darktrace, stressed the importance of businesses adopting cyber AI for defense before offensive AI becomes widespread. He emphasized that in a scenario where algorithms are pitted against each other, only autonomous responses can effectively counter AI-augmented attacks at machine speeds.

A strong pro of Gen AI is its continuous ability to learn. Particularly where it concerns making sense of the huge volumes of information that sensors generate. New use cases that will be used by the CISOs and CIOs are driven by continuous updating of the threat assessment and risk prioritization algorithms, which also become compelling. The latest partnership between Ivanti and Securin seeks to improve on the precision and real-time risk prioritization algorithms as it targets different main objectives meant to better prepare their clients for more secure positions.

Ivanti and Securin in order to refresh the risk prioritization algorithms via combining Securin’s Vulnerability intelligence and ivanti neurons for a vulnerability knowledge base which offers its customer security expert an opportunity for a swift vulnerability assessment and prioritization process. Through the partnership with Securin, we can offer clients on all vulnerability sources thorough intelligence, as well as an approach to risk prioritization through employing augmented AI in humans, noted as the chief product officer of Ivanti, Doctor Srinivasa Mukkamala Generative AI-based cybersecurity platform, system, and solution are forecasted to show a compound annual growth rate of 22 percent to reach USD 11.2 billion by 2032 compared to USD 1.6 billion in 2022. Generative AI will be supporting over 70 percent of business’ cyber security in 5 years according to Canalys.

Forrester defines generative AI use cases into three categories: creating content, forecasting behaviors, and expressing knowledge. Security systems have utilized both AI and ML for some time now.6itempty Most of the security tools that have been developed in the last decade rely on ML to a large extent. For instance, adaptive and contextual authentication can be employed in developing risk scoring based on the heuristic rule, naïve Bayes, and logistic regression analytics as Mellen, Allie Forrester’s principal analyst puts it.

A distinctive advantage of Gen AI is its continuous learning capacity. Particularly in identifying huge volumes of data generated by endpoints. Compelling new use cases are expected to be generated because of having continuously updated threat assessment and risk prioritization algorithms, which CISOs and CIOs anticipate will change behavior and facilitate early threat prediction. In this case, Ivanti is partnering with Securin in order to improve the accuracy of their risk prioritization algorithms in real-time to achieve other goals that will bolster the organization’s security posture.o

Ivanti has partnered with Securin to enhance risk prioritization algorithms by marrying up Securin Vulnerability Intelligence (VI) and Ivanti Neurons for Vulnerability Knowledge Base, which will provide near-real vulnerability threat intelligence for their customer’s security teams In collaboration with Securin, we deliver comprehensive situation awareness on all threats regardless of the origin through AI augmented-human intelligence; by leveraging state-of-the-art Gen AI solutions we enable organizations to identify cybersecurity risks earlier in their development stage. Despite the fact that trust is today This will see market values rise from 1.6 bln dollars in 2022 to 11.2 bln dollars by 2032 at a 22% CAGR. Generative AI should support over 70% of business’s cyber security operations in under five years according to Canalys.

Forrester defines generative AI use cases into three categories: content creation, predictive behaviors, and knowledge representation. The use of AI and ML by security devices, however, is not a novelty. Most of the advanced security tools invented over the last decade relied on ML in one or other variations. For instance, adaptive and contextual authentication have been used as the basis for constructing risk-scoring logic through heuristic rules and naive Bayesian classification and logistic regression analytics—Allie Mellen – Forrester Principal Analyst.

How security leaders, including CISOs and CIOs, guide their boards in navigating the risks and benefits of generative AI will play a crucial role in shaping the technology’s future. Gartner predicts that by 2026, 80% of applications will incorporate generative AI capabilities, setting a trend already observed in most organizations.

CISOs who find the most value in the initial generation of generative AI applications emphasize the importance of adaptability to their teams’ working methods. This adaptability extends to how generative AI technologies can complement and reinforce the broader zero-trust security frameworks they are actively developing.

For CISOs venturing into the realm of generative AI, their focus is on taking a zero-trust approach to every interaction involving these tools, apps, platforms, and endpoints. This approach necessitates continuous monitoring, dynamic access controls, and constant verification of users, devices, and the data they use both at rest and in transit.

The primary concern for CISOs is the potential introduction of new attack vectors by generative AI, catching them unprepared for protection. Particularly for enterprises constructing large language models (LLMs), safeguarding against query attacks, prompt injections, model manipulation, and data poisoning becomes a top priority. To fortify infrastructure against emerging attack surfaces, CISOs, and their teams are intensifying their commitment to the principles of zero trust. Source: Key Impacts of Generative AI on CISO, Gartner.

Gen AI is used mainly as a knowledge management tool among security teams, and also as an alternative to the expensive and laborious system integration projects in large-scale business environments. This year, RSAC 2023 was overwhelmed by copilots based on chatbot technology. Vendors like Google Security AI workbench, Microsoft security copilot (launched before the show), Recorded Future, security score card, and Sentinel One were also launching chat GPT solutions.

Given the know-how Ivanti has of its IT Service Management (ITSM) including cybersecurity and network security needs, it is leading in that domain. Ivanti is currently organizing a webinar on transforming IT service management using generative AI with Susan Fung as a guest speaker and principal product manager, AL/ML at Ivanti. In March at CrowdStrike Fal. con 2023 they announced twelve (IV) new solutions for cybersecurity. Falcon leverages Charlotte AI for fast detection, thorough investigations, and quick responses by means of natural language communication. Charlotte AI develops an LLM report that helps an analyst quickly analyze hacking incidents.

In the coming nine months, the Raptor platform will start the initial updates of Charlotte AI for release by CrowdStrike Falcon to all its clients. Raj Rajamani, CrowdStrike CPO said that Charlotte AI makes security analysts become “two/three times more productive”. He told VentureBeat how Crowdstrike invested in Graph Database Architecture to feed up the power of Charlotte in endpoints, cloud & identities.

It comes in handy at the moment of organizing knowledge within security teams or as an alternative to highly-priced integrational projects related to large-scale enterprises. The year’s dominance at RSAC was by chatbot-based copilots. Some of the vendors with ChatGPT solutions that also launched included Google Security AI Workbench, Microsoft Security Copilot (before the show), Recorded Future, Security Scorecard, and SentinelOne.

Considering the awareness Ivanti has of its customers’ ITSM, cybersecurity, and network security needs this leader has become one of the leaders in this field. The company is holding an online seminar named How to Turn Service Management into Generative AI with Fung, principal product manager, of artificial language and machine learning Ivanti. This year, back in February at CrowStrike Fal. Con 2023 the provider announced no less than twelve new products, all of them on the same day. The Falcon portal has been enhanced in terms of enhancing threat detection and response by interacting with the Charlotte AI using natural language in the conversed mode. Charlotte AI produces an LLM-driven incident summary for quickening security investigators’ analysis of breach occurrences.

For the next year, it will launch the Charlotte AI across the CrowdStrike Falcon customers using the Raptor platform, with the first updates coming in the period between late September and late December 2023. According to Raj Rajamani, the Chief Product Officer, of Crowd Strike, Charlotte, an AI product, makes security analysts two or three times more productive by automating repetitive functions. Rajamani explained that Crowd Strike invested a lot in its graph database platform that powers Charlotte’s functionality across.

Exploits targeting cloud systems surged by 95% year-over-year as attackers continuously refined their tactics to exploit weaknesses in cloud configurations. Defending against these attacks has become one of the fastest-growing challenges for businesses.

Looking ahead to 2024, VentureBeat anticipates an increase in mergers, acquisitions, and joint ventures specifically aimed at addressing security gaps in multi-cloud and hybrid-cloud environments. The recent acquisition of Bionic by CrowdStrike marks the beginning of a broader trend, signaling a collective effort to empower organizations to enhance their application security and overall posture management. Notable previous acquisitions in the cloud security realm include Microsoft’s acquisition of CloudKnox Security, CyberArk’s acquisition of C3M, Snyk’s acquisition of Fugue, and Rubrik’s acquisition of Laminar.

This strategic acquisition not only bolsters CrowdStrike’s capability to offer consolidated cloud-native security but also underscores a broader industry trend. Bionic aligns well with CrowdStrike’s customer base, which primarily consists of organizations prioritizing cloud technologies. This reflects a growing reliance on acquisitions to enhance the potential of generative AI in the field of cybersecurity.

Comments

Leave a Reply

Your email address will not be published. Required fields are marked *