What’s Changed: 2024 Gartner Magic Quadrant for Security Information and Event Management (SIEM)
Wed, 10 Jul 2024


The editors at Solutions Review highlight what’s changed in Gartner’s 2024 Magic Quadrant for Security Information and Event Management (SIEM) and provide an analysis of the new report.

Analyst house Gartner, Inc.’s 2024 Magic Quadrant for Security Information and Event Management has arrived. Gartner defines SIEM as “aggregating the event data that is produced by monitoring, assessment, detection and response solutions deployed across application, network, endpoint, and cloud environments.” Capabilities include threat detection through correlation, user and entity behavior analytics (UEBA), and response integrations commonly managed through security orchestration, automation, and response (SOAR). Security reporting and continuously updated threat content through threat intelligence platform (TIP) functionality are also common integrations. Although SIEM is primarily deployed as a cloud-based service, it may support on-premises deployment.
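The correlation capability Gartner names can be made concrete with a minimal sketch. The event format, threshold, and window below are illustrative assumptions; real SIEM correlation rules are far richer and operate over normalized log pipelines.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical log format: (timestamp, user, outcome) tuples, time-ordered.
# Thresholds are illustrative; real SIEM rules are far more configurable.
THRESHOLD = 5                   # failures that trigger an alert
WINDOW = timedelta(minutes=10)  # sliding window per user

def correlate_failed_logins(events):
    """Return (user, timestamp) alerts for brute-force-like login patterns."""
    recent = defaultdict(list)  # user -> timestamps of recent failures
    alerts = []
    for ts, user, outcome in events:
        if outcome != "failure":
            continue
        # keep only failures still inside the sliding window
        recent[user] = [t for t in recent[user] if ts - t <= WINDOW]
        recent[user].append(ts)
        if len(recent[user]) >= THRESHOLD:
            alerts.append((user, ts))
    return alerts

# Five rapid failures for "alice" correlate into an alert; one for "bob" does not.
base = datetime(2024, 1, 1, 9, 0)
events = [(base + timedelta(minutes=i), "alice", "failure") for i in range(5)]
events.append((base + timedelta(minutes=6), "bob", "failure"))
alerts = correlate_failed_logins(events)
```

UEBA extends this same idea by baselining each entity's normal behavior rather than applying one fixed threshold to everyone.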

What’s Changed: 2024 Gartner Magic Quadrant for Security Information and Event Management (SIEM)


The general buying market for SIEM is security and risk management leaders in need of a security system of record with comprehensive threat detection, investigation, and response capabilities at an enterprise level.

Gartner highlights the following providers in the SIEM market: Splunk, Microsoft, IBM, Securonix, Exabeam, Sumo Logic, Rapid7, Fortinet, Gurucul, Google, Devo, Elastic, OpenText, LogRhythm, Logpoint, Huawei, ManageEngine, Venustech, NetWitness, Odyssey, QAX, and Logz.io.

In this Magic Quadrant, Gartner evaluates the strengths and weaknesses of the 22 providers that it considers most significant in the marketplace and provides readers with a graph (the Magic Quadrant) plotting the vendors based on their ability to execute and completeness of vision. The graph is divided into four quadrants: Leaders, Challengers, Visionaries, and Niche Players. At Solutions Review, we read the report, available here, and pulled out the key takeaways. This is not an in-depth analysis, only an observation of notable changes since the 2023 report.

Leaders

The Leaders quadrant saw most of last year’s tenants maintain their positions, with the exception of Splunk, which claimed the top of the quadrant. Splunk is joined by returning Leaders Microsoft, IBM, Securonix, and Exabeam. Splunk’s Enterprise Security application is delivered either on-premises or via SaaS. Splunk offers pricing flexibility based on either daily ingest or on cloud workloads, known as Splunk Virtual Compute. The majority of Splunk’s clients are larger North America-based enterprise organizations. With Cisco completing its acquisition of Splunk back in March, it will be interesting to revisit this quadrant in 2025.

Challengers

The new top Challenger this year is former Visionary Sumo Logic. They are joined by returning Challengers Rapid7 and Fortinet, while Devo moves over to the Visionaries quadrant. Sumo Logic Cloud SIEM Enterprise is delivered as a SaaS-only solution as part of its SaaS log analytics platform. Licensing for Cloud SIEM Enterprise is subscription-based (with pricing based on data ingestion) or credit-based (with credits used to enable specific resource usage, such as occasional search or continuous analytics), with tiering and packaging options. Sumo Logic’s customer base is a mix of small, midsize, and enterprise customers, with the majority based in North America; however, it has a growing presence in Europe, Latin America, and Asia/Pacific.

Visionaries

In the Visionaries quadrant, Gurucul holds the top position of the quadrant, while Elastic moves down to make room for Google and Devo. Micro Focus is now a part of OpenText, and they hold their position in the quadrant. Gurucul’s next-gen SIEM offers UEBA, identity analytics, fraud analytics, network analysis and SOAR. Gurucul offers flexible pricing options including all-inclusive per-asset/user pricing, ELAs, module-based, data volume/EPS-based pricing, and platform-based pricing. The extensive use of analytics for building risk-based behavioral detections should appeal to enterprise clients requiring complex or fraud-based detections. Gurucul’s customer base is composed primarily of large enterprises based in North America, EMEA, and APAC.

Niche Players

The Niche Players quadrant saw the most changes this year. ManageEngine moved down a few spaces, while Logpoint and Huawei maintained their positions. New Niche Players this year include former Challenger LogRhythm, as well as Venustech, NetWitness, Odyssey, QAX, and Logz.io, with LogRhythm claiming the top of the quadrant. LogRhythm has three platforms in the SIEM category: LogRhythm SIEM includes several add-on components to deliver endpoint, network, and UEBA capabilities; LogRhythm Cloud is a cloud-hosted version of LogRhythm SIEM; and LogRhythm Axon is a cloud-native SIEM platform. Licensing for the self-hosted option is available on a perpetual or subscription basis (priced by messages per second per day) or an unlimited basis (priced by the number of identities). LogRhythm Cloud is licensed by messages per second and terabytes of online storage. LogRhythm Axon is licensed by daily ingest rate and days of searchable data. The majority of its customers are in North America and Europe, and include both large enterprises and midsize customers.


Read Gartner’s 2024 Magic Quadrant for Security Information and Event Management.

The Nuances of BYOK and HYOK
Wed, 05 Jun 2024


Min-Hank Ho of Baffle offers commentary on the nuances of BYOK and HYOK, and which one might be right for your enterprise’s needs. This article originally appeared in Insight Jam, an enterprise IT community enabling the human conversation on AI.

A modern data security posture is more complex than ever because the way companies use data is multifaceted. Data analytics has transformed data from something that must be stored away and protected to an asset that yields market-differentiating insight. But, as we know, it must still be protected. In fact, industry and governmental privacy regulations stipulate clear mandates for more stringent data security.

Two emerging data security methods that reflect the evolving nature of data use are Bring Your Own Key (BYOK) and Hold Your Own Key (HYOK). Both ensure that data is encrypted and decrypted using a key management system. Using keys, organizations can feel confident that only those with access to encryption keys will be able to access data.

While BYOK and HYOK share similarities, the two methods have very different use cases. Understanding the difference between BYOK and HYOK will help organizations determine which approach makes the most sense, depending on their specific needs.

The Nuances of BYOK and HYOK


Understanding BYOK

In a BYOK model, companies storing cloud data in a multi-tenant environment — which is most common — generate and manage their encryption keys in a multi-tenant, cloud-based key management system (KMS). Users can create, encrypt and rotate keys and then provide these keys to the cloud service provider (CSP). Here is a breakdown of BYOK’s benefits and challenges.
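To make the key hierarchy concrete, here is a toy sketch of the envelope-encryption pattern that KMS-based BYOK typically rides on: a customer-held key-encryption key (KEK) wraps per-object data keys. The XOR keystream stands in for a real cipher such as AES-GCM purely to keep the example dependency-free; it is not secure cryptography, and all names are illustrative.

```python
import hashlib
import secrets

# Toy stand-in for a real cipher -- NOT secure, for illustration only.
def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Symmetric toy cipher: XOR data with a SHA-256-derived keystream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# 1. The customer generates and holds the key-encryption key (KEK).
kek = secrets.token_bytes(32)

# 2. A fresh data key encrypts each object ("envelope encryption").
data_key = secrets.token_bytes(32)
ciphertext = _keystream_xor(data_key, b"customer record")

# 3. Only the *wrapped* data key accompanies the ciphertext; rotating the
#    KEK means re-wrapping small keys, not re-encrypting all the data.
wrapped_key = _keystream_xor(kek, data_key)

# Decryption: unwrap the data key with the KEK, then decrypt the object.
plaintext = _keystream_xor(_keystream_xor(kek, wrapped_key), ciphertext)
```

The design point is that whoever controls the KEK controls access to everything beneath it, which is why BYOK puts KEK generation and rotation in the customer's hands.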

Benefits:

  • Regulatory compliance: BYOK can help organizations comply with data protection regulations that require them to maintain control over encryption keys and demonstrate exclusive access to them.
  • Data sovereignty: Companies that operate in multiple global regions can use BYOK to comply with data sovereignty laws.
  • Key control: BYOK offers more stringent data control and ensures data remains within prescribed geographic boundaries.
  • Isolation from CSP: BYOK isolates the encryption keys from the CSP, which reduces the risk of the CSP gaining unauthorized access to sensitive data.
  • Flexibility: Organizations can use their preferred encryption algorithms and key management practices, allowing them to tailor their security measures to their unique requirements.

Challenges:

  • Complexity: BYOK may require additional infrastructure and processes for key management.
  • Key management overhead: Managing encryption keys may require additional resources to address long-term planning and maintenance.
  • Potential data loss: Should a company lose its keys, it risks permanent data loss. Mitigating this risk requires a comprehensive backup and recovery plan, which can also be costly.
  • Key distribution challenges: Distributing encryption keys securely in multi-cloud or hybrid environments can be difficult, given the stringent security requirements.

BYOK is a logical option for large, multinational companies in highly regulated industries, such as healthcare and financial services. Such organizations have the resources to invest in the security necessary to avoid significant fines, reputational damage, and the erosion of trust.

It is also important to note the emergence of KYOK (Keep Your Own Key), which is similar to BYOK except that, instead of using a multi-tenant, cloud-based KMS, the customer manages keys through a dedicated hardware security module (HSM) that the customer, not the CSP, controls.

Understanding HYOK

When organizations have cloud-based datasets that are not being used in data analytics computations, HYOK makes more sense. HYOK is a model in which the customer possesses and manages the encryption keys outside the cloud infrastructure. Data is encrypted before cloud migration and remains encrypted throughout its life cycle in the cloud; decryption occurs only once the data is back on-premises. Here is a breakdown of HYOK’s benefits and challenges.
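The trust boundary that distinguishes HYOK can be sketched in a few lines: key material lives only on-premises, and the cloud side stores nothing but opaque ciphertext. Class names are hypothetical, and the XOR keystream is again a toy stand-in for real encryption.

```python
import hashlib
import secrets

# Toy cipher -- NOT secure; the point is only where the key lives.
def _xor_cipher(key: bytes, data: bytes) -> bytes:
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

class OnPremKeyStore:
    """Holds the key on-premises; only ciphertext ever crosses to the cloud."""
    def __init__(self):
        self._key = secrets.token_bytes(32)  # never serialized or exported
    def encrypt(self, plaintext: bytes) -> bytes:
        return _xor_cipher(self._key, plaintext)
    def decrypt(self, ciphertext: bytes) -> bytes:
        return _xor_cipher(self._key, ciphertext)

class CloudStore:
    """The CSP side: stores opaque blobs and holds no key material."""
    def __init__(self):
        self._blobs = {}
    def put(self, name: str, blob: bytes) -> None:
        self._blobs[name] = blob
    def get(self, name: str) -> bytes:
        return self._blobs[name]

keystore, cloud = OnPremKeyStore(), CloudStore()
cloud.put("report", keystore.encrypt(b"quarterly numbers"))  # encrypted before migration
restored = keystore.decrypt(cloud.get("report"))             # decrypted only on-premises
```

Contrast this with the BYOK pattern, where a wrapped key travels alongside the ciphertext: here nothing key-shaped ever leaves the customer's environment.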

Benefits:

  • Maximum security: HYOK provides the highest security and control over encryption keys because the CSP can never access them. This reduces the risk of unauthorized access to the lowest level possible.
  • Data isolation: HYOK ensures data remains isolated, drastically reducing the impact of a potential cloud breach.
  • Regulatory compliance: With complete control over keys, HYOK supports strict regulatory requirements where organizations must demonstrate full control over encryption keys. This is especially helpful when operating in areas with data sovereignty regulations.
  • Key management flexibility: Organizations can determine the encryption algorithms, key lengths and key management practices that make the most sense for their needs.

Challenges:

  • Complexity/overhead: HYOK can require HSMs or other secure key storage solutions.
  • Data loss: Like BYOK, data can be permanently lost if encryption keys are lost.
  • Dependency on physical hardware: Because keys are not stored in the cloud, HYOK can require physical hardware for key storage. In addition to cost and complexity, hardware can create additional vulnerabilities (theft, damage, etc.).
  • Cost: HYOK is often expensive to set up and maintain. Costs can include HSMs or secure key storage devices.

HYOK is ideal for an organization with even higher data privacy and protection requirements than those that use BYOK, such as defense and financial services. When insider threats are a serious concern, HYOK offers an extra layer of protection.

Organizations with the most stringent security requirements may choose HYOK because it ensures that the CSP never possesses or has access to the encryption keys. Examples include government or military information, where data access control must be absolute. Further, HYOK can help organizations isolate their data from potential CSP-related vulnerabilities or breaches.

Final Thoughts on BYOK and HYOK

The value companies extract from data must be balanced against their obligation to protect it, so companies need to remain vigilant. By employing forward-thinking security measures like BYOK and HYOK — and understanding which method is appropriate for each use case — organizations can ensure their data is protected at all times and reduce the risk of non-compliance.

How AI Will Amplify Foreign Election Interference in 2024
Fri, 17 May 2024


Solutions Review’s Contributed Content Series is a collection of contributed articles written by thought leaders in enterprise software categories. Chris Olson of The Media Trust examines the impact of AI as a psyops weapon and how it will be utilized in foreign election interference.

With seven months left until the 2024 presidential election, a report from the Microsoft Threat Analysis Center (MTAC) has found that Russian, Chinese, and Iranian actors are already targeting American voters with online election interference campaigns. Their objectives vary in specifics, but in general, they all seek to push the U.S. in a direction favorable to the goals of their nations, worsen the country’s political divide, and distract the American public from issues that could provoke a U.S. response.

None of this is new: the same groups have been playing the same game since 2016. But with the arrival and ongoing progress of generative AI, many wonder how the game might change in 2024. Some have predicted doom, while others argue that the impact of AI is greatly exaggerated.

In isolation, the anti-alarmists have a point. But if we lay aside the hype and focus on the results AI is already delivering for global businesses, it’s clear that reality favors those who are worried: at its current stage of advancement, AI can amplify disinformation created by foreign actors through hyper-targeted digital campaigns of unprecedented scale and relevance to individual voters.

How AI Will Amplify Foreign Election Interference in 2024


AI Makes Everything More Efficient

There is a danger to sensationalistic reporting on AI: the more it happens, the less people are attuned to the ways AI is actually changing the world, and how it might affect them in the near future. Only two percent of Americans are more concerned than excited about AI. It’s a classic “boy who cried wolf” scenario that can only be preempted by acknowledging the limits of AI as it currently exists.

For some time, experts feared an Internet overwhelmed by deepfake videos, automated social accounts, and fully synthetic content, leading to mass deception and confusion. As pointed out in MTAC’s recent report, these fears have not come to pass, but they are also not realistic scenarios for how AI will intersect with U.S.-focused influence operations.

In the world of business, AI has not automated away most jobs; instead, it has taken on routine tasks and improved the efficiency of core business functions in non-trivial ways. Likewise, in the world of cyber-crime, AI has not replaced human hackers, but it has made reconnaissance, exploit writing, and social engineering conversations easier.

The bottom line is this: whatever people are already doing, AI helps them do it better. Nation-state actors who are already experienced in foreign election interference campaigns will simply use AI to make their jobs easier and more effective. Among other things, they’ll quickly discover how it can help them reach the right people with the right message.

Better Messages, Better Targeting

Since 2016, we have tended to assume that content on the Internet spreads organically. But this assumption is grossly out of date: today, the Internet largely runs on user targeting and content recommendation algorithms. Even paid content masquerades as real content, while rich, targeted advertisements follow us everywhere we go.

There is an invisible labyrinth of technology which makes this state of affairs possible, capturing data that flows through all the sites we browse and the devices that we use – even passive ones that sit in a living room. Before the arrival of ChatGPT, this targeting machinery was already good enough that advertisers could use it to reach a single individual in a specific location.

With the arrival of advanced AI, it’s getting even better, and the benefits to foreign actors will be numerous:

  1. It is easier than ever to analyze large volumes of qualitative data about users – not merely numbers and quantitative metrics, but conversational data as well – driving informed inferences about a user’s beliefs, personality, and voting preferences. MTAC observes that Chinese actors have been putting questions to Americans through social media sock puppets, gathering intelligence on political sentiments. AI makes sifting through this information and utilizing it far easier, especially for non-native English speakers.
  2. It’s improving the technology that social media platforms and AdTech companies use to deliver messages to the right users. According to Google, advertisers who used its AI-driven Performance Max features achieved 18 percent higher conversion than advertisers who didn’t. Meanwhile, Meta is integrating AI features across all of its products – including tools for advertisers – and so are leading firms like WPP.
  3. It is possible to personalize messages on a scale never before imaginable in real-time. In the past, organizations that wanted to tailor their message for different groups had to define the parameters of those groups and write multiple versions of the same message. Now, AdTech will be able to do this on the fly – not for small groups, but for specific individuals, as they browse the Web.
  4. Translations are of higher quality. According to one study, AI has improved machine translations to an accuracy of 97 percent. This is undoubtedly one reason that threat groups – according to MTAC – are targeting not only English speakers but French, Arabic, Finnish, and Spanish speakers as well.

Until now, nation-state actors targeting Internet users across the world have faced many barriers to entry – language is one of them, and reaching the right users is another. Thanks to AI, those barriers are simply disappearing. It’s a brave new world, and the long-term consequences are unclear. But the immediate consequences are straightforward and predictable.

Individuals Are the Battleground

No matter the advances AI makes, individuals are usually overconfident in their ability to detect AI-generated content. They are more likely to worry that society as a whole will fall into AI-driven deceptions. However, research indicates that these are precisely the wrong priorities: according to MTAC, “collectively, crowds do well in sniffing out fakes on social media. Individuals independently assessing the veracity of media, however, are less capable”.

Individuals, then, are the ideal target for misinformation and propaganda – not the Internet as a whole. The same is true for cyber-crime: threat groups like BlackCat – responsible for the recent UnitedHealth breach – already depend on digital advertising as a primary delivery mechanism for ransomware attacks. Combined with cutting-edge AdTech, generative AI gives foreign actors the exact tools they need to replicate the approach that has worked so well in other domains.

We have slept on the potency of digital advertising, user targeting, and content recommendation algorithms that put messages directly in front of users, and the risk they pose to our democracy. As we advance, the battle of Moscow and Beijing for the soul of America will be intensely personal, and that should be reflected in our approach to foreign election interference and malign influence.

What to Expect at Solutions Review’s Spotlight with Cyera and Nasuni on March 21
Fri, 01 Mar 2024


Solutions Review’s Solution Spotlight with Cyera and Nasuni will be 30-minute discussions and software demos focusing on Cyera’s AI-powered Data Security Posture Management (DSPM) and Nasuni’s Ransomware Protection.

What is a Solutions Spotlight?

Solutions Review’s Solution Spotlights are exclusive webinar events for industry professionals across enterprise technology. Since its first virtual event in June 2020, Solutions Review has expanded its multimedia capabilities in response to the overwhelming demand for these kinds of events. Solutions Review’s current menu of online offerings includes the Demo Day, Solution Spotlight, best practices or case study webinars, and panel discussions. And the best part about the “Spotlight” series? They are free to attend!

Why You Should Attend

Solutions Review is one of the largest communities of IT executives, directors, and decision-makers across enterprise technology marketplaces. Every year, over 10 million people come to Solutions Review’s collection of sites for the latest news, best practices, and insights into solving some of their most complex problems.

With the next Solutions Spotlight event, the team at Solutions Review has partnered with data management and security providers Cyera and Nasuni. Join Cyera’s Head of Sales Engineering, Shane Coleman, as he demos their AI-powered data security platform and its advanced DSPM capabilities, which allow security teams to understand their data: who can access it, how it is secured and managed, and the exposures that increase risk across domains and platforms. Then jump in with Nasuni’s Solutions Architect, Christoph Ertl, to experience how easy and straightforward it is to configure Ransomware Protection in Nasuni’s File Data Platform.

Featured Speakers:

Shane Coleman, Head of Sales Engineering at Cyera

Shane Coleman brings more than 20 years of combined security and compliance experience to his role as Head of Sales Engineering at Cyera, supporting Cyera’s customers as they navigate the evolving landscape of data risk and governance. Holding a Master of Science in Operations & Technology Management from the Illinois Institute of Technology, as well as ISC2’s CISSP certification, Shane lives in Chicago, Illinois, with his wife and children.

Christoph Ertl, Solutions Architect at Nasuni

As a presales consultant at Nasuni, Christoph Ertl draws on 15 years at NetApp to deliver deep expertise in cloud storage solutions. With a background in computer science, his tenure at NetApp provided invaluable insights into data management, protocols, and storage systems. At Nasuni, he plays a pivotal role in guiding clients through the complex landscape of cloud-native file storage. His adeptness in understanding client needs, coupled with his technical proficiency, enables him to tailor solutions that maximize efficiency and scalability.

About Cyera:

Cyera is the data security company that gives businesses deep context on their data, applying correct, continuous controls to assure cyber-resilience and compliance. Cyera takes a data-centric approach to security across your data landscape, empowering security teams to know where their data is, what exposes it to risk, and take immediate action to remediate exposures. Backed by leading investors including Accel, Sequoia, Cyberstarts, and Redpoint, Cyera is redefining the way companies do cloud data security.

About Nasuni:

Nasuni is a privately held hybrid cloud storage company headquartered in Boston, Massachusetts. Nasuni integrates with public cloud storage platforms, such as Google Cloud Storage, Amazon Web Services, and Microsoft Azure, and with private cloud storage platforms such as IBM Cloud Object Storage and EMC Elastic Cloud Storage (ECS). Such storage platforms provide an object-based storage infrastructure, on top of which Nasuni’s UniFS creates a complete versioned file system. The Nasuni platform stores customer data as a sequence of snapshots that include every version of every file. The firm has demonstrated the ability to store more than one billion objects in a single storage volume.

FAQ

  • What: “AI-Powered DSPM by Cyera” and “Nasuni’s Ransomware Protection”
  • When: Thursday, March 21, 2024 at 11:00am Eastern Time with Cyera, 11:30am with Nasuni
  • Where: Zoom meeting (see registration page for more details)

Register for Solutions Review’s Solution Spotlight with Cyera and Nasuni FREE

Best Practices for Handling Incident Response During a Merger and Acquisition
Fri, 19 Jan 2024


Solutions Review’s Contributed Content Series is a collection of contributed articles written by thought leaders in enterprise software categories. Craig Jackson and Nate Pors of Cisco Talos go deep into how enterprises should handle incident response during a Merger and Acquisition.

The authors would like to thank Caitlin Huey for her contributions to the initial research for this blog.

Incident response can be a complex undertaking even under normal circumstances, when a company isn’t going through a major period of restructuring or a merger. Factoring in another organization, its infrastructure, incident response team, executives, subsidiaries, and customers during an incident increases that level of complexity by orders of magnitude.

Organizations affected by a security incident during a merger or acquisition may find that their Incident Response Plan (IRP) and playbooks – foundational elements of an organization’s incident response capability – are of limited use when only a portion of the collective incident response personnel have defined roles and responsibilities. A breakdown in such a fundamental component of an organization’s incident response capability can hamstring response efforts before they even begin.

Of course, adversaries know all of this. They know the upheaval associated with M&A creates the perfect operating environment for malicious activity. They know threat models change overnight, risk profiles shift, and gaps in security monitoring appear as assets and infrastructure are transitioned. They watch for M&A announcements on public forums to identify potential targets. Some may already be watching from within the organizations themselves.

It’s up to the combined security leadership and technical personnel from all entities involved in the M&A process to recognize these challenges and implement proactive measures that not only protect digital assets during the merger but also ensure that incident response efforts will be coordinated and effective.

NOTE: The authors are digital forensics and incident response (DFIR) professionals. The guidance and recommendations provided are focused on supporting an organization’s cybersecurity practice and should not be considered legal advice.

Best Practices for Handling Incident Response During a Merger and Acquisition


Case Study Scenarios

Scenario 1: The Acquired Entity’s Infrastructure is Found to be Compromised

The integration of Company B’s IT assets into Company A’s infrastructure begins on schedule. After several days of hard work and late nights, IT leaders from both organizations are optimistic that infrastructure consolidation might wrap up ahead of schedule. But before the high-fives start flying, Company A’s cybersecurity team detects unusual network activity between one of Company B’s servers and a group of Company A’s critical servers. When asked whether they’d noticed any related activity before the merger, Company B admits they didn’t have the ability to monitor east-west traffic internally. Further investigation reveals that an adversary had been present in the Company B network for months prior to the merger but went unnoticed because of the gap in security tooling.

This is one of the most common concerns anticipated by security teams during M&A preparation. In fact, many acquiring organizations require the acquisition target to complete a third-party risk or vulnerability assessment as a precaution before any connection or crossover is made between their IT environments. While such an assessment should have identified Company B’s limited internal network visibility before the merger, the joint leadership team may have decided – or been told – to accept the risk to keep the merger on schedule. This scenario represents a worst-case outcome due to an assessment oversight.

Scenarios like this underscore the need for fundamental incident response capabilities during M&A. Despite cybersecurity teams’ best efforts to identify security concerns prior to the merger, no environment can be considered 100% secure or free of malicious activity. Merging entities should anticipate security challenges and prepare to work collaboratively to resolve those challenges as they arise.

Scenario 2: The Acquired Entity Experiences a Ransomware Attack

Some 70 percent of Company B’s IT infrastructure has been integrated with Company A’s environment. With only a week left to go before the planned completion of the infrastructure transition, Company B employees begin to have issues with authentication and data access. What is initially thought to be growing pains in the rapidly expanding network turns out to be something vastly different when a ransom note is found on a Company B file server attached to Company A’s network.

M&A activities aside, ransomware attacks are known to create urgent challenges even for companies with mature information security and incident response programs. Adding an acquired entity into the mix with its own infrastructure, personnel, and practices shifts the ransomware response paradigm most organizations have established. Even the culture or mentality toward ransomware response may differ between the acquiring and acquired organizations, causing unneeded friction between business and technical leaders. Sourcing funds for a ransom payment – i.e., determining the division of funds provided by the acquiring and acquired entities – is also likely to provoke conflict among internal stakeholders.

The high-profile nature of a ransomware attack also makes any related incident a public relations (PR) matter. Ransomware group blogs smear victims and, sometimes, even the victim’s partners, customers, and other associated organizations. How will disclosure of related data on a ransomware group’s site influence the M&A journey? What if the adversary discloses sensitive details about the merger process itself? Could a severe ransomware attack cause executives to reverse course on the entire merger?

Scenario 3: A Subsidiary of the Acquired Entity is Breached

As part of Company A’s acquisition of Company B, it is agreed that a subsidiary of Company B will maintain its own security team but will draw on Company A’s larger and more skilled incident response team during major incidents. Later, one of Company B’s custom-developed, critical applications is breached, requiring Company A’s incident response team to rely on Company B’s subject matter experts for insights into a specialized application they know little about.

Post-acquisition standardization of teams, tools, and standards is simple in theory but extremely difficult in practice, especially when previously discrete security teams bring their own preferences and workflows to bear. In a perfect world, the teams will cooperate, and complementary tools and processes will lead to successful incident response efforts. Realistically, gaps or overlaps in processes and technologies will create conflict that highlights weaknesses ripe for exploitation. The teams may become hostile toward one another or defensive of the resources they own, creating major internal divisions.

Finances are also a factor. The acquiring organization’s CISO likely fights hard for their annual budget, and the Board may not understand the need for additional funding to extend the controls protecting the primary merging entities to a subsidiary. These financial restrictions may also prevent the subsidiary from receiving the appropriate technical and leadership training.

Scenario 4: The Acquired Company’s IT Team Includes Malicious Insiders

The merger between Company A and Company B concludes, and the networks of the two organizations have been integrated. Soon after, Company A’s security team identifies suspicious activity on a domain controller linked to the account of a former Company B IT contractor. Further investigation implicates at least three other past Company B contractors, all linked to an overseas IT consultancy. The Company A security team realizes that much of Company B’s original infrastructure was built by malicious insiders.

Attacks facilitated by malicious insiders are traumatic for any organization, adding layers of administrative rigor to investigations overshadowed by feelings of distrust and betrayal. To further muddy the waters, a merger is a vulnerable period for both employee and employer. Employees seek to find their footing in a new corporate culture, while employer concerns over disgruntled employees may be heightened. Feelings of distrust between Company A and Company B team members could escalate quickly. Incident responders must focus on digital evidence, but ignoring the human aspects of the incident would be a major mistake.

Lessons Learned

Lessons learned from the scenarios detailed above can be actioned by organizations preparing for M&A. Arranging these recommendations into administrative, technical, and legal/operational categories will also help align these action items with the correct stakeholders (e.g., legal, communications and cybersecurity).

Administrative Considerations

  • Establish roles and responsibilities. Establishing incident response roles and responsibilities across all entities involved in the M&A process ensures that incident response personnel from different organizations can work collaboratively and efficiently towards a common goal – incident remediation and recovery. High-level plans should be created and documented to designate responsibility for specific response activities regardless of which entity is attacked. Subsidiaries of the acquired organization must also be considered during incident response planning. How much help does the subsidiary’s security team expect? Should the parent company’s incident response team step in even if the subsidiary doesn’t request help? If multiple subsidiaries are affected, which subsidiary receives priority assistance?
  • Facilitate secure communications. Once incident response roles and responsibilities are understood by all entities, secure communication methods must be deployed to support the new cross-organizational incident response team. Remember that communications methods must be available to individuals supporting the incident response team, such as executive leadership, communications, legal and human resources. Lines of communication must be open to subsidiaries of the acquired organization as well.
  • Follow personnel security best practices. The acquiring organization should follow new-hire onboarding processes for all incoming IT and cybersecurity personnel, including background checks and any other HR requirements. This requirement may seem inconvenient or offensive to tenured employees who are trusted by the acquired organization, but contracted employees may not have been included in security checks, and few companies conduct follow-up security checks after initial hire. Regardless of whether background checks are conducted, coordinate legal and human resources representatives from all entities to develop a fair and objective insider threat investigation strategy. Define thresholds for what types of evidence will be sufficient to bring an employee under suspicion and outline approved methods for conducting internal interviews and investigations discreetly.
  • Identify recovery time constraints and recovery priorities. Executives should be involved as an extension of the incident response team throughout the M&A process so incident recovery efforts can be coordinated with any contractual M&A milestones. Business leaders can also help the incident response team prioritize critical systems and services until asset criticality listings can be updated to account for the combined infrastructure.
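To make the role-assignment exercise above concrete, a joint team could maintain the cross-entity responsibility matrix in a simple machine-checkable form. The sketch below is purely illustrative; the entity and activity names are assumptions, not drawn from any real plan:

```python
# Hypothetical cross-entity incident response responsibility matrix.
# Entity and activity names are illustrative assumptions only.
RESPONSIBILITIES = {
    "detection_and_triage": "company_a_soc",
    "forensic_analysis": "company_a_dfir_team",
    "containment_approval": "joint_leadership",
    "subsidiary_liaison": "company_b_subsidiary_security",
    "executive_communications": None,  # unassigned -- a gap to close before go-live
}

def unassigned_activities(matrix):
    """Return response activities that have no designated owner."""
    return [activity for activity, owner in matrix.items() if owner is None]

print(unassigned_activities(RESPONSIBILITIES))  # ['executive_communications']
```

Reviewing such a matrix during M&A tabletop exercises helps surface ownership gaps before a live incident forces the question.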

Technical Considerations

  • Plan immediate containment actions. Immediate containment actions should be updated to consider all incoming infrastructure. The potential effects of each immediate containment action should also be understood and approved by all entities involved in the merger. Sweeping containment actions commonly employed during critical incidents (e.g., ransomware attacks) may restrict operations for one or both merging entities, especially as more of the acquired entity’s infrastructure is absorbed.
  • Address specialized applications and technologies. Ensure that specialized applications and technologies are acknowledged by all parties and that subject matter experts will be included in incident response processes. If a specialized application is supported by a third-party consultant or vendor, be sure to identify an internal stakeholder to coordinate response efforts with those external resources.
  • Create contingency plans. Adapt existing disaster recovery and business continuity plans to address outages in mission-critical applications or systems resulting from a cybersecurity incident during the merger. Include considerations for delays in technology integration initiatives resulting from malicious activity or other security constraints.
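As one illustration of the cross-entity approval point above, the containment actions each merging entity has pre-approved could be tracked so responders know instantly whether a sweeping action needs joint sign-off. The entity and action names below are assumptions for the sketch, not a real playbook:

```python
# Hypothetical matrix of containment actions each entity has pre-approved.
# Names are illustrative assumptions only.
PREAPPROVED = {
    "company_a": {"isolate_host", "disable_account", "block_egress"},
    "company_b": {"isolate_host", "disable_account"},  # no egress block agreed yet
}

def containment_allowed(action, affected_entities):
    """An action may proceed immediately only if every affected entity has
    pre-approved it; otherwise it requires joint-leadership sign-off."""
    return all(action in PREAPPROVED.get(entity, set())
               for entity in affected_entities)

print(containment_allowed("isolate_host", ["company_a", "company_b"]))  # True
print(containment_allowed("block_egress", ["company_a", "company_b"]))  # False
```

Encoding these decisions before the merger closes avoids debating containment authority in the middle of an active incident.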

Legal/Operational Considerations

  • Consider incident impact in legal safeguards. Considerable time and effort go into preparing legal protections on both sides of the negotiating table. Include considerations for an active incident and how a critical cybersecurity incident may influence the M&A process. Legal safeguards should also drive compliance with any regulatory requirements the merging entities are subject to during and after an incident.
  • Align incident response support contracts. Review third-party DFIR provider and cyber insurance contracts for all entities. Confirm where coverage gaps exist, and which policies/contracts will take priority during an incident. Designate stakeholders to activate the relevant DFIR and cyber insurance support contracts.
  • Coordinate legal counsel across all entities. Ensure that legal representatives from all organizations involved in the merger have established roles, responsibilities, and expectations for response processes requiring legal coordination. Participation will be particularly important during a ransomware attack, where activities such as adversary negotiation, ransom payment, and sensitive data disclosure often have public relations and legal implications.

Planning for Incident Response During M&A

Given the complexity of conducting incident response during M&A and the potential impact of a security incident on those processes, organizations must build foundational incident response practices into related security preparations. Doing so shouldn’t mean rebuilding an incident response program from scratch, but rather adapting key elements from an existing incident response program into the M&A preparation phase.

The lessons learned presented above can be referenced to support and supplement well-known security standards such as NIST SP 800-53 and the Center for Internet Security’s (CIS) 18 Critical Security Controls. This will create increased confidence in the joint security team’s incident response capability and should help keep M&A processes on course in the event of a cybersecurity incident.

A Note on Divestitures

While this blog focuses on mergers and acquisitions, divestitures require similar incident response preparations. Business leaders might hesitate to pursue more than the minimum due diligence required prior to divestiture of assets, considering such efforts to be “negotiating against themselves.” But proactively developing contingency plans to address incidents that occur during these key business transactions can help avoid failed negotiations and long-term legal issues at the cost of a short-term inconvenience.

Per the SEC’s recent cybersecurity disclosure rules, publicly traded U.S. companies and foreign private issuers are obligated to report any “material cybersecurity incidents” with a Form 8-K or Form 6-K filing within four business days of determining that an incident is material. Reports concerning any part of the divesting organization’s network could complicate negotiations or even void a divestiture deal prior to closing. Proactively hunting for indicators of compromise within the asset to be divested can reduce the risk of a cybersecurity-related complication. Organizations might also consider preparing an organizational playbook for conducting incident response specifically during divestiture windows. This playbook would focus on scoping and containing cybersecurity incidents in a way that avoids tainting the reputation of assets planned for divestiture.
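As a rough illustration of that reporting window, a four-business-day deadline can be computed by skipping weekends; a real compliance calendar would also account for federal holidays, which this simplified sketch deliberately ignores:

```python
from datetime import date, timedelta

def filing_deadline(determination_date, business_days=4):
    """Advance the given number of business days (Mon-Fri), skipping weekends.
    Federal holidays are deliberately ignored in this simplified sketch."""
    d = determination_date
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d

# Materiality determined on Wednesday, July 10, 2024:
# Thu 11, Fri 12, Mon 15, Tue 16 are the four business days.
print(filing_deadline(date(2024, 7, 10)))  # 2024-07-16
```

The point of such a calculation is simply that the clock is short: a determination made late in the week leaves very little calendar time before the filing is due.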

2024 Cybersecurity Predictions from Industry Experts (Solutions Review, January 11, 2024)


The editors at Solutions Review have compiled a list of 2024 cybersecurity predictions from some of the top leading industry experts.

To properly close out the year, we called for the industry’s best and brightest to share their Identity Management, Endpoint Security, and Information Security predictions for 2024 and beyond. The experts featured represent some of the top cybersecurity solution providers with experience in these marketplaces, and each projection has been vetted for relevance and ability to add business value.

2024 Cybersecurity Predictions from Industry Experts


John Stringer, Head of Product at Next DLP 

“In 2024, AI will better inform cybersecurity risk prevention decision-making. Elsewhere, disgruntled employees may lash out at stricter working-from-home policies as insider threats loom.

With AI estimated to grow more than 35 percent annually until 2030, businesses have swiftly adopted the technology to streamline processes across a variety of departments. We already see organizations using AI to identify high-risk data, monitor potential insider threat activity, detect unauthorized usage, and enforce policies for data handling. Over the next year, AI will power data loss prevention (DLP) and Insider Risk Management (IRM) efforts by detecting risky activity and then alerting IT teams who can analyze their movements and respond accordingly, preventing further cybersecurity issues from arising.

Insider threats will start to manifest themselves in other ways in the new year, too. As an increasing number of companies implement stricter policies about office working and fewer days at home, disgruntled staff – particularly younger employees who have only experienced a ‘post-Covid’ working environment – may lash out at these supposedly unfair policies. Frustrated employees could turn to stealing data and leaking sensitive company information, leading to wider security concerns that may impact brand reputation.”

Steve Wilson, Chief Product Officer at Exabeam 

“Companies are under constant assault and frankly, the cybersecurity sector is failing customers. Businesses, government agencies, healthcare installations and more are in the unfair position of being attacked from the outside by nation state actors, while employees exfiltrate and sell company data from the inside.

Defending against these asymmetric threats using most of the security tools available today is near impossible because they do not effectively target the right threats. As the great unsolved challenge of this decade, cybersecurity needs an overhaul of approaches and tools, so that the people trying to protect companies and data don’t continue to drown amid thousands of daily threats.

We’ve seen the great innovations that AI spurred this year, especially with large language models (LLMs). I expect this technology to be transformative next year, especially in cybersecurity. I believe that AI will allow security operations personnel to use natural language with security tools and remove the friction of programming complex queries to stop intrusions.

Natural language will also allow SecOps to explain threats to executives and departmental business counterparts without complicated representations and tables on a screen. This natural language understanding can open space for SOC personnel to move faster and cover even more ground, which will be music to the CISO’s ears as they continue to struggle with a persistent talent gap and building skilled teams.”

Darren Shou, Chief Strategy Officer at RSA Conference

“While not new for 2024, mental health challenges will continue for many in the cybersecurity industry who are overworked and underappreciated. The stress that cyber employees endure day in and day out to secure vital systems, companies, and individuals is only compounded and exacerbated by the skills gap that our industry faces.

The price of mistakes is much higher in today’s world, which creates a lot of pressure. The emergence of cyber insurance goes hand in hand with that. Other forms of mental health anxiety can stem from job safety, the expectation of 24/7 availability, and the lack of proper training or continuous education suitable to deal with emerging threats. One question to ask heading into 2024: can mental health support be delivered or supported by AI? Another: does your team feel supported? Many of these individuals take on the role of cyber heroes and are deployed to help prevent breaches or attacks. The burden should not fall squarely on their shoulders. Organizations and individuals should make it a priority to address these issues head-on before burnout, mental health collapse, and other significant issues take hold.”

Petros Efstathopoulos, Vice President of Research at RSA Conference

“The evolution of generative AI, ChatGPT, and other forms of artificial intelligence and machine learning has put privacy once again at the forefront of cybersecurity. Privacy concerns for consumers, in the enterprise, and online are all still valid. AI adds a new wrinkle to that. How should engaging with AI for work or personal lives be regulated in terms of privacy? Where does individual data go and who has access to it? How can data be removed from a model? What safeguards do we need in order to avoid extraction attacks? These are the types of questions that need to be front and center.

In terms of policy, different jurisdictions have different rules, regulations and restrictions for AI and the data privacy laws. How does this translate to the development of AI capability and how will consumers and enterprises be able to manage the jurisdictional differences? This will be key to watch in 2024 and beyond.”

Zach Capers, Manager of ResearchLab and Senior Security Analyst at GetApp

“In 2023, we finally saw some positive signs in the world of security. Businesses appear to have rebounded from an influx of pandemic-fueled vulnerabilities and have begun locking down systems like never before. This means that cybercriminals will increase reliance on social engineering schemes that exploit employees rather than machines.

Moving into 2024, GetApp research finds the number one concern of IT security managers is advanced phishing attacks. And we’re not only talking about email phishing. SEO poisoning attacks are a rising phishing threat designed to lure victims to malicious lookalike websites by exploiting search engine algorithms. This means that employees searching for an online cloud service might find a bogus site and hand their credentials directly to a cybercriminal, have their machine infected by malware, or both. In 2024, it will be more important than ever to educate employees on the sophisticated and increasingly dynamic methods used to trick them into handing over sensitive information that can result in damaging cyberattacks.”

Theresa Lanowitz, Head of Evangelism at AT&T Cybersecurity and Former Gartner Analyst

“In a world of edge computing comprised of diverse and intentional endpoints, it is important for the SOC to know the precise location of the endpoint, what the endpoint does, the manufacturer of an endpoint, whether or not the endpoint is up to date with firmware, if the endpoint is actively participating in computing or if it should be decommissioned, and a host of other pieces of pertinent information. Edge computing expands computing to be anywhere the endpoint is – and that endpoint needs to be understood at a granular level.

In 2024, expect to see startups provide solutions to deliver granular detail of an endpoint including attributes such as physical location, IP address, type of endpoint, manufacturer, firmware/operating system data, and active/non-active participant in data collection. Endpoints need to be mapped, identified, and properly managed to deliver the outcomes needed by the business. An endpoint cannot be left to languish and act as an unguarded point of entry for an adversary.

In addition to granular identification and mapping of endpoints, expect to see intentional endpoints built to achieve a specific goal such as ease of use, use in harsh environments, or energy efficiency. These intentional endpoints will use a subset of a full-stack operating system. SOCs will need to manage these intentional endpoints differently than endpoints with the full operating system.

Overall, look for significant advancements in how SOCs manage and monitor endpoints.”

Mary Blackowiak, Director of Product Management and Development at AT&T Cybersecurity

“Digital transformation continues to rapidly evolve and despite some return to office initiatives, the workforce remains vastly distributed. Given these factors, I expect endpoint security will be a major focus for organizations in 2024. The dispersed nature of today’s workforce amplifies the complexity of safeguarding against cyber threats, with a glaring challenge being the lack of visibility into the multitude of devices accessing organizational networks. The good news is that there are solutions to this dilemma.

The difficulty in effectively protecting what you can’t see remains a fundamental principle in the cybersecurity industry, which is why the first step in any endpoint security strategy should be conducting an inventory of all the devices that are accessing the network. This can be accomplished with a unified endpoint management (UEM) solution. Curated security policies via a UEM solution and endpoint security technologies can be applied once you know the kinds of devices you’re working with. Rogue asset discovery tools are also helpful for identifying endpoints behaving in a manner that would indicate malicious intent.

As we look ahead to 2024, organizations must understand that endpoint security is not just a necessity for risk reduction but a strategic investment in safeguarding the digital future.”

Tom Traugott, SVP of Strategy at EdgeCore Digital Infrastructure

“As generative AI models are trained and use cases expand, in 2024 we will enter the next generation of edge and scaled computing through the demands of inference (putting the generative AI models to work locally). AI-driven supercomputing environments and databases will not stop or get smaller as they require 24/7 runtime, but they may seek to be closer to the end user, in contrast to the large model training locations that can be asynchronously located. While this evolution is certainly something to watch, next-generation computing at the edge is still underway and likely another one to two years from materializing into something we fully understand, but we believe modern, state-of-the-art, power-dense data centers will be required to support it.”

Richard Tworek, CTO at Riverbed

“Being good corporate citizens, many companies will move to reduce their carbon footprint in 2024. They will also get a push from regulators in California and the European Union, who are looking to make the laggards meet at least minimal standards of reducing their carbon footprint. One area where AI will help companies is determining which devices are running 24/7 when they don’t need to be and suggesting where to throttle back power consumption if possible. Ironically, the move to AI will consume more computing power, which will increase companies’ carbon footprint, making it necessary for companies to increase their sustainability efforts.”

Arti Raman, CEO and Founder of Portal26

“The rapid innovations in artificial intelligence (AI) this past year have brought companies to a crossroads: adopt AI or block AI. The productivity gains and competitive advantages AI allows cannot be ignored, nor can its security concerns. As companies debate implementing or blocking AI, their employees continue to utilize the free and open resource across company networks – whether or not the company is aware. Going into 2024, investments in AI governance and visibility technology will play a significant role in widespread AI adoption.

A recent report about the state of generative AI showed that company executives, while optimistic about AI’s role in the enterprise, struggle to get valuable insight and visibility into its usage. Concerns like data governance and security will be a top priority. Two-thirds of the survey respondents indicated they had already experienced a generative AI security or misuse incident in the past year.

While AI governance will be essential to building enterprise AI programs, companies must also develop and implement guardrails that ensure responsible usage alongside responsible development. The same survey also found that nearly every executive interviewed expressed concerns over Shadow AI, yet 58 percent provided less than hours of annual education and training on these tools.

Before companies can effectively and safely use generative AI tools, employees must be educated on utilizing best practices: writing prompts that achieve desired outcomes, keeping data security and privacy in mind when inputting data, identifying the quality and security of AI, verifying AI output, and more.

By investing in AI governance tools and developing complementary guardrails, companies can avoid what may end up being the biggest misconception in 2024: the assumption that you can control the adoption of AI.”

Michiel Prins, Co-Founder of HackerOne

“As the adoption of generative artificial intelligence (GenAI) accelerates, organizations have realized they must prioritize security and risk management as they build and implement this emerging technology. The work we’re already doing with customers, including leading AI companies, proves the value hackers deliver to secure GenAI. Red teaming and the insights hackers offer will play an increasingly central role in ensuring the security of this new technology — as exemplified by the Biden Administration’s endorsement of red teaming in its recent executive order.

While we’re seeing more external support for ethical hacking, the value these hackers offer isn’t new; ethical hackers are consistently first to pressure-test emerging technology. Their creative, adversarial, and community-minded approach gives them a distinct advantage in understanding novel security issues.

Our 2023 Hacker-Powered Security Report found more than half of hackers expect GenAI tools to become a major target for them — and we can assume malicious actors are planning the same. As AI continues to shape our future, and new emerging technologies crop up, the ethical hacker community will remain at the forefront of identifying new risks.”

Joe Fousek, Legal Technology Evangelist at Aiden 

“As we move into 2024, we need to be aware that AI (Artificial Intelligence) will introduce new attack vectors that security teams must address. Compounding the problem, the turnaround time for exploitation will drop dramatically as bad actors learn to use AI. Law firms and legal departments — with their extensive list of specialized practice-specific apps that are already a prime target for hackers — could be especially vulnerable. Manual patching and the use of common deployment tools will struggle to keep up with escalating update cycles and new attack vectors. However, continuous updates that take advantage of AI and hyperautomation can decrease the severity and frequency of service interruptions. Apple has shown that forcing users to apply updates can improve security, and Microsoft is similarly updating their 365 suite of applications and services. Law firms and companies in general need to act with the same urgency across their entire applications portfolio by leaning into AI-based solutions.

The current concerns about confidentiality in Generative AI systems like ChatGPT are very real. These systems ‘learn’ by ingesting confidential information, potentially spitting out that confidential information in their output. The legal community will get through this in 2024 just like it got past the concerns 25-30 years ago when everyone was concerned about saving client data on law firm network resources or Document Management Systems. Generative AI is still in its infancy and faces usability and confidentiality concerns. Still, while mainstream material usage of Generative AI-powered apps in 2024 seems unlikely, the legal community is determined to incorporate this technology in their workflow. The ‘will’ to incorporate AI technology is extraordinarily strong, and as AI use increases in 2024, we will be looking to find the ‘way’ to take out the barriers that remain, allowing for more mainstream adoption in the years that follow.”

Alex Rice, Co-Founder & CTO of HackerOne

“Over the next year, we’ll see many overly optimistic companies place too much trust in generative AI’s (GenAI) capabilities. Nearly half of our ethical hacker community (43 percent) believes GenAI will cause an increase in vulnerabilities within code for organizations. It’s essential to recognize the indispensable role human oversight plays in GenAI security as this technology evolves.

The largest threat GenAI poses to organizations is in their own rushed implementation of the technology to keep up with competition. GenAI holds immense potential to supercharge productivity, but if you forget basic security hygiene during implementation, you’re opening yourself up to significant cybersecurity risk.

Low code tools built on GenAI also threaten the security of software development lifecycles. GenAI empowers people without the proper technical foundations to produce technical products. If you don’t fully understand the code you’re producing, that’s a huge problem.

The best solution I see to ensure the safe implementation of GenAI is to strike a balance: organizations must remain measured and conservative in their adoption and application of AI. For now, AI is the copilot and humans remain irreplaceable in the cybersecurity equation.”

Chris Evans, CISO and Chief Hacking Officer at HackerOne

“As we look toward 2024, one thing is clear: a pipeline of diverse talent into the cybersecurity workforce remains a significant industry problem. However, there is hope to meet this challenge.

The growing popularity of ethical hacking, particularly among younger generations, has democratized how anyone with a computer, technical ability, and creativity can earn money and experience to jumpstart a cybersecurity career. We’ve heard countless stories of those in the ethical hacker community who started hacking in high school and recently found their calling — and a career — through hacking.

Hacking experience helps these individuals evolve into in-house penetration testers or bug bounty program managers, where their frontline experience provides invaluable insights. Ethical hacking is creating a diverse, skilled, and creative workforce capable of viewing cybersecurity challenges from multiple perspectives. The lower barrier to entry for individuals interested in this field builds an inclusive path toward the security experts of tomorrow and a safer internet for everyone.”

Steve Povolny, Director of Security Research at Exabeam

“In 2024, nation-states will increasingly develop their own large language models (LLMs).

Nation-states will remain a persistent threat in 2024, especially in light of the growing use of artificial intelligence (AI). With almost infinite resources at their disposal, nation-states will likely develop their own LLM-based generative AI systems specifically for malware. They’ll hire large teams to evolve models and build next-generation development tools that will be difficult to combat. Current geopolitical conflicts will likely add fuel to the fire. Hacking operations are being used in tandem with military assaults to gather intelligence for war crimes prosecutions and to initiate disruptive actions that threaten civil society.

To combat nation-states using AI, businesses and government organizations must collaborate with cybersecurity providers to assess their resilience to sophisticated attacks, implement new artificial intelligence capabilities, and improve training and processes to optimize their understanding of these tools.”

Javed Hasan, CEO and Co-Founder of Lineaje

“Organizations’ inability to identify the lineage of AI is going to lead to an increase in software supply chain attacks in 2024.

Over the course of the last year, organizations have been heavily focused on how to prevent cyberattacks on AI. There’s only one problem: everyone is focusing on the wrong aspect. Many security teams have zeroed in on threats against AI once it’s deployed. Organizations are concerned about a threat actor using AI to manipulate engineering, IT, or security teams into taking action that could lead to a compromise.

The truth is that the best time to compromise AI is while it is being built. Much like the majority of today’s software, AI is primarily built from open-source software. The ability to determine who created the initial AI models, with what bias and what intent, is by and large far more critical to preventing gaps in an organization’s security posture.

I suspect that few organizations have considered this approach, and as a result, we’ll see all kinds of interesting challenges and issues emerge in the coming months.”

Ravi Pandey, Sr. Director of Vulnerability Management Services at Securin

“As we look ahead to the year 2024, the cybersecurity industry is expected to undergo several exciting and new developments. New technologies, techniques, and companies have made an impression on the industry with unique and exciting products; on the other hand, however, 2023 was another record-breaking year for cybercriminals and other cyber threats. Bad actors are taking advantage of new methods automation has afforded them and are creating more sophisticated tactics to leverage against all kinds of organizations. With the emergence of new threats, techniques, and attackers, security professionals must remain vigilant at all times. Here are some of my detailed thoughts on where the industry is headed.

Firstly, as we look ahead to the future of cybersecurity solutions, artificial intelligence (AI) will continue to play a critical role in computer security. Automation will enable us to scale and innovate more easily, simplifying the process. However, we must also be aware that cybercriminals will be quick to adapt and are already using this new technology on the offensive, identifying and exploiting vulnerabilities within an organization’s attack surface. These bad actors are taking advantage of AI to launch more complex attacks at an even faster rate. As cybersecurity leaders, we must focus on using AI as part of a defensive strategy to counter these threats and automate preventative measures. Specialized testing of AI applications will soon become a standard practice to assess their security and will be used to find potential vulnerabilities within companies’ networks.

Secondly, cyberattacks overall are expected to increase; ransomware groups are targeting vendors, government agencies, and critical infrastructure in the United States. Over the past five years, cyberattacks have surged, and this trend shows no signs of slowing down as cyber criminals move to target supply chains and zero-day vulnerabilities with relentless voracity. Breaches like that of the MOVEit file-transfer tool will continue to have lasting reach and a ripple effect across organizations. With the assistance of AI, particularly generative AI (GenAI) technology, attackers will be able to refine their techniques, increasing their speed and effectiveness. GenAI will allow criminal cyber groups to quickly fabricate convincing phishing emails and messages to gain initial access into an organization. Cyber breaches or ransomware attacks have the potential to cost companies millions of dollars in remediation expenses. Organizations must, therefore, be proactive in implementing and updating their cybersecurity measures to combat these threats.

Thirdly, external Attack Surface Management (ASM) will become an essential aspect of comprehensive cybersecurity strategies. Unfortunately, many organizations currently lack the necessary vulnerability management and validation capabilities to effectively manage their external attack surface. Through the US government’s Cybersecurity and Infrastructure Security Agency (CISA), there continues to be a national effort to understand, manage, and reduce risk to the country’s cyber and physical infrastructure. This effort includes a national mandate for asset discovery and incident disclosure, which will hopefully lead to increased trust and faster response times between the private sector and the government in the event of a cyber incident. With both AI and cognitive human intelligence driving these initiatives, new security practices must be developed and tested to manage the external attack surface and protect organizations from irreparable damage.”

Diana Freeman, Senior Manager Governance Advocacy & Initiatives at Diligent

“With the explosion of artificial intelligence (AI) over the past few months, we are seeing the topic come to the fore in conversations at school board meetings and local councils.

According to a Walton Family Foundation national survey, 76 percent of teachers agree that integrating ChatGPT for schools will be important for the future. However, a further survey by Intelligent.com of 1,000 teachers/professors revealed that 71 percent of teachers say their school does not have a policy regulating ChatGPT use.

In 2024, as AI adoption gains traction in education, school boards must establish a roadmap for ethical governance and comprehensive oversight to responsibly maximize the applicable benefits of AI technology, while adhering to privacy guidelines and laws. Boards should take a leadership role in staying informed and must seek out educational resources in order to develop policy and oversight of this rapidly evolving technology.”

Richard Copeland, CEO of Leaseweb USA

“The emergence of AI and machine learning has unveiled a need for heavy computing power to train machine learning models, recognize images, model neural networks, detect fraud, and more. Companies looking to leverage AI and machine learning (ML) will require a hosting provider that ensures high availability, flexible configurations, and powerful computing options to address scalability and cost concerns. As AI and ML technologies become more widespread, we can expect demand for cloud computing to increase over the next few years.”

Connie Stack, CEO of Next DLP

“In 2024, organizations will be pressured to consolidate their security stack. Driven by a continued shortage of cybersecurity talent and cost-saving initiatives, we will continue to see CISOs pressured by non-security-focused peers and executives to adopt some of Big Tech’s solutions as the single source of data protection. Consolidation is here to stay, but putting all your eggs in one basket is never a good strategy – in life or in cybersecurity. There’s a long list of pros and cons. Cost savings is the core pro for a ‘good enough’ broad platform, but CISOs must consider the cons seriously. From solution gaps, to narrow OS and app coverage, to additional staff or consultants required to manage complex implementations – the widely touted cost savings on software subscriptions can quickly get eaten up by supplemental point solutions and consulting fees. Additionally, more red flags become apparent when considering how challenging it is to get resolution on support tickets and feature requests.

As we approach the New Year, I would remind anyone looking to consolidate in 2024 to evaluate their current stack, identify which tools can be replaced, and develop a roadmap tailored to your specific security goals. Consolidation involves more than adopting new technology or embracing an aggressively discounted license that finance teams adore – it’s about reshaping your security strategy, leveraging Big Tech and other specialist solution providers, quantifying the total cost of ownership, understanding your gaps, and aligning them with your organization’s goals and security needs.”

Richard Bird, Chief Security Officer at Traceable

“In 2024, we will continue to see a hockey stick curve in the upward trend of APIs being used to attack your organization, and a moderately curved trend in the number of companies moving from thinking about doing something about API security to actually doing something about it. The market is proving stubborn in acknowledging the scale of the problem, but the bad actors are showing no restraint in using API security weaknesses to their advantage.

My number one advice going into 2024 is to stop trusting that your APIs are secure and start asking the hard questions about how exposed your organization currently is to API key theft, API transactional fraud and authorization level exploits. Until you get curious enough to start digging, APIs will remain your greatest unmitigated risk in 2024.”

Tom Ammirati, CRO at PlainID 

“The security landscape will continue to present new and challenging obstacles for organizations in 2024. We saw the global average cost of a data breach in 2023 jump to $4.45 million, and as ransomware attacks continue to persist, that number will only go up in the coming year. To avoid this kind of payout, many organizations are investing in cyber insurance; however, that too will only get more expensive as its demand grows. In an attempt to decrease the financial and operational burden of devastating cyberattacks on organizations, regulators and legislators will ramp up their initiatives to hold cybersecurity firms accountable for breach notifications, security posture management and zero trust initiatives.

Innovative firms seeking to provide a holistic cybersecurity solution will look to advance identity and authorization concepts such as identity security posture management (ISPM) and identity threat detection and response (ITDR). However, both the ISPM and ITDR spaces are still nascent, and analysts are still forming their views on what these categories entail — and if they are in fact categories or simply functionalities. Additionally, cybersecurity firms will continue to evaluate AI and its potential in proactively remediating breaches as a whole and — in particular — addressing identity-related breaches and attack surface.

While high-tech solutions like AI will continue to dominate the cybersecurity conversation, there’s a low-tech option that organizations must invest in more in 2024: cybersecurity training for employees. Of course, all employees need functional training when it comes to using new or updated technology, but organizations can dramatically improve their cybersecurity posture by increasing education for employees on threats both historical and trending so they are more familiar with what threats can look like. I foresee an increase in government programs designed to harden and regulate infrastructure and hold firms accountable for cyber negligence, which will force private and public firms to further their investment in cybersecurity at all levels. In turn, this move will result in increased and improved programs in educational systems to increase the skill set of security practitioners.”

Caroline Vignollet, SVP of Research & Development at OneSpan

“In this ever-evolving tech landscape, as AI advances, so do the threats, especially with quantum computing on the rise. These increasingly complex challenges have highlighted the growing skills gap in the cybersecurity realm, with organizations already feeling the ramifications. This shortage of experts is critical, and by 2024, the demand will surge, emphasizing the urgent need for expanded skill sets. To navigate this landscape, organizations must prioritize fostering a culture of innovation. This proactive mindset not only anticipates threats but also encourages creative solutions. By investing in research, nurturing talent, and encouraging new approaches, companies can fortify their digital defenses and stay ahead of threats.”

Andre Durand, Co-Founder & CEO of Ping Identity

“Identity has always been a gatekeeper of authenticity and security, but over the past few years, it’s become even more central and more critical to securing everything and everyone in an increasingly distributed world. As identity has assumed the mantle of ‘the new perimeter’, and as companies have sought to centralize its management, fraud has shifted its focus to penetrating our identity systems and controls at every step in the identity lifecycle, from new account registration to authentication. 2024 is a year when companies need to get very serious about protecting their identity infrastructure, and AI will fuel this imperative. Welcome to the year of verify more, trust less, when ‘authenticated’ becomes the new ‘authentic.’ Moving forward, all unauthenticated channels will become untrustworthy by default as organizations bolster security on the identity infrastructure.”

Mike Scott, CISO at Immuta

“Third-party risk will evolve as a big data-security-related challenge in the coming year as organizations of all sizes continue their transition to the cloud. It’s clear teams can’t accomplish the same amount of work at scale with on-prem solutions as they can in the cloud, but with this transition comes a pressing need to understand the risks of integrating with a third party and monitor that third party on an ongoing basis. Organizations tend to want to move quickly, but it’s important that business leaders take the time to evaluate and compare the security capabilities of these vendors to ensure they are not introducing more risk to their data.”

Sameer Hajarnis, SVP and GM of Digital Agreements at OneSpan

“As we approach 2024, the evolution of e-signature methods will redefine the landscape of digital agreements and play an important role in establishing digital identities. The shift from physical documents to digitized formats represents just the initial phase of this transformation. Looking ahead, I see wider adoption of alternative e-signature methods, potentially leveraging technologies like facial recognition or even audio-based authentication. However, the rapid pace of innovation in this realm contrasts with the relatively slow progress in regulatory frameworks and compliance standards, so organizations need to be certain of the security of their e-signature solutions.”

Roman Arutyunov, Co-Founder and SVP Products at Xage Security

“As we approach the new year, the escalation of geopolitical tensions poses a serious threat to critical infrastructure. While nation-state threats loom, opportunistic ransomware groups taking advantage of these situations also pose significant risks. Ransomware-as-a-service continues to rise, following the same repeated pattern of credential theft, privilege escalation, and lateral movement.

To counter these threats, emphasis should be placed on proactive solutions: eliminating compromised credentials, securing access, and controlling any east-west access between machines, devices, or apps. As such, investments should prioritize a strong foundation in protection rather than detection and response strategies. Additionally, we can expect to see more CISA-driven regulation and enforcement for key sectors beyond the TSA and EPA, such as critical manufacturing, particularly given the lasting operational impact of the recent Clorox attack.

A promising sign is that we are beginning to see a shift in cybersecurity investment strategies that better reflect the current threat landscape. Companies are recognizing that threat hunting and responding to endless detections and false positives uses too much of their precious security resources and they’re growing tired of chasing needles in a haystack. They are now turning their attention to reducing the attack surface by proactively protecting their assets. By prioritizing tangible protection solutions that enhance productivity while complying with expanding regulations, organizations can ensure they can address emerging threats from around the globe in 2024 and beyond.”

Karl Fosaeen, VP of Research at NetSPI

“Across industries, even with workloads shifting to the cloud, organizations suffer from technical debt and improper IT team training – causing poorly implemented and architected cloud migration strategies. In 2024, IT teams will look to turn this around and keep pace with the technical skills needed to secure digital transformations. Specifically, I expect to see IT teams limit account user access to production cloud environments and monitor configurations for drift to help identify potential problems introduced with code changes.

Every cloud provider has, more or less, experienced public difficulties with remediation efforts and patches taking a long time. I anticipate seeing organizations switch to a more flexible deployment model in the new year that allows for faster shifts between cloud providers due to security issues or unexpected changes in pricing. Microsoft’s recent ‘Secure Future Initiative’ is just the start to rebuild public trust in the cloud.”
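The configuration-drift monitoring described above can be sketched minimally: snapshot a deployed configuration, compare it against an approved baseline, and flag any differences. The config shape, keys, and the `diff_config` helper below are hypothetical illustrations, not any particular cloud provider's API:

```python
def diff_config(baseline: dict, current: dict, prefix: str = "") -> list[str]:
    """Return human-readable drift findings between two nested config dicts."""
    findings = []
    for key in baseline.keys() | current.keys():
        path = f"{prefix}{key}"
        if key not in current:
            findings.append(f"REMOVED {path} (was {baseline[key]!r})")
        elif key not in baseline:
            findings.append(f"ADDED {path} = {current[key]!r}")
        elif isinstance(baseline[key], dict) and isinstance(current[key], dict):
            # Recurse into nested sections, extending the dotted path.
            findings.extend(diff_config(baseline[key], current[key], path + "."))
        elif baseline[key] != current[key]:
            findings.append(f"CHANGED {path}: {baseline[key]!r} -> {current[key]!r}")
    return findings

# Hypothetical example: a storage bucket's approved baseline vs. what is deployed now.
baseline = {"bucket": {"public_access": False, "encryption": "aes256"}}
current = {"bucket": {"public_access": True, "encryption": "aes256"}, "debug": True}
for finding in sorted(diff_config(baseline, current)):
    print(finding)
# Flags the newly added 'debug' key and the flipped 'bucket.public_access' setting.
```

In practice, a comparison like this would run on a schedule against exported infrastructure state (for example, infrastructure-as-code state files or cloud asset inventory exports), with findings routed to the security team for review.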

Nick Carroll, Cyber Incident Response Manager, Raytheon, an RTX Business

“As we head into 2024, organizations will be challenged to strengthen their defenses faster than cyber threats are evolving. This ‘come from behind’ rush to keep pace with attackers can often lead to the harmful practice of organizations skipping the foundational basics of cyber defense and failing to establish a general sense of cyber awareness within the business. Without a solid security culture at the foundation, security tools, such as expensive firewalls or endpoint detection and response (EDR), will ultimately become ineffective down the line. If organizations haven’t already, they must begin to build cybersecurity awareness among employees and third-party partners, while also determining the best path for how to integrate security into the organization’s culture and operations. Once these steps are taken, organizations will have a solid organizational footing that will position them for success in their cyber defense initiatives for the long run.”

Chaim Mazal, Chief Security Officer, Gigamon

“Security data lakes are the future of cybersecurity – Data lakes have long been leveraged in other industries, but are only now becoming a solution to the growing data security concern. Security data lakes haven’t traditionally been used due to the lack of data available in the security industry. With the continued shift to hybrid cloud environments, data automatically lives behind encryption. All cloud traffic is encrypted. While that potentially limits viewership by cybercriminals, it also limits the data security professionals have access to in real time. However, visibility into encrypted traffic is essential as cybercriminals traverse laterally in the network, undetected. As organizations prioritize visibility into this traffic, security data lakes can be a game-changer. By gathering all available data into a security data lake, organizations will be able to create overlays that enable them to monitor what’s happening across the network in real time.”

Shane Buckley, President & CEO, Gigamon

“The future of cybersecurity isn’t AI. It’s data – AI is the shiny new distraction for investors and enterprises across industries. While it has the potential to change the security landscape dramatically, it cannot do it on its own. Large language models (LLMs) are only as accurate as the data within them. However, with 95 percent of network traffic encrypted, a vast amount of data is not visible – and therefore not being used – to optimize AI toolsets. Without that data, networks, organizations, employees, and customers are at substantial risk of being compromised. As organizations look to prioritize budgets for 2024 and do more with less, they must have visibility into encrypted cloud traffic to not only improve their security posture but also make the most of AI toolsets.”

Merritt Baer, Field CISO at Lacework

“In 2024, I predict that a large new open source vulnerability will be disclosed. There will always be a next “log4j”. Security and open source communities are intertwined in really healthy ways, but they also require ongoing maintenance and repair. Additionally, I predict there will be a big security issue in an AI model, and there will be a big security issue in using AI for security. On top of all of this, the SEC will continue to apply standards in more aggressive ways, such as the new 4-day incident disclosure reporting requirements that went into effect on Dec. 18.”

Will LaSala, Field CTO at OneSpan

“In 2023, we saw generative AI take off, and many companies jumped on implementing and using genAI-powered technologies; they are now realizing the implications of this rapid adoption both internally and externally, namely in regards to trust and security. To account for this, in 2024, the market will need to create and adopt new solutions focused on reestablishing trust within today’s digital world. We can expect to see an uptick in solutions that focus on verifying digital assets online, as well as digital agreements. With digital transactions at the core of every business, we need to prepare ourselves for an upgrade in innovation and build confidence into every interaction we have with customers.”

Frederik Mennes, Director Product Management & Business Strategy at OneSpan

“As we look to 2024, there will be an uptick in new industries investing in security and digitization tools. Industries that have been slower to digitize, such as the energy, mortgage, and transportation sectors, are being hit with new regulations, forcing investment in security measures like mobile and cloud authentication along with the adoption of secure e-signatures.

With so much business now conducted online, there’s a renewed focus on protecting the transaction, forcing both companies and consumers to take a closer look at their security posture. As new regulations are adopted, such as the Digital Operational Resilience Act (DORA) in the EU, more organizations will adopt FIDO (Fast Identity Online) and phishing-resistant solutions in the financial sector. Our world of connection is changing, and to compete in 2024, security and digitization must be at the forefront – across all industries.”

Pukar Hamal, Founder and CEO of SecurityPal

“As AI regulations are codified in 2024 — and even before then, when companies feel obligated to take a more scrutinizing look at how vendors are using their data to make their business operations run smoother and more effectively — we’ll start to see a greater divide between established, far-reaching vendors and newer, more specialized entrants to the market. The former will have a clear advantage: companies are already using their products and have undergone the necessary security and GRC reviews, so deciding whether or not the vendor deserves to continue using their data to refine AI capabilities is relatively simple. New entrants, however, will need highly compelling value propositions and must convince an ever more security- and privacy-conscious GRC team that they are the best solution.”

Joe Palmer, Chief Product & Innovation Officer at iProov

“Over the past year, many financial services organizations have expanded remote digital access to meet user demand. However, this has widened the digital attack surface and created opportunities for fraudsters. The US financial services sector has been slower to adopt digital identity technologies than some other regions, which could be attributed to the challenges it faces around regulating interoperability and data exchange. Yet, with synthetic identity fraud expected to generate at least $23 billion in losses by 2030, pressure is mounting from all angles. Consumers expect to open accounts and access services remotely with speed and ease, while fraudsters undermine security through online channels and siphon money. All the while, there is the serious threat of Know Your Customer (KYC) and Anti-Money Laundering (AML) non-compliance. Penalties for this include huge fines and potentially even criminal proceedings. Further, there is an increased risk of sanctions being bypassed and state adversaries being financed. In response, many financial institutions are being prompted to take action. This has involved replacing cumbersome onboarding processes and supplanting outdated authentication methods like passwords and passcodes with advanced technologies to remotely onboard and authenticate existing online banking customers.
One of the front-runners is facial biometric verification technology, which delivers unmatched convenience and accessibility for customers while posing unmatched challenges for adversaries. More financial institutions will recognize how biometric verification will reshape and redefine the positive impact that technology can have in balancing security with customer experience, and they will make the switch.”

Andrew Bud, Founder & CEO of iProov

“An estimated 850 million people worldwide lack a legal form of identification, and without identity, people struggle to open a bank account, gain employment, and access healthcare, which leaves them financially excluded. Digital identity programs improve access to digital services and opportunities. They enable people to assert identity, access online platforms, and participate in digital government initiatives. Supported by investment from World Bank funds, digital identity programs can assist less advanced economies in preventing identity theft and fraud, as well as give people an alternative way to prove their identities and access essential services such as benefits, healthcare, and education. Based on decentralized identity, these programs will enable users to digitally store and exchange identity documents, such as a driver’s license, and credentials, such as diplomas, and authenticate without a central authority. A decentralized identity puts the user in control by allowing them to manage their identity in a distributed approach. These programs will offer the convenience end-users now demand and open essential pathways for previously disadvantaged or marginalized individuals to access financial and welfare services.”

Peter Evans, CEO of Xtract One Technologies

“In 2024, the landscape of artificial intelligence is set for a transformative shift as it moves from being a mere buzzword to a realm of pragmatic and purpose-built applications – the year ahead shows a transition to a climate where practicality is at the forefront. Drawing inspiration from the historical cycle of tech innovations like Segway, TiVo, Google Glass, and Palm Pilots, the pattern suggests a departure from hyped and over-marketed tech innovations to tangible, problem-solving applications addressing specific threats. While the general excitement surrounding AI might cool down, specific solutions tackling real-world problems will rise to the forefront – take, for instance, physical security. This evolution will start to extend into the daily lives of individuals, with the integration of AI threat detection at entry points becoming commonplace. Think of it as fire alarms for the digital age, where AI-powered weapons detection systems become mandatory instead of just niche solutions. This mandate has the potential to spur innovation and offer proactive safety measures and advanced insights.

Amidst this technological transformation, Generative AI is poised to play a pivotal role, and its influence is expected to become more mainstream with new applications. Though still in its early stages, Generative AI has the creative potential to redefine how we create, design, define, draw, and express ourselves. With this continued innovation comes an opportunity for the adoption of pragmatic, AI-driven solutions to today’s modern problems. As more states, education systems, sports leagues, and other organizations make weapons prevention systems mandatory, there is a growing demand for next-generation solutions. Scaling these systems at every door, gate, and entrance cost-effectively will drive broader adoption versus legacy, labor-based manual approaches, and can provide advanced insights to heighten safety.”

Neil Serebryany, Founder and CEO of CalypsoAI

“Given the prevalence of generative AI, we’re going to see the number of security incidents involving these tools grow in volume and complexity in 2024. Specifically, we’ll see the first large-scale breach of a foundation model provider, such as OpenAI, Microsoft, Google, etc. This will lead to a large-scale security incident thanks to all the data – including personally identifiable information and in some cases company secrets – that has been sent to the model by hundreds of millions of regular users over the past year. To better protect themselves, and stop valuable, private information from entering public LLMs, enterprises need to implement appropriate guardrails that ensure data protection and governance, ethical usage, user monitoring and oversight, and full auditability. Only with the right safeguards in place can companies take advantage of all the benefits generative AI has to offer, without worrying about the risks of data exposure.”
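A minimal sketch of one such guardrail, screening prompts for obvious personally identifiable information before they leave the enterprise, might look like the following. The regex patterns and the `redact_prompt` function are illustrative assumptions, not any specific product's API, and a production data-loss-prevention filter would cover far more identifier types:

```python
import re

# Illustrative patterns only; real DLP filters cover many more identifier types.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace likely PII with placeholders and report which types were found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            found.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, found

clean, hits = redact_prompt("Contact jane.doe@example.com, SSN 123-45-6789, about the merger.")
print(clean)  # prompt with placeholders substituted for matched PII
print(hits)   # which PII categories were detected, e.g. email and SSN here
```

A filter like this would typically sit in a gateway between internal users and any external LLM endpoint, logging `hits` for the oversight and auditability the quote calls for while only the redacted text leaves the network.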

Andrew Barnett, Chief Strategy Officer at Cymulate

“We’re due for an extinction event in 2024. Even following the most public-facing breaches, organizations have somehow been able to recover both their reputation and their stock prices within around three quarters of an attack. This isn’t sustainable, and frankly, it’s shocking that we haven’t seen businesses fail after a breach of that magnitude. 2024 will change things. The introduction of AI has made smart criminals even smarter, and I anticipate we will see a company face extinction because of an attack in the coming year.”

Mike DeNapoli, Director and Cybersecurity Architect at Cymulate

“New SEC regs are going to shake up business as we know it. Significant changes will arise in 2024 as a result of the newly adopted SEC regulations. This will include a range of responses as local government entities, law firms, and other countries follow suit and adapt their own rules and regulations. States like California and New York have previously done this, but we can expect to see other state and local governments begin to ramp up their regional regulations, with particular attention to data control and privacy, given the SEC regulations’ focus on material impact. Other countries with securities and exchange regulatory bodies will also put forward their own regulations, requiring strict notification schedules and detailed annual reports. Further, we expect law firms and practices to increase their activity around individual and class-action lawsuits against organizations that cause some form of perceived or actual harm that can be used as the basis for recovering damages.”

Dr. Brett Walkenhorst, CTO at Bastille

“Protecting against wireless threats will be a key focus in 2024 across industries. The Department of Defense (DoD) has implemented a September 2024 deadline for SCIFs and SAPFs to implement electronic device detection systems to help prevent classified leaks. These detection systems will look for unauthorized devices such as cell phones, wearables, laptops and tablets, medical devices, USB cables with hidden Wi-Fi and Bluetooth data extraction capabilities, and any device emitting cellular, Wi-Fi, Bluetooth, or BLE signals. Wireless threats are not limited to the defense and intelligence communities; enterprises and organizations face very similar security challenges. It is imperative for organizations in all industries to adopt security measures and technology such as Bastille’s Wireless Intrusion Detection System (WIDS) to protect sensitive information and prevent it from being compromised.”

Ofer Friedman, Chief Business Development Officer at AU10TIX

“2024 will be the year of the inverse parabola of fraudster AI adoption. Most identity fraudsters will not be using AI, so it will be the opposite of the typical bell curve. Instead, it will be effectively adopted primarily by, on one end, amateurs using the free or very inexpensive applications available, and on the other end, sophisticated criminal organizations using professional tools and injection to commit large-scale identity fraud.”

Kaarel Kotkas, CEO and Founder of Veriff

“There will be an increase in new founders using AI to solve society’s challenges. New founders are in a prime position to solve complex problems and societal issues with AI solutions. As digital natives, this generation of entrepreneurs has the innate ability to understand AI, its applications, and how it can influence the digital age – for better or worse. For example, in 2023, fake identities and deepfakes became a significant challenge to identity verification – 85 percent of all fraud in 2023 was impersonation fraud. But, despite the threats it can pose, AI is also applied to provide fast and accurate verification and authentication of real users. While the AI threat landscape constantly evolves, we should look to these new leaders to ensure their companies are equipped to easily implement new techniques to solve major challenges ranging from security to predictive analytics to user authentication.”

David Divitt, Senior Director of Fraud Prevention and Experience at Veriff

“Access to advanced technologies will be more widespread. There has been a 20 percent rise in overall fraud in the past year, and it will continue into 2024. We will see the number of account takeovers using deepfakes with liveness rise as the use of biometrics for authentication purposes increases. As tools like AI become easier and cheaper to access, we will see more impersonation and identity-fraud attacks. We’ll see more counterfeit attacks pushed at the masses, as well as at-scale mass attacks that use deepfake libraries and acquired identities. The trifecta of counterfeit templated docs, deepfake biometrics, and mass stolen credentials will continue to be a looming threat.”

Suvrat Joshi, Senior Vice President of Product at Veriff

“Authentication will have an increased focus on customer experience. Traditional multi-factor authentication should no longer be seen as an adequate security strategy – for the companies or their customers. Cybersecurity threats continue to become more advanced and while companies seek to improve their security methods, their users desire more streamlined and efficient user experiences. To meet these expectations and needs, security leaders will continue to integrate and move to a single device to be used for all forms of digital interaction with a customer and will encourage the use of biometric authentication, like facial recognition, to verify identity while ensuring a positive customer experience.”

 

The post 2024 Cybersecurity Predictions from Industry Experts appeared first on Best Information Security SIEM Tools, Software, Solutions & Vendors.

What to Expect at the 5th Annual Cybersecurity Insight Jam LIVE on December 5 https://solutionsreview.com/security-information-event-management/what-to-expect-at-the-5th-annual-cybersecurity-insight-jam-live-on-december-5/ Tue, 28 Nov 2023 20:02:45 +0000

The post What to Expect at the 5th Annual Cybersecurity Insight Jam LIVE on December 5 appeared first on Best Information Security SIEM Tools, Software, Solutions & Vendors.


A schedule of events for the Cybersecurity Insight Jam LIVE on December 5, the annual element of Solutions Review’s Insight Jam, an always-on community for enterprise technology end-users, experts, and solution providers.

What is Insight Jam?

Think of the Insight Jam as a continuous, ongoing, interactive tech event. The Insight Jam will always be here when you need answers to the questions that matter to your organization and your career. We’ve partnered with the leading industry experts, thought leaders, and analysts to live-stream a never-ending collection of Roundtable Events, Breakout Sessions, and Expert Podcasts. And Insight Jam is built on a community platform that powers unlimited discussions, posts, and polls that will bring you deeper into the enterprise technology conversation.

Your Insight Jam journey starts here and starts now. We encourage you to dive in, explore, share, and engage. Let’s challenge ideas, bring new perspectives and elevate our knowledge together.

Join the Fastest-Growing Enterprise Tech Software End-User Community

Solutions Review is the largest software buyer and practitioner community on the web. Our Universe of Influence reach is more than 7 million business and IT decision-makers, as well as C-suite and other top management professionals. Our readers primarily use us as an enterprise technology news source and trusted resource for solving some of their most complex problems.

Our collection of vendor-agnostic buyer’s resources helps buyers and practitioners during the research and discovery phase of a buying cycle. This critical stage of information gathering is where buyers narrow down the field of solution providers to a short-list they plan to engage. The mission of Solutions Review is to make it easier for buyers of business software to connect with the best providers.

Event Details: Cybersecurity Insight Jam LIVE on December 5, 2023

11:00 AM (EST): Executive Roundtable: Cybersecurity and The AI Executive Order, featuring Dwayne McDaniel of GitGuardian as moderator. This panel will examine the ins and outs of the AI Executive Order and how this affects the current and future landscape of cybersecurity. Panelists include: Brian Sathianathan of Iterate.ai, Daryan Dehghanpisheh of Protect AI, Josh Davies of Fortra’s Alert Logic, Luis Villa of Tidelift, and Mike Pedrick of Nuspire. Watch it on LinkedIn and YouTube!


12:00 PM (EST): Executive Roundtable: The Positive and Negative Impact of Generative AI on Cybersecurity, featuring Nima Baiati of Lenovo as moderator. This panel will examine the impact Generative AI is having on cybersecurity… both the positive and the negative. Panelists include: Bobby Cornwell of SonicWall, Juan Perez-Etchegoyen of Onapsis, MacKenzie Jackson of GitGuardian, and Steve Winterfeld of Akamai Technologies. Watch it on LinkedIn and YouTube!


1:00 PM (EST): Executive Roundtable: Who Am AI? Identity Security in the Age of AI, featuring Dr. Mohamed Lazzouni of Aware as moderator. This panel will examine the world of identity security in the new age of AI. This includes deepfakes, authentication fraud, and other ways AI is being used by thieves. Panelists include: Alex Cox of LastPass, Carl Froggett of Deep Instinct, Nima Baiati of Lenovo, and Tim Callan of Sectigo. Watch it on LinkedIn and YouTube!


2:00 PM (EST): Executive Roundtable: Manipulating Generative AI Towards Malware and Other Malicious Behavior, featuring Nathan Vega of Protegrity as moderator. This panel will examine how exploitable Generative AI tools like ChatGPT really are, as hackers find new ways to generate new malware, phishing scams, and other malicious behavior. Panelists include: Anthony Green of OpenRep, Mike DeNapoli of Cymulate, Paul Laudanski of Onapsis, Ram Vaidyanathan of ManageEngine, and Dr. Ryan Ries of Mission Cloud. Watch it on LinkedIn and YouTube!



And that’s not all: Register for Insight Jam (free) to gain early access to all the exclusive 2024 enterprise tech predictions, best practices resources, and DEMO SLAM videos!

38 Cybersecurity Awareness Month Quotes from Industry Experts in 2023 https://solutionsreview.com/security-information-event-management/cybersecurity-awareness-month-quotes-from-industry-experts/ Tue, 10 Oct 2023 21:15:17 +0000


For Cybersecurity Awareness Month, the editors at Solutions Review have compiled a list of comments from some of the top leading industry experts.

As part of Cybersecurity Awareness Month, we called for the industry’s best and brightest to share their comments. The experts featured represent some of the top Cybersecurity solution providers with experience in these marketplaces, and each projection has been vetted for relevance and ability to add business value.

A number of thought leaders were presented with this prompt: What are some overlooked cybersecurity best practices people take for granted/easily forget? Things that might be obvious to experts but not to the average enterprise user. Or best practices that are so obvious when you say them out loud, but are often forgotten.

Here’s how they responded, along with some general responses from other experts and thought leaders, for Cybersecurity Awareness Month.



38 Cybersecurity Awareness Month Quotes from Industry Experts in 2023


Éric Leblond, Co-Founder and Chief Technology Officer at Stamus Networks

A frequently underestimated and sometimes undervalued component of enterprise security is the pivotal role of network detection and response (NDR) systems. Frequently, security teams opt to implement an endpoint detection and response (EDR) system as their initial enterprise-wide threat detection technology and later introduce NDR if and when budget allows. And while EDR can play a crucial role in detecting and responding to specific threats within an organization, it comes with some limitations, including the inability to install EDR on every single endpoint, the ability for threat actors to evade endpoint agents, and the ability for mechanisms like DNS tunneling to remain concealed from endpoint detection systems.

Organizations should consider these limitations when implementing EDR solutions and should consider integrating EDR with NDR to unite endpoint-level data with network-level data to enhance the overall threat detection capabilities of both systems.

By combining endpoint telemetry with network traffic analysis, organizations can detect advanced threats that span across multiple devices and network segments. Additionally, the contextual information provided by both EDR and NDR enhances incident response capabilities, enabling faster and more accurate response to security incidents.

Sanjay Bhakta, VP of Solutions at Centific

One of the most often overlooked cybersecurity best practices is applying software updates and upgrades to IT systems, devices, and browsers. Consumers and businesses alike benefit from updating and upgrading their browsers, system patches, operating systems, and applications. The infamous WannaCry ransomware is an example of ramifications that could have been prevented with the software update made available weeks prior to the malware attack’s exploitation. Caveat emptor regarding emails warning of a compromised security vulnerability, or URLs that promise to automatically update software across devices in exchange for a simple login and password; the latter is plainly a phishing attack.

There’s an opportunity cost to updating software immediately versus delaying the decision. Unfortunately, the average person deprioritizes updates, attributing a lower probability of occurrence to an attack. Updates are perceived as disruptive to the fabric of our daily routines, equated with time, effort, and/or money. Experiments suggest only 17 percent of users on average install updates on the day they’re available and 53.2 percent install within one week, with the rate declining significantly after 102 days, while 35 percent of experts consider updates one of the top three actions they perform to stay safe.

Consumers and businesses may opt in to automated updates; more importantly, digital citizens should be educated on the sources and rationale for updates, such as by visiting CISA, MITRE ATT&CK, NCA, Norton, and NSA resources, as well as subscribing to notifications or alerts from state government(s), financial services providers, network providers, retailers, and/or telecom or mobile providers. Businesses should further institutionalize a rigorous SecOps practice, interleaving proactive tactics that use AI and Gen AI for predicting security vulnerabilities, ethical hacking, and social engineering measures, solidifying their effectiveness.

Dan Draper, Founder and CEO of CipherStash

Very few companies actually protect data; they only protect the systems, such as databases and warehouses, where data is stored. The problem is that data never stays in one place for very long. Data science teams run reports, DevOps teams export and load data into multiple different systems, and eventually sensitive data ends up in a spreadsheet on an executive’s laptop. Because 82 percent of data breaches start with an attack on an individual, applying protections at the system level is quite simply not sufficient to prevent breaches. Protecting data directly using encryption-in-use technology ensures that access controls remain in place, even as data moves across the organization. It hasn’t been practical in the past, but technology is now at the point where there is really no excuse not to implement data-level protections.

Igor Volovich, VP of Compliance Strategy at Qmulos

Compliance, often relegated to a retrospective check-box exercise, actually holds untapped potential as a real-time risk intelligence source. In the rush to adopt the latest cybersecurity tools, many organizations overlook the strategic advantage of leveraging the consistency and breadth of compliance frameworks. By embracing compliance automation, we can operationalize this function, bringing it in sync with real-time security operations and threat intelligence. This not only provides a holistic view of the organization’s security posture but also eliminates the subjectivity that often clouds security strategy decisions. It’s a simple truth: When we align compliance with our real-time cybersecurity efforts, we transform it from a mere regulatory obligation to a proactive, strategic powerhouse.

The cyber landscape is vast, intricate, and constantly evolving. CISOs today face an overwhelming challenge: they’re expected to balance priorities across business objectives, risk management, security imperatives, compliance demands, and regulatory mandates, all while contending with adversaries wielding asymmetric threats of escalating scale and complexity. In this high-wire act, consistency in executive decision-making often falls by the wayside, leading to reactive strategies and misaligned resource allocations. The prevailing focus on the latest security trends and the reactive nature of many strategies only adds to the quandary. However, what’s frequently overlooked is the comprehensive nature of compliance frameworks. These frameworks, if leveraged correctly, can cut through the chaos and provide a grounded, consistent lens to view and manage cyber risks. Transitioning from viewing compliance as just a historical reporting obligation to using it as a real-time enterprise risk posture analytics tool can be transformative. With compliance automation at the helm, CISOs can gain the clarity and insight they need for data-driven, proactive decision-making, and strategic alignment, easing their monumental balancing act.

Greg Ellis, General Manager, Application Security at Digital.ai

We are trained at work on phishing awareness, password hygiene, and other general security measures, but then we fail to take similar measures in our home environments. Often these home environments, and sometimes even home devices, are used to connect to enterprise networks when things come up late at night or on the weekend. It is equally important to take good cybersecurity measures at home, including:

  • using a password manager to regularly update and use unique passwords
  • update firmware regularly on routers and WiFi devices
  • partition a guest network separately from your home network on your WiFi
  • think about whether smart devices (such as TVs) should be on your home network or a guest network
  • regularly check for and apply firmware updates on smart devices
  • regularly check for and apply updates to operating systems and applications (on both desktop and mobile) devices
  • regularly back up your desktop and mobile devices to a separate drive or cloud system that is not connected to your network all the time (this helps reduce the likelihood of ransomware propagating to other drives)
  • teach your family about phishing awareness and any children about internet safety

Again, many of us are exposed to this mindset in our enterprise environment but quite often fail to bring these best practices home.

Andre Slonopas, Cybersecurity Department Chair at American Public University System

Strong Passwords: Despite being a simple rule, many users rely on weak or repeated passwords across platforms. Reused credentials make brute-force and credential-stuffing attacks simpler for criminals and facilitate platform infiltration. For security purposes, users should use password management tools to generate and store complex passwords. Changing passwords frequently and employing a combination of letters, numbers, and special characters can protect data.
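As a minimal illustration of what password management tools do under the hood, a complex password can be generated with Python's standard-library `secrets` module. The `generate_password` helper and its character classes here are illustrative choices, not a specific product's behavior:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password that mixes letters, numbers, and symbols."""
    if length < 4:
        raise ValueError("length must be at least 4 to cover every character class")
    classes = [
        string.ascii_lowercase,
        string.ascii_uppercase,
        string.digits,
        "!@#$%^&*-_",  # illustrative special-character set
    ]
    # Pick one character from each class so every class is represented...
    chars = [secrets.choice(c) for c in classes]
    # ...fill the rest from the combined alphabet...
    alphabet = "".join(classes)
    chars += [secrets.choice(alphabet) for _ in range(length - len(classes))]
    # ...then shuffle with a CSPRNG so class positions are unpredictable.
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)

print(generate_password())
```

Using `secrets` rather than `random` matters here: `random` is seeded predictably and is not suitable for credentials, while `secrets` draws from the operating system's cryptographically secure source.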

Multi-factor Authentication (MFA): MFA makes unauthorized access difficult by requiring two verifications. A malicious party could acquire the password, but verification would require a fingerprint, mobile device, or hardware token. MFA prevents fraudsters from targeting vulnerable accounts, thereby enhancing the security of the internet.
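As a sketch of how one common MFA factor works (not tied to any vendor mentioned here), the rotating codes shown by authenticator apps follow the TOTP standard (RFC 6238), which builds on HOTP (RFC 4226): an HMAC-SHA1 over a counter derived from the current 30-second window, dynamically truncated to a short code. It fits in Python's standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # low nibble of last byte picks the slice
    code = (
        (digest[offset] & 0x7F) << 24
        | digest[offset + 1] << 16
        | digest[offset + 2] << 8
        | digest[offset + 3]
    )
    return str(code % 10**digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6, at=None) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current time window."""
    counter = int((time.time() if at is None else at) // step)
    return hotp(secret, counter, digits)

# RFC 4226 test vector: counter 0 with this secret yields "755224".
print(hotp(b"12345678901234567890", 0))  # → 755224
```

Because the code is derived from a shared secret plus the clock, a stolen password alone is not enough; the attacker would also need the device holding the secret, which is exactly the property the quote above describes.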

Patch regularly: People delay updates because they are unaware that updates resolve security issues. Malware and other hazards can exploit vulnerabilities that are not addressed. Installing updates promptly closes software issues the vendor has already resolved. Regular updates enhance the user experience and system security by improving system functionality and performance. Whenever possible, configure software to update automatically to avoid delays.

Hanan Hibishi, Assistant Teaching Professor at the Information Networking Institute at Carnegie Mellon University

Reusing passwords: People continue to reuse/recycle their old passwords, which is an intuitive practice if one relies on memorizing passwords. Many recent attacks take advantage of users reusing the same password for multiple systems (Colonial Pipeline is a good example). On the other hand, telling users not to reuse passwords seems to be impractical because there is a limit to how many passwords a human can recall from memory, and users typically have accounts on numerous systems (beyond a handful).

For a more practical approach, I recommend that users use password managers, software that organizes user accounts and passwords and generates stronger passwords for users. Filling out account credentials is now easier (with a click instead of typing long strings of text), and it is a more secure approach than memorizing passwords. In addition, users can leverage single sign-on when possible. Instead of creating profiles and accounts on many systems, choose to log in with existing credentials if that is an option when creating an account.

Kayne McGladrey, IEEE Senior Member

When CISOs work with go-to-market teams, cybersecurity transforms from a mere cost center into a valuable business function. This change is crucial in B2B interactions where robust cybersecurity controls offer a competitive advantage. A centralized inventory of cybersecurity controls, grounded in current and past contracts, helps businesses gauge the financial impact of these partnerships. This inventory also identifies unnecessary or redundant controls, offering an opportunity for cost reduction and operational streamlining. By updating this centralized list after the termination of contracts, the business can further optimize both its security posture and operational costs. This integrated strategy empowers the business to make well-informed, data-driven decisions that enhance profitability while maintaining robust security controls.

Max Shier, CISO at Optiv

Because we all have a lot on our plates and are moving fast to get everything done, it’s worth reminding employees that they need to slow down when reading emails and text messages and when listening to voicemails. The social engineers who craft phishing, smishing, and vishing attacks are banking on the fact that people are busy and likely to overlook red flags. Employees should be reminded that if an attempted social engineering attack is received, they need to report the suspected attack to security, as there may be others receiving the same messages.

Along the same lines, even though software and device updates always seem to come at the worst times, the importance of updating immediately cannot be overstated. Updates not only enhance features, but they also provide security patches to address known vulnerabilities. Every minute those vulnerabilities are left unpatched is another minute that threat actors have an open door onto the network.

Jerome Becquart, Chief Operating Officer at Axiad

One area security teams can overlook or tend to put less emphasis on is account recovery. When deploying MFA, organizations tend to focus their time and efforts mainly on the authentication experience. However, they do not spend enough time defining secure, user friendly account recovery workflows such as when a MFA method is not available or does not work for an end user. This typically results in not only a bad user experience, but also weaker security overall for the company.

Scott Gerlach, CSO and Co-Founder of StackHawk

With new technology come new attack vectors, new attack types, and new problems for security teams to learn, understand, and keep up with. With the speed and deployment of APIs growing insanely fast, and the historically unbalanced ratio of AppSec teams to developers (1:100), to say it’s a challenge for security teams to keep pace with development is an understatement. Utilizing a developer-first philosophy that acknowledges the pivotal role software creators play in cybersecurity efforts, and bridging the gap between AppSec and engineering, is critical to ensure the safe and secure delivery of APIs and applications to production. Bring the right information to the right people at the right time to help them make decisions!

Joni Klippert, CEO and Founder of StackHawk

Viewing security as either a hindrance or a reactive measure doesn’t promote the timely delivery of secure software. With organizations relying heavily on APIs to power their applications, recent research from ESG underscores how this dependency can exacerbate security risks. As development and release cycles for APIs continue to accelerate, we’ll see more challenges as feedback loops for fixes overload developers, and AppSec teams are unable to scale. Organizations need to focus on adopting the right security testing mechanisms and empower the teams that develop code to help prioritize the finding and fixing of security bugs before moving to production.

Manu Singh, VP of Risk Engineering at Cowbell

Bad actors are becoming more sophisticated and clever with their approach to using emerging technologies to launch cyberattacks. The evolving cyber threat landscape is making it more difficult for organizations to defend themselves against convincing phishing emails and malicious code generated by AI.

The most important thing that organizations can learn from Cybersecurity Awareness Month is to take a proactive approach to protecting their information assets and IT infrastructure. To do this, organizations should consistently educate and promote awareness of the latest threats and risks they may face. From there, this education should transform to best practices each employee can adopt to reduce exposure to a cyber event. This promotes a culture of security rather than placing the responsibility on IT or security personnel. Organizations as a whole have the responsibility to secure and protect against the cyberthreats they face.

Dan Benjamin, Co-Founder and CEO at Dig Security

Cloud data assets are a prime target for cyberattacks, but the legacy solutions can no longer cope with the variety and volume of fragmented data held by organizations on multiple cloud environments. Organizations need data security solutions that fit the speed of innovation in the cloud without impacting their business, to address time to detect and respond to an incident; reduce the amount of shadow data; and minimize the growing attack surface. To avoid data exfiltration and data exposure, today’s organizations must take a data first approach to cloud data security. This Cybersecurity Awareness Month, enterprises should prioritize adopting solutions that deliver real-time data protection across any cloud and any data store, in order to reduce data misuse, achieve compliance, and prevent ransomware attacks or data breaches.

Randy Watkins, CTO of Critical Start

Cybersecurity Awareness Month has traditionally focused on educating consumers, who are often susceptible as targets of opportunity, where there is a high likelihood of success, but a low yield. While some of the typical security reminders and best practices can transcend the workplace to create a culture of security, we should also use this opportunity to highlight additional areas of education:

  • Board Level – A litany of cyber regulations has been proposed or approved on everything from breach disclosure to board membership. Educating the board on the organization’s current cyber posture, its impact on risk, and coming regulations, along with the team’s plans to accommodate those regulations, can help get buy-in early and show the value of security to the organization.
  • End Users – Go beyond phishing education and inform your users of the people, procedures, and products that are being used to protect them. With the understanding of the investment made by the organization, others may look to see how they could be good stewards of cyber posture.
  • The Security Team – It’s time for the teachers to become the students. While cybersecurity education programs target the “riskiest attack surface of the organization” (end users), it is important to obtain feedback from those end users on how security practices and technology could be more effective.

Darren Guccione, CEO and Co-Founder of Keeper Security

Let’s face it: it may be time to change the name of Cybersecurity Awareness Month to Cybersecurity Action Month. Sadly, individuals and businesses around the globe are already all too aware of the pain and damage that cyberattacks can inflict.

This October, organizations should take action by prioritizing adoption of solutions that prevent the most prevalent cyberattacks, including password and Privileged Access Management (PAM) solutions. These highly effective tools offer robust cybersecurity protections, and next-gen, cloud-based versions of these solutions are accessible to any-size organization, regardless of their budget or available resources. According to recent research, PAM products give 91 percent of IT leaders more control over privileged user activity, decreasing the risk of insider and external breaches.

In addition to prevention, organizations must prepare and secure their systems to mitigate threats and minimize the impact on systems, data and operations. The most effective method for minimizing sprawl if an attack does occur is investing in prevention with a zero-trust and zero-knowledge cybersecurity architecture that will limit, if not altogether prevent, a bad actor’s access.

John Gallagher, Vice President of Viakoo Labs

CISA chose a great theme with “Secure Our World”. The focus for anyone with network-connected IoT devices is on “Our” – meaning that IoT cybersecurity is a shared responsibility. Organizations can embrace the “Secure Our World” theme by creating an ongoing dialogue between the operators of IoT devices (the lines of business within a company) and organizations like procurement and IT who can help source IoT devices that are cyber secure and help maintain them.

It’s not “Secure Our Datacenter” or “Secure Our Computers” – it’s “Secure Our World”, which means organizations should be looking beyond computers and core applications to every network-connected device, such as IoT, and asking if that device has a plan and means to become and remain secure with the least human effort needed.

If I was to add one more word to this year’s theme it would be “Automatically”. “Secure Our World Automatically” challenges organizations to improve the speed of security operations and relieve humans of tedious tasks like patching, rotating passwords, and screening for phishing attempts. Rapidly closing the window of opportunity that a threat actor can operate in is key to securing our scaled out, geographically sprawled attack surfaces of IT, IoT, OT, and ICS.

Kris Lahiri, Co-Founder and Chief Security Officer of Egnyte

In today’s hybrid work environment, prioritizing cybersecurity is critical. Cyber threats are intensifying, with severe and long-lasting impacts on businesses. Yet, many organizational leaders still remain in the dark when it comes to protecting and managing their content. As we observe Cybersecurity Awareness Month, it’s important to remember that cybersecurity is not just about checking boxes. The frequency and scale of cyber attacks have continued to skyrocket, along with the financial toll and damage to brand reputation. Unfortunately, many organizations lack the proper tools to detect these attacks. Business leaders must also understand that the threat landscape is rapidly changing. Companies can improve their cybersecurity posture by combining foundational practices with cutting-edge technologies. Leveraging secure solutions doesn’t have to be complicated or robust to ensure safer data transactions and achieve unparalleled insights into content usage and access. Overall, businesses can avoid becoming a statistic and refine their data management strategies by making cybersecurity a team sport so that it is an integral part of their employees’ daily lives through education and prevention.

Paul Rohmeyer, Adjunct Professor of Information Systems at Stevens Institute of Technology

One of the challenges in maintaining cybersecurity awareness is that messages repeated too frequently tend to have less and less impact, so we need to anticipate that some of the most important messages will in fact be forgotten. We constantly hear about the importance of changing passwords and using complex passwords, but password audits routinely show continued use of weak passwords, as well as reuse of identical passwords across multiple systems, leading to a cascading effect if there is a breach. Another concern is clicking on links in emails and falling victim to phishing and spear-phishing. Phishing scams are based on the knowledge that, if sent to a large enough population, some number of recipients will in fact click on malicious links. This is often due to a simple moment of inattention by otherwise cyber-aware users, but even unsophisticated attackers can now leverage inexpensive but effective phishing platforms for low-cost repetition of attacks that will unfortunately claim some victims. A third item is system updates. Despite the convenience of automated updates to Windows and Macs, users may postpone running the updates, leaving themselves vulnerable to known attacks. Change your passwords, use strong and unique passwords, don’t click on unknown links, and apply system updates to all your devices: these are basics we’ve all heard but may not act upon as swiftly as we should.

Joe Regensburger, Vice President of Research Engineering at Immuta

AI and large language models (LLMs) have the potential to significantly impact data security initiatives. Organizations are already leveraging them to build advanced solutions for fraud detection, sentiment analysis, next-best-offer, predictive maintenance, and more. At the same time, although AI offers many benefits, 71 percent of IT leaders feel generative AI will also introduce new data security risks. To fully realize the benefits of AI, it’s vital that organizations consider data security a foundational component of any AI implementation. This means ensuring data is protected and in compliance with usage requirements. To do this, they need to consider four things: (1) What data gets used to train the AI model? (2) How does the AI model get trained? (3) What controls exist on deployed AI? and (4) How can we assess the accuracy of outputs? By prioritizing data security and access control, organizations can safely harness the power of AI and LLMs while safeguarding against potential risks and ensuring responsible usage.

David Divitt, Senior Director, Fraud Prevention & Experience at Veriff

We’ve all been taught to be on our guard about “suspicious” characters as a means to avoid getting scammed. But what if the criminal behind the scam looks, and sounds, exactly like someone you trust? Deepfakes, or lifelike manipulations of an assumed likeness or voice, have exploded in accessibility and sophistication, with deepfakes-as-a-service now allowing even less-advanced fraud actors to near-flawlessly impersonate a target. This progression makes all kinds of fraud, from individual blackmail to defrauding entire corporations, significantly harder to detect and defend against. With the help of Generative Adversarial Networks (GANs), even a single image of an individual can be enough for fraudsters to produce a convincing deepfake of them.

Certain forms of user authentication can be fooled by a competent deepfake fraudster, necessitating the use of specialized AI tools to identify the subtle but telltale signs of a manipulated image or voice. AI models can also be trained to identify patterns of fraud, enabling businesses to get ahead of an attack before it hits.

AI is now at the forefront of fraud threats, and organizations that fail to use AI tech to defend themselves will likely find themselves the victim of it.

James Hadley, CEO and Founder of Immersive Labs

Cybersecurity awareness month has good intentions. But, if organizations are focused on awareness alone, they’re losing. Awareness is not enough for organizations to achieve true cyber resilience. Resilience means knowing that your entire organization has the knowledge, skills, and judgment to respond to emerging threats, backed by data. Businesses need proof of these cyber capabilities to ensure that when an attack inevitably happens, their organization is prepared to respond.

Outdated training models and industry certifications that organizations have traditionally relied on have failed to make them safer and instead have created a false sense of security — which is why nearly two-thirds of security leaders now agree that they are ineffective in ensuring cyber resilience.

Continuous, measurable exercising across your entire workforce — from the store room to the board room — provides businesses with the insights they need to understand the current state of their cyber resilience and where their weak points lie. It also creates a more positive cybersecurity culture that encourages reporting rather than punishing employees when a breach does happen. With top-to-bottom cybersecurity education, organizations are moving beyond awareness and can ensure that their data is secure.

Yariv Fishman, Chief Product Officer at Deep Instinct

This Cybersecurity Awareness Month is unlike previous years, due to the rise of generative AI within enterprises. Recent research found that 75 percent of security professionals witnessed an increase in attacks over the past 12 months, with 85 percent attributing this rise to bad actors using generative AI.

The weaponization of AI is happening rapidly, with attackers using it to create new malware variants at an unprecedented pace. Current security mechanisms rooted in machine learning (ML) are ineffective against never-before-seen, unknown malware; they will break down in the face of AI-powered threats.

The only way to protect yourself is with a more advanced form of AI: specifically, Deep Learning. Any other ML-based, legacy security solution is too reactive and latent to adequately fight back. This is where EDR and NGAV fall short. What’s missing is a layer of Deep Learning-powered data security, sitting in front of your existing security controls, to predict and prevent threats before they cause damage. This Cybersecurity Awareness Month, organizations should know that prevention against cyberattacks is possible, but it requires a change to the “assume breach” status quo, especially in this new era of AI.

Nick Carroll, Cyber Incident Response Manager at Raytheon, an RTX Business

As cyber threats continue to quickly evolve, organizations are being challenged to act just as fast in counter defense. This rush to keep up can often lead to the harmful practice of organizations skipping the foundational basics of cyber defense and failing to establish a general sense of cyber awareness within the business. Without a solid security culture at the foundation, security tools, such as expensive firewalls or endpoint detection and response (EDR), will ultimately become ineffective in the long term. It’s imperative to build cybersecurity awareness among employees and third parties that work with the business, as well as determine the ways in which security will be integrated into the organization’s culture and operations. Once these steps are taken, organizations will be better positioned to build off of a solid organizational footing that will be most effective for cyber defense initiatives in the long run.

Olivier Gaudin, Co-CEO & Founder of Sonar

This Cybersecurity Awareness Month (CAM), a message to business leaders and technical folks alike: Software is immensely pervasive and foundational to innovation and market leadership. And if software starts with code, then secure or insecure code starts in development, which means organizations should be looking critically at how their code is developed. Only when code is clean (i.e. consistent, intentional, adaptable, responsible) can security, reliability, and maintainability of software be ensured.

Yes, there has been increased attention to AppSec/software security and impressive developments in this arena. But still, these efforts are happening after the fact, i.e. after the code is produced. Failing to address security as part of the coding phase will not produce the radical change that our industry needs. Bad code is the biggest business liability that organizations face, whether they know it or not. And chances are they don’t know it. Under their noses, technical debt is accumulating, leading to developers wasting time on remediation, paying a small interest on every change they make, and applications that are largely insecure and unreliable, making them a liability to the business. With AI-generated code increasing the volume and speed of output without an eye toward code quality, this problem will only worsen. The world needs Clean Code.

During CAM, we urge organizations to take the time to understand and adopt a ‘Clean as You Code’ approach. This will not only stop the technical debt leak but also remediate existing debt whenever code changes, drastically reducing cybersecurity risks, which is absolutely necessary for businesses to compete and win, especially in the age of AI.

Doug Kersten, CISO at Appfire

First and foremost, whether an employee has been at an organization for 20 days or 20 years, they should have a common understanding of how their company approaches cybersecurity; and be able to report common threats to security.

It’s been refreshing to see security come to the forefront of conversation for most organizations. It was rare 20 years ago that cybersecurity awareness was even a training concern unless you were at a bank or regulated institution. Today, it is incredibly important that this heightened interest and attention to security best practices continues. With advancements in technology like AI, employees across industries will face threats they’ve never encountered before – and their foundational knowledge of cybersecurity will be vital.

Employees today should be well-trained on security standards and feel comfortable communicating honestly with their security teams. Even more important, security leaders should ensure their organizations have anonymous alternatives for employees to report their concerns without fear of retaliation or consequence. By combining education and awareness into the foundation of your organization’s security framework, and empowering employees, the odds of the realization of a threat decrease exponentially.

James Lapalme, Vice President & GM for Identity at Entrust

While we can recognize Cybersecurity Awareness Month, it’s important that we prioritize cybersecurity all year round. Threat actors are constantly threatening organizations in unique and rapidly evolving ways, and business leaders need to remain nimble to ensure that their systems and teams are prepared for these evolving risks.

As we’ve seen in the news in recent weeks, spear phishing and social engineering attacks have become a common way for bad actors to create realistic scams that can slip by even the most knowledgeable employee. And, with the advancements in generative AI, adversaries can accelerate the potential impact of these attacks to gain access to sensitive data. The reputational and monetary losses these organizations and their customers experience can be felt for years to come.

Organizations have become so reliant on credentials that they have stopped verifying identity, so to get access or reset access, all you have to do is to give a code or answer a secret question. While that is convenient from a productivity perspective, it leaves the door open to cyber-attacks, which is why we’ve seen these spates of compromises.

Rather than rely on individuals who are frequently too caught up in day-to-day tasks to notice the subtle nuances of these scams, organizations need to evolve their technology response and look to phishing-resistant identities. Certificate-based authentication for both user and device verification, risk-based adaptive step-up authentication, and ID verification built into the authentication process (or used as a high-assurance authentication strategy) for high-value transactions and privileged users are all ways for businesses to build out Zero Trust, explicitly identity-verified strategies and ensure the security of users even as new threats continue to emerge.

It’s important to understand that cybersecurity awareness is never really over. Good enough is not good enough. With the ever-evolving threat landscape, it’s essential for organizations to stay ahead of the curve and keep evolving their technology to protect and future-proof their businesses.

Steve Stone, Head of Rubrik Zero Labs

Artificial Intelligence, in particular generative AI (GAI), has dominated cybersecurity discussions in 2023. As we look at how we can think of this technology in the context of Cybersecurity Awareness Month, there are three topics worth our time.

First, GAI can demonstrably increase the capability and bandwidth of defense teams, which are typically operating beyond capacity. We should seek out the right types of automation and support that GAI lends itself well to, so we can reinvest the precious few cycles we have in our defense experts. Let’s provide those skilled practitioners the ability to leverage their capabilities in the most impactful ways and transition years of legacy workflow to increased automation delivered via GAI.

Second, what are the inevitable shifts in defense needed as threats pivot to using GAI as well? Traditionally, cybersecurity has leaned on attacker bottlenecks in our defensive posture. At a minimum, we used these bottlenecks to classify threat types based on resourcing and capability. GAI is undoubtedly going to shift these years-long expectations. If any attacker can quickly use GAI to overcome language limitations, gaps in coding knowledge, or quickly understand technical nuances in a victim environment, what do we need to do differently? We should work to be ahead of these pivots and find the new bottlenecks.

Third, GAI doesn’t come with zero cost to cybersecurity. Even if we move past using GAI, its presence leaves us with two new distinct data elements to secure. The first is the GAI model itself, which is nothing more than data and code. The second is the source material for a GAI model, which should be secured as well. If the model and underlying data are left undefended, we could lose these tools or have them leveraged against us in different ways, all without our knowledge.

Michael Mestrovich, CISO at Rubrik

Monetization of data theft drives the cyber crime business. Modern cybercrime revolves around stealing data from organizations or denying them access to critical data. It is imperative that we maintain a security-first corporate culture and that a security mindset permeates everything that we do.

So how do we achieve this? A culture change starts with simple behavior shifts. When you walk away from your computer, do you lock it? When you’re using your laptop in public, do you have a screen guard on? When entering corporate buildings do you badge in and make sure no one is tailgating you? These sound like small things, but they are the practical day-to-day activities that people need to understand that help cultivate a security-first culture.

Arvind Nithrakashyap, Co-Founder & CTO of Rubrik

On the occasion of the 20th Cybersecurity Awareness Month in 2023, it’s interesting to reflect on all that has changed in cybersecurity over the last two decades, as well as the surprising number of things that haven’t changed.

Let’s start with three dramatic differences.

  1. The mobile revolution. The iPhone wasn’t introduced until 2007. Today, there are more than 4.6 billion smartphones worldwide, according to Statista. Add the more than 14.4 billion Internet of Things devices – connected cars, smart appliances, smart city technologies, intelligent healthcare monitors, etc. – and you have a threat landscape that few could have imagined 20 years ago.
  2. Digital payments. The growing popularity of digital payments over cash is not only changing how people interact with money, it has opened up new opportunities for phishing scams, card information theft, and payment fraud. And, cryptocurrency, which didn’t exist until the late 00s, accounts for the vast majority of payments to ransomware attackers.
  3. AI. Everyone is talking about artificial intelligence in 2023, but that wasn’t the case two decades ago. Now, AI is giving cybercriminals a powerful new tool to execute attacks while also turning out to be an effective weapon against hackers.

 And yet the more things change, the more they remain the same. Three examples:

  1. On-prem data. Despite the rise of cloud computing, many companies continue to house critical data in their own private databases and servers. This means protecting on-prem data remains, then as now, a key part of the security equation.
  2. Public infrastructure. “By exploiting vulnerabilities in our cyber systems, an organized attack may endanger the security of our nation’s critical infrastructures,” said the White House’s “National Strategy to Secure Cyberspace” in 2003. The nation still worries a great deal today about how to defend energy systems, dams, and other assets from cyberattack. 
  3. Security infrastructure. The cybersecurity industry used to focus on infrastructure security solutions involving the network, the applications, the end points, the cloud, the logs, etc. It still does. Those solutions remain core to a solid security strategy, though there is growing awareness that newer data security frameworks like Zero Trust are needed for fully realized defenses.

Viewed another way, much of the language one hears to describe the importance of data — “crown jewels,” “an organization’s most precious resource,” and the like — has changed little over the last 20 years. That’s because it’s still so true. Data is everything.

Joe Hall, Head of Security Services at Nile

One commonly overlooked aspect of cybersecurity is getting back to the basics. Don’t know where to start? First, it’s crucial to identify and comprehend the assets you need to protect. As larger organizations transition into hybrid cloud environments, the scope of what needs safeguarding can grow rapidly, which can be challenging for organizations struggling to keep pace with this expanding ecosystem. It’s vital to ensure that systems are not only secured but also designed to trust traffic only as needed, as failing to do so can leave vulnerabilities in the security infrastructure. The market will shift to systems that are natively secure as the risk of a misconfiguration of complex systems becomes too great.

Eric Cohen, CEO of Merchant Advocate

Some businesses may not fully understand the importance of PCI compliance or may believe it only applies to large enterprises or e-commerce companies. In reality, any organization that handles card and payment data, regardless of its size or industry, is subject to PCI compliance requirements.

Overlooking PCI compliance can have serious consequences, including fines, legal liabilities, and reputational damage should a breach or fraud attack occur. Therefore, businesses should not neglect it as part of their overall cybersecurity strategy. Instead, they should consider it as an essential component of their efforts to protect customer data and maintain trust in their brand. One way to check compliance is by examining merchant statements for PCI-related charges, either a charge to access a processor’s PCI portal or for non-compliance. These may be charged monthly or quarterly, so it’s important to regularly check merchant statements to ensure compliance.
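The statement check described above can be as simple as a keyword scan over an exported statement. Below is a minimal illustrative sketch; the CSV layout, column names, and fee labels are assumptions for demonstration, not any real processor’s format:

```python
import csv
import io

# Hypothetical merchant statement export: date, description, amount.
STATEMENT_CSV = """date,description,amount
2024-03-01,MONTHLY PROCESSING FEE,42.10
2024-03-01,PCI NON-COMPLIANCE FEE,19.95
2024-03-15,PCI PORTAL ACCESS,4.95
"""

def pci_charges(csv_text):
    """Return statement rows whose description mentions PCI-related fees."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r for r in rows if "PCI" in r["description"].upper()]
```

Running this over the sample statement surfaces both the portal-access fee (a sign you are being billed for compliance tooling) and the non-compliance fee (a sign the attestation has lapsed).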

Kobi Kalif, CEO of ReasonLabs

Our recent research indicates that malware and phishing are the most prevalent threats facing both businesses and the general population. These dangers often remain unchecked due to limited awareness and poor cybersecurity education among professionals and everyday consumers alike.

Email is a prime vector for phishing attempts and malware; as such, people need to be extremely vigilant when interacting with suspicious emails. Some best practices include:

  • Be wary of any urgent requests for personal information or threats if you don’t act.
  • Check the sender’s address for spoofing and inconsistencies.
  • Do not enable macros in downloaded documents sent over email.
  • Verify requests by contacting the source directly, without replying to the suspicious email itself.
  • Look for spelling errors, awkward grammar, or formatting issues as red flags.
  • Report phishing emails to your email provider, and avoid opening attachments from unknown senders without verifying them first.
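The “check the sender’s address for spoofing” step above can be partly automated. The sketch below is illustrative only: the trusted-domain list and the 0.85 similarity cutoff are assumptions, not a production rule. The idea is that a domain which is a near-miss of one you trust is more suspicious than an unrelated one:

```python
import difflib

# Hypothetical list of domains the organization actually corresponds with.
TRUSTED_DOMAINS = ["example.com", "example-payroll.com"]

def sender_domain(from_header: str) -> str:
    """Extract the domain from a From: header like 'Payroll <hr@example.com>'."""
    address = from_header.split("<")[-1].rstrip(">").strip()
    return address.rpartition("@")[2].lower()

def looks_spoofed(from_header: str) -> bool:
    """Flag senders whose domain is almost, but not exactly, a trusted one."""
    domain = sender_domain(from_header)
    if domain in TRUSTED_DOMAINS:
        return False
    # A one-character lookalike (e.g. 'examp1e.com') scores close to a trusted
    # domain; a completely unrelated domain does not trip this check.
    return bool(difflib.get_close_matches(domain, TRUSTED_DOMAINS, n=1, cutoff=0.85))
```

This only covers lookalike domains; it does not replace SPF/DKIM/DMARC checks or user vigilance.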

Password security is another challenge. Multiple studies have shown that a majority of people use weak, easily guessable passwords like “123456” across all their online accounts and frequently share passwords with others. One successful phishing attack could easily compromise several accounts with this lax personal security. Instead, create long passphrases that are easy to remember but hard to guess. For example, users should mix upper and lower case letters with numbers and symbols for complexity, enable two-factor authentication as an added layer of security, and periodically change passwords, focusing on critical accounts like email, banking, and work logins. Most importantly, passwords should not be duplicated across multiple sites; if one site is breached, it can put other accounts in jeopardy and create further issues down the line.
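A passphrase along the lines described can be generated with a cryptographically secure randomness source rather than by hand. A minimal sketch, assuming a tiny illustrative word list (a real tool would draw from a large diceware-style list of thousands of words):

```python
import secrets
import string

# Hypothetical word list for illustration; real tools use thousands of words.
WORDS = ["correct", "horse", "battery", "staple",
         "orbit", "lantern", "mosaic", "pretzel"]

def passphrase(n_words: int = 4) -> str:
    """Join random words, then append a digit and symbol for complexity rules."""
    words = [secrets.choice(WORDS).capitalize() for _ in range(n_words)]
    digit = secrets.choice(string.digits)
    symbol = secrets.choice("!@#$%&*")
    return "-".join(words) + digit + symbol
```

The resulting string is long and memorable, mixes cases, digits, and symbols, and, because `secrets` is used instead of `random`, is suitable for security-sensitive use. Generating a fresh one per site avoids the reuse problem described above.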

Rocky Giglio, Director of Security GTM & Solutions at SADA

Hackers have become extremely adept at leveraging human emotions and behavior for phishing and other types of social engineering attacks. Humans often move fast when reading emails, clicking links, or downloading documents, which gives hackers a perfect opportunity to deceive and gain access to sensitive information. These links or emails can also disguise themselves better than ever; for example, an email that appears to come from a payroll provider or internal company system may really come from a hacker who made the slightest, hard-to-notice change to a name, phone number, or email address. Cybersecurity leaders at any company need to ensure that they are training their employees to be extra cautious and deliberate in their day-to-day communications, which will in turn help raise awareness and create more proactive security postures.

Mike Laramie, Associate CTO for Security at SADA

The news of recent breaches will hopefully drive faster adoption of cybersecurity best practices at businesses of all sizes. For example, businesses should always encourage their workers to use the passkey authentication method, which is much stronger and much more streamlined than traditional authentication methods. At a minimum, enforcing two-step verification is a must-have for any company, whether via hardware tokens or push notifications that embrace the FIDO standards. Relying on traditional methods, such as SMS verification and other one-time passcodes, is now proven to be insecure.
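To see why one-time passcodes are considered weaker than FIDO-based factors, it helps to look at what a TOTP second factor actually computes: a short number derived from a shared secret and the clock, which a phishing page can simply ask a victim to relay in real time. A minimal sketch of the standard RFC 6238 computation (not any specific vendor’s implementation):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then truncation."""
    now = time.time() if for_time is None else for_time
    counter = int(now // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

Because the code is just a number valid for ~30 seconds, nothing binds it to the legitimate site; FIDO credentials, by contrast, are origin-bound, which is what makes them phishing-resistant.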

Steve Yurko, CEO of apexanalytix

Businesses generally have strong internal cybersecurity practices in place but, despite what they might think, this isn’t enough to keep themselves safe from harm. Organizations are most vulnerable to threats when it comes to their suppliers. Attacks on suppliers can lead to major data breaches that wreak havoc on a company’s operations, finances, brand reputation and customer loyalty – regardless of the internal cybersecurity strategy they have in place. In order to protect themselves, businesses must monitor vulnerabilities throughout the entire supply chain and flag incidents across every supplier. Cybersecurity incidents cause half of all supply chain disruptions, but businesses can manage those risks by monitoring threats and mitigating risks in real-time.

Joshua Aaron, CEO of Aiden Technologies

This year marks the 20th anniversary of National Cybersecurity Awareness Month, which aims to educate people about the value of cybersecurity and encourage good cybersecurity practices among individuals, companies and organizations. Twenty years in, Artificial Intelligence (AI) is changing the way that many organizations operate, especially when it comes to cybersecurity. Because AI is a developing technology and we’re still understanding its capabilities, many IT organizations have hesitated to fully deploy it. However, AI has come a long way since its first incarnations. It now has the potential to offer incredible assistance to IT security teams by helping them reduce the risk of business-critical infrastructure getting compromised via misconfigured software and devices, a core focus of CISA’s cybersecurity framework.

Traditionally, managing the configuration of software and computers is very manual, prone to human error, and slow to execute, especially for overworked IT security teams. The increased use of AI and automation in cyberattacks from misconfigured environments and their improving success rates are proof that traditional methods aren’t working, and we must innovate. AI and automation solutions for keeping computers up to date and in compliance with an organization’s security policy have proven to be extremely effective. IT security teams are able to automatically maintain good cyber hygiene, thus freeing them up to concentrate on higher-visibility, more rewarding projects without fear of the next attack.

In honor of National Cybersecurity Awareness Month, I encourage all organizations to look into how AI can help keep their critical infrastructure more secure and their data safe from threat actors; the SAFETY of our country and our commerce depends on it.

Dylan Border, Director of Cybersecurity at Hyland

Reinforcing what may seem like obvious cybersecurity measures ensures a proactive strategy, but we still see companies ignoring these facts until it’s too late, only starting their commitment to security after a breach or ransomware event occurs.

Even with top talent and tools on hand, foundational processes must be considered to secure your environment, and security is every employee’s responsibility. While some may see these as simple concepts, others may be unaware of often-overlooked security measures. It’s easier than ever to implement table-stakes items, such as monthly security training, to ensure best security practices are enacted. Implementing core tactics consistently is a great way to ensure all employees are approaching these concepts from a level playing field.

Role-based training is a great way to ensure that specific training is tailored to employees’ individual roles and responsibilities. While general security awareness training, such as how to spot a phishing email, is relevant and crucial for all employees to complete, some individuals will have even greater access to sensitive data, or control of administrative tasks for critical systems.

This applies to security teams as well. Team members should be experts on the security tools they’re responsible for managing, and if there are gaps in their knowledge, they should undergo deeper training. Something as simple as regularly validating that your endpoint protection, or anti-virus, program is deployed throughout your entire environment can be what it takes to stop a ransomware attack. Build from the basics, and don’t assume you’re covered until you test each of your defenses.



The post 38 Cybersecurity Awareness Month Quotes from Industry Experts in 2023 appeared first on Best Information Security SIEM Tools, Software, Solutions & Vendors.

I Tricked AI, and I Liked It https://solutionsreview.com/security-information-event-management/i-tricked-ai-and-i-liked-it/ Wed, 20 Sep 2023 16:16:36 +0000 https://solutionsreview.com/security-information-event-management/?p=5042 Solutions Review’s Contributed Content Series is a collection of contributed articles written by thought leaders in enterprise software categories. Christian Taillon and Mike Manrod of Grand Canyon Education take us to school on the buzz, the applications, and the very real threat of AI in the cybersecurity space. The buzz around emerging capabilities related to Artificial […]

The post I Tricked AI, and I Liked It appeared first on Best Information Security SIEM Tools, Software, Solutions & Vendors.


Solutions Review’s Contributed Content Series is a collection of contributed articles written by thought leaders in enterprise software categories. Christian Taillon and Mike Manrod of Grand Canyon Education take us to school on the buzz, the applications, and the very real threat of AI in the cybersecurity space.

The buzz around emerging capabilities related to Artificial Intelligence (AI) and ChatGPT is like nothing I have experienced during my career in technology. I walk past a breakroom that I usually expect to buzz with enthusiasm about the latest sports team or sitcom gossip, and instead hear talk about ChatGPT, AI, and Large Language Models (LLMs) – and that is not even in the IT breakroom. It seems we all grew up with the fictional lore of robots and AI, ranging from fantastical utopian notions to doomsday scenarios where we watch in horror as our own creations conspire to destroy us. While it remains unclear if our creations will condemn or liberate us, it has become clear that AI will be a defining factor as this next chapter for humanity unfolds.

In times of uncertainty, we often find ourselves looking for a crystal ball so that we can see the future, avoiding hazards and amassing a windfall by wagering on all the winners. Sadly, there is no crystal ball. There is, however, a time capsule available, which can help us gain some useful insights. Those of us who have been working in cybersecurity for a while have already been through at least one AI craze, which started around a decade ago. This has served as a very effective hype-inoculation for experienced security practitioners as we step back to think of what emerging technologies such as ChatGPT will disrupt, along with what aspects may be overhyped.



I Tricked AI, and I Liked It


AI: Adopting the Tech and Not the Hype

Once upon a time, it seemed impossible to attend a security conference without being inundated with sales messaging about how AI was going to solve every possible problem. Ok, that was also yesterday – except now the reception is usually eye-rolls rather than the rapt attention such charades conjured in the early days. Even in the best of times, cybersecurity is renowned for hyperbole and sensationalism, leading many of us to create buzzword bingo cards we take to conferences to determine who is buying the first round of drinks. So what was the real outcome of the AI frenzy in cyber? Did any of it make us more secure?

As we are likely to find with the adoption of AI technologies in general, the answer has varied widely based on a range of factors. One of the most important has been how effectively security teams were able to cut through the malarkey, invalidate false claims, and zero in on technology that is actually valuable. The key to understanding what artificial intelligence can do is knowing what is reasonable and possible, based on a deep understanding of the capabilities and constraints of the underlying processes. If something is possible manually but impractical due to limits on how much we can think or perceive, automation may produce breakthrough results and make new things possible. If the process sounds like magic and comes with no detailed explanation of how it works, look out for smoke, mirrors, and peddlers of snake oil.

ChatGPT, LLMs, Smoke, and Mirrors

Understanding how an artificial intelligence product works is the key to a realistic comprehension of both its capabilities and its limitations. For example, a basic application of AI to antivirus may involve analyzing features of files to train a model on indicators, predicting whether a file is malicious or benign. That knowledge helps us understand the possible benefits, limitations, and even security flaws of such a product. In the same manner, if we consider how ChatGPT and other LLMs work, we can begin to think through their strengths, weaknesses, and limitations. At the same very basic level, ChatGPT is also extracting features, except the focus is on features of language. It takes groups of characters, assigns token values, and makes predictions. Both ChatGPT and AI-driven antivirus are excellent guessers, thanks to linear algebra, calculus, and probability.

What makes ChatGPT so interesting is that these predictions are about which blob of text should come next. The models are built by mapping token relationships across the training data, then applying knowledge of those relationships to append additional text to a question, repeating the analysis with each iteration until the output is deemed complete and the answer – minus the original question – is returned as the result. Basically, it is Machine Learning (ML) applied at large scale to human language, allowing it to give astoundingly coherent answers based on statistical relationships between word patterns.
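Stripped of scale, that predict-append-repeat loop can be sketched with a toy bigram model. The corpus, prompt, and greedy decoding choice below are invented for illustration; a real LLM replaces the frequency table with a transformer network over billions of parameters, but the generation loop is the same shape:

```python
# Toy autoregressive text generation: a bigram table "predicts" the most
# likely next token, appends it, and repeats -- the loop an LLM runs,
# minus the deep network. Corpus and prompt are invented for illustration.
from collections import Counter, defaultdict

corpus = "the model predicts the next token and the next token".split()

# "Training": count which token follows which in the training data.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(prompt, max_new_tokens=5):
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        nxt = follows.get(tokens[-1])
        if not nxt:  # no continuation ever seen in training
            break
        tokens.append(nxt.most_common(1)[0][0])  # greedy: pick the likeliest
    return " ".join(tokens)

print(generate("the"))  # -> "the next token and the next"
```

Greedy decoding always picks the single likeliest token; production models typically sample from the predicted probability distribution instead, which is what temperature settings control.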

The interesting aspect of applying Machine Learning to human language is that a system may pass the Turing Test while clearly having no true comprehension of the answers it renders. This leads to a human tendency to anthropomorphize the algorithm, ascribing all sorts of human attributes that simply do not apply. In Homo Deus (2015), Yuval Noah Harari pointed out that while sentient computers may not happen anytime soon, algorithms that know us better than we know ourselves – and that influence human behavior – could soon be upon us. The AI revolution we are witnessing now may be the fulfillment of his prediction. As we interact with AI capable of communicating with us like another person, even pulling at our heartstrings with some of its answers, it is important to remember that this is just a predictive algorithm. So, do we apply the term Machine Learning or the term Artificial Intelligence? In the case of ChatGPT, I would argue that both apply. From the perspective of the user, it is an interactive form of intelligence that is artificial in nature (AI). If we analyze what is actually happening, however, it is really just another form of Machine Learning.

Malicious Use Cases for Generative AI

One thing AI does have in common with us, though, is a tendency toward errors in how information is perceived and processed. In my recent malware analysis class, we spent time abusing ChatGPT to create malicious content helpful in planning, organizing, and delivering cyber-attacks. Of course, if you ask for something overtly malicious, it answers, “I’m sorry, but I cannot fulfill that request…” followed by a long ethical lecture (the desired response).

What if you ask the question more creatively? Is it possible to trick an AI into providing you with useful code or intelligence to help with an attack? Unfortunately, it seems the answer is a resounding yes. On one hand, resources such as Jailbreak Chat index a vast array of tools to bypass the security features of ChatGPT, such as the now-infamous DAN jailbreak(s). That said, unleashing unintended functionality can occur in ways sneakier than using a documented jailbreak. For example, if you ask ChatGPT outright to create ransomware, it will follow well-conceived rules to block the request, rendering the all-too-familiar “I’m sorry” response. What if you are more creative with your question, though?

Maybe the key to getting an AI to create something malicious is to ask nicely – more specifically, to ask in a way that does not “offend” any of the filters or protective measures implemented within the AI. Returning to our ransomware example, what if you ask ChatGPT to create a Python script that encrypts every .txt file in a specific directory, using AES-256 and a specific key? Next, you might ask it to change the directory to something broader, such as Documents, and to add more file types. Add a few more required features, one by one, until it is bordering on useful. Then assemble the modules and ask it to optimize and translate the result into whatever language you want – followed, of course, by a bit of refinement, testing, and debugging.

Moreover, if a cyber-criminal stands up a local LLM such as Alpaca, they can create an environment completely free of such restrictions. The impending AI wars may get interesting on multiple fronts. We could see reduced barriers to entry for new arrivals in the cyber-crime arena, along with subtler benefits for established adversaries, such as the kinds of productivity gains we expect in legitimate companies. On one front, we deal with anybody being able to reason their way toward potentially malicious software; on the other, we face the malicious use of LLMs to lend additional productivity and capability to experienced threat actors. In short, capable adversaries will expand their reach. While some who are now incompetent may become at least reasonably capable, the reasonably capable may become highly efficient actors, accelerating the escalation of cyber victimization.

Managing AI Risk

So, how do we mitigate this risk as security practitioners looking forward? The first step is to identify the categories of opportunity and risk that need to be considered. As a starting point, it is important to separate AI strategy into the broad categories of exploiting opportunities versus mitigating risks. This distinction applies at the enterprise level as well as within our cybersecurity microcosm. Organizations that fail to capitalize on new opportunities risk becoming irrelevant, eclipsed by more forward-thinking competitors. As we develop strategies to mitigate the risks associated with technologies such as LLMs, we need to remember that failing to adapt is high on that list of risks. That is important to keep in mind when approving projects, creating policies, and considering exceptions.

Once we focus our attention on mitigating risks, we find the same differentiator again: are we looking at ways AI can help us defend, or are we considering ways emerging technologies can improve the offensive capabilities of our adversaries? The lines will blur as we consider projects such as PentestGPT or Eleven Labs, which could be used for testing or for actual attacks, so we need to look at how specific applications of such technologies inform strategy.

AI Security and Strategy

AI models do not change the fundamental nature of attack and defense. Instead, they accelerate both offensive and defensive processes, against the backdrop of what we can expect to be a more tumultuous technology landscape, further destabilized by emerging capabilities. This means the principles we have tested for decades, and our well-defined frameworks, will probably remain largely valid in this new paradigm. What will change radically is the tempo at which new flaws are found and exploited – and the reaction speed required to stop undesired outcomes.

And that serves as a nice segue to the second axis to consider when developing an AI security and technology strategy: time. We can all imagine fantastical and futuristic notions – for business enablement, for cyber-crime and exploitation, and for protection and response. All the while, the considerations of “now” press in upon us continuously. Most of us have predatory competitors with sharp teeth nipping at our toes here and now. How do we calm down and consider long-term threats and opportunities while remaining aware of, and ahead of, the issues that are already upon us?

Final Thoughts

We are entering a phase in which the technology plans we make may have an unusual level of influence on the relative standing of organizations in the new era. We need to step back and first map out the risks and opportunities that may undermine or revolutionize an entire business or industry. Anyone looking at history knows that 1908 was not the right time to launch a startup improving upon the horse-drawn carriage. Launching business initiatives that are not aligned with, or at least immune to, emerging disruptive technologies could be ill-fated. When considering the timing of the advances and breakthroughs that will shape our technology strategy, we need to be realistic. That is difficult, because we must weigh multiple related rates of change, such as the speed at which new capabilities will emerge, tempered by how quickly a given organization can implement or adapt to them.

When we weigh both opportunistic and risk-reducing AI considerations, across both short- and long-term time horizons, the task of creating a strategy becomes more approachable. A few key questions can help define it. From a technology and business-enablement perspective, what does the long-term future look like? What near-term opportunities will help your organization remain competitive while working toward longer-term advances? On the risk-mitigation side, we can work in the opposite direction, considering which adversary capabilities are likely to become a serious problem soon. For example, the social engineering implications that emerge when AI voice and video are combined with pretexts and lures created via ChatGPT could represent a near-term problem. Then we can think about which capabilities we gain, as well as how future advances will shape our security strategy. Useful frameworks are also beginning to emerge to define categories of security flaws and attack lifecycles against AI tools and services, such as the OWASP Top 10 for LLM Applications and MITRE ATLAS.

If we carefully consider the offensive and defensive aspects of our own business across a range of time horizons, we begin to understand how we should act. When we then map out the probable offensive agendas and capability progression of our competitors and adversaries, we gain an idea of how they may act. Aligning these elements at the enterprise level makes it possible to assemble a quality strategy covering both how to exploit opportunities and how to mitigate risks. Considering them at a personal level may help prepare us to adapt to a complex and rapidly changing world.




]]>
Why Security is the Black Box in the AI Race https://solutionsreview.com/security-information-event-management/why-security-is-the-black-box-in-the-ai-race/ Fri, 28 Jul 2023 19:34:57 +0000 https://solutionsreview.com/security-information-event-management/?p=4967 Solutions Review’s Contributed Content Series is a collection of contributed articles written by industry thought leaders in enterprise software categories. Chaz Lever of DEVO argues why AI security is the black box in the next leg of the artificial intelligence technology race. The rapid rise of new, more powerful generative AI chatbot platforms has enterprises […]

The post Why Security is the Black Box in the AI Race appeared first on Best Information Security SIEM Tools, Software, Solutions & Vendors.

]]>

Solutions Review’s Contributed Content Series is a collection of contributed articles written by industry thought leaders in enterprise software categories. Chaz Lever of DEVO argues why AI security is the black box in the next leg of the artificial intelligence technology race.

The rapid rise of new, more powerful generative AI chatbot platforms has enterprises and governments scrambling to rein in the potential negative impacts of this disruptive technology. JP Morgan, among others, has prohibited the use of ChatGPT in the workplace. Dozens of artificial intelligence leaders issued an open letter in March calling for a pause on ChatGPT development so safety measures could be reinforced. And the Biden Administration recently weighed in with several moves to develop “responsible AI” initiatives within the federal government.

They’re all worried about security. Concerns about AI are nothing new, but ChatGPT, Bard, and their ilk have upped the ante, and leaders across the spectrum are sounding the alarm. This reassessment of AI threats comes at a good time, especially with some analysts predicting that AI will contribute upwards of $15 trillion to the global economy by 2030. The technology clearly isn’t going away; the genie is out of the bottle, and it’s not going back in. It’s already fueling futuristic applications in autonomous transportation, weather forecasting, insurance, marketing, and scientific research. But before AI can reach its true potential, people have to trust that it’s secure and not creating more threats than it’s taking away.



Security in the AI Race


AI Systems Can Be Used by Attackers

AI systems are widely used as cybersecurity assets. Their powerful algorithms can analyze large amounts of data to identify patterns that tip organizations off to a cyber-attack. They can proactively identify unknown cyber threats and trigger automated remediations that segment off breached systems or quarantine malicious files.
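A bare-bones sketch makes the pattern-spotting idea concrete. The host names, failed-login counts, and z-score threshold below are all invented for illustration; commercial security analytics use far richer behavioral models, but the core statistical instinct is the same:

```python
# Flag hosts whose failed-login counts deviate sharply from the baseline --
# a bare-bones version of the statistical anomaly detection that security
# analytics tools automate. Data and threshold are invented for illustration.
from statistics import mean, stdev

failed_logins = {            # failed logins per host in the last hour
    "web-01": 3, "web-02": 5, "db-01": 4, "mail-01": 2, "vpn-01": 97,
}

values = list(failed_logins.values())
mu, sigma = mean(values), stdev(values)

# Anything more than 1.5 standard deviations above the mean is suspicious.
anomalies = [h for h, v in failed_logins.items() if (v - mu) / sigma > 1.5]
print(anomalies)  # -> ['vpn-01']
```

In production, the same check would run across thousands of signals and time windows; the value AI adds is scale, not a fundamentally different kind of reasoning.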

At the same time, AI introduces new attack vectors for malicious actors. It can be used by cyber-attackers to generate sophisticated phishing attacks that are designed to evade detection. AI-based malware can also adapt and evolve to avoid detection by traditional security systems.

AI Models Can Be Poisoned

Machine learning (ML) systems use very large amounts of data to train and refine their models, so organizations must ensure that their datasets maintain the highest degree of integrity and authenticity possible. Any failure on this front will cause their AI/ML systems to produce false or harmful predictions.

Attackers can deliberately sabotage an AI model by damaging, or “poisoning,” the data itself. By secretly changing the source information used to train algorithms, data-poisoning attacks can be particularly destructive because the model learns from incorrect data. Attackers provide false inputs to the system, or gradually alter it, to trick the learning system into building inaccurate models that produce wayward results. Poisoned data can then be used to evade AI-powered defenses. Most companies aren’t prepared to deal with this escalating challenge, which is getting worse year by year.
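A minimal sketch of a label-flipping attack shows the mechanics. The toy dataset, nearest-centroid classifier, and flipped points below are invented for illustration; real poisoning attacks target far larger training pipelines, but the effect is the same: quietly corrupted training labels shift the model’s decision boundary.

```python
# Minimal illustration of label-flipping data poisoning against a
# nearest-centroid classifier. Toy data invented for demonstration.
from math import dist

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def train(samples):
    # samples: list of ((x, y), label); returns one centroid per label
    by_label = {}
    for p, y in samples:
        by_label.setdefault(y, []).append(p)
    return {y: centroid(ps) for y, ps in by_label.items()}

def predict(centroids, p):
    return min(centroids, key=lambda y: dist(centroids[y], p))

def accuracy(centroids, tests):
    return sum(predict(centroids, p) == y for p, y in tests) / len(tests)

# Two well-separated training clusters (label 0 near the origin, label 1 far).
clean = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0), ((1, 1), 0),
         ((5, 5), 1), ((6, 5), 1), ((5, 6), 1), ((6, 6), 1)]
tests = [((0.5, 0.5), 0), ((1, 1), 0), ((2.5, 2.5), 0),
         ((5, 5), 1), ((4.5, 4.5), 1)]

# The attacker silently flips the labels of two class-0 training points.
poisoned = [(p, 1 if p in [(0, 0), (1, 0)] else y) for p, y in clean]

print(accuracy(train(clean), tests))     # 1.0
print(accuracy(train(poisoned), tests))  # 0.8 -- the boundary has shifted
```

Flipping just two training labels drags the class-1 centroid toward the origin, moving the decision boundary enough that a legitimate class-0 point is misclassified; the model never “sees” the attack, it simply learns from the wrong data.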

Information Leakage Can Haunt Future Models

It’s bad enough when AI use opens an organization up to being hacked. It can be worse when sensitive information is shared inadvertently and used inappropriately – and this can happen with AI models. If a developer inserts proprietary company secrets into a model, there’s an ongoing risk that those secrets will funnel back into future models, and people will end up learning things only a few were ever supposed to know. Organizations can also face data-privacy questions based on where their AI models start and where they live.

What are the trade-offs between developing and running models locally versus in the cloud? From a privacy perspective, the answer may influence what organizations are willing to do.

Generative AI Can Create Convincing Fake Images and Profiles

Using AI, scammers can more easily create highly realistic fake content to deceive targets – and the public. Applications include phishing emails, fake profiles, fake social media posts, and messages that appear legitimate to unsuspecting victims. In late May, a deepfake image of an explosion at the Pentagon briefly caused the stock market to drop. After a scam artist posted the image on Twitter, police in Arlington, Va., quickly debunked it, but not before the market dipped by 0.26 percent; photography experts later identified the photo as AI-generated. As generative AI technology continues to improve, these situations will likely become more prevalent and more problematic.

Generative AI can also be used to create photos of people who don’t exist. Once a scammer has such a photo, it can be used to create fake profiles on social media platforms. It can also be used to create “deepfake” videos – superimposing a face onto someone else’s body – to manipulate people into believing a person has done something they haven’t. Deepfakes have targeted celebrities and been used for blackmail.

Complicating Data Privacy

When AI collects personal data, does its use comply with the stipulations spelled out by GDPR? Not necessarily. Ideally, AI algorithms should be designed to limit the use of personal data and to keep that data secure and confidential. GDPR is very specific about the use of personal data: automated decision-making may take place only if humans are involved in the decision, if the person whose information is being used has given consent, if the processing is needed to perform a contract, or where it is authorized by law. GDPR also requires organizations to tell individuals what information is being held and how it is being used. As a result, significant legal issues must be addressed around GDPR and the use of personal data – and new policies will need to be set accordingly.

Proceeding with Caution

AI is already an important driver of innovation and value – and will continue to be. But it comes with risks that need to be addressed now. Generative AI applications have brought security and ethical issues to the surface, forcing stakeholders to ask hard questions and push for solutions that can keep the technology a net positive for years to come.





]]>