Let’s face it: the cybersecurity market is confusing. With ever-changing acronyms and technical language that only experts can parse, the marketplace, including critical cybersecurity tools and resources, is inaccessible to the average user. Vendors exacerbate these barriers to entry when they slap artificial intelligence (AI) and machine learning (ML) labels on their products to appear innovative and keep pace with the industry.
There is a common misconception that the AI label automatically improves a cybersecurity solution when this is far from the truth. Organizations don’t need AI or ML tools to improve cybersecurity.
Advantages and disadvantages of AI and ML
While AI and ML can be beneficial, small teams shouldn’t assume they need AI for threat detection and response or overall security.
In many cases, AI is effective only for threat detection, which does not by itself resolve threats. Even detection via AI can create problems: AI and ML are often positioned to perform anomaly detection, which brings unknowns to the surface, but it also flags unknowns that have no impact on security. To reduce this noise, security teams must train the existing model continuously, incorporating a strong feedback loop into the training data. That retraining takes effort and money on top of the cost of investigating the finding itself.
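The feedback loop described above can be sketched with a toy model. This is purely illustrative, not any vendor’s approach: a simple z-score anomaly detector where analyst feedback on false positives is folded back into the training data.

```python
# Toy anomaly detector with an analyst feedback loop (illustrative
# sketch only; class and method names are hypothetical).
from statistics import mean, stdev

class FeedbackDetector:
    def __init__(self, baseline):
        self.samples = list(baseline)   # training data the model learns from

    def is_anomaly(self, value, threshold=3.0):
        """Flag values more than `threshold` std devs from the mean."""
        m, s = mean(self.samples), stdev(self.samples)
        return abs(value - m) > threshold * s

    def mark_benign(self, value):
        """Analyst feedback: fold a false positive back into the
        training data so similar events stop being flagged."""
        self.samples.append(value)

det = FeedbackDetector([10, 11, 9, 10, 12, 10, 11])
before = det.is_anomaly(30)   # flagged: far outside the baseline
det.mark_benign(30)           # analyst confirms it was harmless
after = det.is_anomaly(30)    # no longer flagged after retraining
print(before, after)          # True False
```

The point of the sketch is the cost the article describes: every `mark_benign` call represents analyst time spent investigating and labeling a finding before the model improves.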
That’s not the only way AI can create more work for a security team. AI and ML can supplement a team of security operations center (SOC) analysts by helping them sift through false positives, but SOC staff must understand the output and feed corrections back into the model; otherwise they waste time on an untrained one. Without data science expertise, organizations end up reviewing the results of a poorly trained model, which ultimately adds more time to their day.
Cybersecurity needs the human element
AI and ML are not secret weapons that eliminate the need for human decision-making; human judgment is irreplaceable. For example, creating detection rules based on attack paths, intelligence on emerging threats, and new vulnerabilities requires context, research, and creativity. An AI could write rules, but only within the context its authors gave it. Tracking impending attacks through research, reproducing them, determining where in the stack they can be detected, and creating detections and playbooks is a uniquely human effort that AI can support but cannot carry out alone.
Additionally, AI cannot evade every offensive tactic aimed at it, and attackers can learn the weaknesses of an AI-powered system. Every implementation has characteristics that make it vulnerable to attackers who map its guardrails through fuzzing or similar probing. Evasion can be as simple as having an application wait 30 minutes before executing the actual malicious payload, outlasting the behavioral evaluation period of Next Generation Antivirus (NGAV) or ML-based anti-virus software.
Ensure security without AI-powered tools
AI does not replace the human element in cybersecurity. For smaller organizations that may not be able to capitalize on AI tools or have a security team, leveraging tools backed by a real support team is essential. Working with outsourced security experts can ease the burden on under-resourced teams. Partnering with a solution provider with a SecOps team to provide additional guidance can help companies respond to and prevent future issues.
Teams can also supplement with automation – for example, using automated blocklists. Automation, along with an organization’s internal documentation/rules on how that automation is applied, is an important first step for most businesses. Teams that have documented how they intend to respond to security or operational issues, and can use the data for those issues, are headed in the right direction.
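An automated blocklist can be as simple as the sketch below, assuming a team-maintained list of blocked networks; the names and the file-free format are hypothetical, not tied to any product.

```python
# Minimal automated IP blocklist check (illustrative sketch; the
# networks shown are documentation ranges, not real threat data).
from ipaddress import ip_address, ip_network

# Blocked networks, e.g. loaded from a team-maintained list.
BLOCKLIST = [ip_network("203.0.113.0/24"), ip_network("198.51.100.7/32")]

def is_blocked(addr: str) -> bool:
    """Return True if the address falls inside any blocked network."""
    ip = ip_address(addr)
    return any(ip in net for net in BLOCKLIST)

print(is_blocked("203.0.113.45"))  # True: inside the blocked /24
print(is_blocked("192.0.2.10"))    # False: not on the list
```

The value comes less from the code than from the documentation around it: who maintains the list, when an address gets added, and how a wrongly blocked address gets removed.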
There are a few additional ways for small IT or security teams to provide security, including:
- Use honeypots to their advantage: Honeypots are a low-cost way to attract attackers and detect real threats, such as Remote Desktop Protocol (RDP) attacks.
- Leverage existing security features: Use security features included with tools the organization already uses, such as multi-factor authentication (MFA), phishing protection, and alerts in Microsoft 365.
- Back to the basics: Attackers have only so many ways to infiltrate an environment. Good security hygiene, for example closing ports exposed to the internet, enabling MFA, and monitoring behavior in the environment, can prevent the majority of attacks.
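The honeypot idea above needs very little machinery. Here is a minimal sketch, assuming that simply logging who connects to an otherwise unused port (such as a fake RDP listener) is enough to surface scanning activity; the function names are hypothetical.

```python
# Minimal TCP honeypot sketch: nothing legitimate should ever touch
# this port, so every connection is worth an alert.
import socket
import threading

def start_honeypot(host="127.0.0.1", port=0):
    """Bind a listener (port 0 picks a free one for the demo; a real
    deployment would use a tempting port like 3389 for RDP)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    return srv, srv.getsockname()[1]

def log_one_connection(srv, log):
    """Accept a single connection and record the source address."""
    conn, addr = srv.accept()
    log.append(addr[0])   # source IP, ready to feed an alert or blocklist
    conn.close()

# Demo: simulate an attacker probing the port.
srv, port = start_honeypot()
hits = []
t = threading.Thread(target=log_one_connection, args=(srv, hits))
t.start()
socket.create_connection(("127.0.0.1", port)).close()
t.join()
srv.close()
print(hits)  # ['127.0.0.1'] — the probing source was captured
```

Because the port serves no real workload, the log is almost free of false positives, which is exactly the property small teams lack in AI-driven anomaly detection.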
See beyond the hype
AI is typically accessible only to large enterprises with SOC teams; for smaller organizations with fewer resources and less budget, implementing such tools is too expensive and time-consuming. AI is not only too costly but, in most cases, unnecessary for small teams that have other fires to fight within the infrastructure.