You cannot protect what you cannot see. Without an asset inventory, the business has no visibility over what is connected to its environment. Breaches involving unknown assets are significantly harder to detect and remediate, and incident response cannot scope the impact of an attack without a baseline.
Unauthorised and unmanaged devices represent an uncontrolled attack surface. Rogue devices on the network can introduce malware and enable lateral movement. Personal devices accessing corporate data without MDM controls create significant risk of data leakage, as the business cannot enforce encryption, restrict data sharing, or remotely wipe a lost or compromised device. In modern cloud-first environments, conditional access and device compliance policies are the primary mechanism for addressing both risks.
Critical hardware without active maintenance contracts may become unserviceable or fall out of vendor support without warning. Lapsed contracts can result in extended downtime, inability to obtain replacement parts, and loss of vendor security patches.
End-of-life operating systems receive no security patches, creating a permanently vulnerable attack surface. Threat actors actively scan for and exploit known vulnerabilities in unsupported OS versions (e.g., Windows Server 2012 R2, Windows 7, unsupported Linux distributions). Unlike application vulnerabilities, an unsupported OS cannot be mitigated by patching - the only resolution is migration. Every day an end-of-life OS remains in production increases the risk of exploitation.
Unauthorised software, including shadow IT, cracked tools, and known-vulnerable applications, is a common initial access vector for ransomware operators. Without scanning and detection, any software can be installed and run without the business being aware.
Without a data management process, the business cannot identify what data it holds, where it is stored, or what obligations apply. This creates significant risk of regulatory breach (GDPR, HIPAA), inability to respond to data subject requests, and poor incident response when data is compromised.
Without a data inventory, compliance with data subject access requests, breach notification obligations, and data minimisation principles is practically impossible. The business cannot protect data whose existence and location it does not know.
Without access control lists, users may have unrestricted access to data across systems. This creates high risk of data exposure, insider threat, and excessive blast radius in the event of account compromise. A single compromised account gains access to everything.
Retaining data beyond its lawful basis creates regulatory risk (e.g., a GDPR breach) and increases the volume of data in scope during a breach. Conversely, deleting data too early can breach legal hold obligations and create legal and operational risk.
Devices disposed of without data sanitisation are a well-documented and recurring source of data breaches. Hard drives sold on secondary markets, donated equipment, and recycled hardware have all been found to contain residual sensitive data in widely reported incidents.
A lost or stolen unencrypted laptop or mobile device provides direct, unimpeded access to all stored files, emails, and credentials. Device theft and loss is one of the most common causes of reportable data breaches and one of the easiest to prevent.
Data transmitted in cleartext can be intercepted by any actor with access to the network path - whether through a compromised network device, a man-in-the-middle position, or monitoring of unencrypted Wi-Fi. Internal traffic is not exempt: lateral movement following an initial compromise frequently involves intercepting unencrypted internal communications to harvest credentials and sensitive data. TLS 1.0 and 1.1 have known vulnerabilities and are deprecated by all major browser vendors and standards bodies. Expired or misconfigured certificates can be exploited to intercept traffic and cause service outages.
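As a minimal sketch of enforcing the deprecation in client code, Python's standard `ssl` module lets an application refuse TLS 1.0/1.1 outright while keeping certificate verification enabled:

```python
import ssl

def modern_tls_context() -> ssl.SSLContext:
    """Client-side context that refuses deprecated TLS versions.

    create_default_context() enables certificate verification and
    hostname checking; minimum_version (Python 3.7+) rejects any
    peer that can only negotiate TLS 1.0 or 1.1.
    """
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

Server-side software (web servers, load balancers) exposes equivalent minimum-version settings; the principle is the same: deprecated protocol versions should be disabled at the endpoint, not merely discouraged.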
Generative AI tools (ChatGPT, Copilot, Gemini, Claude, and AI features embedded in SaaS products) create a new data leakage vector that traditional security controls do not address. Staff pasting customer data, source code, financial records, or internal documents into AI tools are sharing that data with an external service, often in ways that bypass DLP, access controls, and data classification policies. The risk is compounded by shadow AI adoption, where business units procure AI tools without IT or security visibility. Unlike traditional data exfiltration, AI-related data leakage is typically unintentional and driven by productivity rather than malice, making it harder to detect without specific controls.
Default vendor configurations frequently include default credentials, unnecessary open ports, and enabled services that are not required. These are widely known and actively exploited by attackers. In cloud environments, the risk is amplified: a single misconfiguration (a public storage bucket, an overly permissive IAM role, an exposed management API) can expose data or provide an initial foothold without any need to exploit a software vulnerability. Cloud misconfigurations are consistently among the top causes of data breaches in cloud-hosted environments.
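A configuration-baseline check can catch many of these failures automatically. The sketch below audits a hypothetical flattened config dictionary — the key names (`admin_password`, `public_access`, `open_ports`) are illustrative, not any vendor's real schema:

```python
# Illustrative set of known-bad default credentials.
RISKY_PASSWORDS = {"admin", "password", "changeme", ""}

# Ports that should never face the network unfiltered (telnet, RDP).
RISKY_PORTS = {23, 3389}

def audit_config(config: dict) -> list[str]:
    """Return a list of findings for a (hypothetical) device/resource config."""
    findings = []
    if config.get("admin_password") in RISKY_PASSWORDS:
        findings.append("default or empty administrator credential in use")
    if config.get("public_access", False):
        findings.append("resource is publicly accessible")
    for port in config.get("open_ports", []):
        if port in RISKY_PORTS:
            findings.append(f"risky service exposed on port {port}")
    return findings
```

In practice this logic lives in a CSPM tool or a policy-as-code framework run against real cloud and device inventories, but the shape of the check — compare deployed state against a hardened baseline, alert on drift — is the same.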
Compromised network devices (routers, switches, firewalls) provide an attacker with a persistent, privileged position enabling full traffic interception and lateral movement across the entire environment. Network infrastructure is a high-value, high-impact target.
Servers without host-based firewalls expose all ports and services to the network, making management ports (RDP, SSH, database) a common ransomware entry point. End-user devices without firewalls enable lateral movement if any single device is compromised. Host-based firewalls provide a critical defence-in-depth layer, particularly for devices connecting from untrusted networks.
Uncontrolled changes to systems, software, and configuration significantly increase the risk of service disruption, security regressions, and inability to identify the root cause of incidents. Without change management, there is no audit trail and no rollback capability.
Without a complete account inventory, the business cannot identify all accounts with access to its systems. Dormant accounts, orphaned accounts after staff departures, and over-privileged service accounts are among the most commonly exploited vectors in breaches.
Weak, reused, and shared passwords are a primary enabler of credential-based attacks including phishing, brute force, and credential stuffing. These are consistently among the top attack vectors in major breaches worldwide. Compromised administrator credentials provide attackers with immediate, widespread access, making enhanced password requirements for privileged accounts essential.
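A minimal policy check might look like the sketch below — the length thresholds and the tiny blocklist are illustrative placeholders (a real deployment would screen against a large breached-password corpus), but it shows the key idea of a stricter bar for privileged accounts:

```python
# Illustrative blocklist; real checks use breached-password corpora.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def password_acceptable(password: str, privileged: bool = False) -> bool:
    """Apply a minimum length (stricter for admins) and a blocklist check."""
    min_length = 16 if privileged else 12
    if len(password) < min_length:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    return True
```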
Dormant accounts belonging to former employees, contractors, or decommissioned systems represent a significant risk. Threat actors routinely target dormant credentials in credential stuffing attacks and to establish persistence undetected.
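Dormancy detection is straightforward once last-sign-in data is available from the directory. The sketch below flags accounts idle for more than 90 days; the record shape (`name`, `last_login`) is hypothetical, standing in for whatever the identity provider's API returns:

```python
from datetime import datetime, timedelta, timezone

def dormant_accounts(accounts, now=None, max_idle_days=90):
    """Return names of accounts with no sign-in within max_idle_days.

    Accounts that have never signed in (last_login is None) are
    always flagged.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_idle_days)
    return [a["name"] for a in accounts
            if a["last_login"] is None or a["last_login"] < cutoff]
```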
When administrators use their privileged accounts for daily activities (browsing, email), any malware or phishing targeting them runs in an administrative context. This enables immediate escalation to full domain or system control and is the single most critical endpoint security failure.
Local, unmanaged accounts cannot be centrally monitored, rotated, or revoked. Terminated employees' credentials may persist on devices, and attackers who compromise any device gain locally stored credentials that are invisible to central security tooling.
Without a formal access granting and revoking process, accounts belonging to ex-employees remain active, creating risk of both accidental misuse and deliberate malicious access. Inconsistent provisioning also leads to over-privileged accounts accumulating over time.
Single-factor authentication provides minimal protection against credential phishing, password spraying, and credential stuffing. Compromised credentials immediately grant full account access with no additional barrier. MFA is the single most effective control against account compromise.
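The second factor in most authenticator apps is a TOTP code, which is simply HOTP (RFC 4226) driven by a time-based counter (RFC 6238). The mechanism fits in a few lines of standard-library Python — shown here purely to illustrate why a stolen password alone is insufficient: the attacker would also need the shared secret held on the user's device.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# TOTP (RFC 6238) is hotp(secret, unix_time // 30): the counter is the
# number of 30-second intervals since the epoch, so codes expire quickly.
```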
Without vulnerability management, the business has no visibility into known vulnerabilities in its software, operating systems, or devices. Threat actors routinely scan for and exploit unpatched, publicly disclosed vulnerabilities, often within hours of a CVE being published.
The vast majority of successful ransomware attacks exploit vulnerabilities for which patches have been available for more than 30 days. Unpatched systems are the primary enabler of both opportunistic and targeted attacks.
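The 30-day figure translates directly into a patching SLA that can be checked mechanically. The sketch below flags hosts whose available patch has been outstanding for longer than the SLA; the record shape (`host`, `patch_released`) is hypothetical:

```python
from datetime import date

def overdue_patches(systems, today, sla_days=30):
    """Return hosts whose available patch is older than sla_days."""
    return [s["host"] for s in systems
            if (today - s["patch_released"]).days > sla_days]
```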
Without audit logs, security incidents cannot be detected, investigated, or reconstructed. The absence of logging is a compliance failure under most regulatory frameworks and a significant impediment to forensic investigation following a breach.
Inconsistent timestamps across systems make log correlation during incident investigation unreliable or impossible. A 5-minute clock skew between a firewall and an endpoint log can prevent investigators from reconstructing the sequence of events in a breach.
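Where skew cannot be prevented, it must at least be corrected before correlation. The sketch below applies known per-source offsets (hypothetical values, in practice measured against an NTP-synchronised reference) before merging events into one timeline:

```python
from datetime import datetime, timedelta

def merge_timeline(events, skew_seconds):
    """Merge (timestamp, source, message) events into one corrected timeline.

    skew_seconds maps source name -> how many seconds that source's clock
    runs ahead of the reference clock; positive values are subtracted.
    """
    corrected = [(ts - timedelta(seconds=skew_seconds.get(source, 0)), source, msg)
                 for ts, source, msg in events]
    return sorted(corrected)
```

With the firewall's clock running five minutes fast, the uncorrected logs would show the firewall deny *after* the endpoint execution; the corrected merge restores the true order.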
Browser vulnerabilities are a primary initial access vector. Drive-by downloads, malicious scripts, and zero-day exploits routinely target outdated browsers. Chromium-based browsers receive critical security updates every 2-4 weeks; falling behind creates exploitable exposure.
Web-based threats are a primary delivery mechanism for malware, ransomware, and credential harvesting. Without web filtering, users can access known malware distribution sites, phishing pages, and command-and-control infrastructure from any device.
Without DMARC, the organisation's email domain can be freely spoofed by anyone. Attackers can send emails that appear to come from the organisation's own domain, enabling phishing, Business Email Compromise (BEC), and supply chain fraud, some of the highest-cost attack types.
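A DMARC policy is published as a DNS TXT record at `_dmarc.<domain>`, and only a policy of `quarantine` or `reject` actually blocks spoofed mail — `p=none` merely monitors. A minimal parser and enforcement check:

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record ("tag=value; tag=value") into a dict."""
    tags = {}
    for part in record.split(";"):
        if "=" in part:
            key, _, value = part.strip().partition("=")
            tags[key.strip()] = value.strip()
    return tags

def dmarc_enforcing(record: str) -> bool:
    """True only if the record is valid DMARC with a blocking policy."""
    tags = parse_dmarc(record)
    return tags.get("v") == "DMARC1" and tags.get("p") in ("quarantine", "reject")
```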
Email is consistently the primary initial access vector in ransomware, BEC, and credential phishing attacks. Without anti-malware and spam protection, malicious attachments, links, and payloads are delivered directly to user inboxes.
Endpoints are the primary target for malware delivery, ransomware deployment, and initial compromise. An EDR agent provides visibility into endpoint activity and generates alerts, but without a managed detection and response (MDR) service monitoring those alerts 24/7, threats that occur outside business hours - which is when the majority of ransomware is deployed - go undetected until the damage is done. An unmonitored EDR is a camera with nobody watching the monitors.
Removable media is a well-established malware delivery vector and a significant data exfiltration channel. It is exploited by both insider threats and external attackers with physical access. The Stuxnet attack demonstrated the potential impact.
Without a DR/BCP, recovery from a significant incident (ransomware, data centre failure, critical system loss) will be slow, chaotic, and significantly more costly. There is no plan for what to recover first, in what order, or who is responsible.
Without automated backups, data recovery following ransomware, accidental deletion, or hardware failure may be impossible or rely on outdated manual copies. Ransomware actors specifically target backup systems as a first step to prevent recovery.
Backup data without security controls (encryption, access restriction) can be reached by the same user accounts or ransomware that compromised production data. This pattern, encrypting both production and backup, is observed in the majority of sophisticated ransomware incidents.
Without isolated backup copies, a ransomware attack or site-wide disaster destroys both production data and all backups simultaneously. Recovery from ransomware is effectively impossible without at least one isolated, immutable copy of backup data.
Untested backups fail at a significantly higher rate than expected. Common failure modes include corrupted files, incomplete backup sets, changed restore procedures, and encryption key loss. Discovering this during an actual recovery event dramatically extends downtime and cost.
Network infrastructure is a high-value, high-impact target. A compromised router or firewall provides an attacker with persistent, privileged access enabling full traffic interception and lateral movement across the entire environment. Perimeter firewalls with unreviewed rule bases accumulate excessive permissions over time, creating exploitable gaps. Network devices running end-of-life firmware have known, publicly disclosed vulnerabilities that will never be patched.
A compromised home or personal device connecting to the corporate network via VPN is a primary ransomware entry pathway. Without device compliance checks, attackers can move laterally from the remote device into core infrastructure unimpeded.
Human error and social engineering account for the majority of initial compromise events in reported breaches. Without security training, staff cannot recognise phishing, social engineering, or unsafe behaviours that represent the primary cause of security incidents.
Staff who cannot identify a phishing email, do not know how to report an incident, and unknowingly engage in high-risk behaviours (reusing passwords, connecting to public Wi-Fi, clicking unsolicited links) are the primary enabler of the majority of reported security incidents.
Generic security training does not address the specialised threats facing different roles. IT administrators managing privileged access, developers writing code, executives targeted by whaling attacks, and finance staff targeted by payment fraud all require tailored training.
Third-party suppliers represent a significant and often underestimated attack surface. The SolarWinds, Kaseya, and MOVEit incidents demonstrated that a compromise of a single supplier can cascade across thousands of organisations simultaneously.
Internally developed applications frequently introduce vulnerabilities that would be identified by basic security testing. Web application vulnerabilities (OWASP Top 10) are among the most commonly exploited in targeted attacks, and without an SSDLC, these can remain unpatched indefinitely.
The Log4Shell vulnerability (CVE-2021-44228) demonstrated the impact of untracked open-source dependencies. Organisations without an SBOM could not identify whether they were affected for days or weeks. Visibility into third-party components is essential for rapid response.
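With an SBOM on file, the "are we affected?" question becomes a lookup. The sketch below searches a CycloneDX-style JSON document for a named component; the structure shown is a minimal subset of the real format:

```python
import json

def find_component(sbom_json: str, name: str):
    """Return (name, version) pairs for matching components in an SBOM."""
    sbom = json.loads(sbom_json)
    return [(c["name"], c["version"])
            for c in sbom.get("components", [])
            if c.get("name") == name]
```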
Without environment separation, developers and CI/CD pipelines have direct access to production systems. This introduces risk of accidental data modification, exposure of production credentials, and deployment of untested or vulnerable code to live systems.
Organisations with mature incident response processes contain breaches faster and at lower cost. Without assigned roles and a structured process, response is reactive, uncoordinated, and significantly slower, leading to longer dwell times, poor containment, and regulatory reporting failures.
In the event of a significant incident, the business must quickly identify who to notify internally and externally, including regulators, legal counsel, insurers, and Vela. Delays in notification can result in regulatory penalties (e.g., GDPR requires notification within 72 hours), legal liability, and reputational damage.
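The 72-hour clock starts when the organisation becomes aware of the breach, which makes the deadline trivially computable — and worth surfacing automatically in incident tooling rather than leaving to memory mid-crisis:

```python
from datetime import datetime, timedelta, timezone

def notification_deadline(aware_at: datetime) -> datetime:
    """GDPR Article 33: notify the supervisory authority within 72 hours
    of becoming aware of a personal data breach."""
    return aware_at + timedelta(hours=72)
```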
The majority of breaches that result in regulatory penalties involve a failure to report within required timeframes. Without a reporting process, incidents go unreported or are reported too late, delaying detection and containment and breaching regulatory obligations.
Penetration testing provides an independent, adversarial validation of the business's security posture that no other control delivers. Vulnerability scanning identifies known CVEs; penetration testing identifies exploitable weaknesses in configuration, architecture, and logic that scanners miss. It is a specific requirement under PCI DSS, DORA, and many cyber insurance policies, and is routinely expected by regulators and auditors. Without regular penetration testing, the business relies entirely on its own assessment of its defences, with no external challenge.