Given the very public Equifax breach, there is no better time than now for you and your organization to review (or create) your patch management process to make sure that it is being followed, gaps are identified and filled, and everyone is working to secure the environment.
To give you an update in case you are not aware: back in early September, Equifax publicly announced one of the largest breaches ever, in which attackers obtained data on 143 million consumers from the U.S., UK, and Canada. The breach ran from mid-May through July, during which the attackers accessed names, Social Security numbers, birth dates, and addresses. In some cases, the attackers were even able to steal credit card numbers.
How did the attackers do it? Due to a poor patching process, the attackers breached the system through a vulnerability in Apache Struts, an open-source web application framework. Apache disclosed and patched the vulnerability on March 6, but Equifax did not patch the portal using the framework even though they were aware of the vulnerability for at least two months. Not so good.
With 15.1% of assets having a high or critical vulnerability according to Edgescan, how do you not become the next Equifax? Implement a flexible and responsive security patch management process and patch management process flow.
Implementing a Patch Management Process
A flexible and responsive security patch management process is critical to maintaining proper cyber hygiene and in protecting your organization’s public trust.
There are many methodologies and sources of guidance, including the ITIL Patch Management Process and the SCCM Patch Management Process, that can help with building a quality patch management process, but the key takeaway is to implement a process that aligns with your organization’s people, processes, and resources.
The process you implement must be repeatable and you must get buy-in throughout the entire organization, from the administrators installing the patches all the way up to the executives and board of directors. If there is no buy-in, it does not matter how great your process is because the chances the process is being followed are very low.
If you do not have a process in place, or are taking this time to review and update yours, the SANS Institute InfoSec Reading Room provides a good methodology for implementing a patch management process. At a very high level, the methodology is:
- Baseline and Harden
- Develop a Test Environment
- Develop Backout Plan
- Patch Evaluation and Collection
- Configuration Management
- Patch Rollout
- Maintenance Phase – Procedures and Policies
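The phases above are sequential, and a minimal sketch can make the ordering concrete. The phase names below mirror the list, but the `PatchCycle` class and its interface are illustrative assumptions, not part of the SANS guidance itself.

```python
# A sketch of tracking one patch cycle through the SANS-style phases above.
# The PatchCycle class is an illustrative assumption, not SANS's own tooling.

PHASES = [
    "baseline_and_harden",
    "develop_test_environment",
    "develop_backout_plan",
    "patch_evaluation_and_collection",
    "configuration_management",
    "patch_rollout",
    "maintenance",
]

class PatchCycle:
    """Tracks progress of a single patch cycle through the phases in order."""

    def __init__(self):
        self.completed = []

    def complete(self, phase):
        # Enforce that phases are completed in the documented order.
        expected = PHASES[len(self.completed)]
        if phase != expected:
            raise ValueError(f"expected phase {expected!r}, got {phase!r}")
        self.completed.append(phase)

    @property
    def current_phase(self):
        if len(self.completed) == len(PHASES):
            return "done"
        return PHASES[len(self.completed)]

cycle = PatchCycle()
cycle.complete("baseline_and_harden")
print(cycle.current_phase)  # → develop_test_environment
```

The point of the strict ordering is that skipping a phase (say, rolling out before a backout plan exists) should be an error, not a silent shortcut.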
Microsoft provides very similar guidance, focused on a four-phase cycle: assess, identify, evaluate and plan, and deploy.
Patch Management Best Practices
When building or reviewing your patch management process, keep the following best practices in mind.
Quality Inventory Management Process
If you do not know what you have in your environment, then how are you supposed to protect against threats?
Many organizations maintain a very basic asset inventory, usually covering operating systems, IP addresses, location, and sometimes the owner. While this is helpful, and far better than having no inventory at all, it does not include enough detail to properly understand the organization’s threat vectors.
I always like to recommend expanding the inventory to include at least the OS version, installed applications and their versions, the function of the asset (e.g., domain controller, web server, database), interfaces/services/protocols, and a criticality score. Feel free to include additional details that are helpful to your organization; if a detail can help in the discovery, assessment, testing, or patching of an asset, add it in.
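To make the expanded inventory concrete, here is a minimal sketch of one such record as a Python dataclass. The field names, example values, and the 1–5 criticality scale are all illustrative assumptions; use whatever schema fits your tooling.

```python
from dataclasses import dataclass, field

# A sketch of an expanded inventory record with the fields recommended above.
# Field names and the 1-5 criticality scale are illustrative assumptions.

@dataclass
class Asset:
    hostname: str
    ip_address: str
    os: str
    os_version: str
    function: str                     # e.g. "domain controller", "web server"
    owner: str
    location: str
    applications: dict = field(default_factory=dict)  # app name -> version
    services: list = field(default_factory=list)      # interfaces/services/protocols
    criticality: int = 3              # 1 (low) to 5 (critical), assumed scale

# Hypothetical example entry:
web01 = Asset(
    hostname="web01",
    ip_address="10.0.4.17",
    os="Ubuntu",
    os_version="22.04",
    function="web server",
    owner="ecommerce team",
    location="us-east datacenter",
    applications={"apache-struts": "2.3.31", "openjdk": "11.0.20"},
    services=["443/tcp https", "22/tcp ssh"],
    criticality=5,
)
```

The application-to-version mapping is what makes the inventory useful later, when a vendor advisory names a specific vulnerable version.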
Reduce Threat Vectors
This is much easier said than done. As organizations grow, so do the number of assets and applications they must manage. I have seen thousands of distinct operating systems and applications, and I have also seen just a handful. Both scenarios can lead to a breach or increased risk if the assets and apps are not patched and maintained, but the organization with thousands has far more work to do and far more threat vectors to manage and monitor.
I recommend periodically reviewing the operating systems and applications in use and working to reduce their number as much as possible. While Bob from accounting may really like using MS Works, it may not be in the best interest of the organization to continue allowing him to use it. Even one legacy system or application that is unknown, out of date, or no longer supported can expose protected data and increase risk to the environment.
You can’t secure what you don’t know about. The best way to know whether a vulnerability exists is to employ discovery and scanning capabilities. A proper discovery service uses a combination of active and passive scanning and can identify physical, virtual, and on- and off-premises systems.
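Active scanning at its simplest is just probing for reachable services. The toy sketch below checks a handful of common TCP ports on a host; real discovery tools (Nmap-based scanners and the like) combine far richer active probing with passive traffic analysis, so treat this only as an illustration of the idea. The port list and function name are my own choices.

```python
import socket

# A toy sketch of active discovery: attempt TCP connections to a few common
# ports on a host. Real discovery services do much more (service/version
# detection, passive analysis); this only illustrates the basic probe.

COMMON_PORTS = [22, 80, 443, 3389]

def probe_host(host, ports=COMMON_PORTS, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: probe the local machine.
print(probe_host("127.0.0.1"))
```

Run periodically across your address space, even a crude probe like this surfaces systems that never made it into the inventory.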
In addition to scanning the network, you also should define a reliable system for identifying new vulnerabilities. Waiting for the operating system or application to release a patch is not sufficient. In some cases, a new vulnerability may not have a patch available and the organization must mitigate or remediate the risk by other means.
I recommend signing up for notifications from every operating system and application vendor in your environment that provides them, as well as subscribing to security bulletins or vulnerability reporting services that can help you identify emerging threats and new vulnerabilities. Review these notifications and compare them against your inventory to ensure you are up to date.
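Comparing notifications against the inventory can be partially automated. The sketch below matches a simplified advisory feed against a simplified inventory; the data shapes are assumptions (real feeds such as NVD have much richer schemas, and real version comparison needs more than dotted integers). The Struts entry reflects the actual CVE behind the Equifax breach, fixed in Struts 2.3.32.

```python
# A sketch of matching vulnerability notifications against the asset
# inventory. The advisory and inventory formats are illustrative assumptions.

inventory = {
    "web01": {"apache-struts": "2.3.31"},
    "db01": {"postgresql": "13.2"},
}

advisories = [
    {"product": "apache-struts", "fixed_in": "2.3.32", "id": "CVE-2017-5638"},
    {"product": "nginx", "fixed_in": "1.20.1", "id": "EXAMPLE-0001"},  # hypothetical
]

def version_tuple(v):
    # Naive dotted-integer comparison; real version schemes need more care.
    return tuple(int(part) for part in v.split("."))

def affected_assets(inventory, advisories):
    """Return (host, advisory id) pairs where an installed version is older
    than the advisory's fixed version."""
    hits = []
    for host, apps in inventory.items():
        for adv in advisories:
            installed = apps.get(adv["product"])
            if installed and version_tuple(installed) < version_tuple(adv["fixed_in"]):
                hits.append((host, adv["id"]))
    return hits

print(affected_assets(inventory, advisories))  # → [('web01', 'CVE-2017-5638')]
```

This is exactly the check that an accurate application-level inventory makes possible: the advisory names a product and a fixed version, and the inventory tells you which hosts still run something older.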
Patching is Not Just for the Operating System
It is surprising how many organizations focus their patching on operating systems and either ignore application patches or prioritize them far lower than the actual risk warrants, because operating system patching is “easy” and everything else is perceived as hard. As was the case at Equifax, an application vulnerability was identified, but it took months and a breach for the organization to patch it.
According to the 2017 Trustwave Global Security Report, 99.7% of web applications that Trustwave application scanning services tested in 2016 included at least one vulnerability. Don’t focus all your efforts on the operating system; also consider installed applications, services, libraries, and devices. With the growth of easy-to-install apps, IoT, and connected devices, everything matters. Review the threats and vulnerabilities to determine the risk, and prioritize accordingly. Don’t lower the priority of a patch, or skip it, just because it is hard or complicated.
Metrics and Verification
How are you doing with your patching? Without measuring performance, it is hard to determine whether you are meeting your targets and organizational goals. Metrics can validate that your patch process is effective and provide information that demonstrates the security posture to the business in a meaningful way. Metrics like the percentage of systems up to date, the percentage of failed patches, and the number of hours or days to patch are all valuable to the organization.
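These metrics fall out of simple per-system records. The sketch below computes the three metrics named above from an assumed record format; the field names and example data are illustrative.

```python
from datetime import date

# A sketch of computing the patch metrics mentioned above from simple
# per-system records. The record fields are illustrative assumptions.

systems = [
    {"host": "web01", "up_to_date": True, "patch_failed": False,
     "released": date(2017, 3, 6), "patched": date(2017, 3, 13)},
    {"host": "db01", "up_to_date": False, "patch_failed": True,
     "released": date(2017, 3, 6), "patched": None},
]

def patch_metrics(systems):
    """Return % of systems current, % of failed patches, and average
    days from patch release to installation (None if nothing patched)."""
    total = len(systems)
    pct_current = 100 * sum(s["up_to_date"] for s in systems) / total
    pct_failed = 100 * sum(s["patch_failed"] for s in systems) / total
    days = [(s["patched"] - s["released"]).days for s in systems if s["patched"]]
    avg_days_to_patch = sum(days) / len(days) if days else None
    return {"pct_current": pct_current, "pct_failed": pct_failed,
            "avg_days_to_patch": avg_days_to_patch}

print(patch_metrics(systems))
```

Tracked over time, the days-to-patch figure in particular tells the Equifax story in one number: the gap between a fix being available and it being installed.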
Along with the metrics, an additional item that often gets missed is verifying that the patch actually remediated the vulnerability. Many organizations assume that once a patch has been installed they are good to go, without going back to verify. Ideally, this is done in test before the patch is deployed, but if not, rescanning the environment or testing a few machines after the patch is installed is necessary to confirm the vulnerability has been remediated.
Common Issues and Roadblocks
Without getting into too much detail, I wanted to point out some high-level issues and roadblocks organizations may run into when trying to implement and follow a patch management process.
- Not reviewing exceptions: Exceptions are generally written when something can’t be patched but many times these exceptions are never reviewed after they have been written. Remember to go back and reassess all exceptions periodically to make sure they are still necessary and not introducing additional risk that was not originally defined.
- Being forced to support aging software: This ties into the exceptions point above. Sometimes you must support old software. If that is the case, work to limit access, segregate the network, and add monitoring to reduce the risk.
- Not viewing risk as a whole: Risks are usually written up for each individual threat or vulnerability, but rarely does anyone look at all the risks as a whole. A low risk somewhere else in the organization may increase the risk for a given threat or vulnerability. And if you have hundreds of low risks, the overall risk may really be medium or even high.
- Lack of resources: Everything is easy with unlimited resources, but no one has unlimited resources. According to the Cybersecurity Ventures Cybersecurity Market Report, by 2021 companies will be unable to fill 3.5 million open cybersecurity positions, and per Money, Minds, and the Masses: A Study of Cybersecurity Resource Limitations, 57% of organizations report that finding and recruiting skilled IT security personnel is a “significant” or “major” challenge. With difficulties finding qualified staff and pressure to maximize profits, patching efforts may suffer.
- Not enforcing policy: If you have a policy but don’t follow it or don’t enforce it, then why have it? Be sure to hold teams accountable for not meeting the targets and goals.
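The “risk as a whole” point above lends itself to a small sketch: many low-severity findings can, in aggregate, warrant a higher overall rating. The escalation thresholds below (ten lows count like a medium, five mediums like a high) are illustrative assumptions, not a standard.

```python
from collections import Counter

# A sketch of rolling individual risk findings into an overall rating,
# illustrating the "risk as a whole" point above. Thresholds are assumed.

def overall_risk(findings):
    """findings: list of 'low' / 'medium' / 'high' ratings.

    Escalate when low-severity findings accumulate: every 10 lows count
    like a medium, every 5 mediums like a high (assumed thresholds).
    """
    counts = Counter(findings)
    mediums = counts["medium"] + counts["low"] // 10
    highs = counts["high"] + mediums // 5
    if highs:
        return "high"
    if mediums:
        return "medium"
    return "low" if counts["low"] else "none"

print(overall_risk(["low"] * 25))  # → medium
```

Whatever thresholds you pick, the useful part is having any aggregation rule at all, so hundreds of individually dismissed lows cannot hide a real exposure.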
Linford & Company provides multiple services like SOC 1, SOC 2, FedRAMP, and HIPAA that are designed to assess an organization’s security management and regulatory compliance effectiveness. Contact us if you would like to discuss our services further.
Linford & Co., LLP, founded in 2008, is comprised of professional and certified auditors with specialized expertise in SOC 1, SOC 2, HIPAA, HITRUST, FedRAMP and royalty/licensing audits. Our auditors hold CPA, CISA, CISSP, GSEC licenses and certifications. Learn more about our company and our leadership team.