For every second (more likely every millisecond) that passes during the day, your system is (or should be, if it is not) generating audit logs. This is good and bad. It is good because the audit logs contain valuable information about what is happening on your system – user access, system faults and error conditions, and use of privileged commands, to name a few. Audit logs are also the key mechanism to detect (most important), understand, potentially reconstruct and recover from pwnage, should it occur. It is bad if you just let the system generate audit logs without any awareness of log content, location, lifespan or why specific logs were generated in the first place. Take the time needed to understand your system and the audit logs it generates. Below are a few additional concepts to consider with regard to audit logging:
- Everyone gets to play. What I mean by this is that your system consists of multiple hardware and software platforms, each capable of generating audit log data. Be familiar with each component on your system (including boundary devices) and its audit logging capabilities. Configure each component to generate logs at a level of detail that provides value for analysis. Except in specific instances like boundary devices, don’t just configure the most verbose logging level offered, as you’ll soon find yourself hip-deep in audit logs, making it difficult to quickly identify those of value.
- Authoritative time source. It is important to configure each network device and server to retrieve time from an authoritative time source. This configuration should include a primary and secondary time source. This will ensure that timestamps across the various audit log entries are consistent. To protect against drift, the servers and network devices should synchronize their time against the authoritative time source at a defined interval (e.g. hourly).
- Have a centralized repository. As part of analyzing the logging capability of each hardware and software component on your system, you will need to determine where the component writes its audit logs. For the operating system, this is straightforward and well understood, whether it be (a version of) syslog for (most) *nix systems or the Windows event log for Windows systems. Note: both *nix and Windows systems must still be configured to forward to a centralized audit log repository. For the myriad software components riding on the operating system, the options will vary: some can be configured to write directly to a centralized audit repository, while others can only write locally. For those components that only write audit events to the local file system, it is imperative that you get those events to the centralized audit repository, whether by custom script, open source or commercial tools.
- Control access to the centralized repository and protect the audit logs. Access to the audit logs needs to be restricted to those who require it as part of their job function, and the logs need to be protected against modification and deletion. Make sure there is sufficient storage for the log volumes generated by the system. Unless you have unlimited storage (and the money to support it), you should understand the log volumes your system generates and calculate the storage needs accordingly. Audit logs should be digitally signed to protect their integrity, and the signature should be checked on a periodic basis to ensure the logs have not changed. This process can (and should) be automated.
- Understand your retention requirements. Again, unless you have unlimited storage, you will need to understand your retention requirements. Whether they are defined by compliance standards or otherwise, you will need to determine when to archive audit logs for longer-term storage or purge them altogether.
- Log analytics and correlation – know and understand. This is the most difficult and most important aspect of audit logging. If you don’t know and understand your audit framework and which audit events are generated by which systems, then you are essentially blind to what is occurring on your system. No one wants to be in that situation. If you’ve gone through the process of understanding the audit logging capabilities of each of your components, configured them in accordance with organizational needs and requirements, and understand what is written in the logs and the log format, then you have a good foundation from which to build. Automation is the key to success in this step. Commercial Security Information and Event Management (SIEM) systems are expensive but powerful for understanding your audit logs. If you have the budget, people and processes to support one, then by all means go that route. Unfortunately, that does not seem to be the case for many organizations. If this is the case, look at open source options. You may need to close some of the gaps in the open source options with custom scripts or other tool integrations. Do whatever is needed to gain understanding of and insight into your audit logs. Understand what represents “normal” behavior, so you can tune your audit log analytics and correlation capability to better identify anomalies or suspicious behavior. Lastly, don’t forget the notifications! If you can successfully identify and detect anomalies and suspicious behaviors, but no one is notified, then all your work is in vain.
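To make the authoritative-time-source point above concrete: on Linux this is typically a few lines of chrony (or ntpd) configuration. A minimal sketch of an /etc/chrony.conf, assuming hypothetical internal time servers ntp1/ntp2.example.com:

```
# Primary and secondary authoritative time sources (hypothetical hostnames)
server ntp1.example.com iburst
server ntp2.example.com iburst

# Step the clock at startup if the offset is large; otherwise slew gradually
makestep 1.0 3

# Record the measured drift rate so corrections survive restarts
driftfile /var/lib/chrony/drift
```

The `iburst` option speeds up the initial synchronization, and chrony then polls the sources continuously to keep drift in check.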
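For the components that only write locally, forwarding can be as simple as reading each new line and sending it to the central collector as a syslog message. A minimal Python sketch of that idea (the RFC 3164 message framing is standard; the collector hostname, facility, hostname and tag values are placeholder assumptions to replace with your own):

```python
import socket
import time

def format_rfc3164(facility: int, severity: int, hostname: str, tag: str, msg: str) -> bytes:
    """Build a classic BSD-syslog (RFC 3164) message: <PRI>TIMESTAMP HOST TAG: MSG."""
    pri = facility * 8 + severity
    # RFC 3164 timestamp, e.g. "Oct  5 14:03:07" (day of month is space-padded)
    now = time.localtime()
    ts = time.strftime("%b", now) + f" {now.tm_mday:2d} " + time.strftime("%H:%M:%S", now)
    return f"<{pri}>{ts} {hostname} {tag}: {msg}".encode("utf-8")

def forward_line(line: str, collector: tuple = ("logs.example.com", 514)) -> None:
    """Send one local log line to the central repository over UDP (hypothetical collector)."""
    datagram = format_rfc3164(facility=1, severity=6, hostname="app01",
                              tag="legacy-app", msg=line.rstrip("\n"))
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(datagram, collector)
```

In practice a purpose-built shipper (rsyslog file input, an agent, or a commercial tool) is more robust than a script, but the framing it produces looks much like this.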
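The log-integrity point above can be implemented by computing an HMAC over each archived log file with a key kept away from the log hosts, then recomputing and comparing on a schedule. A minimal sketch, assuming the key is provisioned out of band:

```python
import hashlib
import hmac

def sign_log(data: bytes, key: bytes) -> str:
    """Return a hex HMAC-SHA256 signature over a log file's contents."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_log(data: bytes, key: bytes, signature: str) -> bool:
    """Recompute the HMAC and compare in constant time; False means the log changed."""
    return hmac.compare_digest(sign_log(data, key), signature)
```

A cron job that runs the verify step over the archive and raises an alert on any mismatch covers the "periodic check" part with no manual effort.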
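Retention, likewise, lends itself to a small scheduled job. A sketch that simply purges files older than the retention window (the directory and retention period are placeholders to fill in from your actual requirements, and an archive step would precede the delete in most environments):

```python
import os
import time

def purge_old_logs(log_dir: str, retention_days: int) -> list:
    """Delete files in log_dir whose modification time is older than the
    retention window; return the paths removed so the run itself can be audited."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    for entry in os.scandir(log_dir):
        if entry.is_file() and entry.stat().st_mtime < cutoff:
            os.remove(entry.path)
            removed.append(entry.path)
    return removed
```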
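And for the analytics point, "understand what represents normal" can start as simply as baselining event counts per user and flagging large deviations. A toy sketch using a z-score threshold (the 3.0 default is an arbitrary starting point to tune, not a recommendation, and real SIEM correlation goes far beyond this):

```python
from collections import Counter
from statistics import mean, stdev

def flag_anomalies(events: list, threshold: float = 3.0) -> list:
    """Given (user, action) event tuples, return users whose event count sits
    more than `threshold` standard deviations above the mean count."""
    counts = Counter(user for user, _ in events)
    values = list(counts.values())
    if len(values) < 2:
        return []  # not enough users to establish a baseline
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # every user behaves identically; nothing stands out
    return [user for user, c in counts.items() if (c - mu) / sigma > threshold]
```

The same pattern extends to events per host, per source IP, or per privileged command, and the flagged results are exactly what should feed the notification step.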
Remember the popular security motto, “prevention is ideal, but detection is a must”? Without a means to understand your system through the audit logs it generates, detection is nothing but a “pipe dream.” You may have to start small and grow it over time, but investing in the people, processes and technology to support audit log correlation, analysis and understanding in support of “detection is a must” will be well worth it.
Ray Dunham started his career as an Air Force Officer in 1996 in the field of Communications and Computer Systems. Following his time in the Air Force, Ray worked in the defense industry in areas of system architecture, system engineering, and primarily information security. Ray leads L&C’s FedRAMP practice but also supports SOC examinations. Ray enjoys working with clients to secure their environments and provide guidance on information security principles and practices.