Best Practice

Event Log Management for Security and Compliance

Has someone made any unauthorized changes to your Active Directory policies or Access Control Lists (ACLs) for a directory on a server containing company Intellectual Property? Has someone gained unauthorized access to data that is regulated by law, such as HIPAA? Is somebody trying to hack into your internal systems? What if your compliance officer asks you for SOX-centric reports?

Every day, computer networks across the globe generate records of the events that occur. Some are routine. Others are indicators of declining network health or attempted security breaches. Log files contain a wealth of information that can reduce an organization’s exposure to intruders, malware, damage, loss and legal liability. Log data needs to be collected, stored, analyzed and monitored to meet and report on regulatory compliance standards such as Sarbanes-Oxley, Basel II, HIPAA, GLBA, FISMA, PCI DSS and NISPOM. This is a daunting task: log files come from many different sources, in different formats, and in massive volumes, and many organizations don’t have a proper log management strategy in place to monitor and secure their networks.

In response, this article discusses common Event and Log Management (ELM) requirements and best practices that decrease the potential for security breaches and reduce the possibility of legal or compliance issues.

Why Centralized Event and Log Management (ELM) is Important

Every system in your network generates some type of log file. In fact, a log entry is created for each event or transaction that takes place on any machine or piece of hardware; think of it as your “journal of record”. Microsoft-based systems generate Windows Event Log files, while UNIX-based servers and networking devices use the System Log, or Syslog, standard. Web application servers such as Apache or IIS, as well as load balancers, firewalls, proxy servers and content security appliances, generate W3C/IIS log files.

Centralized log management should be a key component of your compliance initiatives, because with centralized logs in place, you can monitor, audit, and report on file access, unauthorized activity by users, policy changes, and other critical activities performed against files or folders containing proprietary or regulated personal data such as employee, patient or financial records. A centralized log management strategy should include oversight of Event Logs, Syslog and W3C logs. This breadth matters because information breaches come as often from internal sources as from external ones. For example, Windows Event Logs give you visibility into potentially harmful activities conducted by disgruntled employees, while Syslog management gives you control over your network perimeter.

Windows-based systems have several different event logs that should be monitored consistently. Of these logs, the most important is the Security Log. It provides key information about who is logged onto the network and what they are doing. Security logs help security personnel determine whether vulnerabilities exist in the security implementation.

Syslog is a log message format and log transmission protocol described by the Internet Engineering Task Force (IETF) in RFC 3164 and later standardized in RFC 5424. Networking devices, UNIX and Linux systems, and many software and hardware platforms implement Syslog as a standard logging format and as a means to transmit and collect log files in a centralized log management repository. Using Syslog information, you can capture highly detailed information about the status of one device or many. The information can be sorted and parsed to reveal atypical behavior through changes in operational or performance patterns; such changes may indicate one or more underlying problems. Storage of Syslog data can also support compliance efforts by providing audit logs to trace any event that may affect network reliability and protection of data. This is important because it demonstrates to auditors that the information is under control.
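For illustration, an RFC 3164 message carries a priority number computed as facility × 8 + severity, followed by a timestamp, hostname and message body. A minimal, hypothetical Python sketch that decodes these fields (the regular expression and field names are illustrative, not a full implementation of the RFC):

```python
import re

# RFC 3164-style message: "<PRI>Mmm dd hh:mm:ss host msg"
SYSLOG_RE = re.compile(
    r"<(?P<pri>\d{1,3})>"                                # priority = facility*8 + severity
    r"(?P<timestamp>\w{3} [ \d]\d \d\d:\d\d:\d\d) "      # e.g. "Oct 11 22:14:15"
    r"(?P<host>\S+) "
    r"(?P<msg>.*)"
)

# Standard syslog severity levels, 0 (most urgent) through 7
SEVERITIES = ["emerg", "alert", "crit", "err",
              "warning", "notice", "info", "debug"]

def parse_syslog(line):
    """Split an RFC 3164-style message into priority, facility, severity and body."""
    m = SYSLOG_RE.match(line)
    if m is None:
        return None
    pri = int(m.group("pri"))
    return {
        "facility": pri // 8,
        "severity": SEVERITIES[pri % 8],
        "timestamp": m.group("timestamp"),
        "host": m.group("host"),
        "msg": m.group("msg"),
    }

example = "<34>Oct 11 22:14:15 fw01 sshd[433]: Failed password for root"
print(parse_syslog(example)["severity"])  # crit  (pri 34 = facility 4, severity 2)
```

Decoding the priority this way lets a central collector prioritize alerts (for example, anything at severity “crit” or above) regardless of which device produced the message.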

Similarly, W3C logs provide information on user and server activity. These audit logs, too, should be monitored, as they provide valuable information you can use to identify unauthorized attempts to compromise, for example, your Web server. IIS log files use a fixed ASCII format (meaning it cannot be customized) that records more information than other log file formats, including basic items such as the user’s IP address, user name, request date and time, service status code, and number of bytes received. In addition, the IIS log file format includes detailed items such as elapsed time, number of bytes sent, action, and target file.
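W3C extended log files declare their column layout in a “#Fields:” directive, which makes mechanical parsing straightforward. A small, hypothetical Python sketch (the sample data and field names such as c-ip and sc-status follow common IIS conventions, but are assumptions for illustration):

```python
def parse_w3c(log_text):
    """Parse W3C extended log text into a list of field-name -> value dicts."""
    fields, records = [], []
    for line in log_text.splitlines():
        if line.startswith("#Fields:"):
            fields = line.split()[1:]          # column names follow the directive
        elif line and not line.startswith("#"):
            records.append(dict(zip(fields, line.split())))
    return records

sample = """#Software: Microsoft Internet Information Services
#Fields: date time c-ip cs-username sc-status sc-bytes
2024-05-01 08:13:02 203.0.113.7 - 401 1532
2024-05-01 08:13:05 203.0.113.7 admin 200 8410"""

# Flag unauthorized (401) responses, a simple sign of a possible probe
for r in parse_w3c(sample):
    if r["sc-status"] == "401":
        print(r["c-ip"], "unauthorized attempt")
```

Because the directive names the columns, the same parser works even if a server is configured to log a different set of fields.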

By deploying a centralized log management solution, you can easily manage the frequently overwhelming amount of log information generated by your systems. Real-time access to log data will allow you to filter and locate that one “needle in a haystack” event that could be the cause of a security breach.

How Much Log Data is There?

Every day your servers and other network devices produce tens of thousands of log entries. In fact, the average enterprise will accumulate up to 4GB of log data a day. Over 95% of the data within the log files consists of detailed entries recording every event or transaction taking place on the system: for example, a server crash, user logins, application starts and stops, and file access.
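Taken together with the compression and retention figures cited later in this article (flat files compressing to roughly 5% of their original size, and some standards mandating retention for seven years), the daily figure above implies large but manageable archives. A quick back-of-the-envelope check in Python, using only this article’s own numbers:

```python
daily_gb = 4.0          # average log volume per day, per the figure above
compression = 0.05      # flat files often compress to ~5% of original size
retention_years = 7     # retention mandated by some compliance standards

yearly_gb = daily_gb * 365
archive_gb = yearly_gb * retention_years * compression

print(f"raw per year: {yearly_gb:.0f} GB")                 # raw per year: 1460 GB
print(f"7-year compressed archive: {archive_gb:.0f} GB")   # 7-year compressed archive: 511 GB
```

Roughly 1.5TB of raw log data per year shrinks to about half a terabyte for a full seven-year compressed archive, which is why the dual-format strategy discussed later is affordable.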

Many administrators are surprised to learn that “simple” log files can produce such a large volume of collected and stored data. The perception that this data is mostly routine is at the heart of many organizations’ failure to capture, store and analyze the log data being generated constantly. Never underestimate the importance of the data found within these files.

When manually collecting and reviewing log data, be aware that the more servers you have producing log data, the lower your chances of noticing and locating a security-related or compliance issue before it does damage.

Security Inside the Perimeter

Security is always at the forefront of any organization’s IT strategy, regardless of size. Usually this strategy focuses on the perimeter of the network, to prevent unauthorized access or attacks from malicious parties not associated with the organization.

While external security is essential, what about internal threats? What about a nosy employee who changes their access permissions in order to look at confidential company financial data? Or a disgruntled employee who has created a back door into a key server and is about to delete terabytes of customer data? These may be extreme cases, but are you prepared to counteract them? A security breach is just as likely to originate from an internal source as from an external one; in fact, the likelihood may be even higher. As discussed above, the potential for liability is considerable when an unauthorized individual accesses data that is protected by legislative act.

By establishing a comprehensive ELM strategy for security monitoring of Windows event logs, watching for internal activities and changes outside the range of normal business activity, you can locate and stop small events before they turn into a major catastrophe.

Compliance Initiatives: Prepare for the Worst

Your organization may or may not face regulatory compliance. If you are a private entity, most likely you do not. However, this should not prevent you from understanding what the regulatory standards have defined as requirements. Leveraging these standards can provide you with a blueprint for your own internal security plans and log management strategy.

If your organization is public, non-compliance with Sarbanes-Oxley, for example, can result in heavy fines and legal liability for its officers. Many of the requirements of the other legislative or industry-specific initiatives for security and compliance, as they relate to log management, overlap with those of Sarbanes-Oxley. As a result, all public and many private companies look to that standard for guidance in building a log management strategy.

A quick review of each of the standards below will provide you with a high level overview to understand each of them and how they can affect your log management strategy.


Sarbanes-Oxley

In Sarbanes-Oxley, the phrase “internal controls” in Section 404 of the act is central to compliance efforts. Public companies’ annual reports must include:

[…] an internal control report, which shall –

  1. state the responsibility of management for establishing and maintaining an adequate internal control structure and procedures for financial reporting; and
  2. contain an assessment, as of the end of the most recent fiscal year of the issuer, of the effectiveness of the internal control structure and procedures of the issuer for financial reporting.

Several misconceptions center on who and what is impacted by these requirements. First, Sarbanes-Oxley applies only to publicly traded organizations valued in excess of $75 million or, by extension, to private firms about to be acquired by public companies. Second, it isn’t just the financial data itself, or the reporting of that data, with which Sarbanes-Oxley is concerned when it refers to “control structure failure”; this has been interpreted very broadly to include just about anything that could affect the reliability of that data.

Because of inherent differences in network configurations, business models, markets, and the preferences of auditors, the best practices suggested later in this document should be viewed as a starting point. The sidebar below synthesizes some of the ELM requirements that should be included in your audit and compliance strategies.

While these suggestions focus on Sarbanes-Oxley, they can also be used to address the requirements of Basel II, GLBA, HIPAA, PCI DSS, and other efforts.

In most cases, this will be part of a larger compliance strategy, so be sure to consult with your audit specialists for more detail relevant to compliance in your specific industry.

Basel II

Compared to Sarbanes-Oxley, Basel II is less well known and lacks its clout. The goal of Basel II is to promote greater stability in financial systems internationally. For our purposes, the focus centers on Basel’s concern with “operational risk,” which is subject to interpretation. A good starting point could be the practices listed above for Sarbanes-Oxley compliance. Efforts will require interpretation of the Advanced Measurement Approach (AMA), a portion of the Basel II accord, by your organization’s management and audit teams.


GLBA

Regarding IT compliance, the Gramm-Leach-Bliley Act (GLBA) focuses on the protection of customer data by financial institutions. Much of GLBA overlaps with Sarbanes-Oxley’s requirements. Though sometimes regarded as just another collection of rules or mere guidelines, GLBA does have teeth: the consequences of failure to comply can include civil action brought by the U.S. Attorney General. The act can be accessed through the Federal Trade Commission’s web site at: http://www.ftc.gov/privacy/privacyinitiatives/glbact.html


HIPAA

HIPAA’s requirements are similar to GLBA’s in that they stress the existence of a reliable audit trail to protect the personal data of medical patients. HIPAA comprises two major rules, the Privacy Rule and the Security Rule, each with its own requirements for implementation and reporting. According to the Centers for Medicare & Medicaid Services, in addition to building IT infrastructure and strategies to protect against “threats or hazards to the security or integrity of the information,” preparations must be in place for investigating potential security breaches. An audit trail must be able to provide “sufficient information to establish what events occurred, when they occurred, and who (or what) caused them.”


FISMA

The Federal Information Security Management Act (FISMA) is designed to protect the critical information infrastructure of the United States Government. It sets minimum security standards for information and information systems and provides guidance on assessing and selecting the appropriate controls for their protection. Each Federal agency and its contractors are required to develop, document and implement policies that meet the FISMA standards. The National Institute of Standards and Technology (NIST) has issued Special Publication 800-53 to provide guidelines for selecting and specifying security controls for information systems supporting the executive agencies of the federal government.


NISPOM

The requirements included in Chapter 8 of the National Industrial Security Program Operating Manual (NISPOM) are of interest to government agencies and private contractors with staff who have access to sensitive and classified data. The manual states that security auditing involves recognizing, recording, storing, and analyzing information related to security-relevant activities. The audit records can be used to determine which activities occurred and which user or process was responsible for them.


PCI DSS

Developed by the major payment card brands, including Visa, MasterCard and American Express, the Payment Card Industry Data Security Standard (PCI DSS) provides IT shops that handle sensitive consumer credit card data with detailed requirements. Any entity subject to PCI DSS must prove in an annual PCI DSS audit report that it complies with the standard, or it can be denied the ability to process or store any credit-card-related information. Section 10 of the standard defines audit information and log file requirements.

MA 201 CMR 17

The Massachusetts privacy law, MA 201 CMR 17, states that “every person that owns or licenses personal information about a resident of the Commonwealth” has a duty to design, document, and implement a system that protects that information. The affected person could be an employee or customer of the company. The law took effect on March 1, 2010.

All of the legal and industry standards highlighted above reflect an ongoing need not only to ensure the protection and integrity of financial and personal data, but also to make each and every transaction auditable. IT and security professionals are strongly encouraged to seek the input and feedback of their management and audit teams when structuring any compliance strategy based on ELM. Every discipline within the organization should be involved to ensure the integrity of the entire process, from gathering event and log data to auditing and reporting.


Event and Log Management Best Practices

Best Practice #1: Define your Audit Policy Categories

The term audit policy, in Microsoft Windows lexicon, simply refers to the types of security events you want recorded in the security event logs of your servers and workstations. On Microsoft Windows NT® systems, you must set the audit policy by hand on each server and workstation, but in Windows 2000® or Windows Server 2003® Active Directory® domains with Group Policy enabled, you can apply uniform audit policy settings to groups of servers or to the entire domain. For a summary of key logging categories to enable, please refer to the “BASELINE ELM STRATEGY FOR SECURITY, COMPLIANCE AND AUDIT” table.

Best Practice #2: Automatically Consolidate All Log Records Centrally

By default, Windows event logs and Syslog files are decentralized, with each network device or system recording its own event log activity. To obtain a broader picture of trends across the network, administrators tasked with security and compliance initiatives must merge those records into a central data store for complete monitoring, analysis and reporting. Collecting and storing log data is critical, since some compliance standards mandate data retention for seven years or more. Automation helps here because it saves time and ensures the reliability of the log data. Remember:

  1. When archived log files are retrieved, they must be a reliable copy of the data; there can be no debate as to the integrity of the data itself. Because automation removes the human element, it increases the level of data reliability.
  2. The number of machines, users, and administrators in the enterprise, along with considerations such as bandwidth and competing resources, can complicate log collection so much that an automated solution is the only way to ensure that every event is collected. Can you guarantee that each and every event has been successfully collected through a manual process?

In a typical setup, an administrator will configure an ELM tool to gather event log records nightly (or periodically) from servers and workstations throughout their network. This process involves saving and clearing the active event log files from each system, reading log entries out of the log files into a central database (e.g. Microsoft SQL or Oracle), and finally compressing the saved log files and storing them centrally on a secure server.
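The nightly cycle described above can be sketched in Python. This is a minimal illustration, not a real ELM tool: SQLite stands in for the central database (Microsoft SQL or Oracle in the text), gzip stands in for the archive store, and all names are hypothetical:

```python
import gzip
import shutil
import sqlite3
from pathlib import Path

def consolidate(log_path, db_path, archive_dir):
    """Illustrative nightly job: load a saved event log export into a central
    database, then compress the flat file into the archive for retention."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS events (source TEXT, entry TEXT)")

    # Read each log entry into the working database for reporting
    with open(log_path, encoding="utf-8") as fh:
        rows = [(Path(log_path).name, line.rstrip("\n")) for line in fh]
    db.executemany("INSERT INTO events VALUES (?, ?)", rows)
    db.commit()

    # Keep a compressed flat-file copy for long-term, low-cost retention
    archive = Path(archive_dir) / (Path(log_path).name + ".gz")
    with open(log_path, "rb") as src, gzip.open(archive, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return len(rows), archive
```

A production tool would also clear the source log after a verified save and record a checksum of the archive so its integrity can be demonstrated to an auditor.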

Keeping your log data in two formats—as database records and as compressed flat files—offers a distinct auditing advantage. Event log data in flat files compresses extremely well, often down to 5% of the original size. Therefore, in terms of storage cost, it costs very little to keep archived log data for many years should an auditor ever need it. However, flat files are a very poor medium for analysis and reporting, so keeping an active working set of data (often 60 to 90 days) in a database allows ad hoc reporting as well as scheduled reporting to be available for recent events. Look for an ELM tool that provides an easy mechanism for rapid re-import of older saved log files back into your database should they ever be needed. It has been our experience that the majority of employee hours when facing an audit are dedicated to simply chasing flat files around and attempting to extract the same types of data from all of them. Having data at the ready in a central database greatly reduces the potential for lost hours when an auditor comes knocking.
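A re-import mechanism of the kind described above might look like the following sketch, which pulls an archived flat file back into the reporting database (again assuming SQLite and gzip as stand-ins; the function and table names are hypothetical):

```python
import gzip
import sqlite3

def reimport(archive_path, db_path, source_name):
    """Hypothetical helper: restore an archived flat file into the reporting
    database so historical events can be queried alongside recent ones."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS events (source TEXT, entry TEXT)")
    with gzip.open(archive_path, "rt", encoding="utf-8") as fh:
        rows = [(source_name, line.rstrip("\n")) for line in fh]
    db.executemany("INSERT INTO events VALUES (?, ?)", rows)
    db.commit()
    return len(rows)
```

With a helper like this, answering an auditor’s question about a three-year-old event becomes a query against the database rather than a manual hunt through flat files.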

Best Practice #3: Event Monitoring: Real-Time Alerts and Notification Policies

Most organizations have a heterogeneous IT environment, with a broad mix of operating systems, devices and systems. Even if your environment trends toward Windows desktop and server operating systems, you may also want more than just Windows event log monitoring. Syslog support is important not only for routers, switches, IDS devices and firewalls, but also for UNIX and Linux systems.

Most software products require agents to perform real-time monitoring of log files. If any single factor influences your choice of a solution, this should be it: if you can opt for an agentless implementation of a monitoring solution, do so. This will save a lot of headaches during initial implementation, as your network grows, and in the ongoing maintenance of your monitoring solution.

When developing a log monitoring plan, every organization has different rules about which events it must monitor. IT departments frequently focus on security events as the sole indicator of issues. While monitoring the security event log is essential, other event logs can also indicate problems with applications, hardware, or malicious software. At a minimum, all monitored events should be traceable back to their origination point. The “BASELINE ELM STRATEGY FOR SECURITY, COMPLIANCE AND AUDIT” sidebar table in the section above provides a kick-off point for your log monitoring implementation.

Depending on your requirements and the flexibility of the ELM solution you deploy, define a methodology for continuous monitoring based on how frequently you want to check logs for events of interest in real time. Each defined event is polled at a regular interval and generates an alert or notification when an entry of interest is detected.

The number of events configured, the number of target systems, and the polling frequency will dictate the amount of bandwidth consumed during a polling cycle. If you already know the events of interest on the systems you want to monitor, then configure away. If you are establishing event monitoring for the first time, it may be better to start by enabling all events and configuring a higher polling frequency. As your familiarity increases, you can pare down the number of events and decrease the polling frequency.
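The poll-and-alert cycle can be sketched as follows. The watch-list phrases, the caller-supplied callables, and the 30-second interval are illustrative assumptions, not recommendations:

```python
import time

# Hypothetical watch list: phrases that make a log entry an "event of interest"
EVENTS_OF_INTEREST = ("audit policy change", "failed logon", "service stopped")

def scan(lines, watch_list=EVENTS_OF_INTEREST):
    """Return the lines that match any watched phrase (case-insensitive)."""
    return [ln for ln in lines
            if any(phrase in ln.lower() for phrase in watch_list)]

def poll_forever(read_new_lines, notify, interval_seconds=30):
    """Poll a log source at a fixed interval and push matches to a notifier.
    `read_new_lines` and `notify` are caller-supplied callables (assumptions)."""
    while True:
        for hit in scan(read_new_lines()):
            notify(hit)
        time.sleep(interval_seconds)

print(scan(["4719: Audit Policy Change by jdoe", "4624: successful logon"]))
# → ['4719: Audit Policy Change by jdoe']
```

In practice the read_new_lines callable would track file offsets or query the event log API so that each cycle sees only new entries; shortening interval_seconds trades bandwidth for faster detection, which mirrors the tuning advice above.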

Best Practice #4: Generating Reports for Key Stakeholders: Auditors, Security or Compliance Officers and Management Teams

Reporting is one area that deserves particular attention. It provides significant data on security trends and proves compliance. Reporting can also help you substantiate the need to change security policies based on events that could result, or have resulted, in compromised security.

Any ELM solution that you implement needs to answer the following questions:

  • What report formats are available?
  • How much of your work is already done for you in prepackaged event log reports that ship with the solution?
  • Are you tied to a particular format? Will HTML output, and its availability to multiple users, play a role?
  • Can customized filters be easily recalled for repeat use?
  • From what data sources can reports be generated? Does it include EVT, text, Microsoft Access, and ODBC?
  • Will the solution be compatible with your event archiving solution?
