ModSecurity as a WAF
ModSecurity is open source, has many powerful features, and is often used as a reference implementation and as a component of commercial WAFs. The 2017 Gartner Magic Quadrant for WAFs does not include ModSecurity as a product in the comparison, but as the report puts it, “Gartner analysts assess that the vendor’s WAF technology provides more than a repackaged ModSecurity engine and signatures”.
Gartner does not evaluate the WAF protection itself; see ICSA or OWASP instead for that purpose.
There are three notable issues users face with ModSecurity and its rule sets. First, there is a significant learning curve to go from zero to hero, from just managing the configuration to developing virtual patches for zero days. Second, once the initial setup is finished with a rule set such as the OWASP Core Rule Set loaded, the time to deal with false positives begins; finding the balance between usability and security often takes a good amount of effort. For some, the problems end here if there is a single, small website to protect; for others this is where a third problem starts: dealing with distributed logs and reacting to attacks.
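To give a taste of what such a virtual patch looks like, here is a minimal sketch; the endpoint, parameter, and rule id are hypothetical, not taken from a real advisory:

# Hypothetical virtual patch: block directory traversal in the "q"
# parameter of a vulnerable search endpoint until the code is fixed
SecRule REQUEST_FILENAME "@streq /app/search.php" \
    "id:10001,phase:2,deny,status:403,log,\
    msg:'Virtual patch: traversal in search parameter',chain"
    SecRule ARGS:q "@contains ../"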
Deploying ModSecurity to protect a website or an application is often a complicated task; the initial setup can be painful or easy depending on the level of security required and the complexity of the application. The OWASP CRS defines several levels of rule strictness called paranoia levels, ranging from a very relaxed configuration at the lowest level to a not-so-friendly, high false positive rate at the highest level.
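For reference, in CRS 3.x the paranoia level is selected in crs-setup.conf by uncommenting and editing a SecAction; a minimal sketch raising it to level 2:

# In crs-setup.conf: uncomment and set tx.paranoia_level
SecAction \
    "id:900000,\
    phase:1,\
    nolog,\
    pass,\
    t:none,\
    setvar:tx.paranoia_level=2"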
Unlike friendlier WAFs out there, ModSecurity has no GUI for managing logs, alerts, configurations, or writing rules, and no “learning” function to ease the rule tailoring process. There are plenty of tutorials for getting started with ModSecurity, and here is a good one to help get through the initial torture of tuning the rules (at high paranoia levels).
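When a false positive does show up, the usual fix is a targeted exclusion rather than disabling a rule globally. A minimal sketch, assuming a hypothetical endpoint whose bio argument trips the CRS SQL injection rule 942100:

# Stop rule 942100 from inspecting ARGS:bio on this endpoint only
SecRule REQUEST_URI "@beginsWith /profile/update" \
    "id:10002,phase:1,pass,nolog,\
    ctl:ruleRemoveTargetById=942100;ARGS:bio"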
ModSecurity logs can be forwarded to a remote server using several methods, such as mlogc, piped logs, or a log shipper; each has pros and cons. My personal favorite is using Filebeat to forward the logs to Logstash to parse and enrich them, then push them to different Elasticsearch indices depending on the log type.
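A minimal Filebeat sketch of that setup (Filebeat 5.x syntax); the log paths, the log_type field, and the Logstash host are assumptions:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/apache2/access.log        # assumed log location
  fields:
    log_type: apache_access              # used later for index routing
- input_type: log
  paths:
    - /var/log/modsec_audit.log          # assumed audit log location
  fields:
    log_type: modsec_audit
output.logstash:
  hosts: ["logstash.example.com:5044"]   # hypothetical collector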
Once ModSecurity with the CRS is working, it will defend the website from most known attacks, but that is not enough to keep attackers away forever. It is imperative to watch the logs closely: often, by the time an attack is detected, the traces and evidence of exploitation are already in the logs. Log monitoring provides an early warning that helps prevent further compromise and detect new types of attacks.
OWASP TOP 10 A10:2017
Insufficient Logging & Monitoring
A few days ago the second release candidate of the OWASP Top 10 2017 was published, and it mentions ModSecurity and ELK as a solution to the A10 security risk. Log monitoring is an often overlooked activity; it is “easy” to implement, and it can be very cheap using ELK.
Closely monitoring the logs gives a lot of visibility into application activity, security issues, and operational problems before the service is affected. The logs are also an excellent source for building useful dashboards and metrics to better support and secure the website.
From the A10:2017:
“How Do I Prevent This?
• Establish effective monitoring and alerting such that suspicious activities are detected and responded within acceptable time periods.
• Establish or adopt an incident response and recovery plan, such as NIST 800-61 rev 2 or later.
There are commercial and open source application protection frameworks such as OWASP AppSensor, web application firewalls such as mod_security with the OWASP Core Rule Set, and log correlation software such as ELK with custom dashboards and alerting. Penetration testing and scans by DAST tools (such as OWASP ZAP) should always trigger alerts.”
From the WAF to the ELK
Starting with ModSecurity 2.9.1, the audit log supports JSON format. This format is very friendly for ELK, which can ingest and parse it with simple custom pipeline configurations. Elastic offers free versions of the ELK products with no limits on the volume of logs they can process. Usually, the hard part is getting the logs ingested and properly parsed so they show in Kibana and can feed dashboards. There is no alerting feature in the free version, but it is enough for live tailing, event identification, and investigation.
Elastic also provides a log shipper called Filebeat, which is lightweight and fast and supports TLS and compression, useful for collecting logs from multiple servers remotely.
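Enabling TLS and compression is a small addition to the Filebeat output section shown earlier; the CA certificate path is an assumption:

output.logstash:
  hosts: ["logstash.example.com:5044"]
  compression_level: 3                   # gzip level for the wire
  ssl.certificate_authorities: ["/etc/filebeat/ca.pem"]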
See the sample Logstash pipeline configuration for ingesting Apache access and error logs on the Elastic site. The audit log is much bigger, as it may contain the full request and response headers, the request and response bodies, as well as all rule messages and even the full rules (if part K is enabled).
Here is a sample pipeline configuration for the JSON-format audit log of ModSecurity 2.9.1 and later (a sketch follows the directives below). The ModSecurity configuration to get all transactions logged in JSON format is as follows:
SecAuditEngine On
SecAuditLogParts ABEFHIJZ
SecAuditLogFormat JSON
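A minimal Logstash pipeline sketch matching the Filebeat configuration above; the field names, hosts, and index names are assumptions:

input {
  beats {
    port => 5044
  }
}
filter {
  if [fields][log_type] == "modsec_audit" {
    # each JSON audit log entry arrives as one document in the message field
    json {
      source => "message"
    }
  }
}
output {
  if [fields][log_type] == "modsec_audit" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "modsec-audit-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "apache-logs-%{+YYYY.MM.dd}"
    }
  }
}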
Final notes
ModSecurity may produce a huge amount of log data. The access and error logs suffice for immediate analysis and reference. The error log already contains an extract of the ModSecurity events sufficient for most troubleshooting and incident investigations. Both access and error logs should be loaded live into ELK, preferably enriched to include at least basic client identification and the unique_id that correlates events in the different log files to a single transaction.
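On Apache, mod_unique_id (which ModSecurity requires) generates the id that ModSecurity records, so adding it to the access log makes the correlation straightforward; a sketch:

# Log the same unique id ModSecurity records for each transaction
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{UNIQUE_ID}e\"" common_with_id
CustomLog /var/log/apache2/access.log common_with_id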
The audit log contains enough information to replay the original requests and responses and the events they triggered. Having this log available is especially helpful for deeper investigations or detection of unknown anomalies. The audit log is better fed to ELK on demand during an investigation rather than live; alternatively, preprocess it in Logstash and feed only the most interesting events and parts of the logs to ELK live.
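If the audit log is shipped live anyway, a Logstash filter can keep only transactions that actually triggered rules; the audit_data.messages field reflects my reading of the 2.9.x JSON layout and should be verified against real entries:

filter {
  # drop audit entries that carry no rule messages (assumed field layout)
  if ![audit_data][messages] {
    drop { }
  }
}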