Many companies looking to establish a digital presence create a website that customers can access and interact with to learn more about the business. However, externally accessible IT assets can present security risks to these organisations. To help organisations better understand these risks, Alcorn Group has recently conducted research into detecting application fingerprinting against a common application framework, Drupal.

Drupal is an open source Content Management System (CMS) designed to support the development of web applications that are hosted on remote servers and deliver content through a browser interface. Many companies use Drupal to create customised web applications for user account creation and management, as well as content delivery. Drupal is the 4th most popular open source CMS in Australia and has over 9,000 customers. Overall, it is known for its stability and security; however, like many other Content Management Systems, specific versions contain known vulnerabilities that can be exploited.

How Attackers May Exploit Drupal

Attackers often employ automated tools that crawl the internet for sites exhibiting conditions they know how to exploit. By further fingerprinting a vulnerable web application, an attacker can determine which known exploits will work against it; for example, a request for Drupal's CHANGELOG.txt file can reveal the exact version in use, which can then be matched against publicly documented vulnerabilities. Depending on the content within the application and its functionality, a successful attack can expose user information, company documentation, staff information, or other personally identifiable information.
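
To make this concrete, the sketch below shows the kind of lightweight check an automated scanner might run to decide whether a site is worth fingerprinting further. It is illustrative only: it relies on the X-Generator response header and generator meta tag that default Drupal installations expose, real scanners check many more indicators, and the URL in the comment is a placeholder.

    import requests

    def looks_like_drupal(url):
        # Default Drupal installations often advertise themselves through an
        # "X-Generator" response header and a "Generator" meta tag in the markup.
        response = requests.get(url, timeout=10)
        header = response.headers.get("X-Generator", "")
        body = response.text.lower()
        return "drupal" in header.lower() or 'content="drupal' in body

    # Example usage with a placeholder URL:
    # print(looks_like_drupal("https://www.example.com/"))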

Why Is Logging and Monitoring Important?

The source of an attack can be difficult, or near impossible, to determine if no logging occurs. A lack of logging and monitoring also allows attacks to go unnoticed for extended periods of time, during which an attacker's presence may persist with access to the information held within the application, such as user data. There are also liability risks for companies that fail to effectively control and mitigate these risks. To combat this, logging and monitoring tools can be used to capture every request made to an application; however, this often produces large log files filled with information that may or may not be useful in identifying malicious behaviour.
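
To illustrate how that raw data can be narrowed down, the sketch below scans a web server access log in the common "combined" format and flags requests for paths that Drupal fingerprinting tools typically probe. The log file path and the list of paths are assumptions for illustration, not a complete detection rule.

    import re

    # Paths commonly requested when fingerprinting Drupal; illustrative only.
    DRUPAL_PROBE_PATHS = [
        "/CHANGELOG.txt",                # reveals the installed Drupal version
        "/core/CHANGELOG.txt",           # Drupal 8+ location of the same file
        "/user/login",                   # default Drupal login page
        "/sites/default/settings.php",   # default configuration file location
    ]

    # Minimal pattern for the Apache/Nginx "combined" access log format.
    LOG_LINE = re.compile(
        r'^(?P<client>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
        r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3})'
    )

    def flag_drupal_probes(logfile):
        # Yield (client, time, path, status) for requests matching known probe paths.
        with open(logfile) as fh:
            for line in fh:
                match = LOG_LINE.match(line)
                if not match:
                    continue
                path = match.group("path")
                if any(path.startswith(probe) for probe in DRUPAL_PROBE_PATHS):
                    yield (match.group("client"), match.group("time"),
                           path, match.group("status"))

    # "access.log" is a placeholder path for this example.
    for hit in flag_drupal_probes("access.log"):
        print(hit)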

What Can Be Done?

Given the difficulty of manually logging and monitoring all traffic to even a single web application, it is important to use a centralised system to ensure nothing is missed. A centralised logging server gathers logs in a single location for further assessment, and helps prevent logs from being altered or destroyed if the systems that produced them are compromised. Centralised logging typically improves the overall security posture of an organisation: it not only provides visibility of network activity, but also establishes what 'normal' traffic looks like, which in turn makes abnormalities easier to identify when they occur. Additionally, centralised logging increases the integrity of logged data by ensuring a complete history of logs exists, which is a crucial part of a mature Incident Response process.
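
As a minimal illustration of the idea, the sketch below uses Python's standard logging library to send an application's log events to a remote syslog collector instead of a local file. The host name "loghost" and UDP port 514 are placeholders; in practice, web server access logs are usually forwarded by a dedicated shipper such as Filebeat or rsyslog rather than by the application itself.

    import logging
    import logging.handlers

    # Ship log records to a central syslog collector rather than a local file.
    # "loghost" and port 514 are placeholder values for this example.
    handler = logging.handlers.SysLogHandler(address=("loghost", 514))
    handler.setFormatter(logging.Formatter("webapp: %(levelname)s %(message)s"))

    logger = logging.getLogger("webapp")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)

    # Each event now also exists on the central server, preserving a complete
    # history even if the web server's local copy is later tampered with.
    logger.info("GET /user/login from 203.0.113.10 returned 200")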

How to Detect

The use of centralised logging provides a single, convenient location where logs from numerous devices can be monitored and assessed. For example, the Elastic Stack, or ELK Stack, is a combination of three open source projects (Elasticsearch, Logstash, and Kibana) that ingest log data from almost any source and allow the user to search, visualise, and analyse it in real time. By monitoring incoming request data with the Elastic Stack, an organisation can find unusual activity on its network, diagnose errors, and recover from attacks.
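
As an example of the kind of search this enables, the sketch below queries Elasticsearch over its REST API for recent requests to CHANGELOG.txt, one of the paths commonly probed when fingerprinting Drupal. The Elasticsearch address, index pattern, and field names are assumptions and will depend on how logs are shipped and parsed in a given environment.

    import requests

    # Placeholder values: adjust the host, index pattern, and field names to
    # match how access logs are actually indexed in your environment.
    ES_URL = "http://localhost:9200/web-access-*/_search"

    query = {
        "size": 20,
        "query": {
            "bool": {
                "must": [
                    # Requests for Drupal's version-revealing changelog file.
                    {"match_phrase": {"url.path": "/CHANGELOG.txt"}},
                ],
                "filter": [
                    # Limit the search to the last 24 hours of log data.
                    {"range": {"@timestamp": {"gte": "now-24h"}}},
                ],
            }
        },
    }

    response = requests.post(ES_URL, json=query, timeout=10)
    response.raise_for_status()

    for hit in response.json()["hits"]["hits"]:
        # Each hit's _source holds the parsed log event; print it for review.
        print(hit["_source"])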

Overall, centralised logging to an external logging system, combined with after-hours alerting and monitoring, ensures that a robust and complete log history can be captured and maintained. A follow-up blog post will dive into the research conducted by Alcorn Group into identifying Drupal scans using the ELK Stack.

Contact Us