7 posts from January 2013
January 31, 2013
 The NY Times Response to Antivirus Proponents

Today’s front page NYT story is about how the Chinese went after the NYT for publishing disparaging stories about Chinese government officials.  The same reporter who wrote the NYT story on antivirus also wrote this story about Chinese hackers.  Note something interesting:

Out of the 45 different pieces of malware planted on the Times’ systems over the course of three months, just one of those programs was spotted by the Symantec antivirus software the Times used…

One out of 45 is about 2%, very much like the results of our antivirus study, which was referenced in this Forbes article bashing Symantec:

… analysis performed by the security firm Imperva along with the Technion Israeli Institute of Technology found that antivirus managed to detect only 5% of new threats, and that it took an average of four weeks for antivirus firms to identify a new piece of malicious code. “Although vendors try to update their detection mechanisms, the initial detection rate of new viruses is nearly zero. We believe that the majority of antivirus products on the market can’t keep up with the rate of virus propagation on the Internet,” their paper reads.

Here’s the message for security:  rebalance the security portfolio.  Use free antivirus and spend some money modernizing your security strategy.

I recently talked to a CISO who said he buys AV for legal reasons.  If someone is infected, which he knows will happen, he has a legal defense: 'I did what I could.'  But he also knows AV won't work.  If customers are buying AV to appease lawyers rather than to protect the enterprise, something isn't right.

Symantec’s response, which essentially blames the Times--their customer!--for the failure, gives some insight into what isn't right.  Their reaction reminded me of a key tenet of Clayton Christensen's The Innovator's Dilemma.  In the book, Christensen notes that big companies fail to innovate because customers often ask for better versions of current products when they really need a new technology.  Customers, according to Christensen, become a barrier to innovation.  Symantec's reaction, explaining that the Times would have been safer had it turned on more functionality, is one of the best illustrations of the innovator's dilemma you'll ever see.


January 30, 2013
 Red October: The Hunt For the Data

The recent discovery of the Red October malware has focused a lot on its effects, but inadequate attention has been given to its purpose.  The real goal of this campaign, which successfully evaded anti-virus and network intrusion detection systems for at least five years, was the data itself.

The malware contained many of the traditional functions associated with malware, such as key logging.  But focusing on these traditional capabilities misses a key point:  hijacking local data, such as files and credentials, was the means—but not the end.

Red October contained two interesting aspects:

  1. Attackers recycled stolen data from victims in the same sector to make their spear phishing emails less suspicious, incorporating context that would be familiar to the victim. 
  2. The malware had the ability to identify and access the important data centers.

The victims of this cyber-espionage operation belonged to the most protected and threat-aware sectors – government, energy, aerospace and military.  The potential bounty that can be extracted from such victims varies in both content and type: documents and presentations of meeting summaries and strategic plans, financial records from databases, CRM records, technical blueprints of weapons and infrastructure, sensitive email conversations and more.

Rocra, the name of the malware used in the Red October campaign, is APT by the book.  It has a specific module for each of the elements of an APT attack: reconnaissance, spreading, persistence maintenance, data extraction and data exfiltration.

Specifically, it has capabilities to access both unstructured data (files) as well as structured data (database records), or as the Kaspersky Labs Report noted, it would “Collect information about installed software, most notably Oracle DB…”

What do these modules do?  Let’s break down some of them:

  • The purpose of the “Recon” modules is to help the attacker find the right data.
  • The purpose of the “Exfiltration” modules is to deliver the data to the attacker.

Overall, Rocra’s modules are capable of reaching FTP servers, remote network shares and local disk drives, and copying files from these resources. Unlike the “Recon” data collection modules, which are invoked by the attacker on demand, the “Exfiltration” modules are designed to run repeatedly and bring back only new valuable data.

The infiltration of the victims’ networks and endpoints was conducted using malicious Excel and Word documents attached to carefully crafted email messages. The attached files recycled stolen data (and therefore context) from other victims in the same sector, making an otherwise suspicious email look legitimate. It is reasonable to assume that a victim’s identity, with its positive reputation and familiar appearance, was also used to send the email.

These targeted social engineering messages (“Spear Phishing”) bypassed “perimeter” security measures.

New software exploits will always be around to help circumvent “perimeter” security measures. DLP solutions were also probably defeated in this attack, since Rocra implements a proprietary data transmission protocol with the C&C that changes both file content and file size. However, data access patterns are difficult to change. Automation, among other attributes of data access, gives the attacker speed and volume, and cannot easily be disguised.

Was it possible to detect and prevent the data theft?  Yes—had the victims monitored their data more closely rather than just monitoring the network perimeter and endpoints.
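The idea of monitoring data access rather than the perimeter can be made concrete. The sketch below is a minimal illustration, not a description of any actual product: it flags a client whose file-access rate looks automated rather than human. The thresholds and the class name are assumptions chosen for the example.

```python
from collections import deque

class AccessRateMonitor:
    """Flag clients whose file-access rate looks automated rather than human.

    Thresholds are illustrative assumptions, not values from the Red October
    report: a human analyst opens a handful of documents per minute, while an
    automated exfiltration module sweeps hundreds.
    """

    def __init__(self, max_accesses=20, window_seconds=60.0):
        self.max_accesses = max_accesses
        self.window = window_seconds
        self.events = {}  # client id -> deque of access timestamps

    def record_access(self, client, timestamp):
        q = self.events.setdefault(client, deque())
        q.append(timestamp)
        # Drop events that fell out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        # True means the access rate exceeds the human-plausible threshold.
        return len(q) > self.max_accesses

monitor = AccessRateMonitor()
# A burst of 100 file reads in two seconds trips the detector partway through.
alerts = [monitor.record_access("host-42", t * 0.02) for t in range(100)]
```

The point is that this kind of rate-and-volume signal survives even when the exfiltration protocol disguises file content and size.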


January 29, 2013
 Imperva and Cisco Collaboration

This week in London is Cisco Live.  We are proud to continue our collaboration with Cisco to significantly enhance the deployment and management of the SecureSphere Web Application Firewall (WAF) in Cisco Nexus 1000V based virtual switching environments.  We think this helps security teams tremendously.

How?  SecureSphere capitalizes on Cisco’s feature-rich virtual switching and software-defined networking technologies built into the Nexus 1000V – a multi-hypervisor-ready virtual switch – and is designed to simplify deployment, maximize uptime, and support live virtual machine migration in virtualized data centers as well as multi-tenant private and provider clouds.

For securing virtualized and cloud environments, customers need a defense-in-depth security architecture built on multiple virtual firewalls, including a zone-based firewall, a tenant-edge firewall and a web application firewall (WAF).  The Imperva SecureSphere WAF, in conjunction with Cisco’s Virtual Security Gateway (VSG) and ASA 1000V cloud firewalls, can offer customers a comprehensive multi-tier virtual firewall solution.  Furthermore, the Cisco Virtual Service Datapath (vPath) technology embedded in the Cisco Nexus 1000V enables insertion and deployment of multiple virtual services on a per-tenant basis.

At Cisco Live in London this week, Imperva is demonstrating integration of SecureSphere with the Cisco Nexus 1000V.  What does this mean to customers?  It makes it much easier for organizations to provision Web application security, enabling customers to rapidly on-board and protect new applications and tenants. With support for Cisco’s vPath technology, customers can transparently steer a tenant’s virtualized application traffic to SecureSphere through pre-defined policies and also enable service chaining across multiple virtual network services.  SecureSphere’s vPath awareness would ensure that security policies remain in place even when virtual machines move.  Together, the joint solution simplifies deployment of Web application security in virtualized datacenters.

We believe that more and more companies are virtualizing their data centers. Virtualized environments, just like their non-virtualized counterparts, must be highly secure and available.  This new joint solution, which follows on the heels of our Cisco Nexus 1010/1110 announcement in September, allows Imperva to extend its best-of-breed security, manageability, and flexible deployment options to Cisco Nexus 1000V virtual switching environments.

The SecureSphere WAF with Cisco Nexus 1000V vPath support helps customers to:

  • Steer Web traffic through the SecureSphere WAF, even in complex and heterogeneous datacenters with multiple virtual services.
  • Migrate SecureSphere WAF virtual machines to new host machines without impacting application sessions.
  • Maximize application uptime through fail open Web application firewall configuration.

Imperva is hosting a technology preview of the Cisco Nexus interoperability at Cisco Live London starting January 28, 2013. The demonstration showcases the Imperva SecureSphere WAF hosted on the Cisco Nexus 1110 and the SecureSphere WAF interoperating with Cisco Nexus 1000V. Please visit us in booth E1.

You can see Cisco's blog on our collaboration here.


January 25, 2013
 Social Engineering Hits the Classifieds

Interesting ad:

Do you have an open mind, a sense of adventure and the desire to make some serious cash? We're a group that specializes in extracting key pieces of information from business leaders by seducing them with beautiful ladies such as yourself. Each assignment pays between $5K and $20K depending on the value of the information and how long it takes to obtain it. We also reimburse for travel expenses, if any. We have immediate needs for beautiful, sophisticated ladies who will do anything it takes to find out what we need to know! Please send photos and tell us something about yourself.

It isn't just malware that targets executives.



January 11, 2013
 Ruby Gone Wild



It has recently been discovered that the Ruby on Rails (RoR) environment suffers from certain parser vulnerabilities. The CVE IDs associated with these vulnerabilities are CVE-2013-0155 and CVE-2013-0156.

Both issues stem from RoR parser vulnerabilities. The handling of complex object representations, such as XML and JSON serialization formats, fails to address some scenarios and raises several security issues.

What are the vulnerabilities mentioned?

In essence (CVE-2013-0156), the RoR parser can be told by the attacker to automatically instantiate complex objects of the YAML and Symbol types. This complex object instantiation may involve evaluating arbitrary, attacker-controlled Ruby code. Some sources report they were able to abuse this vulnerability to run arbitrary operating system (OS) commands.  This means the attacker can basically own the server completely.

This vulnerability is generic in the sense that it is not related to a specific application running on top of RoR, but to the RoR infrastructure itself.

An example can be found on Rapid7’s blog.
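To make the CVE-2013-0156 issue concrete, the sketch below shows the general shape of the widely published XML probe (a parameter whose "type" attribute tells the Rails parser to hand the body to the YAML loader) together with a naive pre-parse check. The sample payload is a harmless illustration, not a working exploit, and the class name in it is made up; a real WAF rule would have to be far more robust about encodings and attribute ordering.

```python
import re

# Shape of the widely published CVE-2013-0156 probe: an XML parameter whose
# "type" attribute instructs the Rails parser to deserialize the body as YAML.
# "SomeClass" is a placeholder; this string is an illustration, not an exploit.
SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<exploit type="yaml">--- !ruby/object:SomeClass {}</exploit>"""

# Naive pre-parse filter: reject any XML body that asks Rails to
# instantiate YAML or Symbol objects.
DANGEROUS_TYPE = re.compile(r'type\s*=\s*["\'](yaml|symbol)["\']', re.IGNORECASE)

def looks_malicious(xml_body: str) -> bool:
    """Return True when the body requests YAML or Symbol instantiation."""
    return bool(DANGEROUS_TYPE.search(xml_body))
```

A benign parameter such as `<a type="string">x</a>` passes this check, while the YAML-typed sample does not.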


The second vulnerability (CVE-2013-0155): using JSON, an attacker can pass an array instead of the expected atomic type, such as an integer or a string.

Abusing this vulnerability, the attacker can smuggle a NULL into one of the array elements and bypass an application-specific “IS NULL” check designed for atomic types, because the array contains other values besides the null.

Note that exploitation of this vulnerability is very specific to the context of the application itself.

An example can be found on the Ruby On Rails Google Group here.
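The bypass logic can be sketched in a few lines. This is a simplified model of the check described above, not Rails code: the guard names and the "token" parameter are assumptions for illustration. The naive guard only asks whether the value is missing, so an array smuggling a null slips through; a hardened guard also enforces the expected atomic type.

```python
import json

def naive_lookup_guard(token):
    """Check written with atomic values in mind: reject only a missing token.
    A list such as [None] is not None, so it passes."""
    return token is not None

def hardened_lookup_guard(token):
    """Also require the expected atomic type (a non-empty string here),
    so arrays smuggling NULLs are rejected outright."""
    return isinstance(token, str) and token != ""

# Attacker sends an array where the application expects a string token.
attacker_params = json.loads('{"token": [null]}')
token = attacker_params["token"]  # parsed as [None]

naive_ok = naive_lookup_guard(token)        # True: the bypass succeeds
hardened_ok = hardened_lookup_guard(token)  # False: request rejected
```

Downstream, the array that passed the naive guard is what ends up producing a query that matches NULL values.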


How to stay safe
To stay on the safe side of the lake, you should always harden your application by removing any unused capabilities in order to lower the risk.

A few steps to consider:

  1. Profile your application; only allow actions that are explicitly required. For example, if your application does not use XML/JSON, simply don’t allow that traffic. A WAF should be able to block access on “Unauthorized Request Content Type” with respect to the application profile.
  2. Block the creation of YAML or Symbol types in web messages. A simple WAF rule would stop that activity.
  3. For CVE-2013-0155, simply disallow null values in the specific vulnerable array parameters.
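Step 1 above amounts to a content-type whitelist driven by the application profile. Here is a minimal sketch of that idea; the profile contents and function names are assumptions for illustration, since a real WAF learns or is configured with this per application.

```python
# Illustrative application profile: only form encodings are allowed.
# XML and JSON are deliberately absent, so the vulnerable Rails
# parsers are never reached for this application.
APP_PROFILE = {
    "allowed_content_types": {
        "application/x-www-form-urlencoded",
        "multipart/form-data",
    }
}

def allow_request(content_type: str, profile=APP_PROFILE) -> bool:
    """Return True only for content types the profile explicitly permits."""
    # Strip parameters such as "; charset=utf-8" before matching.
    base = content_type.split(";")[0].strip().lower()
    return base in profile["allowed_content_types"]
```

With this in place, a request carrying `application/xml` or `application/json` is rejected before any parsing happens, regardless of the payload inside it.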


Afterthoughts
The discovery of the Ruby on Rails vulnerabilities proves the importance of white-list security in the Web application world.  These issues show the importance of only allowing what is specifically permitted and then specifically precluding known bad input values.


January 10, 2013
 Still Don't Like Our AV Study? A Response to The Critics

Imperva CTO Amichai Shulman:

Let me start by saying that I’m not a big fan of back and forth argumentative discussions taking place in the blogosphere. However, the religious rage that erupted over the past couple of weeks with respect to our paper, Assessing the Effectiveness of Antivirus Solutions, compels me to provide some response.

Trying to avoid dragging the reader through a lengthy text full of complex arguments I’ll try to take this backwards (kind of the “Backwards Episode” from Seinfeld). The bottom line is in fact that many people have questioned the core aspects of our research: choice of malware samples and method of sample evaluation. However, even among those who have questioned our methodology, there seems to be a consensus around our conclusions – that standard AV solutions have reached the point of diminishing returns and organizations must shift their investments towards other solutions that protect organizations from the effects of infection. I have to assume that if our methodology leads us in a logical way to conclusions that are so widely accepted, it can’t be all that wrong.

Criticism #1:  Sampling
The first part of the criticism targeted our choice of malware samples. Let me again put forward the bottom line – our critics basically claim that our results are so different from theirs because the method we used to collect the samples is incorrect. Let me put this in different words – if attackers choose malware the way AV vendors instruct them to, detection rates become blissful. If attackers choose malware in a different manner, you’re toast.

Poor sampling would be a fair argument to make if we had used some mysterious technique for collecting malware that can only be applied by high-end criminal gangs. That is, of course, not the case. We used Google searches with terms that get us close to sporadic malware repositories in publicly accessible web pages. We salted that with some links we obtained through sporadic searches in soft-core hacker forums.  We did focus on Russian-language forums, but I do not believe that this is controversial.  Meanwhile, the “cream of the crop” was supplied by some links we took from traffic obtained through anonymous proxies. All this collection work was done by unbiased people – those who are neither in the business of hacking nor employed by antivirus companies.

Moreover, if we inspect the claim made by antivirus vendors with respect to what the “right” set of malware samples is, it actually supports our finding. They claim that with the sample size they are dealing with – 100K per day – they achieve higher than 90% detection (98% according to one report). That is, they miss 2,000 malware samples out of 100K. How hard do you think it is for an attacker (and I intentionally did not add the term “skilled”) to get his hands on a couple of those 2,000 undetected samples? I should add that all the samples that we included in our statistics—out of the samples that we’ve collected and tested—are those that were eventually detected by a large enough sample of AV products, and that none of them was a brand new malicious code – rather they were all variations and instances of existing malware.

Criticism #2:  Using VirusTotal
The second part of the criticism touches on our use of VirusTotal (VT) as a tool for conducting an experiment related to AV effectiveness. We recognize the limitations of using VT, and described those limitations in our paper.  However, bottom line first – we are not the first to publish comparative studies of AV efficiency or to publish analysis of AV efficiency based on VT. We drew explicit conclusions that are put not in technical terms but in plain business terms – organizations should start shifting their budgets to other solutions for the malware infection problem.

The first and foremost statement made by critics is “you should not have used VT because they say so.” Again, here’s the bottom line – we used VT in a prudent and polite way. We did not use undocumented features, we did not subvert APIs and we did not feed it data with the purpose of subverting the results of AV vendor decisions (which is an interesting experiment on its own). So basically, our wrongdoing with respect to VT is the way we interpreted the results and the conclusions we drew from them – going against this has no term other than “thought police.” This is of course before mentioning the fact that various recent reports and publications have used VT for the same purpose (including Brian Krebs). I know that VT does not claim or purport to be an anti-malware detection tool and that VT is not intended to be used as an AV replacement. However, they cannot claim to be only a collection tool for the AV industry whose per-sample results are completely meaningless. I must add that having an upload / get-results API further disproves that claim. I deeply regret being dragged into this debate with VT, since I truly value their role in the anti-malware community and have the utmost respect for their contribution to improvements in AV detection techniques and malware research.

One of the most adamant arguments against the validity of VT as a measurement for effectiveness is that it uses the command-line version of AV products and that configuration may not be ideal. I’d like to quote:

  • VirusTotal uses command-line versions: that also affects execution context, which may mean that a product fails to detect something it would detect in a more realistic context.
  • It uses the parameters that AV vendors indicate: if you think of this as a (pseudo)test, then consider that you’re testing vendor philosophy in terms of default configurations, not objective performance.
  • Some products are targeted for the gateway: gateway products are likely to be configured according to very different presumptions to those that govern desktop product configuration.
  • Some of the heuristic parameters employed are very sensitive, not to mention paranoid.

Regarding the first point, I personally do appreciate the potential difference between a command-line version of an AV tool and other deployed versions. However, in terms of signatures and reputation heuristics, I don’t really get it. I’d love to see AV vendors explain that difference in detail, in particular pointing out which types of malware are not detected by their command-line version but are detected by their other versions, and why. I am certainly willing to accept that our results would have been somewhat different had we tested an actually installed version of the product rather than the command-line version. However, I do think that they are a good approximation. If AV vendors claim that this is far from true, I’d really like to see the figures. Is the command-line version 10%, 50% or 90% less effective than the product?

I don’t see the point in the second argument. Are they really claiming that VT configuration is not good because it is the recommended vendor configuration?

As for the argument about sensitive, paranoid heuristics, this is really puzzling. According to it, we should have experienced a high ratio of false positives, rather than the high ratio of false negatives that we observed in practice.

Quoting again:

VirusTotal is self-described as a TOOL, not a SOLUTION: it’s a highly collaborative enterprise, allowing the industry and users to help each other. As with any other tool (especially other public multi-scanner sites), it’s better suited to some contexts than others. It can be used for useful research or can be misused for purposes for which it was never intended, and the reader must have a minimum of knowledge and understanding to interpret the results correctly. With tools that are less impartial in origin, and/or less comprehensively documented, the risk of misunderstanding and misuse is even greater.

Again, the writer agrees that VT is indeed a tool that can be used for research as long as results are correctly interpreted. Yes, it is possible that we’ve misinterpreted the results. If that is your opinion then argue with our interpretation of the results. Unfortunately most critics chose not to do so, but rather argued that we used the wrong tools.

I could continue; however, I think that I’ve addressed the main criticism against our work and shown that most of it is of immaterial nature. I would like to see a livelier debate around our interpretation of the results and the conclusion – AV solutions attempting to prevent infection have reached a point of diminishing returns and are thus providing attackers with a large enough window of opportunity, time-wise and device-wise, to penetrate organizations and remain undetected for extremely long periods. It does not mean that we have to throw AV solutions away; it just means that we need to start shifting some of the money towards solutions that detect and prevent the effects of infection.


January 08, 2013
 How American Banks Were DDoSed by Hacktivists

Great blog and research by Incapsula.

A nice toxic mix of PHP and backdoors.


