The European Network and Information Security Agency (ENISA) warned today that antivirus programs prevent only 30% of attacks and instructed companies to be more diligent about protecting themselves. “Antivirus only works in 30% of cases to prevent cyber attacks, so it is necessary for security and technology to go far beyond what has been done so far in the EU,” explained ENISA’s executive director. (Ironically, this hasn’t been covered by the English media yet. The article above was sent to me by a Spanish colleague.)
By contrast, in the US, rather than warning about the shortcomings of antivirus, the FBI warned of a "new" virus attack. Specifically, on 17 September, they cited a "new trend in which cyber criminal actors are using spam and phishing e-mails, keystroke loggers, and Remote Access Trojans (RAT) to compromise financial institution networks and obtain employee login credentials." Of course, a different version of the same old attack is not really new.
Is ENISA’s directive part of a growing trend away from antivirus? It seems so. This snippet from a hacker forum highlights the fundamental problem with antivirus and the ease of evasion:
When Flame was made public, Mikko Hyppönen’s famous mea culpa was quoted repeatedly across the internet: “Flame was a failure for the antivirus industry. We really should have been able to do better. But we didn’t. We were out of our league, in our own game.”

In just the last two weeks, two articles have appeared questioning the efficacy of antivirus. First, MIT’s Technology Review wrote, “The Antivirus Era Is Over.”
In his blog, Neil MacDonald asks, “Is Antivirus Obsolete?”
Both articles argue that you still need antivirus, but that by itself it is not enough. Fair enough: you want antivirus because its signatures protect you against the high volume of known malware attacks. But what about the hardcore hackers who write new stuff daily? You need to take them into account when securing your assets. Hardcore hackers don’t reuse exploits; they find vulnerabilities and write their own payloads. Consequently, they stay ahead of the antivirus industry until the hack is discovered and patched.
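The weakness is easy to demonstrate. A minimal sketch (with a made-up signature database, not any vendor's actual engine) of hash-based signature matching shows why a payload nobody has seen before sails through:

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of known malware samples.
KNOWN_MALWARE_HASHES = {
    hashlib.sha256(b"known-malware-payload").hexdigest(),
}

def av_scan(payload: bytes) -> bool:
    """Return True if the payload matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_MALWARE_HASHES

# The known sample is caught...
print(av_scan(b"known-malware-payload"))   # True
# ...but changing a single byte yields a hash the database has
# never seen, so a freshly written payload scans clean.
print(av_scan(b"known-malware-payloae"))   # False
```

Real engines use smarter heuristics than a raw hash table, but the underlying race is the same: the defender's list always trails the attacker's newest sample.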
Some recommend expanding endpoint protection. Not enough. With an industry built on evasion, modern data protection policies should focus on spotting aberrant behavior and whitelisting. A good policy for an organization is to monitor for unknown and unwanted behavior while auditing all activity for a period of time, then review the audits against business events. Banks, for example, check the audit trails of users while those users are on vacation, since there shouldn’t be any activity. In this way, you can ensure complete control over your data access.
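The bank example can be sketched in a few lines. This is a toy illustration (the users, dates, and actions are invented) of cross-referencing an audit trail against a business event, here a vacation calendar:

```python
from datetime import date

# Hypothetical vacation calendar: user -> (start, end) of leave.
vacations = {"alice": (date(2012, 9, 10), date(2012, 9, 21))}

# Hypothetical audit trail of data-access events.
audit_trail = [
    {"user": "alice", "day": date(2012, 9, 14), "action": "SELECT customer_accounts"},
    {"user": "bob",   "day": date(2012, 9, 14), "action": "SELECT customer_accounts"},
]

def flag_vacation_activity(trail, vacations):
    """Flag events attributed to users who were on vacation at the time."""
    flagged = []
    for event in trail:
        span = vacations.get(event["user"])
        if span and span[0] <= event["day"] <= span[1]:
            flagged.append(event)
    return flagged

suspicious = flag_vacation_activity(audit_trail, vacations)
# alice's credentials were used while she was away - worth investigating.
```

No signature is involved: the alert comes from behavior that contradicts business context, which is exactly what signature-based tools cannot express.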
Imperva's Tal Be'ery has an interesting and positive perspective on Google's acquisition of VirusTotal which was published in SecurityWeek. Here's the thesis:
In buying VirusTotal, Google provided itself with access to a community-based reputation feed. They will probably leverage this valuable data in order to provide Chrome’s and other Google services’ users with better malware protection.

The acquisition of VirusTotal by Google signals that community-based reputation feeds are a key element in a modern protection suite.
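At its core, a community reputation feed is a shared lookup table keyed on file hashes, with verdicts aggregated from many scanners. A minimal sketch with toy data (this is not VirusTotal's actual API, just the idea behind it):

```python
import hashlib

# Toy reputation feed: file hash -> number of scanners flagging it malicious.
reputation_feed = {
    hashlib.sha256(b"evil.exe contents").hexdigest(): 41,
    hashlib.sha256(b"benign.doc contents").hexdigest(): 0,
}

def verdict(contents: bytes, threshold: int = 3) -> str:
    """Classify a file by the community's aggregate opinion of its hash."""
    digest = hashlib.sha256(contents).hexdigest()
    detections = reputation_feed.get(digest)
    if detections is None:
        return "unknown"   # the community has never seen this file
    return "malicious" if detections >= threshold else "clean"
```

The value of such a feed grows with the community submitting samples, which is precisely what makes the acquisition interesting for Google's user-facing services.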
Anyone concerned about DDoS (which is just about everyone) should download our report. It is available here (no registration required).
After seeing the Dark Reading article, What the IPS Saw, the question that came up was, 'What Didn't the IPS See?'
Why? The billions upon billions of data points they collected contained one glaring gap: the Web application attack vector (SQL Injection and Cross Site Scripting to name a few). Since many security teams falsely believe that IPS can block or mitigate application attacks and IPS vendors increasingly claim application security features, understanding the gap is important.
Fact: Hackers love web applications and databases
In 2011, according to Verizon’s Data Breach Report (page 39), 83% of all data breached was from databases and 80% involved web application breaches. In the case of hacktivism, our report, the Anatomy of an Anonymous attack, highlighted the paramount importance application attacks play in a hacker’s arsenal:
Analyzing hacker forum data (over a period of a year) helps us understand what interests “private” hackers. Again, web attacks are the vector of choice:
The largest single attack vector was, in fact, SQL Injection.

Bear in mind that the zero-day and shell code percentages also include XSS attempts to inject malicious code, which means that even if the payload is shell code, the injection vector is Web-based XSS.

Brute force also falls under the Web application attack vector, as it mainly consists of attempts to break web logins by running dictionary attacks against them, via (surprise!) the Web.
Why don’t I see this information in IPS reports?
The answer is quite simple: IPS reports can't show this information because IPS can't see it.
IPS technology is designed to follow patterns: it either matches signatures against traffic or tracks the structure of a flow. For a known exploit, the system will have an updated (or so you hope) dictionary of signatures; when a signature matches, the session is dropped. In other cases, IPS works on traffic-volume thresholds, or on what is “known to be good practice.”
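Both modes fit in a few lines. A minimal sketch (the signatures and threshold are invented for illustration) of the two detection paths an IPS has:

```python
import re

# Hypothetical IPS rule set: byte-pattern signatures plus a rate threshold.
SIGNATURES = [
    re.compile(rb"\x90{16,}"),   # long NOP sled, a classic exploit artifact
    re.compile(rb"cmd\.exe"),    # payload string from a known exploit
]
RATE_THRESHOLD = 1000            # packets/sec deemed "good practice"

def inspect(packet: bytes, packets_per_sec: int) -> str:
    """Drop on a signature match or a traffic threshold; otherwise pass."""
    if any(sig.search(packet) for sig in SIGNATURES):
        return "drop: signature match"
    if packets_per_sec > RATE_THRESHOLD:
        return "drop: threshold exceeded"
    return "pass"
```

Note what is missing: nothing here understands what a given request *means* to the application behind it, which is the gap the next section exploits.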
Tautology vs. Signatures
Let’s single out SQL Injection to make an important distinction. A SQL Injection exploits a true statement, meaning a statement that the SQL interpreter will analyze and say, “Yes, this is valid, I will now execute it.” Unfortunately for the world of IPS, there is no limit to the number of true statements in the world. For example:
- date(today) != char(57)
- JimmyPage > Life
- The list goes on...
The point here is very simple: you can’t write a signature for an unlimited number of terms, and you can’t predict the behavior of an application by analyzing traffic as a pattern, since every application is written differently and every developer has his or her own quirks.
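To make the evasion concrete, here is a sketch of a naive IPS-style signature for the textbook tautology, checked against a few of the infinitely many equivalents (the regex is illustrative, not any vendor's actual rule):

```python
import re

# A naive signature for the canonical SQL injection tautology "' OR 1=1".
SQLI_SIGNATURE = re.compile(r"'\s*OR\s+1\s*=\s*1", re.IGNORECASE)

def ips_blocks(param: str) -> bool:
    """Return True if the request parameter matches the signature."""
    return bool(SQLI_SIGNATURE.search(param))

print(ips_blocks("' OR 1=1 --"))                      # True: the textbook case
print(ips_blocks("' OR 'a'='a' --"))                  # False: equally true, unsignatured
print(ips_blocks("' OR date('now') != char(57) --"))  # False: one of infinitely many
```

Every bypass is just another statement the SQL interpreter evaluates as true; the signature author is always one tautology behind.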
You don’t use an IPS for Web application security, just as you don’t install antivirus to protect yourself against spam.
Last week, we blogged quite a bit on the Apple UDID-FBI news.
We were right that in fact the IDs were not a hoax, as some had purported. However, the real lesson from the breach isn't the trustworthiness of hacktivists or the FBI. Rather, this episode highlights the intersection and interdependence of privacy and security. I don't find myself agreeing with the ACLU often, but in this case, their principal technologist Chris Soghoian, had it exactly right:
“What this highlights is that this identifier that exists on your phone is not as private as you might think,” he says. “There are probably hundreds or thousands of companies that have databases of UDIDs.”
Whether the FBI or other government agencies track UDIDs the same way ad networks and app makers do is still up for speculation. But thanks to a model where hundreds of firms can pass around users’ data without restrictions, it would have been a surprise if the government was left out of the party.
Sunday's New York Times op-ed warns us about the growing emphasis on offensive over defensive cyber security capabilities.
This article actually got it wrong. For years, emphasis has been put on developing attack capabilities rather than defensive ones. It's just that recent headlines about Flame and Stuxnet would have us believe otherwise.
The reality? In the US, much like in Israel (and probably other countries), there is a large disparity between the resources invested in developing offensive capabilities (a lot) and those invested in defensive measures (which have only recently increased). This is inversely proportional to common sense and potential impact. While offensive capabilities are very important in terms of gathering intelligence, they are relatively ineffective in terms of a large-scale strike on underdeveloped countries like Syria, Libya, and even Iran, not to mention terrorist organizations.