13 posts from May 2008
May 30, 2008
 When The Real World Meets The Virtual


Yesterday, I stepped into Amichai's shoes, delivering the Application Defense Center's "top security trends" webinar with Carahsoft, one of Imperva's partners. (The original ADC recording is available here). One of the top trends (#7) is online game fraud.

Online game sites boast millions of loyal users. Surprisingly (at least to me), there is real money to be made in these virtual worlds. For example, people like Ailin Graef (aka Anshe Chung) earned their real money honestly from the virtual world (she was the first Second Life real-world millionaire).

As more actual money flows into this virtual world, the potential appeal for mischievous individuals increases. One of the threat vectors that the ADC mentioned was "Farmers":

Gamers employing cheap labor or software bots to focus on virtual money-generating activities (exploiting application bugs and loopholes). Virtual money is later converted into real currency.

In real life (first or second), the reality is beyond all imagination. The Register reported yesterday about a man accused of siphoning $50,000 in micro-payments from Schwab and E-Trade.

Michael Largent of Plumas Lake, California, is accused of using an automated script to open 58,000 online brokerage accounts that were linked to a handful of online bank accounts.

Largent then allegedly prompted E-trade and Schwab to send huge numbers of deposits between two cents and $1 to the accounts. Online brokerages frequently send such "micro-deposits" to verify that account details of new customers are correct. Largent's script had a penchant for cartoon characters: Accounts bore the name of Hank Hill and Rusty Shackelford, of Fox TV's King of the Hill and Marvel Comics' Johnny Blaze.

(credit to Kevin Poulsen of Wired News)

Imperva's application data security products can monitor these kinds of activity not only to protect the virtual world, but the real world, too.
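The fraud above hinges on volume: a legitimate customer receives a handful of verification deposits, while the attacker's handful of bank accounts received tens of thousands. A minimal sketch of how monitoring might flag that pattern, aggregating micro-deposit counts per destination account (the thresholds and account names are illustrative assumptions, not how any particular product works):

```python
from collections import defaultdict

# Illustrative thresholds -- a real monitoring product would tune these.
MAX_VERIFICATION_DEPOSITS = 5   # legitimate users verify only a few accounts
MICRO_DEPOSIT_CEILING = 1.00    # verification deposits run about $0.02 to $1

def flag_micro_deposit_abuse(transactions):
    """Count micro-deposits per destination account and flag outliers.

    `transactions` is an iterable of (dest_account, amount) tuples.
    Returns the set of destination accounts receiving suspiciously many
    verification-sized deposits.
    """
    counts = defaultdict(int)
    for dest_account, amount in transactions:
        if 0 < amount <= MICRO_DEPOSIT_CEILING:
            counts[dest_account] += 1
    return {acct for acct, n in counts.items() if n > MAX_VERIFICATION_DEPOSITS}

txns = [("bank-1", 0.04)] * 20 + [("bank-2", 0.50), ("bank-3", 250.00)]
print(flag_micro_deposit_abuse(txns))  # {'bank-1'}
```

Twenty micro-deposits to one account is flagged; one verification deposit and one ordinary transfer are not.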


 Turnabout is fair play?

We've been following incidents in which sensitive information on websites is accessible without adequate protection, leaving the data vulnerable to threats such as Google Hacking. Yet the following incident differs somewhat from the ones we usually read about: a recent exposure of an underground information-trading network reveals that hackers themselves are not immune to data leakage. A server containing stolen private information had none of the common safety controls (encryption, access controls) implemented on it. In effect, anyone with access to the server could freely obtain 1.4 gigabytes of sensitive data!

And on a similar note, it seems it takes a hacker to know a hacker, and these crooks are very keen on protecting their trade - a recent article described a "EULA" attached to virus-spreading software suites. Basically, the virus writer asserts copyright over the malware (no redistribution and the like). If the license is violated, the hacker-seller will snitch on the hacker-buyer to the appropriate authorities. Now that's what's called honor among thieves!


May 29, 2008
 Reviewing the Code Review

Remember that old PCI DSS 6.6 dilemma - the one in which everyone and every company had a say - whether to perform a code review or deploy a Web Application Firewall (WAF)? Not only do I root for the WAF, I'd go further and say that the dilemma is a false one, for several reasons, the first of which is that the WAF provides real-time defense at virtually zero response time.

The PCI Security Standards Council tried to calm things down by publishing a detailed clarification of this requirement. To summarize, the code review option may be satisfied by applying at least one of four specified measures to obtain a minimum level of security. One of these measures is stated as "Proper use of automated web application security vulnerability assessment (scanning) tools".

And yet, I still have a problem with the code review option. Security holes in source code are not always found, even when using a scanning tool, so the review does not guarantee the minimal defense specified in Requirement 6. Take, for example, the recent findings of XSS flaws in multiple websites. These aren't just passive websites, but e-commerce sites engaging in trade, where credit card details are transferred, and which must comply with PCI. To further underscore the shortcoming of the code review option, we read that these sites earned McAfee's "Hacker Safe" logo. This basically means that McAfee, a well-established security assessment company, performed a code review (scanning) and certified the sites as PCI compliant. I certainly do not doubt the professionalism of McAfee - bugs are an inherent part of the development cycle, and it is impossible to create a completely secure environment.

It is precisely for this reason that a WAF should be deployed: to prevent attackers from exploiting the application's vulnerabilities.
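To make the XSS point concrete, here is a deliberately naive sketch of the kind of request-level filtering a WAF applies in front of an application whose code has not been fixed yet. The signature list is illustrative only - a real WAF combines signatures with a learned positive security model rather than a handful of regexes:

```python
import re

# Naive XSS signature set -- illustrative, not a production rule set.
XSS_SIGNATURES = [
    re.compile(r"<\s*script", re.IGNORECASE),   # injected script tags
    re.compile(r"javascript\s*:", re.IGNORECASE),  # javascript: URLs
    re.compile(r"on\w+\s*=", re.IGNORECASE),    # inline event handlers
]

def is_request_blocked(param_value: str) -> bool:
    """Return True if a request parameter matches a known XSS signature."""
    return any(sig.search(param_value) for sig in XSS_SIGNATURES)

print(is_request_blocked("laptops"))                      # False
print(is_request_blocked('"><script>alert(1)</script>'))  # True
```

The benign search term passes through; the injected script tag is dropped at the perimeter, whether or not the vulnerable page behind it was ever reviewed.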


May 28, 2008
 Giving SQL Injection the Respect it Deserves (Or: How Long Does It Take)
During our recent Customer Advisory Board, one of the members (using a WAF in production and now in the process of adding Database Security Gateways) mentioned that his organization had looked into a Security Development Life Cycle (SDLC) as the first line of defense for data security. After realizing that fixing code could take many months, they turned to deploying a WAF first, implementing the SDLC in parallel.

I was intrigued by this comment and decided to find some statistics that could spark a healthy discussion. Being one of Murphy's fans (the man and the pub establishment), I know that "there's always one more bug", but still, I decided to give it a try.

One of the first pages that Google ranked for me was Microsoft's Secure Development Life Cycle blog. In one of the articles I read, Michael was discussing the recent SQL injection attack, and his advice was clear:

I think the first lesson from this attack is this:

If you have a Web server (doesn't matter what type), and it's hooked up to a database (doesn't matter what type) you need to go in and review your code that performs the database work.

Great. Code review is not too tough; even someone like me can handle it. What's next?

So now that you've determined the database access code, now what? The SDL is very specific about what to do here. There are three requirements - requirements, not recommendations - which means you must apply the following coding requirements and defenses:

  • Use SQL Parameterized Queries
  • Use Stored Procedures
  • Use SQL Execute-only Permission
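The first of those three requirements is the decisive one against the attack under discussion. A minimal sketch of the difference, using Python's built-in sqlite3 driver and an in-memory table (the table, names and payload are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-1111')")

def lookup_unsafe(name):
    # String concatenation: attacker-supplied quotes change the query logic.
    return conn.execute(
        "SELECT card FROM users WHERE name = '" + name + "'").fetchall()

def lookup_safe(name):
    # Parameterized query: the driver binds `name` as data, never as SQL.
    return conn.execute(
        "SELECT card FROM users WHERE name = ?", (name,)).fetchall()

payload = "nobody' OR '1'='1"
print(lookup_unsafe(payload))  # [('4111-1111',)] -- leaks every card on file
print(lookup_safe(payload))    # [] -- no user is literally named the payload
```

The concatenated version turns the payload into `WHERE name = 'nobody' OR '1'='1'`, which is true for every row; the parameterized version simply finds no matching user.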

Microsoft is one of the greatest SDLC evangelists among large organizations. In March 2005, it published some statistics in The Trustworthy Computing Security Development Lifecycle. That document shows how an SDLC can reduce the number of critical vulnerabilities. It will not reduce the number to zero, but it will certainly improve overall security. The following charts illustrate some of the benefits of the SDLC:
[Chart: critical vulnerabilities in Windows 2000 vs. Windows Server 2003]

[Chart: SQL Server 2000 vulnerabilities before and after the SDL]

Impressive. The SDLC minimizes the risk (though it doesn't completely eliminate it).

How long does it take?

Microsoft's document states different numbers, so let's pick the shortest period:

During the release phase, the software should be subject to a Final Security Review ("FSR"). The goal of the FSR is to answer one question. "From a security viewpoint, is this software ready to deliver to customers?" The FSR is conducted two to six months prior to software completion, depending on the scope of the software. The software must be in a stable state before the FSR, with only minimal non-security changes expected prior to release.
In other words, it will take weeks to months (depending on the size of the code) before vulnerabilities are discovered and fixed - not to mention bugs introduced at the last minute, or bugs that were NOT discovered during the review.

Give SQL injection (as well as other application vulnerabilities) the respect it deserves. Use a system that is capable of preventing attacks while allowing the developers to fix the code. Use Application Data Security solutions.



Earlier this month a new social network entered the scene, named "House of Hackers". I'm interested to see how this network will evolve. As of this writing, more than 2,200 members have signed up!

Being a network for computer security researchers, I guess the members run the gamut from ethical hackers to in-house penetration testers to kids to mob-related hackers. On the one hand, it could develop into a well-established security platform where researchers collaborate and communicate, exchanging security ideas and concepts with the purpose of building more secure and sound systems. On the other hand, it could become yet another platform for the exchange of sensitive information and the disclosure of 0-day exploits.


May 27, 2008
 The Search of SQL Injection

We read and re-read the news: a massive Web attack, performed via SQL injection, that inserts hidden iframes, which in turn point to compromised sites that eventually serve malware.

It happened back in January, then in March, once again in April, and already twice in May. The most recent wave uses zombies that search for vulnerabilities in sites built on Microsoft ASP. No surprising news here. As we approach mid-2008, I can definitely say that this has been one of the biggest attack trends of the year. I know you've heard all the technical details posted in other reports and blogs, and I already posted an entry about this attack, so I won't repeat it here. But what strikes me about these attacks is the frequency and magnitude of website infections. What is the method of mass transportation of these injections? Surely these hack operations must have been automated. I believe this is where Google Hacking comes in: Google at the hackers' service. Supporting this claim is the fact that the number of infected applications is huge while the network footprint of the attack is negligible.

We have already classified these attacks as SQL injection, but I'd like to reconsider. As I see it, this trend is actually a hybrid attack, with Google acting as the conduit: a simple search can return millions of vulnerable sites within a fraction of a second. The most recent Imperva webinar on Google Hacking presents this combo attack, explaining how viruses are easily spread via Google and how to avoid being infected. [The link is to our webinar series... the recording will be up shortly.]

It's this fusion of Google Hacking with SQL injection that is making these attacks effective and creating the news headlines.
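Since these mass injections work by appending script or iframe tags to the text columns of a database, one way a site owner can check whether they've been hit is to scan those columns for injected markup. A minimal sketch, using an in-memory sqlite3 table as a stand-in for the real product catalog (table, column and URL are invented for illustration; the payload shape is a simplification of what was reported):

```python
import re
import sqlite3

# Simplified shape of the injected payload: a script/iframe tag with a src.
INJECTED_TAG = re.compile(r"<(script|iframe)[^>]*src\s*=", re.IGNORECASE)

def scan_for_injected_html(conn, table, column):
    """Return (rowid, value) pairs whose text contains injected markup."""
    rows = conn.execute(f"SELECT rowid, {column} FROM {table}").fetchall()
    return [(rowid, value) for rowid, value in rows
            if value and INJECTED_TAG.search(value)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (descr TEXT)")
conn.execute("INSERT INTO products VALUES ('a nice widget')")
conn.execute("INSERT INTO products VALUES "
             "('a nice widget<script src=http://evil.example/x.js></script>')")
print(scan_for_injected_html(conn, "products", "descr"))  # flags the second row
```

A scan like this only finds the damage after the fact, of course - which is exactly why the entry argues for blocking the injection attempts at the application layer in the first place.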


May 23, 2008
 What's The Deal With SAP Certification
For some time I've wanted to blog about the need to protect enterprise applications. You might think it's obvious, but most organizations are still behind when it comes to enterprise application protection.
On Wednesday, Imperva announced that SecureSphere achieved SAP certification. Many organizations are using SecureSphere to monitor and protect their SAP systems, and the certification provides the highest level of assurance that we meet SAP's requirements for integration, performance and, of course, security.

The press release states that SAP has certified that SecureSphere v6.0 meets the certification criteria under the category of Network Security for enterprise SOA-based solutions. The PR jargon says that Imperva SecureSphere Web Application Firewalls help provide seamless protection for Web-facing SAP applications.

In real-world lingo, it means that when it comes to SAP (and, actually, any dynamic application like SAP), SecureSphere integrates very well. During the certification process (in which I was personally involved), it was amazing to see how fast SecureSphere was able to plug into the SAP Certification Center network and pass the rigid tests in different scenarios.

If you believe in coincidence: at the same time that we announced our certification, the Digital Security Research Group reported an SAP Web Application Server vulnerability - an input validation hole in webgui that permits cross-site scripting attacks (see more at Security Tracker). One might ask how such a vulnerability was not detected before (by, for example, automated scanning technology). The answer does not really matter; there's always one more bug, and there are always more vulnerabilities to detect. But if you think about it from a security standpoint, this newly discovered SAP vulnerability simply shows that scanning, code review and assessment alone cannot be trusted as the sole method of protecting web applications. Amichai calls this PCI's False Dilemma (from Tech News World):

For those only concerned with compliance, the answer is simple: WAF. Because a WAF can be deployed without affecting the application and without engaging outside consultants to review application code, WAF is a faster and more cost-effective approach to meeting the letter of the law.

For those concerned with actually doing the right thing and asking "which first?" rather than "which?" the answer is actually the same: WAF. That's because a WAF can be deployed to provide immediate protection, and a WAF can be quickly configured to adjust as applications and application attacks change. WAFs not only provide the most cost-effective first step, but a sound building block for the second step. Once a WAF is in place, code review projects can proceed at a controlled pace, reducing the risk of errors. WAFs also provide critical information on usage patterns and changes in usage patterns that can guide code review teams and point out obvious problems.

And in the case of SAP, who's going to be willing to change that code (if they even have access to the source code in the first place)?

So the deal with SAP certification is very straight forward: New vulnerabilities are discovered often. New vulnerabilities affect business applications (like SAP). SecureSphere can protect and prevent attacks that exploit those vulnerabilities.


May 22, 2008
 How long does it take to detect a breach
I am not sure it can be called a phenomenon yet, but recently we have been witnessing a growing number of breaches that are either detected after a very long time, or whose exact scope takes a very long time to determine.

  • Two years later, IRS employees charged with snooping at tax records
According to an article in Computerworld, five federal workers at an Internal Revenue Service office in California have been charged with computer fraud for illegally accessing the confidential records of taxpayers. The five IRS workers accessed and viewed the records of up to 13 different people early in 2006.

  • It will take weeks to analyze the breach at UMass
CBS Springfield reports that hackers breached the computer system used by UMass Amherst's Health Services, potentially gaining access to thousands of medical records.

Campus officials say it will be weeks before they are completely sure what information, if any, was taken off the computers. They say the entire campus system is being looked at to avoid future breaches.

It's good to know that not all organizations are clueless when it comes to activity monitoring. At least the internal systems at the IRS are monitored. Hopefully they were able to detect the snooping in real time, and it's only now, two years later, that they are pressing charges.


May 15, 2008
 Under All that Heavy Equipment there is Just an Operating System

Recently, a rare bug in an Invensys SCADA system was disclosed - one which, if exploited, could cause a remote denial of service on the system. As these systems are deployed in power plants, dam control systems and other truly mission-critical installations, such an attack scenario is a major concern for the public.

Vulnerabilities in SCADA (Supervisory Control And Data Acquisition) systems are perceived as rare. However, let us not overlook the fact that most SCADA systems deployed today are either controlled and monitored by software running on "standard" operating systems (i.e. Windows, Linux) or are themselves based on these operating systems. Thus the systems are probably exposed to the plethora of vulnerabilities known to affect these platforms.

Last year, a short video clip was published with the results of a "secret" experiment on exploiting SCADA systems. Researchers spent a lot of time and money showing that sending specific command sequences to a generator would cause irreversible damage (as indicated by the inevitable smoke coming out of the tormented device). The truth is that they could have spent a tenth of the time and money by showing that if you turn off the generator's cooling system (probably controlled by a Windows computer), you get the same effect.

All these SCADA exploits are just a reminder that most "civilization support" systems today are controlled by computers, most of them running standard operating systems - not to mention exposing a web interface. It is definitely time for governments all over the world to start setting and enforcing regulations not only for financial systems but also for those "civilization support" ones.


May 13, 2008
 Patches Reversing into Exploits

On May 18, security researchers will gather at the IEEE Symposium on Security and Privacy. One of the papers to be presented is "Automatic Patch-Based Exploit Generation is Possible: Techniques and Implications" by David Brumley, Pongsin Poosankam, Dawn Song and Jiang Zheng. The researchers showed that, given a Microsoft patch, they were able to reverse engineer it and quickly create an exploit for the original code. Their point was that Microsoft's current patch distribution scheme is at fault - it can give attackers a working exploit before other users have even begun to download the patch - and that the system requires a redesign.

This seems like big news; even security expert Bruce Schneier took the time to blog about the research. The researchers conclude that current patch distribution schemes are insecure, and they do provide alternatives, but for the meanwhile I find that these claims aren't mature enough, security-wise. After all, is that what we want - for Microsoft to eliminate its patching process now that it knows a vulnerability may be exploited as a result of distributing a patch?

That said, the researchers do have a point - the window hackers have to attack unpatched systems is quite large when those systems rely on patch deployment as their sole means of protection.

This argument actually strengthens my claim that it is necessary to deploy third-party components that provide virtual patching, in order to minimize the window of opportunity for attackers. These virtual patches can be deployed quickly, providing a fast response and acting as the front guard until the system itself is properly fixed and updated.
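To illustrate what a virtual patch looks like in practice, here is a minimal sketch: a rule that blocks requests carrying markup to a known-vulnerable endpoint until the application itself is fixed. The endpoint path and the rejected-character set are hypothetical, chosen only to show the shape of such a rule:

```python
import re
from urllib.parse import urlparse, parse_qs

# Hypothetical virtual patch for an XSS hole in a login page: until the
# application code is fixed, drop any request to the vulnerable endpoint
# whose parameters carry markup characters.
VULNERABLE_PATH = "/webgui/login"   # illustrative endpoint name
TAINTED = re.compile(r"[<>\"']")    # characters the fixed code would reject

def virtual_patch(url: str) -> bool:
    """Return True if the request should be blocked by the virtual patch."""
    parsed = urlparse(url)
    if parsed.path != VULNERABLE_PATH:
        return False  # rule applies only to the vulnerable endpoint
    params = parse_qs(parsed.query)
    return any(TAINTED.search(v) for vals in params.values() for v in vals)

print(virtual_patch("/webgui/login?user=alice"))                      # False
print(virtual_patch("/webgui/login?user=<script>alert(1)</script>"))  # True
```

The rule ships in minutes rather than the weeks a code fix takes, and it is scoped narrowly so ordinary traffic to the same endpoint passes untouched.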


