9 posts from December 2012
December 20, 2012
 Security Trends 2013: Trend #5 Hacktivism Gets Process Driven
Trend #5:  Hacktivism Gets  Process Driven
In 2012, we witnessed changes in the way hacktivism operated. In early 2011, hacktivist groups focused their efforts on specific organizations, methodically analyzing and attacking a target's front end—applications and web pages—and breaking them. In 2012, hacktivism was down, but not out.

For example, some “hacking incidents” proved to be hoaxes, or, more precisely, PR stunts. Case in point: Anonymous claiming to have hacked an FBI laptop.

To be effective, hacktivists need to focus on divulging content or data that can damage their targets. In our February report on hacktivism, we detailed the process for stealing data from web applications. We think this process will continue, but a new variation will emerge. Specifically, hacktivists will focus on discovering the content management systems (CMS) used by public websites via well-established techniques, such as error-message grabbing and Google dork searches, and mapping them to known vulnerabilities. They will then use automated hacking tools to pull out database contents, as well as sensitive files, for public disclosure. This approach, though simple and methodical, favors quantity over quality.

For example, over the course of 2012, the hacktivist group GhostShellTeam focused on CMS hacks, using automated tools to expose files and data. Looking at the disclosed data, it was clear that most of it was captured from CMS systems and that the extraction method was SQL injection. How do such attacks work?

  1. Identify and collect CMS vulnerabilities from sources such as exploit-db.com and other exploit databases, hacker forums and pastebin.com publications.
  2. Map sites that run these CMS systems and versions via error-message grabbing, Google dork searches and other techniques.
  3. Once identified, targets may or may not be assigned to different hacktivism campaigns, depending on the group's current agenda.
  4. Use an automated tool, such as SQLmap or Havij, to pull the data out of the vulnerable website.
  5. Disclose the data via social networks, usually alongside a long public letter from the group naming and blaming whoever the campaign targets.
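The CMS-mapping step above often comes down to simple pattern matching against page content, since many CMS installations announce themselves in a "generator" meta tag or via characteristic URL paths. A minimal sketch of that idea, with an illustrative (not exhaustive) signature table:

```python
import re

# Illustrative signatures only: a CMS often reveals itself in the
# <meta name="generator"> tag or in characteristic resource paths.
CMS_SIGNATURES = {
    "WordPress": [r'<meta name="generator" content="WordPress ([\d.]+)"', r"/wp-content/"],
    "Joomla":    [r'<meta name="generator" content="Joomla! ([\d.]+)', r"/media/system/js/"],
    "Drupal":    [r'<meta name="Generator" content="Drupal ([\d.]+)', r"/sites/default/files/"],
}

def detect_cms(html):
    """Return (cms_name, version_or_None) for the first matching signature."""
    for cms, patterns in CMS_SIGNATURES.items():
        for pattern in patterns:
            match = re.search(pattern, html)
            if match:
                version = match.group(1) if match.groups() else None
                return cms, version
    return None, None
```

Once a site is fingerprinted this way, the version can be looked up against an exploit database, which is what makes the whole process scriptable end to end.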

 

 Security Trends 2013: Trend #4 APT Targets the Little Guy
Trend #4:  APT Targets the Little Guy
We expect that, in 2013, attackers will extend the practice commonly dubbed APT to smaller businesses.

In 2012, we saw the continuing trend of smaller businesses being hit by cyber criminals. This is a direct outcome of the industrialization of hacking, which successfully automated web application attacks. Attackers have learned to exploit and profit from compromised web applications—especially since automation can help uncover poorly protected, smaller companies. Automation and poor protection will help APT hackers target smaller organizations that hold valuable information.

There are two key drivers that put smaller businesses at risk of cyber attacks. First is the ability to automate web application attacks from start to end: compiling a list of potential targets, identifying vulnerabilities and completing the exploit. Second is the ability to profit from such exploits in some way – either directly, by monetizing data captured from the applications (especially PII and payment information), or indirectly, by using the applications as platforms for attacks against consumers.

In the APT arena, attackers are already capable of launching massive, automated infection campaigns, and one can assume their infection success rate is higher among users and devices in smaller organizations (which usually demonstrate lower security standards and awareness). Thus, attackers already have a large foothold within small enterprise networks. To take advantage of this foothold, they need to evolve in two directions: automating the exploit process within the compromised network and finding a way to monetize the information.

As Mandiant indicated in a recent report, internal network exploration and exploitation today is mostly manual, and thus attackers focus on a few larger targets. To scale these operations, botnet agents are going to become more sophisticated, allowing them to operate autonomously within compromised networks. Moreover, botnet agents will need autonomous mechanisms for filtering the data they send out; otherwise, storage and bandwidth become an issue on the drop-server side. We are already seeing botnet agents downloading and executing large software modules that perform local processing, in particular file and data collection. Therefore, we can safely assume that local document- and data-filtering capabilities are a natural evolution for such attack software.
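The local filtering described above need not be sophisticated: keyword and pattern matching over collected files is enough to cut what gets shipped to the drop server. A toy sketch of that logic (the patterns and the `.txt`-only scope are illustrative assumptions, not taken from any real agent):

```python
import re
from pathlib import Path

# Illustrative patterns an agent might use to decide which documents are
# worth exfiltrating, saving bandwidth and drop-server storage.
INTERESTING = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                     # US SSN-like pattern
    re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),   # payment-card-like number
    re.compile(r"confidential|password", re.IGNORECASE),
]

def filter_documents(root):
    """Yield paths of text files under `root` whose content matches any pattern."""
    for path in Path(root).rglob("*.txt"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        if any(p.search(text) for p in INTERESTING):
            yield path
```

The same matching logic, run by defenders over their own file shares, doubles as a crude data-discovery tool for finding poorly protected sensitive files first.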

The big question? How will attackers monetize their activities abusing smaller enterprises?  There are two potential directions:

  • Financial fraud—directly monetizing extracted data; this requires technology for the automatic extraction of payment and identity information from unstructured sources.
  • Information trading—selling extracted data to interested parties; this requires technology for the automatic extraction of business information from unstructured sources.

Given that both technologies are already being put to use in valid commercial applications and that most hacking is driven by well-funded criminal organizations, we believe that this is a natural evolution of attacks.

 

 Security Trends 2013: Trend #2 Black Clouds on the Horizon

Trend #2:  Black Clouds on the Horizon
Cloud computing, and in particular Infrastructure as a Service (IaaS), has become an important piece of modern commercial IT. Amazon EC2, for example, offers organizations big and small versatility and elasticity, allowing them to sustain a direct correlation between their business activity volume and IT costs. The same holds true for the hacking community.

In 2013, we expect to see growing use of IaaS by attackers for different activities. A number of aspects make cloud computing an appealing offering for attackers, especially those that are profit driven:

  • Elasticity – the ability to quickly get hold of a lot of computing resources without too many prerequisites.
  • Cost – the ability to closely tie spending to a specific attack campaign and its potential gain.
  • Resilience – the use of commercial cloud-computing platforms reduces defenders' ability to blacklist attackers and adds much-valued latency to the process of server takedown.

Over the past year we have seen a number of attack campaigns in which attackers were deploying attack servers in Amazon’s EC2 cloud. In particular, this practice is used with respect to fraud and business logic attacks whose network footprint is relatively low per server (and thus hard to detect as a network traffic anomaly). In addition, for DDoS attacks, such cloud offerings become very compelling. Using a stolen credit card number to pay for the cloud service, an attacker can mount a large scale attack from the cloud.  The attack can then be carried out for a long enough time period before a preventative action against the attacking servers can be taken.

Finally, expect to see more usage of on-demand computing power as attackers obtain larger quantities of unstructured data and find themselves in need of computing power to process their bounty.

 

 Security Trends 2013: Trend #3 Strength in Numbers

Trend #3:  Strength in Numbers
The famous criminologist James Q. Wilson pioneered the concept of community policing and transformed law enforcement. Under this approach, police partnered with citizens and businesses to identify the issues that led to crime, in order to reduce crime rates. Mr. Wilson's approach, however, applied to the physical world.

The digital equivalent would encourage organizations to share attack data and coordinate what they see from an attack standpoint. Today, an attack on one company may seem random. But taken in a broader context, that broader visibility takes the randomness out. Why don't security professionals do this? Psychologists often assert that “the first step toward change is awareness.” We predict that in 2013, both business and government parties will take the second step of reducing the security deficit: not just by extending their individual defenses but, more importantly, by creating collaborative defenses through the sharing of individual protection data. In other words, cyber hippies will form security communities.

Benefits of Collaboration
From the attacker's point of view, launching a successful attack against an organization requires investment in infrastructure. The infrastructure may be physical, such as internet servers to host command and control services and exfiltrated data, or logical, such as hacking tools that need to be developed, vulnerabilities that need to be researched, and stealth communication protocols that conceal the attacker's true identity.

In order to get the most out of their initial investment in hacking infrastructure, attackers strive to reuse their attack infrastructure against as many targets as possible. When there's no collaboration between defending parties, each new target has to react to the attack as if it were new, while chances are that other targets have already experienced the same attack in the past.

A good example of such reuse, and of the potential for turning it to the defense's benefit, was the discovery of the HTran protocol, used by many APT hackers to disguise the location of their command and control servers. Sharing the protocol's details helped Dell SecureWorks uncover 60 different families of custom-targeted malware used to mount complex APT attacks.

Government Involvement in Collaboration in 2013

We predict that in 2013, the private sector will actively seek security solutions to enable it to share attack data rapidly and to automatically enjoy the strength of community defense without hindering the privacy of the data.

But the private sector will not be alone in that effort. Governments have also become aware of the damage cyber attacks inflict on the state's economy and national security, and have acknowledged the potential of shared attack data to fight it. The data can be shared between the government and the private sector, and also among the different private-sector parties themselves.

A recent report from the Bipartisan Policy Center states:

Improvements in information sharing between the federal government and private sector about cyber threats and vulnerabilities show great promise for improving our cyber defenses and potential response measures. Public-private cyber information sharing can bolster and speed identification and detection of threats and will be critical to a coordinated response to a cyber incident. This type of information sharing can and must be done in a manner that protects privacy and civil liberties.

We believe we will see more regulations and laws that, on the one hand, encourage the sharing of attack data and, on the other, lift legal barriers concerning data privacy that may interrupt such sharing. One such obstacle is the current demand of certain regulations that particular types of data never be shared. We predict that this strict demand will be replaced with a more balanced attitude that allows the sharing of such data in a privacy-preserving way, in order to provide better data protection.
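One simple privacy-preserving approach to sharing attack data is to exchange only salted hashes of indicators (attacker IPs, URLs), so community members can check for overlap without disclosing the raw values. A minimal sketch under that assumption; the salt scheme and indicator format are hypothetical:

```python
import hashlib

def anonymize_indicators(indicators, salt):
    """Hash each indicator (e.g. an attacker IP or URL) with a salt shared
    by the community, so members can match indicators without exposing them."""
    return {hashlib.sha256((salt + i).encode()).hexdigest() for i in indicators}

def shared_attackers(my_indicators, partner_hashes, salt):
    """Return my raw indicators that a partner has also reported (in hashed form)."""
    return [i for i in my_indicators
            if hashlib.sha256((salt + i).encode()).hexdigest() in partner_hashes]
```

A real deployment would need rotation of the shared salt and protection against dictionary attacks on low-entropy indicators, but the principle—match without disclosure—is the same.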

 

 Security Trends 2013: Trend #1 Government Malware Goes Commercial

What will happen in 2013?  Will our cyber security get better or worse?

First, the good news. We think security will improve for larger, well-funded organizations. In the same way that James Q. Wilson introduced community policing and transformed law enforcement, we think a community approach—a sort of security commune—will improve security in the digital realm. Sharing attack information will help remove the seeming randomness of attacks.

Second, the bad news:

  1. As bigger firms get smarter, we think hackers will choose the path of least resistance—small companies.  To date, we’ve seen for-profit hackers pursue small organizations but rarely have we seen government-sponsored (APT) attackers go after the little guys.  We think that will change.  Small companies contain a lot of data and, in many cases, quality intellectual property.  They make for ripe targets.
  2. Not surprisingly, we think hackers will continue to get more sophisticated.  In 2013, hackers will continue to refine cloud computing for attacks.
  3. Traditional SQL injection attacks will continue—but we believe they will focus on content management systems (CMS).  Hackers go where the vulnerabilities are.  Today, CMS provides a rich target.
  4. We think hackers will use a cloud-based model to become more efficient and effective. 

Overall, 2013 will also have many headlines reporting breaches.  We believe the path and methods, however, will look a bit different.

 

Trend #1:  Government Malware Goes Commercial
Government military research has long influenced industry. Commercial aviation, for instance, has been heavily shaped by advances in military aircraft. In 2013, we believe this government-driven cascade effect will apply to cyber security. How? The most dynamic change factor in the insider threat landscape in recent years is the evolution of modern malware. The massive introduction of user-owned devices, coupled with workforce mobility, is giving the “compromised insider” threat extra weight compared to the more traditional “malicious insider” threat.

We expect two existing trends to take us through 2013:

  • Technologies previously attributed to “state sponsored” attacks are going to become commercialized (or commoditized), further blurring the difference between Cyber Crime and Cyber War.
  • Devices affected by modern malware (APT), representing a “compromised insider” threat, are going to become a more prominent risk factor than malicious insiders.  The 2012 Verizon Data Breach Investigations Report noted malware’s impact:  “69% of all data breaches incorporated Malware.” This represented a 20% increase over 2011.

Through 2011 and 2012, we have seen a variety of allegedly state-sponsored malware operations described in the media, among them “Gauss,” “Duqu” and “Flame.” Three notable aspects were discussed with respect to these operations:

  1. The method of infection.
  2. The complexity of the software.
  3. The robustness of the command and control network.

Throughout 2012, we have seen two of these three aspects appear in modern commercialized malware.

Infection Methods Gone Wild

In the second half of 2012, we closely tracked a number of botnets, which gave us a glimpse of future infection methods and next-generation malware. Today, malware is usually delivered as a compressed archive of about 50KB and forms the basis for a very robust and versatile compromise operation. We generally see the initial infection package change very frequently, even within the same distribution campaign, to the point that antivirus (AV) products appear to have difficulty keeping up with detecting new strains of the same code. What will change?

  1. A larger number of hosts containing more sophisticated malware. Each campaign is characterized by a large number of compromised servers hosting the infection package. The functional modules that are downloaded from time to time vary in size and are by no means the tiny, size-optimized executables of previous-generation malware. Some of the modules are larger than 1MB, and in some instances the total code size we tracked amounted to almost 10MB. Modules keep evolving over time; for some, we saw version numbers grow substantially.
  2. The command and control (C&C) structure becomes larger and more robust. Today, basic malware comes equipped with a list of more than 10 IP addresses of available C&C servers. Recently, we have seen this number go up to 40. Moreover, all C&C servers seem to share a common state with respect to the clients through some mechanism. The different modules downloaded from time to time provide functions such as sending spam, file pilfering, password grabbing, and attacks against web servers. Each individual operation was able to last a few weeks before being shut down.
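The C&C redundancy described in point 2 is simple from the client's side: walk the embedded server list until one answers. A sketch of that fallback logic, with `connect` as a stand-in for real network code (the function and list are illustrative); it also shows why a takedown must hit the whole list, not one server:

```python
def find_live_cnc(servers, connect):
    """Return the first C&C address for which connect() succeeds, or None.

    `servers` is the embedded list of fallback C&C addresses; `connect` is a
    callable returning True on a successful handshake. Taking down a single
    server merely advances the bot to the next entry in the list.
    """
    for addr in servers:
        if connect(addr):
            return addr
    return None
```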

It seems that most operational capabilities that defined Flame and the like as “super malware” are in fact finding their way into these commercial malware operations. We expect the infection vector to remain the biggest differentiator between the commercial malware and the truly advanced persistent threat.

C&Cs Get a Major Upgrade

In a different incident, we tracked a botnet's activity. This botnet is classic banking malware of the kind typically seen in Latin America. The instance we tracked employed two versions of the malware agent, using different methods to control the redirection of user traffic to the attacker-controlled server, and two different types of C&C channels, giving the entire network improved efficacy and redundancy.

This botnet operates by locally hijacking the domain names of online banking applications and routing the traffic through an attacker-controlled server. It does that either by rewriting the “hosts” file or by hooking into the domain name resolution service. It quickly became clear that the same technology can be used by the botnet operator to target enterprise systems rather than personal banking accounts. In particular, if the domains to be controlled (which are downloaded as a configuration file after the first infection) are chosen to be cloud-based enterprise applications like SalesForce.com, NetSuite, SilkRoad, and the like, an attacker can gain access to corporate information stored in these systems and accessed by infected computers. Moreover, these cloud-based services may be accessed by mobile devices (in particular, laptops) from outside the enterprise perimeter, leaving no trace of the attack.
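Because one of the hijack methods above is a rewritten `hosts` file, a cheap defensive check is to flag any `hosts` entry that pins a sensitive cloud domain to a fixed IP, since legitimate setups rarely hard-code these services. A minimal sketch; the watched-domain list is an illustrative assumption:

```python
WATCHED_DOMAINS = {"salesforce.com", "netsuite.com"}  # illustrative examples

def suspicious_hosts_entries(hosts_text):
    """Flag hosts-file lines that map a watched domain (or a subdomain of it)
    to a fixed IP address. Returns a list of (ip, hostname) pairs."""
    flagged = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        parts = line.split()
        if len(parts) < 2:
            continue
        ip, names = parts[0], parts[1:]
        for name in names:
            if any(name == d or name.endswith("." + d) for d in WATCHED_DOMAINS):
                flagged.append((ip, name))
    return flagged
```

This only catches the hosts-file variant; hijacks that hook the resolution service itself need endpoint monitoring rather than file inspection.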

This is just one example that leads us to believe that next year we will see more enterprise data being affected by malware originally used for other malicious activities. This is going to be driven by the following:

  • Existing commercial banking malware technologies can seamlessly be applied to the compromise of cloud enterprise data.
  • Cloud enterprise data is mostly used by organizations with a large mobile workforce, which, in turn, is more susceptible to compromise.
  • Attackers are always looking for new revenue streams based on existing technologies.

Attackers have always followed the path of least resistance. Considering that antivirus has not been effective in preventing infections by modern malware, this trend should surprise no one. Enterprises that fail to adopt a data- or file-centric security approach will be caught with their pants down. Investing in the right “ears and eyes” to monitor access to servers, databases and files makes the detection of such attacks easier.
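Those “ears and eyes” can start very simply: baseline how many distinct files each account touches and flag outliers, since automated harvesting by a compromised insider reads far more files than a human does. A toy sketch; the threshold and log format are hypothetical:

```python
def flag_bulk_readers(access_log, threshold=100):
    """access_log: iterable of (user, file_path) access events.

    Return users whose distinct-file count exceeds the threshold, a crude
    signal of automated harvesting by a compromised (or malicious) insider.
    """
    seen = {}
    for user, path in access_log:
        seen.setdefault(user, set()).add(path)
    return sorted(u for u, files in seen.items() if len(files) > threshold)
```

Real monitoring products layer in time windows, per-role baselines and sensitivity labels, but the underlying signal is this same access-volume anomaly.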

 

December 18, 2012
 Data Wiping: A New Trend in Cyber Sabotage?

Yesterday, the Iranian CERT announced a new piece of malware designed to corrupt data. This malware joins the list of data-corruption malware discovered in April, November and December 2012 – Wiper, Narilam and now GrooveMonitor, respectively. They wrote:

Latest investigation have been done by Maher center in cyber space identified a new targeted data wiping malware. Primitive analysis revealed that this malware wipes files on different drives in various predefined times. Despite its simplicity in design, the malware is efficient and can wipe disk partitions and user profile directories without being recognized by anti-virus software. However, it is not considered to be widely distributed. This targeted attack is simple in design and it is not any similarity to the other sophisticated targeted attacks.

GrooveMonitor does not pose a real threat to companies, since it attacks local files only, not the datacenter (databases or file shares) nor the datacenter backup. However, the Narilam malware discovered last month is database-sabotage malware. Its purpose is to corrupt the databases of three financial applications from TarrahSystem used for banking, loan, retail and industrial applications. But this is not a technical beauty pageant. When all of your data gets wiped and your antivirus proves to be worthless, do you take comfort in the fact that the malware was simplistic?

Indeed, this new malware raises the question: are these just singular incidents, or are we witnessing a trend of malware designed to corrupt data rather than steal it? While all three malware attacks originated in Iran, a country of great interest to several espionage agencies around the world, only Wiper is believed to be state-sponsored. The authors of the other two were probably inspired by Wiper to some extent. As Microsoft's director of Trustworthy Computing, Tim Rains, stated nicely this week: “Unintended consequence of operating a sophisticated cyber espionage activity is that criminal groups are essentially given free research on how to infect systems and little-known vulnerabilities are brought to the forefront.”

This is just as true for methods of operation. It is easier to hurt a competitor's business by sabotaging its production systems through data corruption than by running a complicated, long-term espionage campaign to steal data. Roel from Kaspersky's security blog sums it up: “If it wasn't clear already - the era of cyber-sabotage has arrived. Be prepared.”

 

 

December 17, 2012
 From A to V: Refuting Criticism of Our Antivirus Report

Our anti-antivirus study got a lot of attention (you could say it went viral).  Most interestingly, people called our methodology “flawed.” 

While our report acknowledged the limitations of our methodology, we believe that, fundamentally, the model for antivirus—and not our methodology—is flawed. Antivirus was built years ago, in an age when mass infection was the name of the game. Today, malware is deployed to target SPECIFIC individuals—CEOs, researchers, politicians, executives—not everyone's mom.

One reaction to our study asserted that a virus can be blocked based on source IP: “email with the malware attached, or the included URL… could have been blocked based on its source IP.” This approach, however, addresses an old threat model in which the attacker would try to infect as many targets as possible with a single campaign – one that reused URLs to host the malware and IP addresses to send the email. Reusing IPs allowed security companies to maintain blacklists of both IPs and URLs. In today's threat landscape, however, attackers targeting a specific victim create a dedicated URL to host the malware and use a dedicated IP address to send the malicious mail, easily overcoming blacklists.

Our study concluded that antivirus solutions are very effective in fighting widespread malware and slightly less effective against older malware (2-3 months old). But for new malware, there is a good chance it will evade the antivirus. In fact, our results are consistent with other studies. For example, let's look at the AV-TEST Institute's results.

The AV-TEST Institute, according to its site, is a “leading international and independent service provider in the fields of IT security and anti-virus research.” According to AV-TEST's website, in order to test the protective effect of a security solution, AV-TEST researchers simulate a variety of realistic attack scenarios, such as the threat of e-mail attachments, infected websites or malicious files transferred from external storage devices. When carrying out these tests, AV-TEST takes the entire functionality of the protection program into account. But even with all of the antivirus functionality enabled, the results reveal a worrisome security gap:

 

While antivirus solutions are very effective in fighting widespread malware and slightly less effective against older malware, for new malware there is a good chance it will evade the antivirus solutions. That's exactly what we found.

Finally, one should ask the question CEOs are asking CISOs worldwide: if antivirus software is so good, how come we see so many successful attacks based on infected computers (Coca-Cola and the South Carolina DoR, to name a few)? The obvious answer is that antivirus is not perfect and needs to be augmented with data security solutions, as veteran antivirus researcher Mikko Hypponen honestly acknowledged: “Antivirus systems need to strike a balance between detecting all possible attacks without causing any false alarms. And while we try to improve on this all the time, there will never be a solution that is 100 percent perfect. The best available protection against serious targeted attacks requires a layered defense.”

 

December 12, 2012
 TeamGhostShell’s #ProjectWhiteFox Analysis

In a recent public disclosure via pastebin.com, TeamGhostShell claimed to have stolen 1.6 million records from public websites belonging to NASA, the FBI and others, under a project named “Project White Fox.”

An open letter published by TeamGhostShell states that the campaign is more an effort to encourage hacktivism than a focus on a single target, as other groups usually do.

By analyzing a partial sample of the data posted, it was clear that most of the data was captured from CMS systems and that the extraction method was SQL injection. This conclusion rests on the names of CMS systems appearing in the extracted database content, on well-identifiable content characteristics, and on an output format unique to the database attack tool SQLmap. While SQLmap output made up the majority, a portion of the data was in simple CSV format.

Some data that was disclosed was private information such as usernames, passwords and contact information.  In some cases, financial data was disclosed.

Looking into the activity in depth, it seems that most of the database dumps came from closed- or open-source CMS-based websites with known vulnerabilities.

Naturally, it seems the targets were chosen from a list of vulnerable websites based on mapping known vulnerabilities to CMS platforms. The group then used an automated SQL injection tool to pull out the data.

 

 

December 06, 2012
 New Password Cracking Method

A new attack makes some password cracking faster and easier than ever. A researcher has devised a method that reduces the time and resources required to crack passwords protected by the SHA-1 cryptographic algorithm.

First, some context. One of the main use cases for a hashing function, such as SHA-1, is storing passwords securely. When attackers obtain such a hashed password, they need to launch a “brute force” attack against it in order to reveal the password. “Brute force” means they repeatedly guess the password, apply the hashing function to the guess, and compare the result with the hashed password they hold. The security researcher found an algorithmic shortcut in the SHA-1 calculation that makes each computation cheaper, reducing the time needed for a successful brute-force attack.
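The brute-force loop described above is simple to state in code: the attacker wins as soon as a candidate's hash matches the stolen one. A minimal sketch using Python's standard `hashlib` (the candidate list is illustrative):

```python
import hashlib

def brute_force_sha1(stolen_hash, candidates):
    """Try each candidate password; return the one whose SHA-1 digest
    matches the stolen hash, or None if no candidate matches.

    The researcher's shortcut speeds up exactly the hash computation in
    this loop, which is why it shrinks the overall cost of the attack."""
    for candidate in candidates:
        if hashlib.sha1(candidate.encode()).hexdigest() == stolen_hash:
            return candidate
    return None
```

In practice, attackers run this over wordlists or exhaustive keyspaces on GPUs; anything that makes one SHA-1 evaluation cheaper multiplies across billions of guesses.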

But this should not surprise the security community, as the writing was on the wall. When a crypto hash is weakened (i.e., discovered to be less secure than perceived), it usually marks the start of its downfall, and SHA-1 has been weakened since 2004. This chart of the state of popular crypto hashes from 2009 (http://valerieaurora.org/monkey.html) shows just that:

[Chart: lifecycles of popular cryptographic hash functions]
 

The corollary? If the hashing is done for security (e.g., hashing user passwords or verifying data integrity):

  • MD5 is dead and should never be used.
  • SHA-1 is going in the same direction.  Consider an upgrade of existing systems and definitely don't use it for new systems.

A smart choice would be to follow the U.S. National Institute of Standards and Technology (NIST) recommendation for federal agencies: "Federal agencies should stop using SHA-1 for generating digital signatures, generating time stamps and for other applications that require collision resistance." 

Best option? Use a hash function from the SHA-2 family, such as SHA-256.
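For password storage specifically, swapping the hash function is not quite enough: a salted, iterated construction such as PBKDF2 over SHA-256 (available in Python's standard library as `hashlib.pbkdf2_hmac`) is the safer pattern, because the salt defeats precomputed tables and the iteration count deliberately slows brute forcing. A sketch; the iteration count is an illustrative choice:

```python
import hashlib
import os

def hash_password(password, salt=None, iterations=100_000):
    """Return (salt, digest) using salted PBKDF2-HMAC-SHA256."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=100_000):
    """Recompute the digest with the stored salt and compare."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations) == digest
```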

 

 

 

 
