December 14, 2011
 Insider Threat Analysis

This is a very interesting piece about the psychological aspects of insider threat from Symantec.

While I don't think it offers any remarkable individual insights, it does put the issue in a very useful perspective and suggests non-technical, organizational measures to mitigate insider threat. The authors include plenty of disclaimers about bias in the data behind their conclusions, but in general I think they get most of it right. The problem is that I don't think much of this ends up having practical use for organizations. For example, they claim that one predisposition marking a potential insider threat is "poor social skills." Come on! That puts most high-tech people, and frankly a large part of corporate America, in a high-risk profile. The other prominent risk factor is "social or professional network risk," which basically means knowing someone at a competing company. That means nothing in an environment where enterprises constantly try to recruit people from their competition.

 

 

December 13, 2011
 California Creates Special Unit to Fight Computer Crimes

Great development.  The highlights:  

California has long led the nation in technological innovation. A lesser-known fact: It also leads the country in cybercrime.

On Tuesday, Kamala D. Harris, California’s attorney general, announced the creation of the nation’s largest e-crime unit, a group of 20 investigators and prosecutors whose sole mission will be to thwart and prosecute cybercrimes like identity theft, Internet scams, computer theft, online child pornography and intellectual property theft across the state. The unit, formed in August, has already filed 20 criminal cases and is investigating an additional 24 cases.

California joins Texas, Florida and Louisiana, which also have cybercrime units, though California’s scope and mandate will be much broader. Texas’s and Florida’s cybercrime units focus almost exclusively on online child pornography. Louisiana’s unit, which is broader in scope, is a three-person team: an investigator, a prosecutor and a forensics specialist.

Until now, cybercrimes in California were typically prosecuted at the local level —making any effort to prosecute multijurisdictional crimes difficult.

More law enforcement attention on cybercrime is welcome.  If past is prologue, expect hiccups as state (and federal) organizations figure out how to get it right.  Eventually, and hopefully, this leads to the kind of tipping point that helped cripple the mafia.

 

 

 

 Top Cyber Security Trends for 2012: #1

Our other trends are here:

Nine
Eight
Seven
Six
Five
Four
Three
Two
One

On December 14th, Imperva's CTO Amichai Shulman will be hosting a webinar, talking you through the ADC's predictions.  To register, click here.

 

Trend #1: Security (Finally) Trumps Compliance

In 2012 we expect to see security decisions driven not by compliance but for the simple reason of… security.

It sounds simple enough, but in previous years an influx of laws and regulations drove budgets and security purchases. PCI, SOX and worldwide data privacy acts were all used as reasons to feed the security budget.  But this approach often backfired.  Anecdotally, when one CIO was asked about the key lesson from a major breach his firm experienced, he answered, "Security is not about surviving the audit."

Smart companies used these regulations as springboards to make the case for security. In fact, both a 2011 Ponemon survey and the 2010 Verizon Data Breach Report showed that PCI did improve organizations' security posture. However, regulatory compliance is not equivalent to security and does not confer it. Heartland Payment Systems is example enough: the company passed its PCI evaluation, yet suffered one of the biggest breaches in history.

This past year we have seen a shift in the corporate attitude for several reasons:

  1. Breaches are costly. Security breaches such as those suffered by Epsilon, RSA and Sony dominated front-page news. These high-profile breaches highlighted the impact of security failures: brand damage, legal costs, notification costs, service outages and loss of shareholder value all became news of the day. In fact, the day after Sony's breach announcement, its stock price dropped steeply. DigiNotar, a certificate authority breached in September (see our SSL trend), went under later that month. While actual cost assessments for this past year's breaches have not yet been made public, we can return to the Heartland Payment Systems breach for a lesson: for nearly two years, financial analysts watched as large legal settlements for damages were paid out before the market could feel comfortable about Heartland's ability to stabilize revenues.
  2. Companies with an online presence, regardless of size, are targeted. It was not only large corporations that were affected by breaches in the past year. Hackers have become very adept at automating attacks. According to the 2011 Verizon Data Breach Investigations Report, hackers have "created economies of scale by refining standardized, automated, and highly repeatable attacks directed at smaller, vulnerable, and largely homogenous targets". In other words, in a world of automated attacks, everyone is, or will be, a target. This point was exemplified in August 2011 when USA Today reported that 8 million websites were infected with malware.  Our own research shows that applications are likely to be probed once every two minutes and attacked seven times a second.
  3. Hacktivism brings (in)security to the frontlines. Hacking groups such as Anonymous and LulzSec made headlines by repeatedly hacking into corporations large and small. Visa, PayPal, Sony Pictures, Fox.com and PBS.org, as well as countries such as Tunisia and government agencies such as InfraGard, all felt the hacktivist wrath in attacks targeting applications and infrastructure.
  4. APT becomes an actual threat. Advanced Persistent Threat (APT) attacks are sophisticated attacks that relentlessly target corporations and governments for espionage and destruction. However, with good branding from worldwide marketing and PR teams, the term has also become the go-to description for any compromise that follows a corporate phishing attack.   The fear of such attacks is boosting security budgets: a recent ESG survey indicated that, due to APT concerns, 32% of respondents are increasing security spending by 6-10%.
  5. Intellectual property requires protection. Organizations are beginning to understand the risk and consequences of a compromise of their bread and butter. The biggest risk of intellectual property exposure is actually unintentional: an employee leaving the company with corporate information obtained legitimately over time, or a misconfigured server holding confidential documents (see our trend on the externalization of collaboration platforms). Organizations also face the risk of deliberate theft by vengeful or malicious employees. For instance, this past year a former Goldman Sachs employee received an eight-year sentence for stealing proprietary software code. Intellectual property may even be compromised by external hackers. In the past, hackers focused solely on credit card numbers, login credentials and other generic commodities. That data is still on the attacker's radar, but we are starting to see hackers also target intellectual property. Case in point: the RSA attack, which involved data relating to SecurID tokens.
  6. Shareholders are now involved. The SEC has recognized the impact of a security breach on a company. As a result, recently updated SEC regulations require reporting information security breaches to shareholders. If in the past breaches could be swept under the carpet, this regulation will make it much harder to do so.

For these reasons, we will increasingly see companies make security decisions based on actual security reasoning. Furthermore, the abundance of regulations (which ultimately try to set a minimal bar for security) will make it too costly for organizations to handle compliance on a regulation-by-regulation basis. Instead, enterprises will implement security first and then assess whether they have done enough in the context of each regulation.

 

 

 

December 12, 2011
 Deconstructing the Black Hole Exploit Kit

This month, the science journal Nature published a story on the biggest black hole ever discovered, found by UC Berkeley researchers.

What is the biggest black hole in cyber space? Imperva's malware dissection team took a careful look at the anatomy of the Black Hole exploit kit.  In addition to Tomer, Sarit has now joined the team to add a feminine touch to the dissection process.

What’s New?
The new black hole exploit kit has been out and we’ve had a chance to deconstruct it.  Before we get super geeky, some general observations about the innovation in this kit:

  • Malware developers continue to use the latest tools to encrypt their malware and evade anti-virus (AV) software.  As usual, the encryption signature is new, so AV misses it; our analysis showed that 70 percent of AV products would miss this altogether.  This serves as a not-so-gentle reminder of the fundamental problem with signature-based AV: the malware changes every week through the use of a new encryption algorithm.
  • Hackers are building in resiliency.  In the past, we've seen hackers deploy a single exploit server.  In this case, there were four, so traffic could be redirected if any one of the URLs was taken down.

 

What are BEPs?
An exploit kit, or browser exploit pack (BEP), is a toolkit that automates the exploitation of client-side vulnerabilities.

The toolkit is a bundle of PHP and HTML files with a list of exploit files (targeting Java, PDF readers, browsers, Adobe Flash Player, etc.) designed to attack the operating system, browser or other client-side applications.  Toolkits are usually heavily obfuscated, using known or custom obfuscation and crypto algorithms to avoid detection by anti-virus vendors.

Blackhole is yet another web exploit kit developed by Russian hackers. It is a very powerful kit with a number of recent exploits, including Java and Adobe PDF exploits. One blog has published (with updates) a great overview of the best-known exploit packs.

According to The Hacker News, the black-market cost of the pack:

Users can purchase the annual license for $1500, semi-annual license for $1000, or just a quarterly license for $700. The license includes free software updates for the duration of the contract. For those malicious users with a commitment phobia the makers of the kit offer yet another solution. You can rent the kit (on the author’s servers) for $50 for 24 hours, $200 for 1 week, $300 for 2 weeks, $400 for 3 weeks, and $500 for 4 weeks. A domain name comes included with the rental agreement, but should you desire to change it you need to pay another $35. But Now its FREE HERE!
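Purely as illustrative arithmetic on the prices quoted above, the per-day cost makes the business model obvious: committed "customers" get a steep bulk discount, much like legitimate SaaS pricing. This Python sketch just crunches the published numbers:

```python
# Per-day cost of the Blackhole license and rental tiers quoted above
# (prices from The Hacker News; day counts are approximate).
license_prices = {365: 1500, 182: 1000, 91: 700}   # annual / semi-annual / quarterly
rental_prices = {1: 50, 7: 200, 14: 300, 21: 400, 28: 500}

def per_day(prices):
    """Return {days: price per day in dollars}, rounded to cents."""
    return {d: round(p / d, 2) for d, p in prices.items()}

license_rates = per_day(license_prices)
rental_rates = per_day(rental_prices)

# The annual license works out to roughly $4/day, while a 24-hour
# rental costs $50 -- a 12x premium for the non-committed.
print(license_rates[365])  # about 4.11
print(rental_rates[1])     # 50.0
```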


Summary:  The Infection Flow

Here’s a breakdown of the infection flow:

BEP1

The Infection Process

A live exploit pack requires only a "drive-by" (a trivial site visit by the victim) to start the infection process.

The click
The most common method Blackhole uses to spread is links inside phishing emails.  Our sample was no exception:

BEP2
 

Once clicked, we get the infamous “WAIT PLEASE LOADING……” page.

We can immediately see the four js.js JavaScript sources.  Each js.js contains a redirection to a Blackhole exploit kit server.

BEP3

The redirection
We can see that the redirection is achieved via the JavaScript document.location property:

BEP4
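On the analyst side, once you have the captured js.js files in hand, pulling out the redirect targets is a simple pattern-matching job. The sketch below is Python (not part of the kit), and the sample script content and URL are hypothetical stand-ins; only the document.location technique is from the capture:

```python
import re

# Extract URLs assigned to document.location in captured redirector
# scripts. SAMPLE_JS is a made-up stand-in for a real js.js file.
SAMPLE_JS = 'document.location = "http://bep-server-1.example/index.php";'

REDIRECT_RE = re.compile(r'document\.location\s*=\s*["\']([^"\']+)["\']')

def extract_redirects(js_source):
    """Return every URL assigned to document.location in the script."""
    return REDIRECT_RE.findall(js_source)

print(extract_redirects(SAMPLE_JS))
```

Running this over all four js.js files would recover the full list of fallback exploit servers mentioned earlier.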
The infection
Next, the exploit kit checks for vulnerable applications and selects the best exploit.

  BEP5

Deploying the payloads 
The BEP probes for several vulnerabilities to propagate itself.  By deobfuscating the JavaScript above, we can see the scripts and functions that reveal the targets.  The code extracts the versions of the following installed applications:

  • Java
  • PDF
  • Flash 

BEP6
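The fingerprint-then-exploit logic above can be sketched in a few lines. This is Python for readability, not the kit's actual PHP/JS, and the version thresholds and the example plugin inventory are hypothetical; only the CVE numbers are the ones observed in this kit:

```python
# Minimal sketch of "fingerprint the client, then pick the exploit".
# Version cutoffs below are illustrative, NOT the kit's real logic.
EXPLOITS = [
    # (plugin, vulnerable if version below, CVE from this kit)
    ("java",  (6, 19), "CVE-2010-0840"),
    ("pdf",   (9, 0),  "CVE-2009-0927"),
    ("flash", (10, 3), "CVE-2011-0611"),
]

def select_exploits(client_plugins):
    """Given detected plugin versions, e.g. {"java": (6, 18)},
    return the CVEs a kit like this would attempt."""
    hits = []
    for plugin, vuln_below, cve in EXPLOITS:
        version = client_plugins.get(plugin)
        if version is not None and version < vuln_below:
            hits.append(cve)
    return hits

print(select_exploits({"java": (6, 18), "pdf": (9, 4), "flash": (10, 2)}))
```

A client with an up-to-date plugin set simply produces an empty list, which is why patching remains the cheapest defense against these kits.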

Here, the kit checks for the installed OS:

  BEP7

Here, the kit attempts a PDF exploit (CVE-2008-2992 / CVE-2009-0927):

  BEP8


Here, the kit attempts a Flash exploit (CVE-2011-0611):

BEP9

 

Here, the kit gets a shellcode function:

BEP10

 

Here, the kit attempts an HCP exploit (CVE-2010-1885):

BEP11


The Java JAR Payload 

The exploit kit sends heavily obfuscated JS code with a Java applet that downloads a malicious JAR file to the infected system.

  BEP12

BEP13

After uncompressing the JAR file, five Java class files are extracted:

BEP14

We used the JD decompiler to decompile the class files. Zoom.class attempts to exploit CVE-2010-0840:

Unspecified vulnerability in the Java Runtime Environment component in Oracle Java SE and Java for Business 6 Update 18, 5.0 Update 23, and 1.4.2_25 allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors.

  • Once the vulnerability is successfully exploited, the payload tries to download another malicious binary to the infected system (analyzed below) and saves it under a random name, <randomName>.exe.
  • It then silently registers the downloaded binary by calling regsvr32.exe –s <binary.exe>.

  BEP15

As noted above, anti-virus detection of this payload is low: only 13 of 43 vendors caught it, a 30.2% detection rate.

BEP16

Successful Exploitation – The Malware

On every successful exploitation the running shellcode downloads and executes a malicious binary (a download / execute type of shellcode).  We got infected twice.

BEP17


ZeuS v.2

BEP18

Sample MD5: 53507987ca9d772c7377a6066259aa7e

BEP19

Anubis Report: http://goo.gl/LCiCg

Ceeinject / Scar / Kazy

BEP21

Sample MD5: ba7017bb86b8d37a37479d6460e992f0

BEP22

Anubis Report: http://goo.gl/NUylt
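With the two sample hashes published above, checking a captured binary against known-bad samples is a one-liner around hashlib. The hashes below are the ones from this post; the file contents fed in are of course stand-ins:

```python
import hashlib

# The two malicious sample MD5s published in this analysis.
KNOWN_BAD = {
    "53507987ca9d772c7377a6066259aa7e": "ZeuS v.2 sample",
    "ba7017bb86b8d37a37479d6460e992f0": "Ceeinject / Scar / Kazy sample",
}

def classify(data: bytes) -> str:
    """Return the sample name for a known-bad MD5, else 'unknown'."""
    digest = hashlib.md5(data).hexdigest()
    return KNOWN_BAD.get(digest, "unknown")

print(classify(b"harmless stand-in bytes"))  # unknown
```

Hash matching only catches exact copies; as the AV numbers above show, a re-encrypted variant sails right past this kind of check.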

 

Be safe.


 

 Top Cyber Security Trends for 2012: #2

Our other trends are here:

Nine
Eight
Seven
Six
Five
Four
Three
Two
One

On December 14th, Imperva's CTO Amichai Shulman will be hosting a webinar, talking you through the ADC's predictions.  To register, click here.

 

Trend #2:  The Rise of the Middle Man

In 2010, we predicted the industrialization of hacking.  What is industrialization's impact on hackers' business models?  In 2012, with the increased supply of, and demand for, compromised machines and sensitive corporate data, we predict the rise of a new cyber crime job role: the broker. This individual is responsible for matching buyers of stolen data or compromised machines (aka "bots") with the sellers of the data (or bot renters).  In the same way that stocks and investors gave rise to stock markets, hackers need a middleman.

The success of bot herding has opened up a large market in which many hackers have many corporate machines under their control, each potentially holding a vast amount of data. However, waiting for individuals to approach them and buy this data is simply too slow and ineffective, making the hackers victims of their own success.  Instead, we are seeing this situation open up a wholesale opportunity for a middleman to bridge the gap.

 

 

December 09, 2011
 Top Cyber Security Trends for 2012: #3

Our other trends are here:

Nine
Eight
Seven
Six
Five
Four
Three
Two
One

On December 14th, Imperva's CTO Amichai Shulman will be hosting a webinar, talking you through the ADC's predictions.  To register, click here.

 

Trend #3:  Anti-Social Media

As many more organizations make their way into the social media space, we expect to see a growing impact on the integrity and confidentiality of enterprise information. Moreover, hackers will continue to automate social media attacks, further exacerbating the situation. The heart of the problem resides in three issues inherent to social networks:

  1. Sharing - The most important thing to understand about social networks, and the tools built on top of them, is that they are designed for sharing information, not restricting access to it. Enterprises that try to use social media as collaboration suites for internal, sensitive business data (which requires different levels of access privileges) are bound to encounter massive data breaches. The reason is not flawed access controls and privacy mechanisms. Rather, restricting information through these channels runs contrary to the very concept of such environments, which is all about sharing. Consequently, organizations should keep the operational copy of all their data in a business system that can provide decent access controls. Data that can be made public can then be exported from this system and posted to the social network. This way, restricted information is kept inside business systems (whether on premise or in the cloud), while public information can be pushed out for publication on the social platform.
  2. Control – Organizations need to understand that they have almost no control over interactions with members of the social platform. In the real world we try to control the social interactions we experience by carefully choosing our social circles and the places we hang out. This is not possible in the cyber world: comment spam, defamation, false claims and bad language are the norm.
    Keeping your social cyber environment clean of these is a difficult task, and such cleansing consumes resources in proportion to the enterprise's popularity.  Measures range from sifting and sanitizing comments to engaging closely with the social networks in cases of defamation. Enterprises that fail to invest these resources will quickly find true followers fleeing the scene. Meanwhile, the brand erodes, defeating the purpose of entering the social network scene in the first place.
  3. Lack of Trust and Proper Identification - There is no real way for enterprises to avoid copy-cats.  On today's social platforms, there is no solid way to tell the real owner of a brand apart from impostors and copy-cats trying to take advantage of the brand's popularity, to abuse it or to erode it. The identity of message posters cannot be verified in any way, and there are no real tools to evaluate the trustworthiness of messages and their content.

The consequences can be general brand erosion or attack campaigns targeted at the enterprise's social circle. Mix these three issues with the growing use of automation and you get social network mayhem.  In the past couple of years we have witnessed the power of automation applied to social networks:

  • In February 2011, the Lovely-Faces.com website showcased hundreds of thousands of scraped Facebook user profiles.
  • In September 2011, another group demonstrated an application that automates the process of "friending." The application then collects all personal information, including photos, from those who accept the friendship request.
  • Recently, a group of researchers demonstrated the power of "social botnets": fake profiles that can automatically grow a network of friends among real accounts. The research showed that the flawed "friend of a friend" trust model enables this type of botnet proliferation. Further, individuals were three times more receptive to a friendship request if the requester already shared a mutual friend with them.
  • Software automating account generation exists, as do various data-mining research projects.
  • This fall, DHS started setting up policies to monitor Facebook and Twitter. Automation will be at the heart of this project, to sift through the incredibly high volume of traffic.

Unfortunately, we do not see any market solutions ready to handle the above issues. Facebook and the other social media platform providers currently keep full control and attempt to fight some of the issues (mainly automation and fake accounts) from within; one such initiative is Facebook's Immune System project. This has proven mostly futile so far (for instance, there's a clear conflict of interest between Facebook's attempt to remove fake accounts and its attempt to show constant, unbelievable growth). Rather, solutions must be incorporated into existing platforms by enterprises themselves.

These solutions will have to rely on third parties offering trust and data-control services on top of the social media platforms. Currently, we are not aware of any such solutions, leaving a void ripe for research.

 

December 08, 2011
 Two Thirds of USB Devices Infected

An Australian rail company auctioned off unwiped USB keys from its lost property office.  Not the smartest idea.

But it provided a glimpse into the data stored on USB devices, with some interesting stats:

  • The prevalence of malware: 66% were infected.
  • Lots of interesting documents (duh..)
  • None were encrypted (duh.. times two)

For a further breakdown, plus nice tables and graphics, see the Sophos site.

 

 

 Top Cyber Security Trends for 2012: #4

Our other trends are here:

Nine
Eight
Seven
Six
Five
Four
Three
Two
One

On December 14th, Imperva's CTO Amichai Shulman will be hosting a webinar, talking you through the ADC's predictions.  To register, click here.

 

Trend #4: The Kimono Comes Off of Consumerized IT

After being caught off-guard by the consumerization of IT, professionals are trying to regain control of corporate data. The problem is that they are doing it the wrong way. Instead of trying to control data at the source, IT organizations try to regulate the usage of end-user devices and de-cloud data access. We expect organizations to spend a lot of time, money and effort on these techniques and technologies next year, with very poor results.

The consumerization of IT refers to the process by which corporate data is increasingly processed by end-user devices and applications chosen and provided by the end-users themselves. Smartphones, tablets and personal laptops are leading this trend, with their increasing processing power and storage capabilities and their growing diversity of available applications. They are augmented by a growing remote workforce and individuals who regularly use home computers and home networks to access corporate resources. This process by itself poses many challenges related to the compromise of information on the device (either physically, through loss or theft of the device, or digitally, through malware), as well as the compromise of enterprise networks through a compromised device. Coupled with the move of corporate data into the cloud, where it is stored outside of the organization, an even more difficult problem emerges: the organization completely loses control over the interaction between end-users and corporate data.

There is a growing trend among IT professionals to try to regain control of end-user devices. Through various means, organizations are trying to enforce "proper" usage and settings on non-corporate devices. IT departments are attempting to enforce policies such as password strength, device lockout and even remote wiping in the case of device loss. For example, access to Microsoft email servers through the ActiveSync protocol can be restricted to devices that implement a specific security policy. Some enterprises go as far as restricting the device models allowed to access enterprise data to those that possess certain security capabilities. We anticipate that the next step will be to require that certain security solutions be installed on devices allowed to connect to the network (e.g. Lookout or another mobile AV). To reduce the risk of device compromise, enterprises are also trying to force all web access from the device to be relayed through the enterprise network, where it can be monitored and controlled (which, of course, has severe implications for SSL-protected web resources, as explained in a different trend). Further, this approach hopes to bridge the gap between user devices and the cloud applications that hold enterprise data.  The approach described above is bound to fail for quite a few reasons, most of which stem from overlooking past experience and human nature:

  1. Past is prologue.  The past couple of years have shown that enterprises are failing to prevent the compromise of enterprise computing equipment. Extending the scope of the problem to a larger variety of devices only magnifies it.
  2. Maintaining availability.  Organizations that delegate information availability and network accessibility to the cloud, and then tunnel all user-device traffic through their own network, are going to face major networking issues. Consequently, they will find themselves spending time and money creating and maintaining the very worldwide availability they wanted to avoid providing in the first place.
  3. User privacy. There are unsolved issues regarding the impact on users' privacy and the enterprise's liability for personal information stored on these devices. For instance, remote-wipe tools cannot differentiate between corporate and personal information.

This upcoming year, organizations are going to spend quite a lot of money and effort before realizing how little this approach improves enterprise data security. When they realize these measures have failed, they will look for a different set of solutions, more tightly coupled to the data itself: monitoring access to the data stores and strictly controlling that access.

 

 

December 07, 2011
 Anonymous' Legal Legacy

Imperva's legal team pointed out an interesting article in the American Bar Association (ABA) Journal.

The article details an attack by Anonymous on the computer security firm HBGary, not terribly noteworthy for those of us in security.  What is interesting is that Anonymous posted online the emails between HBGary and its law firm, Hunton & Williams, exposing efforts by the two to use illicit tactics to bring down an activist website.

The lawyers involved may now face charges for their illegal conduct, as well as State Bar scrutiny regarding not only that conduct but also whether they adequately protected confidential client information.

The lesson?  Currently, there are no explicit laws governing what measures lawyers must take to keep client information confidential.  Given how thoroughly legal information has been digitized, this is something lawyers will have to evaluate seriously. Considering the wealth of sensitive client information lawyers hold, imposing security and privacy compliance on the legal industry seems inevitable.  And ironic.

 

 Top Cyber Security Trends for 2012: #5

Our other trends are here:

Nine
Eight
Seven
Six
Five
Four
Three
Two
One

On December 14th, Imperva's CTO Amichai Shulman will be hosting a webinar, talking you through the ADC's predictions.  To register, click here.

 

Trend #5:  NoSQL = No Security?

The IT world is quickly embracing Big Data.  Huge data stores are the next big step in analyzing the massive amounts of data being collected in order to identify trends. For example, new startups use these systems to analyze trillions of DNA strips to understand our genealogy, while well-established companies adopt the technology to map and time transportation systems across the world, making travel easier and cheaper. While Big Data is becoming a buzzword in information systems, there has not been much investigation into its security implications. Many predict that in 2012 we'll see growing interest in Big Data and its underlying technology, NoSQL. We predict that the inadequate security mechanisms of these systems will inhibit enterprises from fully integrating them as third-party components within the corporation.

NoSQL is an umbrella term for data stores that hold all types of data, from structured to unstructured. Because of this diversity, these data stores are not accessed through the standard SQL language. Until recently, we divided data stores into two groups: relational databases (RDBMS) and file servers. The new kid in town, NoSQL, opened our minds to a database that, unlike conventional relational systems, does not impose a structured schema. The advantage? Scalability and availability.  Because each data store is mirrored across different locations to guarantee constant uptime and no loss of data, these systems are commonly used to analyze trends. They are not suitable for financial transactions requiring real-time updates, but could be employed at a financial institution to analyze, say, its most efficient or busiest branch.

However, as applications using NoSQL are rolled out, little time has been spent thinking, or re-thinking, security. Ironically, databases and file servers have seen their share of security problems over the years, and those are systems that have gained the mileage that allows such security inspection. We cannot say the same about NoSQL.

Many may claim that the developers of different NoSQL systems have deliberately left security aspects out of their systems. For instance, Cassandra has only basic built-in authentication. This minimalism is considered a feature, built on the assumption that database administrators should not need to trouble themselves with security; security, the thinking goes, should be offloaded to a dedicated team.

We believe the NoSQL systems will suffer from a number of issues:

  • Lack of expertise. Currently, there are hardly any experts who understand the security aspects of NoSQL technologies. When building a NoSQL system, there is no obvious security model that fits, and the lack of such a model makes implementing security a non-trivial process requiring extensive design. As a result, the security features that need to be considered get pushed out over and over again.
  • Buggy applications. Until third-party solutions roll out to provide the necessary security, it is the NoSQL applications themselves that will carry the security load.  Issues include:
    • Adding authentication and authorization to the application. This adds security considerations that make the application much more complex. For example, the application needs to define users and roles, and based on that data decide whether to grant a user access to the system.
    • Input validation. Once again, issues that have haunted RDBMS applications are coming back to haunt NoSQL databases. At Black Hat 2011, researchers showed how a hacker can use "NoSQL injection" to access restricted information, and "The Web Application Hacker's Handbook: Finding and Exploiting Security Flaws" now contains a separate chapter focused solely on the security of programming frameworks used with NoSQL.
    • Application awareness. If each application must manage its own security, it has to be aware of every other application in order to deny access to any non-application data.
    • When new data types are added to the data store, the administrator has to figure out, and ensure, which applications cannot access that specific data.
    • Vulnerability-prone code. There are a limited number of NoSQL products, but an order of magnitude more applications and application-server products; the more applications, the more code, and code in general is prone to bugs.
  • Data duplication. In NoSQL systems, data is not strictly saved in particular tables. Instead, data is duplicated across many tables to optimize query processing. As a result, it is not possible to confine credit cards, say, to one particular sensitive table. On the contrary, that data can be found in many places: transaction logs, personal details, specific tables representing all credit cards, and other locations that may not even have been considered.
  • Privacy.  Although our focus is security, privacy concerns cannot be ignored. Take, for example, a healthcare platform where providers share patient data. A patient might access the system for genetic information, and later access it for drug information. An application analyzing this data can correlate the two to find purchasing trends relating genetics and health. The problem is that this type of correlation was not considered when the data was first inserted; as a result, the data was never anonymized, allowing anyone to identify specific individuals from the bigger picture.
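The input-validation point deserves a concrete illustration. The sketch below does not use any real NoSQL product; it is a tiny in-memory matcher that mimics the MongoDB-style "$ne" operator, with made-up user records, purely to show the query-construction pattern behind "NoSQL injection":

```python
# Toy document matcher understanding a MongoDB-style "$ne" operator.
# All data is fabricated; the point is the injection pattern, not the API.
USERS = [{"user": "alice", "pin": "1234"}, {"user": "bob", "pin": "9999"}]

def matches(doc, query):
    for field, cond in query.items():
        if isinstance(cond, dict) and "$ne" in cond:
            if doc.get(field) == cond["$ne"]:
                return False
        elif doc.get(field) != cond:
            return False
    return True

def find(query):
    return [d for d in USERS if matches(d, query)]

# Vulnerable pattern: request parameters passed straight into the query.
# An attacker submits pin={"$ne": ""} and matches every user without
# knowing any PIN -- an authentication bypass.
attacker_input = {"$ne": ""}
assert find({"user": "alice", "pin": attacker_input})

# Safer pattern: coerce input to the expected literal type first, so the
# operator object is treated as a (non-matching) string, not a query.
assert find({"user": "alice", "pin": str(attacker_input)}) == []
```

The same discipline that killed classic SQL injection (treat input as data, never as query structure) applies unchanged here; NoSQL merely moves the operators from strings into objects.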

NoSQL is still in its infancy, and it will take a while before we see these systems fully deployed at the majority of enterprises. For precisely this reason, it is so important to invest now in the security of these systems.

 

 

 

 

 
