Tuesday, November 18, 2014

Die Laughing from a Billion Laughs

By Fernando Arnaboldi

Recursion is the process of repeating items in a self-similar way, and that’s what XML Entity Expansion (XEE)[1] is about: a small string is referenced a huge number of times.

Technology standards sometimes include features that affect the security of applications. In 2002, Amit Klein found that XML entities could be used to make parsers consume an unlimited amount of resources and then crash, an attack that became known as the billion laughs attack. When the XML parser tries to resolve the nested entities, the application starts consuming all the available memory until the process crashes.
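A scaled-down sketch of the attack (three entity levels instead of the usual nine, so it expands to only 100 characters; the entity names are made up) against Python's stock XML parser looks like this:

```python
import xml.etree.ElementTree as ET

# Each entity expands to ten copies of the previous one, so &e2;
# becomes 10 * 10 = 100 characters. The real attack nests ~9 levels,
# turning a ~1 KB document into roughly 10^9 expansions.
bomb = """<?xml version="1.0"?>
<!DOCTYPE r [
  <!ENTITY e0 "X">
  <!ENTITY e1 "&e0;&e0;&e0;&e0;&e0;&e0;&e0;&e0;&e0;&e0;">
  <!ENTITY e2 "&e1;&e1;&e1;&e1;&e1;&e1;&e1;&e1;&e1;&e1;">
]>
<r>&e2;</r>"""

root = ET.fromstring(bomb)
print(len(root.text))   # 100
```

With nine levels this would exhaust memory on an unprotected parser; hardened parsers such as defusedxml reject the entity nesting outright.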

Thursday, November 6, 2014

ELF Parsing Bugs by Example with Melkor Fuzzer

By Alejandro Hernandez @nitr0usmx

(Extract from the white paper)

Too often the development community continues to blindly trust the metadata in Executable and Linking Format (ELF) files. In this paper, Alejandro Hernández walks you through the testing process for seven applications and reveals the bugs that he found. He performed the tests using Melkor, a file format fuzzer he wrote specifically for ELF files.

Thursday, October 23, 2014

Bad Crypto 101

By Yvan Janssens

This post is part of a series about bad cryptography usage. We all rely heavily on cryptographic algorithms for data confidentiality and integrity, and although most commonly used algorithms are secure, they need to be used carefully and correctly. Just as holding a hammer backwards won't yield the expected result, using cryptography badly won't yield the expected results either.

To refresh my Android skillset, I decided to take apart a few Android applications that offer to encrypt personal files and protect them from prying eyes. I headed off to the Google Play Store and downloaded the first free application it recommended to me. I decided to only consider free applications, since most end users would prefer a cheap (free) solution compared to a paid one.

Thursday, October 16, 2014

Vicious POODLE Finally Kills SSL

By Robert Zigweid

The poodle must be the most vicious dog, because it has killed SSL. 

POODLE is the latest in a rather lengthy string of vulnerabilities in SSL (Secure Sockets Layer) and its more recent successor, TLS (Transport Layer Security). Both protocols secure data that is being sent between applications to prevent eavesdropping, tampering, and message forgery.

POODLE (Padding Oracle On Downgraded Legacy Encryption) rings the death knell for our 18-year-old friend SSL version 3.0 (SSLv3), because at this point, there is no truly safe way to continue using it.  

Google announced Tuesday that its researchers had discovered POODLE. The announcement came amid rumors about the researchers’ security advisory, a white paper detailing the vulnerability that had been circulating internally.

Thursday, September 18, 2014

A Dirty Distillation of Proposed V2V Readiness

By Chris Valasek @nudehaberdasher

Good Afternoon Internet,
Chris Valasek here. You may remember me from such automated information kiosks as "Welcome to Springfield Airport", and "Where's Nordstrom?" Ever since Dr. Charlie Miller and I began our car hacking adventures, we’ve been asked about the upcoming Vehicle-to-Vehicle (V2V) initiative and haven’t had much to say because we only knew about the technology in the abstract. 
I finally decided to read the proposed documentation from the National Highway Traffic Safety Administration (NHTSA) titled “Vehicle-to-Vehicle Communications: Readiness of V2V Technology for Application”. This is my distillation of a very small portion of the 327-page document.

Wednesday, September 10, 2014

Killing the Rootkit

By Shane Macaulay

Cross-platform, cross-architecture DKOM detection

To know if your system is compromised, you need to find everything that could run or otherwise change state on your system and verify its integrity (that is, check that the state is what you expect it to be).

“Finding everything” is a bold statement, particularly in the realm of computer security, rootkits, and advanced threats. Is it possible to find everything? Sadly, the short answer is no, it’s not. Strangely, the long answer is yes, it is.

By defining the execution environment at any point in time, predominantly through the use of hardware-based hypervisor or virtualization facilities, you can verify the integrity of that specific environment using cryptographically secure hashing.
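As a toy sketch of that last idea (hypothetical values, not the actual tool), verifying integrity boils down to hashing a captured piece of state and comparing it against a known-good digest:

```python
import hashlib

def digest(state: bytes) -> str:
    # SHA-256 digest of a captured piece of execution state.
    return hashlib.sha256(state).hexdigest()

golden = digest(b"\x90\x90\xc3")    # digest recorded at a trusted point
later = digest(b"\x90\x90\xc3")     # same region, measured again later
assert later == golden              # unchanged: integrity verified

tampered = digest(b"\x90\x90\xcc")  # a single patched byte
assert tampered != golden           # any modification changes the digest
```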

Tuesday, August 19, 2014

Silly Bugs That Can Compromise Your Social Media Life

By Ariel Sanchez

A few months ago while I was playing with my smartphone, I decided to intercept traffic to see what it was sending. The first thing that caught my attention was the iOS Instagram app. For some reason, the app sent a request using a Facebook access token through an HTTP plain-text communication.

Thursday, August 14, 2014

Remote survey paper (car hacking)

Good Afternoon Interwebs, 
Chris Valasek here. You may remember me from such nature films as “Earwigs: Eww”.
Charlie and I are finally getting around to publicly releasing our remote survey paper. I thought this went without saying, but to reiterate: we did NOT physically look at the cars that we discussed. The survey was designed as a high-level overview of the information that we acquired from the mechanics’ sites for each manufacturer. The ‘Hackability’ rating is based upon our previous experience with automobiles, attack surface, and network structure.

  • cv & cm 

Tuesday, August 5, 2014

Upcoming Blackhat & DEF CON talk: A Survey of Remote Automotive Attack Surfaces

Hi Internet,

Chris Valasek here; you may remember me from such movies as ‘They Came to Burgle Carnegie Hall’. In case you haven’t heard, Dr. Charlie Miller and I will be giving a presentation at Black Hat and DEF CON titled ‘A Survey of Remote Automotive Attack Surfaces’. You may have seen some press coverage on Wired, CNN, and Dark Reading several days ago. I really think they all did a fantastic job covering what we’ll be talking about.

We are going to look at a bunch of cars’ network topology, cyber physical features, and remote attack surfaces. We are also going to show a video of our automotive intrusion prevention/detection system.

While I’m sure many of you want to find out which car we think is most hackable (and you will), we don’t want that to be the focus of our research. The biggest problem we faced while researching the Toyota Prius and Ford Escape was the small sample set. We were able to dive deeply into two vehicles, but we only learned about those two specific vehicles.

Our research and presentation focus on understanding the technology and implementations, at a high level, for several major automotive manufacturers. We feel that by examining how different manufacturers design their automotive networks, we’ll be able to make more general comments about vehicle security, instead of only referencing the two aforementioned automobiles.

I hope to see everyone in Vegas and would love it if you show up for our talk. It’s at 11:45 AM in Lagoon K on Wednesday August 6.

-- CV

P.S. Come to the talk for some semi-related, never-before-seen hacks.

Thursday, July 31, 2014

Hacking Washington DC traffic control systems

By Cesar Cerrudo @cesarcer

This is a short blog post, because I’ve talked about this topic in the past. I want to let people know that I have the honor of presenting at DEF CON on Friday, August 8, 2014, at 1:00 PM. My presentation is entitled “Hacking US (and UK, Australia, France, etc.) Traffic Control Systems”. I hope to see you all there. I'm sure you will like the presentation.

I am frustrated with the lack of cooperation from Sensys Networks (the vendor of the vulnerable devices), but I realize that I should be thankful. This has prompted me to further my research and try different things, like performing passive onsite tests on real deployments in cities like Seattle, New York, and Washington DC. I’m not so sure these cities are equally thankful, since they have to deal with thousands of installed vulnerable devices, which are currently being used for critical traffic control.

The latest Sensys Networks numbers indicate that approximately 200,000 sensor devices are deployed worldwide. Based on a unit cost of approximately $500, that means roughly $100,000,000 of vulnerable equipment is buried in roads around the world that anyone can hack. I’m also concerned about how much it will cost taxpayers to fix and replace the equipment.

One way I confirmed that Sensys Networks devices were vulnerable was by traveling to Washington DC to observe a large deployment that I got to know.

When I exited the train station, the fun began. (Thanks to Ian Amit for the pictures and videos.)

Disclaimer: no hacking was performed. I just looked at wireless data with a wireless sniffer and an access point displaying it graphically using Sensys Networks software along with sniffer software; no data was modified and no protections were bypassed. I just confirmed that communications were not encrypted and that sensors and repeaters could be completely controlled with no authentication necessary.

Maybe the devices are intentionally vulnerable so that the Secret Service can play with them when Cadillac One is around. :)

As you can see, Washington DC and many cities around the world will remain vulnerable until Sensys Networks takes action. In the meantime, I really hope no one hacks these devices and causes traffic problems and accidents.

I would recommend closely monitoring these systems, watching for any malfunction, and always having secondary controls in place. These types of devices should be security audited before being deployed, to avoid these kinds of problems and to increase their security. Vendors should also be required, in some way, to properly document and publish the security controls, functionality, and so on, of their products so that customers can quickly determine whether they are good and secure.

See you at DEF CON!

By the way, I will also be at IOAsis, so come through for a discussion and demo.

Wednesday, July 30, 2014

DC22 Talk: Killing the Rootkit

By Shane Macaulay

I'll be at DEF CON 22 to present information about a high-assurance tool/technique that helps to detect hidden processes (hidden by a DKOM-type rootkit). It works very well, with little testing required. The process also works recursively (it can detect host and guest processes inside a host memory dump).

Plus, I will also be at our IOAsis, so come through for a discussion and a demo.

Monday, June 16, 2014

Video: Building Custom Android Malware for Penetration Testing

By Robert Erbes  @rr_dot 

In this presentation, I provide a brief overview of the Android environment and a somewhat philosophical discussion of malware. I also take a look at possible Android attacks in order to help you test your organization's defenses against the increasingly common Bring Your Own Device scenario.

Wednesday, May 7, 2014

Glass Reflections in Pictures + OSINT = More Accurate Location

By Alejandro Hernández - @nitr0usmx

Disclaimer: The aim of this article is to help people be more careful when taking pictures through windows, because they might inadvertently reveal their location. The technique presented here might be used for many different purposes, such as tracking down the location of the bad guys, simply finding out which hotel that nice room is in, or, for some people, following the tracks of their favorite artist.
All of the pictures presented here were posted by their owners on Twitter. The tools and information used to determine the locations where the pictures were taken are all publicly available on the Internet. No illegal actions were performed in the work presented here.


Travelling can be enriching and inspiring, especially if you’re in a place you haven’t been before. Whether on vacation or travelling for business, one of the first things that people usually do, including myself, after arriving in their hotel room, is turn on the lights (even if daylight is still coming through the windows), jump on the bed to feel how comfortable it is, walk to the window, and admire the view. If you like what you see, sometimes you grab your camera and take a picture, regardless of reflections in the window.

Wednesday, April 30, 2014

Hacking US (and UK, Australia, France, etc.) Traffic Control Systems

By Cesar Cerrudo @cesarcer

Hacking like in the movies

Probably many of you have watched scenes from "Live Free or Die Hard" (Die Hard 4) where "terrorist hackers" manipulate traffic signals by just hitting Enter or typing a few keys. I wanted to do that! I started to look around, and while I couldn't do exactly the same thing (too Hollywood style!), I got pretty close. I found some interesting devices used by traffic control systems in important US cities, and I could hack them :) These devices are also used in cities in the UK, France, Australia, China, etc., making them even more interesting.

After getting the devices, it wasn't difficult to find vulnerabilities (actually, it was more difficult to make them work properly, but that's another story).

Wednesday, April 23, 2014

Hacking the Java Debug Wire Protocol - or - “How I met your Java debugger”

By Christophe Alladoum - @_hugsy_

TL;DR: turn any open JDWP service into reliable remote code execution (exploit inside)

<plagiarism> Kids, I’m gonna tell you an incredible story. </plagiarism>
This is the story of how I came across an interesting protocol during a recent engagement for IOActive and turned it into a reliable way to execute remote code. In this post, I will explain the Java Debug Wire Protocol (JDWP) and why it is interesting from a pentester’s point of view. I will cover some JDWP internals and how to use them to perform code execution, resulting in a reliable and universal exploitation script. So let’s get started.

Disclaimer: This post provides techniques and exploitation code that should not be used against vulnerable environments without prior authorization. The author cannot be held responsible for any private use of the tools or techniques described herein.

Note: As I was looking into JDWP, I stumbled upon two brief posts on the same topic (see [5] (in French) and [6]). They are worth reading, but do not expect that a deeper understanding of the protocol itself will allow you to reliably exploit it. This post does not reveal any 0-day exploits, but instead thoroughly covers JDWP from a pentester/attacker perspective. 

Thursday, April 17, 2014

A Wake-up Call for SATCOM Security

By Ruben Santamarta @reversemode

During the last few months we have witnessed a series of events that will probably be seen as a tipping point in the public’s opinion about the importance of, and need for, security. The revelations of Edward Snowden have served to confirm some theories and shed light on surveillance technologies that were long restricted.

We live in a world where an ever-increasing stream of digital data is flowing between continents. It is clear that those who control communications traffic have the upper hand.

Satellite Communications (SATCOM) plays a vital role in the global telecommunications system. Sectors that commonly rely on satellite networks include:
  • Aerospace
  • Maritime
  • Military and governments
  • Emergency services
  • Industrial (oil rigs, gas, electricity)
  • Media
It is important to mention that certain international safety regulations, such as GMDSS for ships or ACARS for aircraft, rely on satellite communication links. In fact, we recently read how, thanks to the SATCOM equipment on board Malaysia Airlines flight MH370, Inmarsat engineers were able to determine the approximate position where the plane crashed.

IOActive is committed to improving overall security. The only way to do so is to analyze the security posture of the entire supply chain, from the silicon level to the upper layers of software. 

Thus, in the last quarter of 2013, I decided to research a series of devices that, although widely deployed, had not received the attention they actually deserve. The goal was to provide an initial evaluation of the security posture of the most widely deployed Inmarsat and Iridium SATCOM terminals.

In previous blog posts I've explained the common approach when researching complex devices that are not physically accessible. In those terms, this research is not much different from previous research: in most cases the analysis was performed by statically reverse engineering the firmware.

What about the results? 

Insecure and undocumented protocols, backdoors, hard-coded credentials...mainly design flaws that allow remote attackers to fully compromise the affected devices using multiple attack vectors.

Ships, aircraft, military personnel, emergency services, media services, and industrial facilities (oil rigs, gas pipelines, water treatment plants, wind turbines, substations, etc.) could all be affected by these vulnerabilities.

I hope this research is seen as a wake-up call for both the vendors and users of the current generation of SATCOM technology. We will be releasing full technical details in several months, at Las Vegas, so stay tuned.
The following white paper comprehensively explains all the aspects of this research.

Thursday, April 10, 2014

Bleeding Hearts

By Robert Erbes @rr_dot

The Internet is ablaze with talk of the "heartbleed" OpenSSL vulnerability disclosed yesterday (April 7, 2014).

While the bug itself is a simple “missing bounds check,” it affects quite a number of high-volume, big business websites.

Make no mistake, this bug is BAD. It's sort of a perfect storm: the bug is in a library used to encrypt sensitive data (OpenSSL), and it allows attackers a peek into a server's memory, potentially revealing that same sensitive data in the clear.

Initially, it was reported that private keys could be disclosed via this bug, basically allowing attackers to decrypt captured SSL sessions. But as more people have started looking at different sites, other issues have been revealed – servers are leaking information ranging from user sessions to encrypted search queries (DuckDuckGo) and passwords. The type of information accessible to an attacker is entirely a function of what happens to be in the target server’s memory at the time the attacker sends the request.

While there's a lot of talk about the bug and its consequences, I haven't seen much about what actually causes it. Given that the bug itself is pretty easy to understand (and even spot!), I thought it would be worthwhile to walk through the vulnerability here for those of you who are curious about the why, and not just the how-to-fix.

The Bug

The vulnerable code is found in OpenSSL's TLS Heartbeat Message handling routine - hence the clever "heartbleed" nickname. The TLS Heartbeat protocol is defined in RFC 6520 - "Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS) Heartbeat Extension".  

Before we get into the OpenSSL code, we should examine what this protocol looks like.

The structure of the Heartbeat message is very simple, consisting of an 8-bit message type, a 16-bit payload length field, the payload itself, and finally a sequence of padding bytes. In pseudo-code, the message definition looks like this (copied from the RFC):
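From RFC 6520, section 4, the HeartbeatMessage definition reads:

```
struct {
   HeartbeatMessageType type;
   uint16 payload_length;
   opaque payload[HeartbeatMessage.payload_length];
   opaque padding[padding_length];
} HeartbeatMessage;
```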

The type (line 2) is simply 1 or 2, depending on whether the message is a request or a response.

The payload_length (line 3) indicates the size of the payload part of the message, which follows immediately. Being a 16-bit unsigned integer, its maximum value is 2^16-1 (which is 65535). If you've done some other reading about this bug, you'll recognize the 64k as being the upper limit on how much data can be accessed by an attacker per attack sent.

The payload (line 4) is defined to be "arbitrary content" of length payload_length.

And padding (line 5) is "random content" at least 16 bytes long, and "MUST be ignored."

Easy enough!

Now I think it should be noted here that the RFC itself appears sound - it describes appropriate behavior for what an implementation of the protocol should and shouldn't do. In fact, the RFC explicitly states that "If the payload_length of a received HeartbeatMessage is too large, the received HeartbeatMessage MUST be discarded silently." Granted, "too large" isn't defined, but I digress…

There is one last part of the RFC that's important for understanding this bug: The RFC states that "When a HeartbeatRequest message is received ... the receiver MUST send a corresponding HeartbeatResponse message carrying an exact copy of the payload of the received HeartbeatRequest." This is important in understanding WHY vulnerable versions of OpenSSL are sending out seemingly arbitrary blocks of memory.

Now, on to OpenSSL code!  

Here's a snippet from the DTLS Heartbeat handling function in (the vulnerable) openssl-1.0.1f\ssl\d1_both.c:
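(Reconstructed here from the widely circulated openssl-1.0.1f source; line numbering approximate, matching the references in the walkthrough.)

```
1457    unsigned char *p = &s->s3->rrec.data[0], *pl;
1458    unsigned short hbtype;
1459    unsigned int payload;
1460    unsigned int padding = 16; /* Use minimum padding */

1462    /* Read type and payload length first */
1463    hbtype = *p++;
1464    n2s(p, payload);
1465    pl = p;
        /* ... */
1474    unsigned char *buffer, *bp;
        /* ... allocate 1 byte type, 2 bytes length, plus payload and padding */
1481    buffer = OPENSSL_malloc(1 + 2 + payload + padding);
1482    bp = buffer;

1484    /* Enter response type, length and copy payload */
1485    *bp++ = TLS1_HB_RESPONSE;
1486    s2n(payload, bp);
1487    memcpy(bp, pl, payload);
```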

If you're not familiar with reading C, this can be a lot to digest. The bug will become clearer if we go through this function piece by piece. We’ll start at the top…

This part just defines local variables for the function. The most important one here is the char pointer p (line 1457), which points to the heartbeat message the attacker controls. hbtype (line 1458) will hold the HeartbeatMessageType value mentioned in the RFC, and payload (line 1459) will hold the payload_length value. Don't let the fact that the payload_length value is being stored in the payload variable confuse you!

Here, the first byte of the message is copied into the hbtype variable (line 1463), and the 16-bit payload-length is copied from p (the attacker-controlled message) to the payload variable using the n2s function (line 1464). The n2s function simply converts the value from the sequence of bits in the message to a number the program can use in calculations. Finally, the pl pointer is set to point to the payload section of the attacker-controlled message (line 1465).
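As a quick illustration of what n2s accomplishes (made-up field values), the length is just two bytes in network byte order, read into a usable integer:

```python
# First byte is the message type; the next two are the big-endian
# (network byte order) payload_length. Padding bytes follow.
msg = bytes([0x01, 0xFF, 0xFF]) + b"A" * 16

hbtype = msg[0]                           # 1 = heartbeat request
payload_length = (msg[1] << 8) | msg[2]   # what n2s(p, payload) computes
print(hbtype, payload_length)             # 1 65535
```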

On line 1474, a variable called buffer is defined; then, on line 1481, it is allocated an area of memory, using the attacker-controlled payload variable to calculate how much memory should be allocated. Then, on line 1482, the bp pointer is set to point to the buffer that was just allocated for the server's response.
As an aside, if the payload_length field that gets stored in the payload variable were wider than 16 bits (say, 32 or 64 bits instead), we'd be looking at another couple of vulnerabilities: either an integer overflow leading to a buffer overflow, or potentially a null-pointer dereference. The exact nature and exploitability of either of these would depend upon the platform itself and the exact implementation of OPENSSL_malloc. payload_length *is* only 16 bits, however, so we'll continue...

This code snippet shows the server building the response. It's here that this bug changes from being an attacker-controlled length field leading to Not Very Much into a serious information disclosure bug causing a big stir. Line 1485 simply sets the type of the response message pointed to by bp to be a Heartbeat Response. According to the RFC, the payload_length should be next, and indeed - it is being copied over to the bp response buffer via the s2n function on line 1486. The server is just copying the value the attacker supplied, which was stored in the payload variable. Finally, the payload section of the attacker message (pointed to by pl, on line 1465) is copied over to the response buffer, pointed to by the bp variable (again, according to the RFC specification), which is then sent back to the attacker.  

And herein lies the vulnerability - the attacker-controlled payload variable (which stores the payload_length field!) is used to determine exactly how many bytes of memory should be copied into the response buffer, WITHOUT first being checked to ensure that the payload_length supplied by the attacker is not bigger than the size of the attacker-supplied payload itself.

This means that if an attacker sends a payload_length greater than the size of the payload, any data located in the server’s memory after the attacker’s payload would be copied into the response. If the attacker set the payload_length to 10,000 bytes and only provided a payload of 10 bytes, then a little less than an extra 10,000 bytes of server memory would be copied over to the response buffer and sent back to the attacker. Any sensitive information that happened to be hanging around in the process (including private keys, unencrypted messages, etc.) is fair game. The only variable is what happens to be in memory. In playing with some of the published PoCs against my own little test server (openssl s_server FTW), I got a whole lot of nothing back, in spite of targeting a vulnerable version, because the process wasn't doing anything other than accepting requests. To reiterate, the data accessed entirely depends on what's in memory at the time of the attack.
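To make the mechanics concrete, here is a toy simulation of the over-read in Python (a hypothetical model, not the actual OpenSSL code): process memory is modeled as a single bytes buffer in which the attacker's five-byte payload happens to sit next to unrelated secrets.

```python
# Hypothetical simulation of the Heartbleed over-read; names and
# values are made up for illustration.

def heartbeat_response(memory: bytes, payload_length: int) -> bytes:
    # The bug in miniature: trust the attacker-supplied payload_length
    # and copy that many bytes, never checking it against the size of
    # the payload actually sent.
    return memory[:payload_length]

payload = b"hello"                            # attacker sends 5 real bytes
memory = payload + b" PRIVATE-KEY user:pass"  # adjacent server memory

print(heartbeat_response(memory, 5))          # honest request: b'hello'
print(heartbeat_response(memory, 27))         # inflated length leaks secrets
```

A fixed implementation would discard the message when payload_length exceeds the received payload, exactly as RFC 6520 requires.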

Testing it yourself

There are a number of PoCs put up yesterday and today that allow you to check your own servers. If you're comfortable using an external service, you can use one of the online checkers. Or you can grab filippo’s golang code, compile it yourself, and test on your own. If golang is not your cup of tea, I'd check out the simple Python version; all that's required is a Python2 installation.

Happy Hunting!

Tuesday, April 8, 2014

Car Hacking 2: The Content

By Chris Valasek @nudehaberdasher

Does everyone remember when those two handsome young gentlemen controlled automobiles with CAN message injection? I sure do. However, what if you don’t have the resources to purchase a car, pay for insurance, repairs to the car, and so on?

Fear not Internet! 

Chris and Charlie to the rescue. Last week we presented our new automotive research at SyScan 2014. To make a long story short, we provided the blueprints to set up a small automotive network outside the vehicle so security researchers could start investigating Autosec (TM pending) without the large budget needed to procure a real automobile. (Update: Andy Greenberg just released an article explaining our work.)

Additionally, we provided a solution for a mobile testing platform (a go-cart) that can be fashioned with ECUs from a vehicle (or purchased on eBay) for testing that requires locomotion, such as assisted braking and lane departure systems.

For those of you that want the gritty technical details, download this paper. As always, we’d love feedback and welcome any questions. 

Wednesday, March 26, 2014

A Bigger Stick To Reduce Data Breaches

By Gunter Ollmann, @gollmann 

On average I receive a postal letter from a bank or retailer every two months telling me that I've become the unfortunate victim of a data theft or that my credit card is being re-issued to prevent future fraud. When I quiz my friends and colleagues on the topic, it would seem that they too suffer the same fate on a recurring schedule. It may not be that surprising to some folks: 2013 saw over 822 million private records exposed according to the folks over at DataLossDB - and that's just the ones that were disclosed publicly.

It's clear to me that something is broken and it's only getting worse. When it comes to the collection of personal data, too many organizations have a finger in the pie and are ill-equipped (or unprepared) to protect it. In fact, I'd question why they're collecting it in the first place. All too often these organizations - of which I'm supposedly a customer - are collecting personal data about "my experience" doing business with them and are hoping to figure out how to use it to their profit (effectively turning me into a product). If these corporations were some bloke visiting a psychologist, they'd be diagnosed with a hoarding disorder. For example, consider what criteria the DSM-5 diagnostic manual uses to identify the disorder:
  • Persistent difficulty discarding or parting with possessions, regardless of the value others may attribute to these possessions.
  • This difficulty is due to strong urges to save items and/or distress associated with discarding.
  • The symptoms result in the accumulation of a large number of possessions that fill up and clutter active living areas of the home or workplace to the extent that their intended use is no longer possible.
  • The symptoms cause clinically significant distress or impairment in social, occupational, or other important areas of functioning.
  • The hoarding symptoms are not due to a general medical condition.
  • The hoarding symptoms are not restricted to the symptoms of another mental disorder.
Whether or not the organizations hoarding personal data know how to profit from it, it's clear that even the biggest of them are increasingly inept at protecting it. The criminals that are pilfering the data certainly know what they're doing. The gray market for identity laundering has expanded phenomenally since I talked about it at Black Hat in 2010.

We can moan all we like about the state of the situation now, but we'll be crying in the not-too-distant future when, statistically, we progress from being victims of data loss to being victims of (unrecoverable) fraud.

The way I see it, there are two core components to dealing with the spiraling problem of data breaches and the disclosure of personal information. We must deal with the "what data are you collecting and why?" questions, and incentivize corporations to take much more care protecting the personal data they've been entrusted with.

I feel that the data hoarding problem can be dealt with fairly easily. At the end of the day it's about transparency and the ability to "opt out". If I were to choose a role model for making a sizable fraction of this threat go away, I'd look to the basic components of the UK's Data Protection Act as the cornerstone of a solution - especially here in the US. I believe the key components of personal data collection should encompass the following:
  • Any organization that wants to collect personal data must have a clearly identified "Data Protection Officer" who not only is a member of the executive board, but is personally responsible for any legal consequences of personal data abuse or data breaches.
  • Before data can be collected, the details of the data sought for collection, how that data is to be used, how long it would be retained, and who it is going to be used by, must be submitted for review to a government or legal authority. I.e. some third-party entity capable of saying this is acceptable use - a bit like the ethics boards used for medical research etc.
  • The specifics of what data a corporation collects and what they use that data for must be publicly visible. Something similar to the nutrition labels found on packaged foods would likely be appropriate - so the end consumer can rapidly discern how their private data is being used.
  • Any data being acquired must include a date of when it will be automatically deleted and removed.
  • At any time any person can request a copy of any and all personal data held by a company about themselves.
  • At any time any person can request the immediate deletion and removal of all data held by a company about themselves.
If such governance existed for the collection and use of personal data, then the remaining big item is enforcement. You'd hope that the morality and ethics of corporations would be enough to ensure they protected the data entrusted to them with the vigor necessary to fight off the vast majority of hackers and organized crime, but this is the real world. Apparently the "big stick" approach needs to be reinforced.

A few months ago I delved into how the fines being levied against organizations that had been remiss in doing all they could to protect their customers' personal data should be bigger and divvied up. Essentially, I'd argue that half of the fine should be pumped back into the breached organization and used for increasing its security posture.

Looking at the fines being imposed upon the larger organizations (that could have easily invested more in protecting their customers' data prior to their breaches), the amounts are laughable. No noticeable financial pain occurs, so why should we be surprised when it happens again? I've become a firm believer that the fines businesses incur should be based upon a percentage of valuation. Why should a twenty-billion-dollar business face the same fine for losing 200,000,000 personal records as a ten-million-dollar business does for losing 50,000? If the fine were something like two percent of valuation, I can tell you that the leadership of both companies would focus far more firmly on keeping your data and mine much safer than they do today.

Thursday, February 27, 2014

Beware Your RSA Mobile App Download

By Gunter Ollmann, @gollmann 

It's been half a decade since Apple launched their iPhone campaign titled "There's an app for that". In the years following, the mobile app stores (from all the major players) have continued to blossom to the point that not only are there several thousand apps that help light your way (i.e. by keeping the flash running bright), but every company, cause, group, or notable event is expected to publish their own mobile application. 

Today there are several hundred good "rapid development" kits that allow any newbie to craft and release their own mobile application, and several thousand small professional software development teams that will create one on your behalf. These bespoke mobile applications aren't the types of products that their owners expect to make much (if any) money from. Instead, they are generally helpful tools that appeal to a particular target audience.

Now, while the cynical side of me would like to point out that some people should never be trusted with tools as lofty as HTML and WordPress, let alone building a mobile app, many corporate marketing teams I've dealt with have not only drunk the "There's an app for that" Kool-Aid, they appear to bathe in the stuff each night. As such, a turnkey approach to app production inevitably involves many sacrifices, and at the top of the sacrificial pillar sit data security and integrity.

A few weeks ago I noticed that, in the run-up to the RSA USA 2014 conference, a new mobile application was conceived, thrust upon the Apple and Google app stores, and electronically marketed to the world at large. Maybe it was a reaction to being spammed with a never-ending tirade of "come see us at RSA" emails, maybe it was topical off the back of a recent blog on the state of mobile banking application security, or maybe both. I asked some of the IOActive consulting team who had a little bench time between jobs to have a poke at the freshly minted "RSA Conference 2014" mobile application.

The Google Play app store describes the RSA Conference 2014 application like this:
With the RSA Conference Mobile App, you can stay connected with all Conference activities, view the event catalog, manage session schedules and engage with colleagues and peers while onsite using our social and professional networking tools. You'll have access to dynamic agenda updates, venue maps, exhibitor listing and more!
Now, I wasn't expecting the application to be particularly interesting–it's not as if it were a transactional banking application–but I would have thought that RSA (or whoever they tasked with commissioning the application) would have at least applied some basic elbow grease so as not to embarrass themselves. Alas, that was not to be the case.

The team came back rather quickly with a half-dozen security issues. Technically, the highest-impact vulnerability was that the app is open to man-in-the-middle attacks, in which an attacker could inject additional code into the login sequence and phish credentials. If we were dealing with a banking application, heads would have rolled in an engineering department, but this particular app has only been downloaded a few thousand times, and I seriously doubt that some evil hacker is going to take the time out of their day to target this one application (out of tens of millions) to try to phish credentials to a conference.

It was the second most severe vulnerability that caught my eye, though. The RSA Conference 2014 application downloads a SQLite DB file that is used to populate the visual portions of the app (such as schedules and speaker information) but, for some bizarre reason, it also contains information on every registered user of the application–including their name, surname, title, employer, and nationality.
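Pulling registration data out of such a file takes almost no effort. Here's a minimal sketch using Python's standard sqlite3 module; the `attendees` table and its columns are purely hypothetical stand-ins, since the app's real schema isn't public:

```python
import sqlite3

# Stand-in for the DB file the app downloads; with the real thing you would
# call sqlite3.connect() on the file pulled from the device instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attendees (name TEXT, title TEXT, employer TEXT)")
conn.execute("INSERT INTO attendees VALUES ('Jane Doe', 'CISO', 'Example Corp')")

# Anyone holding the file can dump every registered user in a few lines:
for row in conn.execute("SELECT name, title, employer FROM attendees"):
    print(row)
```

No credentials, no reverse engineering: SQLite files are self-describing, so shipping user data inside one is effectively publishing it.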

I have no idea why the app developers chose to do that, but I'm pretty sure that the folks who downloaded and installed the application are unlikely to have thought that their details were being made public and published in this way. Marketers love this kind of information though!

Some readers may think I'm targeting RSA, and in a small way I guess I am. Security flaws in mobile applications (particularly these rapidly developed and targeted apps) are endemic, and I think the RSA example helps prove the point that there are often inherent risks in even the most benign applications.

I'm betting that RSA didn't even create the application themselves. The Google Play store indicates that a company called QuickMobile was the developer. With one small click it's possible to get a list of all the other applications QuickMobile has created, presumably on their clients' behalf.

As you can see from that list, there are lots of popular brands and industry conferences employing their app creation services. I wonder how many of them share the same vulnerabilities as the RSA Conference 2014 application?

Here's a little bit of advice to any corporate marketing team. If you're going to release your own mobile application, the security and integrity of that application are your responsibility. While you can't outsource that, you can get another organization to assess the application on your behalf.

In the meantime, readers of this blog may want to refrain from downloading the RSA Conference 2014 (and related) mobile applications–unless you're a hacker or marketing team that wants to acquire a free list of conference attendees' names, positions, and employers.

Wednesday, February 19, 2014

PCI DSS and Security Breaches

By Christian Moldes, Director of Compliance Services

Every time an organization suffers a security breach and cardholder data is compromised, people question the effectiveness of the Payment Card Industry Data Security Standard (PCI DSS). Blaming PCI DSS for the handful of companies that are breached every year shows a lack of understanding of the standard’s role. 

Two major misconceptions are responsible for this.

First, PCI DSS is a compliance standard. An organization can be compliant today and not tomorrow. It can be compliant when an assessment is taking place and noncompliant the minute the assessment is completed.

Unfortunately, some organizations don’t see PCI DSS as a standard that applies to their day-to-day operations; they think of it as a single event that they must pass at all costs. Each year, they desperately prepare for their assessment and struggle to remediate the assessor’s findings before their annual deadline. When they finally receive their attestation, they check out and don’t think about PCI DSS compliance until next year, when the whole process starts again. 

Their information security management system is immature, ad-hoc, perhaps even chaotic, and driven by the threat of losing a certificate or being fined by their processor.

To use an analogy, PCI DSS compliance is not a race to a destination, but how consistently well you drive to that destination. Many organizations accelerate from zero to sixty in seconds, brake abruptly, and start all over again a month later. The number of security breaches will be reduced as soon as organizations and assessors both understand that a successful compliance program is not a single state, but an ongoing process. As such, an organization that has a mature and repeatable process will be compliant continuously, with rare exceptions, and not only during the time of the assessment.

Second, in the age of Advanced Persistent Threats (APTs), the challenge for most organizations is not whether they can successfully prevent an attack from ever occurring, but how quickly they can become aware that a breach has actually occurred.

PCI DSS requirements can be classified into three categories:  

1. Requirements intended to prevent an incident from happening in the first place. 
These requirements include implementing network access controls, configuring systems securely, applying periodic security updates, performing periodic security reviews, developing secure applications, providing security awareness to the staff, and so on. 

2. Requirements designed to detect malicious activities.
These requirements involve implementing solutions such as antivirus software, intrusion detection systems, and file integrity monitoring.

3. Requirements designed to ensure that if a security breach occurs, actions are taken to respond to and contain the security breach, and ensure evidence will exist to identify and prosecute the attackers.

Too many organizations focus their compliance resources on the first group of requirements. They give the second and third groups as little attention as possible. 

This is painfully obvious. According to the Verizon Data Breach Investigation Report (DBIR) and public information available for the most recent company breaches, most organizations become aware of a security breach many weeks or even months after the initial compromise, and only when notified by the payment card brands or law enforcement. This confirms a clear reality: breached organizations do not have the proper tools and/or qualified staff to monitor their security events and logs.

Once all the preventive and detective security controls required by PCI DSS have been properly implemented, the only thing left for an organization is to thoroughly monitor logs and events. The goal is to detect anomalies and take any necessary actions as soon as possible.
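As a toy illustration of that monitoring step (a sketch, not a PCI DSS-mandated control), the following Python flags source IPs whose failed-login count crosses a threshold; this is the sort of anomaly a monitoring team should catch within minutes rather than months:

```python
from collections import Counter

def flag_brute_force(events, threshold=5):
    """Return source IPs whose failed-login count meets the threshold."""
    failures = Counter(ip for ip, outcome in events if outcome == "fail")
    return sorted(ip for ip, count in failures.items() if count >= threshold)

# Hypothetical parsed auth-log events: (source_ip, outcome)
events = [
    ("10.0.0.5", "fail"), ("10.0.0.5", "fail"), ("10.0.0.5", "fail"),
    ("10.0.0.5", "fail"), ("10.0.0.5", "fail"), ("10.0.0.9", "ok"),
    ("192.168.1.7", "fail"), ("10.0.0.9", "ok"),
]

print(flag_brute_force(events))  # ['10.0.0.5']
```

Real deployments would run this over streaming log data with alerting attached, but the principle is the same: collect events, baseline them, and act on outliers quickly.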

Having sharp individuals in this role is critical for any organization. The smarter the individuals doing the monitoring are, the less opportunity attackers have to get to your data before they are discovered. 

You cannot avoid getting hacked. Sooner or later, to a greater or lesser degree, it will happen. What you can really do is monitor and investigate continuously.

In PCI DSS compliance, monitoring is where companies are really failing.

Monday, February 17, 2014


By Cesar Cerrudo @cesarcer

At IOActive Labs, I have the privilege of being part of a great team with some of the world’s best hackers. I also have access to really cool research on different technologies that uncovers security problems affecting widely used hardware and software. This gives me a solid understanding of the state of security for many different software and hardware devices, based on real-life experience rather than just opinions and theories.

Currently, the term Internet-of-Things (IoT) is becoming a buzzword in the media, in announcements from hardware device manufacturers, and elsewhere. Basically, it’s used to describe an Internet with everything connected to it. It describes what we are seeing nowadays, including:
  • Laptops, tablets, smartphones, set-top boxes, media-streaming devices, and data-storage devices
  • Watches, glasses, and clothes
  • Home appliances, home switches, home alarm systems, home cameras, and light bulbs
  • Industrial devices and industrial control systems
  • Cars, buses, trains, planes, and ships
  • Medical devices and health systems
  • Traffic sensors, seismic sensors, pollution sensors, and weather sensors
     …and more; you name it, and it is or soon will be connected to the Internet.

While the devices and systems connected to the Internet are different, they have something in common: most of them suffer from serious security vulnerabilities. This is not a guess. It is based on IOActive Labs’ security research into many of these types of devices currently being used worldwide. Sadly, we are seeing almost exactly the same vulnerabilities in these devices that have plagued software vendors over the last decade, vulnerabilities that the most important software vendors are trying hard to eradicate. It seems that many hardware companies follow really poor security practices when adding software to their products and connecting them to the Internet. What is worse, sometimes vendors don’t even respond to security vulnerability reports, or they downplay the threat and don’t fix the vulnerabilities. Many vendors don’t even know how to properly deal with the security vulnerabilities being reported.

Some of the common vulnerabilities IOActive Labs finds include:
  • Sensitive data sent over insecure channels
  • Improper use of encryption
    • No SSL certificate validation
    • Things like encryption keys and signing certificates easily available to anyone
  • Hardcoded credentials/backdoor accounts
  • Lack of authentication and/or authorization
  • Storage of sensitive data in clear text
  • Unauthenticated and/or unauthorized firmware updates
  • Lack of firmware integrity check during updates
  • Use of insecure custom-made protocols
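To make the “No SSL certificate validation” item concrete, here is a short Python sketch contrasting a properly configured TLS client context with the validation-disabled anti-pattern we keep finding in device companion software:

```python
import ssl

# Secure default: the server certificate chain and hostname are both verified.
good = ssl.create_default_context()
print(good.verify_mode == ssl.CERT_REQUIRED)  # True
print(good.check_hostname)                    # True

# The anti-pattern: validation switched off, so any man-in-the-middle
# certificate is silently accepted.
bad = ssl.create_default_context()
bad.check_hostname = False        # must be disabled before verify_mode
bad.verify_mode = ssl.CERT_NONE
```

Any client context that ends up with `CERT_NONE` gives up the only guarantee TLS offers against an active network attacker.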
Also, data ambition is working against vendors and considerably increases attack surfaces. For example, all collected data is sent to the “vendor cloud” and device commands are sent from the “vendor cloud”, instead of just allowing users to connect directly to and command their devices. Hacking into the “vendor cloud” = thousands of devices compromised = lots of lost money.

Why should we worry about all of this? Well, these devices affect our everyday life and will continue to do so more and more. We’ve only seen the tip of the iceberg when it comes to the attacks that people, companies, and governments face and how easily they can be performed. If the situation doesn’t change soon, it is just a matter of time before we witness attacks with tragic consequences.

If a headline like “+100K Digital Toilets from XYZ1.3 Inc. Found Sending Spam and Distributing Malware” doesn’t scare you because you think it’s funny and improbable, you could be wrong. We shouldn’t wait for headlines such as “Dozens of People Injured When Home Automation Devices Hacked” before we react.

Something must be done! From enforcing secure practices during product development to imposing high fines when products are hacked, action must be taken to prevent the loss of money and possibly even lives.

Companies should strongly consider:
    • Training developers on secure development
    • Implementing security development practices to improve software security
    • Training company staff on security best practices
    • Implementing a security patch development and distribution process
    • Performing product design/architecture security reviews
    • Performing source code security audits
    • Performing product penetration tests
    • Performing company network penetration tests
    • Staying up-to-date with new security threats
    • Creating a bug bounty program to reward reported vulnerabilities and clearly defining how vulnerabilities should be reported
    • Implementing a security incident/emergency response team
It is difficult to give advice to end users, given that the best solution is often simply not to buy or use many products, because they are insecure by design. At this stage, it’s just a matter of being lucky and hoping that you won’t be hacked. Maybe opportunistic vendors could come up with some novel solution, such as an IPS/anti* device that will protect all of your devices from attacks. Just pray that the protection device itself is not vulnerable.

Sometimes end users are forced to live with insecure devices since there isn’t any way to turn them off or not to use them. These include devices provided by TV cable companies, electricity and gas companies, public services companies, governments, etc. These companies and the government should take responsibility for deploying secure products.

This is not BS–in a couple of days we will be releasing some of the extensive research I mentioned and on which this blog post is based.

I intend for this post to be a wakeup call for everyone! I’m really concerned about the current situation. In the meantime, I will use the term INTERNET-of-THREATS (not Internet-of-Things). Maybe this new buzzword will make us more conscious of the situation. If it doesn’t, then at least I have tried.

Friday, February 14, 2014

The password is irrelevant too

By Eireann Leverett @blackswanburst

In this follow up to a blog post on the Scalance-X200 series switches, we look at an authentication bypass vulnerability. It isn’t particularly complicated, but it does allow us to download configuration files, log files, and a firmware image. It can also be used to upload configuration and firmware images, which causes the device to reboot.

The code can be found in the IOActive Labs GitHub repository.

If an attacker has access to a configuration file with a known password, they can use this code to update the configuration file and take over the switch’s management functions. It can also be used to mirror ports and enable or disable other services, such as telnet, SSH, or SNMP. Lastly, the same script can be used to upload a firmware image to the device sans authentication. In other words, it is *delightfully reprogrammable* until you install the patch.

This brings us to an interesting point. I asked Siemens if the SSH keys in Firmware V5.X (the fixed version) are unique per device, and I was assured that they are. If this is true, there should be no problem with me publishing a hash of the private key for my device. Don’t worry, damsels and chaps, I can always patch my device with a new key later, as a demonstration of my enthusiasm for firmware.

Anyway, here are two fingerprints of the private SSH key: 

MD5   6f09a4d77569236fd90483a85920912d
SHA256    505166f90ee05761b11a5feda24d0ccdc53ef902cdd617b330db3634cc2788f7

If you have one of these devices and have patched to the version that contains the fixes, you could assist the community greatly by verifying that the key gets a different fingerprint. This will independently confirm what those outstanding gentry at Siemens told me and promote confidence in their security solutions.
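If you want to reproduce those digests against your own device's key, here is a minimal Python sketch. I'm assuming the fingerprints above were computed over the raw bytes of the key file; note that OpenSSH's `ssh-keygen -lf` fingerprints the decoded public-key blob instead, so results from the two methods will not match each other:

```python
import hashlib

def fingerprints(key_bytes):
    """Return (hex MD5, hex SHA-256) digests of a key file's raw bytes."""
    return (hashlib.md5(key_bytes).hexdigest(),
            hashlib.sha256(key_bytes).hexdigest())

# With a real device you would read the extracted key file, e.g.:
#   md5_fp, sha256_fp = fingerprints(open("extracted_key", "rb").read())
# ("extracted_key" is a placeholder name, not the actual file on the switch).
md5_fp, sha256_fp = fingerprints(b"placeholder key material")
print(md5_fp)
print(sha256_fp)
```

If a patched device's key hashes to the same values I published above, the keys are not unique per device after all, and Siemens' assurance would be disproven.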

This neatly segues into some changes we’ve seen in the ICS-space over the last few years. 

The primary change in behavior I’d like to applaud is how companies are striving to develop better relationships with independent security researchers such as myself. The increase in constructive dialogue is evidenced by Siemens’ ability to receive notification and then produce a patch within three months. Years ago, we regularly waited six months to two years for fixes.

In fact, I challenged vendors at S4x14 to commit to an AVERAGE TIME of security patching for externally supplied vulnerabilities. We purposefully chose the average time for this challenge, because we know that providing quality assurance for these systems is difficult and can be time-consuming. After all, some bugs are just thornier than others.

Incidentally, this is backed up by empirical research shared with me by the inimitable Sean McBride during our conversations at S4x14. I wouldn’t want you to think I am just some un-gentlemanly shuffler or simkin, challenging hecatonchires for the sport of it (hat-tip @sergeybratus).

Follow @digitalbond to see the response I got to committing to an average security patch time, when my “Red/Blue Live” talk goes online. You’ll also notice that my two attackers (red team) did not manage to use the script to take over the device, despite doing so in practice sessions the night before. The ingenious Rotem Bar (blue team) demonstrated that the secret of ICS security is to simply *patch*. Apparently, it is not only possible, but effective!
...and btw, happy Valentine's!