Thursday, April 17, 2014

A Wake-up Call for SATCOM Security

By Ruben Santamarta @reversemode

During the last few months we have witnessed a series of events that will probably be seen as a tipping point in the public’s opinion about the importance of, and need for, security. The revelations of Edward Snowden have served to confirm some theories and shed light on surveillance technologies that were long restricted.

We live in a world where an ever-increasing stream of digital data is flowing between continents. It is clear that those who control communications traffic have the upper hand.

Satellite Communications (SATCOM) plays a vital role in the global telecommunications system. Sectors that commonly rely on satellite networks include:
  • Aerospace
  • Maritime
  • Military and governments
  • Emergency services
  • Industrial (oil rigs, gas, electricity)
  • Media
It is important to mention that certain international safety regulations and systems, such as GMDSS for ships and ACARS for aircraft, rely on satellite communication links. In fact, we recently read how, thanks to the SATCOM equipment on board Malaysian Airlines MH370, Inmarsat engineers were able to determine the approximate position where the plane crashed.

IOActive is committed to improving overall security. The only way to do so is to analyze the security posture of the entire supply chain, from the silicon level to the upper layers of software. 

Thus, in the last quarter of 2013, I decided to research a series of devices that, although widely deployed, had not received the attention they deserve. The goal was to provide an initial evaluation of the security posture of the most widely deployed Inmarsat and Iridium SATCOM terminals.

In previous blog posts I've explained the common approach when researching complex devices that are not physically accessible. This research was not much different: in most cases the analysis was performed by statically reverse engineering the firmware.


What about the results? 

Insecure and undocumented protocols, backdoors, hard-coded credentials...mainly design flaws that allow remote attackers to fully compromise the affected devices using multiple attack vectors.

Ships, aircraft, military personnel, emergency services, media services, and industrial facilities (oil rigs, gas pipelines, water treatment plants, wind turbines, substations, etc.) could all be affected by these vulnerabilities.

I hope this research is seen as a wake-up call for both the vendors and users of the current generation of SATCOM technology. We will be releasing full technical details in several months, at Las Vegas, so stay tuned.
The following white paper comprehensively explains all aspects of this research: http://www.ioactive.com/pdfs/IOActive_SATCOM_Security_WhitePaper.pdf



Thursday, April 10, 2014

Bleeding Hearts

By Robert Erbes @rr_dot

The Internet is ablaze with talk of the "heartbleed" OpenSSL vulnerability disclosed on April 7, 2014 here: https://www.openssl.org/news/secadv_20140407.txt

While the bug itself is a simple “missing bounds check,” it affects quite a number of high-volume, big business websites (https://github.com/musalbas/heartbleed-masstest/blob/master/top1000.txt).

Make no mistake, this bug is BAD. It's sort of a perfect storm: the bug is in a library used to encrypt sensitive data (OpenSSL), and it allows attackers a peek into a server's memory, potentially revealing that same sensitive data in the clear.

Initially, it was reported that private keys could be disclosed via this bug, basically allowing attackers to decrypt captured SSL sessions. But as more people have started looking at different sites, other issues have been revealed – servers are leaking information ranging from user sessions (https://www.mattslifebytes.com/?p=533) to encrypted search queries (duckduckgo) and passwords (https://twitter.com/markloman/status/453502888447586304). The type of information accessible to an attacker is entirely a function of what happens to be in the target server’s memory at the time the attacker sends the request.

While there's a lot of talk about the bug and its consequences, I haven't seen much about what actually causes it. Given that the bug itself is pretty easy to understand (and even spot!), I thought it would be worthwhile to walk through the vulnerability here for those of you who are curious about the why, and not just the how-to-fix.


The Bug

The vulnerable code is found in OpenSSL's TLS Heartbeat Message handling routine - hence the clever "heartbleed" nickname. The TLS Heartbeat protocol is defined in RFC 6520 - "Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS) Heartbeat Extension".  

Before we get into the OpenSSL code, we should examine what this protocol looks like.

The structure of the Heartbeat message is very simple, consisting of an 8-bit message type, a 16-bit payload length field, the payload itself, and finally a sequence of padding bytes. In pseudo-code, the message definition looks like this (copied from the RFC):
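
    struct {
       HeartbeatMessageType type;
       uint16 payload_length;
       opaque payload[HeartbeatMessage.payload_length];
       opaque padding[padding_length];
    } HeartbeatMessage;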


The type (line 2) is simply 1 or 2, depending on whether the message is a request or a response.

The payload_length (line 3) indicates the size of the payload part of the message, which follows immediately. Being a 16-bit unsigned integer, its maximum value is 2^16-1 (which is 65535). If you've done some other reading about this bug, you'll recognize the 64k as being the upper limit on how much data can be accessed by an attacker per attack sent.

The payload (line 4) is defined to be "arbitrary content" of length payload_length.

And padding (line 5) is "random content" at least 16 bytes long, and "MUST be ignored."

Easy enough!

Now I think it should be noted here that the RFC itself appears sound - it describes appropriate behavior for what an implementation of the protocol should and shouldn't do. In fact, the RFC explicitly states that "If the payload_length of a received HeartbeatMessage is too large, the received HeartbeatMessage MUST be discarded silently." Granted, "too large" isn't defined, but I digress…

There is one last part of the RFC that's important for understanding this bug: The RFC states that "When a HeartbeatRequest message is received ... the receiver MUST send a corresponding HeartbeatResponse message carrying an exact copy of the payload of the received HeartbeatRequest." This is important in understanding WHY vulnerable versions of OpenSSL are sending out seemingly arbitrary blocks of memory.

Now, on to OpenSSL code!  

Here's a snippet (lightly abridged) from the DTLS Heartbeat handling function in (the vulnerable) openssl-1.0.1f\ssl\d1_both.c:
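
    int
    dtls1_process_heartbeat(SSL *s)
        {
        unsigned char *p = &s->s3->rrec.data[0], *pl;
        unsigned short hbtype;
        unsigned int payload;
        unsigned int padding = 16; /* Use minimum padding */

        /* Read type and payload length first */
        hbtype = *p++;
        n2s(p, payload);
        pl = p;

        /* ... msg_callback notification trimmed ... */

        if (hbtype == TLS1_HB_REQUEST)
            {
            unsigned char *buffer, *bp;
            int r;

            /* Allocate memory for the response, size is 1 byte
             * message type, plus 2 bytes payload length, plus
             * payload, plus padding
             */
            buffer = OPENSSL_malloc(1 + 2 + payload + padding);
            bp = buffer;

            /* Enter response type, length and copy payload */
            *bp++ = TLS1_HB_RESPONSE;
            s2n(payload, bp);
            memcpy(bp, pl, payload);
            bp += payload;
            /* Random padding */
            RAND_pseudo_bytes(bp, padding);

            r = dtls1_write_bytes(s, TLS1_RT_HEARTBEAT, buffer,
                                  3 + payload + padding);

            /* ... */

            OPENSSL_free(buffer);
            }

        /* ... HeartbeatResponse handling trimmed ... */

        return 0;
        }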


If you're not familiar with reading C, this can be a lot to digest. The bug will become clearer if we go through this function piece by piece. We’ll start at the top…
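
First up (line numbers from the original d1_both.c source are noted in the comments):

    unsigned char *p = &s->s3->rrec.data[0], *pl;   /* line 1457 */
    unsigned short hbtype;                          /* line 1458 */
    unsigned int payload;                           /* line 1459 */
    unsigned int padding = 16; /* minimum padding for the response */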



This part just defines local variables for the function. The most important one here is the char pointer p (line 1457), which points to the heartbeat message the attacker controls. hbtype (line 1458) will hold the HeartbeatMessageType value mentioned in the RFC, and payload (line 1459) will hold the payload_length value. Don't let the fact that the payload_length value is being stored in the payload variable confuse you!
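
Next, the header of the attacker's message is read:

    /* Read type and payload length first */
    hbtype = *p++;      /* line 1463 */
    n2s(p, payload);    /* line 1464 */
    pl = p;             /* line 1465 */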



Here, the first byte of the message is copied into the hbtype variable (line 1463), and the 16-bit payload-length is copied from p (the attacker-controlled message) to the payload variable using the n2s function (line 1464). The n2s function simply converts the value from the sequence of bits in the message to a number the program can use in calculations. Finally, the pl pointer is set to point to the payload section of the attacker-controlled message (line 1465).
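
Then the response buffer gets allocated:

    unsigned char *buffer, *bp;    /* line 1474 */
    int r;

    /* Allocate memory for the response, size is 1 byte
     * message type, plus 2 bytes payload length, plus
     * payload, plus padding
     */
    buffer = OPENSSL_malloc(1 + 2 + payload + padding);    /* line 1481 */
    bp = buffer;                                           /* line 1482 */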


On line 1474, a variable called buffer is defined and then allocated (line 1481) an area of memory using the attacker-controlled payload variable to calculate how much memory should be allocated. Then, on line 1482, the bp pointer is set to point to the buffer that was just allocated for the server's response.
As an aside, if the payload_length field (which gets stored in the payload variable) were wider than 16 bits (say, 32 or 64 bits instead), we'd be looking at another couple of vulnerabilities: either an integer overflow leading to a buffer overflow, or potentially a null-pointer dereference. The exact nature and exploitability of either of these would depend upon the platform itself and the exact implementation of OPENSSL_malloc. payload_length *is* only 16 bits, however, so we'll continue...
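
And finally, the response is built and sent back:

    /* Enter response type, length and copy payload */
    *bp++ = TLS1_HB_RESPONSE;    /* line 1485 */
    s2n(payload, bp);            /* line 1486 */
    memcpy(bp, pl, payload);     /* copy 'payload' bytes from the request */
    bp += payload;
    /* Random padding */
    RAND_pseudo_bytes(bp, padding);

    r = dtls1_write_bytes(s, TLS1_RT_HEARTBEAT, buffer, 3 + payload + padding);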


This code snippet shows the server building the response. It's here that this bug changes from being an attacker-controlled length field leading to Not Very Much into a serious information disclosure bug causing a big stir. Line 1485 simply sets the type of the response message pointed to by bp to be a Heartbeat Response. According to the RFC, the payload_length should be next, and indeed - it is being copied over to the bp response buffer via the s2n function on line 1486. The server is just copying the value the attacker supplied, which was stored in the payload variable. Finally, the payload section of the attacker message (pointed to by pl, on line 1465) is copied over to the response buffer, pointed to by the bp variable (again, according to the RFC specification), which is then sent back to the attacker.  

And herein lies the vulnerability - the attacker-controlled payload variable (which stores the payload_length field!) is used to determine exactly how many bytes of memory should be copied into the response buffer, WITHOUT first being checked to ensure that the payload_length supplied by the attacker is not bigger than the size of the attacker-supplied payload itself.

This means that if an attacker sends a payload_length greater than the size of the payload, any data located in the server’s memory after the attacker’s payload would be copied into the response. If the attacker set the payload_length to 10,000 bytes and only provided a payload of 10 bytes, then a little less than an extra 10,000 bytes of server memory would be copied over to the response buffer and sent back to the attacker. Any sensitive information that happened to be hanging around in the process (including private keys, unencrypted messages, etc.) is fair game. The only variable is what happens to be in memory. In playing with some of the published PoCs against my own little test server (openssl s_server FTW), I got a whole lot of nothing back, in spite of targeting a vulnerable version, because the process wasn't doing anything other than accepting requests. To reiterate, the data accessed entirely depends on what's in memory at the time of the attack.
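
Incidentally, for those who do want the how-to-fix as well: the patch that shipped in OpenSSL 1.0.1g addresses this by checking the advertised length against the record that was actually received, before anything is copied. It looks roughly like this:

    /* Read type and payload length first */
    if (1 + 2 + 16 > s->s3->rrec.length)
        return 0; /* silently discard */
    hbtype = *p++;
    n2s(p, payload);
    if (1 + 2 + payload + 16 > s->s3->rrec.length)
        return 0; /* silently discard per RFC 6520 section 4 */
    pl = p;

If the claimed payload_length (plus the 3-byte header and 16 bytes of minimum padding) doesn't fit within the received record, the message is dropped - exactly the "MUST be discarded silently" behavior the RFC calls for.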


Testing it yourself

There are a number of PoCs put up yesterday and today that allow you to check your own servers. If you're comfortable using an external service, you can check out http://filippo.io/Heartbleed/. Or you can grab Filippo's golang code from https://github.com/FiloSottile/Heartbleed, compile it yourself, and test on your own. If golang is not your cup of tea, I'd check out the simple Python version here: https://gist.github.com/takeshixx/10107280. All that's required is a Python 2 installation.


Happy Hunting!

Tuesday, April 8, 2014

Car Hacking 2: The Content

By Chris Valasek @nudehaberdasher

Does everyone remember when those two handsome young gentlemen controlled automobiles with CAN message injection (https://www.youtube.com/watch?v=oqe6S6m73Zw)? I sure do. However, what if you don’t have the resources to purchase a car, pay for insurance and repairs, and so on? 

Fear not Internet! 

Chris and Charlie to the rescue. Last week we presented our new automotive research at Syscan 2014 (http://www.syscan.org/index.php/download). To make a long story short, we provided the blueprints to set up a small automotive network outside the vehicle so security researchers could start investigating Autosec (TM pending) without requiring the large budget needed to procure a real automobile. (Update: Andy Greenberg just released an article explaining our work, http://www.forbes.com/sites/andygreenberg/2014/04/08/darpa-funded-researchers-help-you-learn-to-hack-a-car-for-a-tenth-the-price/)


Additionally, we provided a solution for a mobile testing platform (a go-cart) that can be fashioned with ECUs from a vehicle (or purchased on eBay) for testing that requires locomotion, such as assisted braking and lane departure systems. 


For those of you who want the gritty technical details, download this paper. As always, we’d love feedback and welcome any questions. 



Wednesday, March 26, 2014

A Bigger Stick To Reduce Data Breaches

By Gunter Ollmann, @gollmann 

On average I receive a postal letter from a bank or retailer every two months telling me that I've become the unfortunate victim of a data theft or that my credit card is being re-issued to prevent future fraud. When I quiz my friends and colleagues on the topic, it seems that they too suffer the same fate on a recurring schedule. It may not be that surprising to some folks. 2013 saw over 822 million private records exposed according to the folks over at DatalossDB - and that's just the ones that were disclosed publicly.

It's clear to me that something is broken and it's only getting worse. When it comes to the collection of personal data, too many organizations have a finger in the pie and are ill-equipped (or ill-prepared) to protect it. In fact, I'd question why they're collecting it in the first place. All too often these organizations - of which I'm supposedly a customer - are collecting personal data about "my experience" doing business with them and are hoping to figure out how to use it to their profit (effectively turning me into a product). If these corporations were some bloke visiting a psychologist, they'd be diagnosed with a hoarding disorder. For example, consider what criteria the DSM-5 diagnostic manual uses to identify the disorder:
  • Persistent difficulty discarding or parting with possessions, regardless of the value others may attribute to these possessions.
  • This difficulty is due to strong urges to save items and/or distress associated with discarding.
  • The symptoms result in the accumulation of a large number of possessions that fill up and clutter active living areas of the home or workplace to the extent that their intended use is no longer possible.
  • The symptoms cause clinically significant distress or impairment in social, occupational, or other important areas of functioning.
  • The hoarding symptoms are not due to a general medical condition.
  • The hoarding symptoms are not restricted to the symptoms of another mental disorder.
Whether or not the organizations hoarding personal data know how to profit from it, it's clear that even the biggest of them are increasingly inept at protecting it. The criminals that are pilfering the data certainly know what they're doing. The gray market for identity laundering has expanded phenomenally since I talked about it at Blackhat in 2010.

We can moan all we like about the state of the situation now, but we'll be crying in the not-too-distant future when, statistically, we progress from being victims of data loss to being victims of (unrecoverable) fraud.

The way I see it, there are two core components to dealing with the spiraling problem of data breaches and the disclosure of personal information. We must deal with the "what data are you collecting and why?" questions, and incentivize corporations to take much more care protecting the personal data they've been entrusted with.

I feel that the data hoarding problem can be dealt with fairly easily. At the end of the day it's about transparency and the ability to "opt out". If I were to choose a role model for making a sizable fraction of this threat go away, I'd look to the basic components of the UK's Data Protection Act as the cornerstone of a solution - especially here in the US. I believe the key components of personal data collection should encompass the following:
  • Any organization that wants to collect personal data must have a clearly identified "Data Protection Officer" who not only is a member of the executive board, but is personally responsible for any legal consequences of personal data abuse or data breaches.
  • Before data can be collected, the details of the data sought for collection, how that data is to be used, how long it would be retained, and who it is going to be used by, must be submitted for review to a government or legal authority. I.e. some third-party entity capable of saying this is acceptable use - a bit like the ethics boards used for medical research etc.
  • The specifics of what data a corporation collects and what they use that data for must be publicly visible. Something similar to the nutrition labels found on packaged foods would likely be appropriate - so the end consumer can rapidly discern how their private data is being used.
  • Any data being acquired must include a date of when it will be automatically deleted and removed.
  • At any time any person can request a copy of any and all personal data held by a company about themselves.
  • At any time any person can request the immediate deletion and removal of all data held by a company about themselves.
If such governance existed for the collection and use of personal data, then the remaining big item is enforcement. You'd hope that the morality and ethics of corporations would be enough to ensure they protected the data entrusted to them with the vigor necessary to fight off the vast majority of hackers and organized crime, but this is the real world. Apparently the "big stick" approach needs to be reinforced.

A few months ago I delved into how the fines being levied against organizations that had been remiss in doing all they could to protect their customers' personal data should be bigger and divvied up. Essentially I'd argue that half of the fine should be pumped back into the breached organization and used for increasing their security posture.

Looking at the fines being imposed upon the larger organizations (that could have easily invested more in protecting their customers' data prior to their breaches), the amounts are laughable. No noticeable financial pain occurs, so why should we be surprised if (and when) it happens again? I've become a firm believer that the fines businesses incur should be based upon a percentage of valuation. Why should a twenty-billion-dollar business face the same fine for losing 200,000,000 personal records as a ten-million-dollar business does for losing 50,000 personal records? If the fine were something like two percent of valuation, I can tell you that the leadership of both companies would focus more firmly on the task of keeping your data and mine much safer than they do today. 

Thursday, February 27, 2014

Beware Your RSA Mobile App Download

By Gunter Ollmann, @gollmann 

It's been half a decade since Apple launched their iPhone campaign titled "There's an app for that". In the years following, the mobile app stores (from all the major players) have continued to blossom to the point that not only are there several thousand apps that help light your way (i.e. by keeping the flash running bright), but every company, cause, group, or notable event is expected to publish their own mobile application. 

Today there are several hundred good "rapid development" kits that allow any newbie to craft and release their own mobile application and several thousand small professional software development teams that will create one on your behalf. These bespoke mobile applications aren't the types of products that their owners are expecting to make much (if any) money off of. Instead, these apps are generally helpful tools that appeal to a particular target audience.

Now, the cynical side of me would like to point out that some people should never be trusted with tools as lofty as HTML and setting up WordPress sites–let alone building a mobile app. Yet many corporate marketing teams I've dealt with have not only drunk the "There's an app for that" Kool-Aid, they appear to bathe in the stuff each night. As such, a turnkey approach to app production is destined to involve many sacrifices and, at the top of the sacrificial pillar, data security and integrity continue to reign supreme.

A few weeks ago I noticed that, in the run-up to the RSA USA 2014 conference, a new mobile application was conceived and thrust upon the Apple and Google app stores and electronically marketed to the world at large. Maybe it was a reaction to being spammed with a never-ending tirade of "come see us at RSA" emails, or it was topical off the back of a recent blog on the state of mobile banking application security, or maybe both. I asked some of the IOActive consulting team who had a little bench-time between jobs to have a poke at the freshly minted "RSA Conference 2014" mobile application. 



The Google Play app store describes the RSA Conference 2014 application like this:
With the RSA Conference Mobile App, you can stay connected with all Conference activities, view the event catalog, manage session schedules and engage with colleagues and peers while onsite using our social and professional networking tools. You'll have access to dynamic agenda updates, venue maps, exhibitor listing and more!
Now, I wasn't expecting the application to be particularly interesting–it's not as if it was a transactional banking application etc.–but I would have thought that RSA (or whoever they tasked with commissioning the application) would have at least applied some basic elbow grease so as to not potentially embarrass themselves. Alas, that was not to be the case.

The team came back rather quickly with a half-dozen security issues. Technically the highest impact vulnerability had to do with the app being vulnerable to man-in-the-middle attacks, where an attacker could inject additional code into the login sequence and phish credentials. If we were dealing with a banking application, then heads would have been rolling in an engineering department, but this particular app has only been downloaded a few thousand times, and I seriously doubt that some evil hacker is going to take the time out of their day to target this one application (out of tens-of-millions) to try to phish credentials for a conference.

It was the second most severe vulnerability that caught my eye though. The RSA Conference 2014 application downloads a SQLite DB file that is used to populate the visual portions of the app (such as schedules and speaker information) but, for some bizarre reason, it also contains information on every registered user of the application–including their name, surname, title, employer, and nationality.



I have no idea why the app developers chose to do that, but I'm pretty sure that the folks who downloaded and installed the application are unlikely to have thought that their details were being made public and published in this way. Marketers love this kind of information though!

Some readers may think I'm targeting RSA, and in a small way I guess I am. Security flaws in mobile applications (particularly these rapidly developed and targeted apps) are endemic, and I think the RSA example helps prove the point that there are often inherent risks in even the most benign applications.

I'm betting that RSA didn't even create the application themselves. The Google Play store indicates that a company called QuickMobile was the developer. With one small click it's possible to get a list of all the other applications QuickMobile has created, presumably on their clients' behalf.



As you can see from above, there are lots of popular brands and industry conferences employing their app creation services. I wonder if many of them share the same vulnerabilities as the RSA Conference 2014 application?

Here's a little bit of advice to any corporate marketing team. If you're going to release your own mobile application, the security and integrity of that application are your responsibility. While you can't outsource that responsibility, you can get another organization to assess the application on your behalf.

In the meantime, readers of this blog may want to refrain from downloading the RSA Conference 2014 (and related) mobile applications–unless you're a hacker or marketing team that wants to acquire a free list of conference attendees' names, positions, and employers.

Wednesday, February 19, 2014

PCI DSS and Security Breaches

By Christian Moldes, Director of Compliance Services

Every time an organization suffers a security breach and cardholder data is compromised, people question the effectiveness of the Payment Card Industry Data Security Standard (PCI DSS). Blaming PCI DSS for the handful of companies that are breached every year shows a lack of understanding of the standard’s role. 

Two major misconceptions are responsible for this.

First, PCI DSS is a compliance standard. An organization can be compliant today and not tomorrow. It can be compliant when an assessment is taking place and noncompliant the minute the assessment is completed.

Unfortunately, some organizations don’t see PCI DSS as a standard that applies to their day-to-day operations; they think of it as a single event that they must pass at all costs. Each year, they desperately prepare for their assessment and struggle to remediate the assessor’s findings before their annual deadline. When they finally receive their attestation, they check out and don’t think about PCI DSS compliance until next year, when the whole process starts again. 

Their information security management system is immature, ad-hoc, perhaps even chaotic, and driven by the threat of losing a certificate or being fined by their processor.

To use an analogy, PCI DSS compliance is not a race to a destination, but how consistently well you drive to that destination. Many organizations accelerate from zero to sixty in seconds, brake abruptly, and start all over again a month later. The number of security breaches will be reduced as soon as organizations and assessors both understand that a successful compliance program is not a single state, but an ongoing process. As such, an organization that has a mature and repeatable process will be compliant continuously, with rare exceptions, and not only during the time of the assessment.

Second, in the age of Advanced Persistent Threats (APTs), the challenge for most organizations is not whether they can successfully prevent an attack from ever occurring, but how quickly they can become aware that a breach has actually occurred.

PCI DSS requirements can be classified into three categories:  

1. Requirements intended to prevent an incident from happening in the first place. 
These requirements include implementing network access controls, configuring systems securely, applying periodic security updates, performing periodic security reviews, developing secure applications, providing security awareness to the staff, and so on. 

2. Requirements designed to detect malicious activities.
These requirements involve implementing solutions such as antivirus software, intrusion detection systems, and file integrity monitoring.


3. Requirements designed to ensure that if a security breach occurs, actions are taken to respond to and contain the security breach, and ensure evidence will exist to identify and prosecute the attackers.


Too many organizations focus their compliance resources on the first group of requirements. They give the second and third groups as little attention as possible. 

This is painfully obvious. According to the Verizon Data Breach Investigation Report (DBIR) and public information available for the most recent company breaches, most organizations become aware of a security breach many weeks or even months after the initial compromise, and only when notified by the payment card brands or law enforcement. This confirms a clear reality. Breached organizations do not have the proper tools and/or qualified staff to monitor their security events and logs. 

Once all the preventive and detective security controls required by PCI DSS have been properly implemented, the only thing left for an organization is to thoroughly monitor logs and events. The goal is to detect anomalies and take any necessary actions as soon as possible.

Having sharp individuals in this role is critical for any organization. The smarter the individuals doing the monitoring are, the less opportunity attackers have to get to your data before they are discovered. 

You cannot avoid getting hacked. Sooner or later, to a greater or lesser degree, it will happen. What you can really do is monitor and investigate continuously.


In PCI DSS compliance, monitoring is where companies are really failing.

Monday, February 17, 2014

INTERNET-of-THREATS

By Cesar Cerrudo @cesarcer

At IOActive Labs, I have the privilege of being part of a great team with some of the world’s best hackers. I also have access to really cool research on different technologies that uncovers security problems affecting widely used hardware and software. This gives me a solid understanding of the state of security for many different software and hardware devices–not just opinions based on theory, but knowledge grounded in real-life experience.

Currently, the term Internet-of-Things (IoT) is becoming a buzzword used in the media, announcements from hardware device manufacturers, etc. Basically, it’s used to describe an Internet with everything connected to it. It describes what we are seeing nowadays, including:
  • Laptops, tablets, smartphones, set-top boxes, media-streaming devices, and data-storage devices
  • Watches, glasses, and clothes
  • Home appliances, home switches, home alarm systems, home cameras, and light bulbs
  • Industrial devices and industrial control systems
  • Cars, buses, trains, planes, and ships
  • Medical devices and health systems
  • Traffic sensors, seismic sensors, pollution sensors, and weather sensors
     …and more; you name it, and it is or soon will be connected to the Internet.

While the devices and systems connected to the Internet are different, they have something in common–most of them suffer from serious security vulnerabilities. This is not a guess. It is based on IOActive Labs’ security research into many of these types of devices currently being used worldwide. Sadly, we are seeing on these devices almost exactly the same vulnerabilities that have plagued software vendors over the last decade–vulnerabilities that the most important software vendors are trying hard to eradicate. It seems that many hardware companies are following really poor security practices when adding software to their products and connecting them to the Internet. What is worse is that sometimes vendors don’t even respond to security vulnerability reports, or they just downplay the threat and don’t fix the vulnerabilities. Many vendors don’t even know how to properly deal with the security vulnerabilities being reported.

Some of the common vulnerabilities IOActive Labs finds include:
  • Sensitive data sent over insecure channels
  • Improper use of encryption
    • No SSL certificate validation
    • Things like encryption keys and signing certificates easily available to anyone
  • Hardcoded credentials/backdoor accounts
  • Lack of authentication and/or authorization
  • Storage of sensitive data in clear text
  • Unauthenticated and/or unauthorized firmware updates
  • Lack of firmware integrity check during updates
  • Use of insecure custom-made protocols
Also, data ambition is working against vendors and is increasing attack surfaces considerably. For example, all data collected is sent to “vendor cloud” and device commands are sent from “vendor cloud”, instead of just allowing users to connect directly to and command their devices. Hacking into “vendor cloud” = thousands of devices compromised = lots of lost money.

Why should we worry about all of this? Well, these devices affect our everyday life and will continue to do so more and more. We’ve only seen the tip of the iceberg when it comes to the attacks that people, companies, and governments face and how easily they can be performed. If the situation doesn’t change soon, it is just matter of time before we witness attacks with tragic consequences.

If a headline like “+100K Digital Toilets from XYZ1.3 Inc. Found Sending Spam and Distributing Malware” doesn’t scare you because you think it’s funny and improbable, you could be wrong. We shouldn’t wait for headlines such as “Dozens of People Injured When Home Automation Devices Hacked” before we react.

Something must be done! From enforcing secure practices during product development to imposing high fines when products are hacked, action must be taken to prevent the loss of money and possibly even lives.

Companies should strongly consider:
    • Training developers on secure development
    • Implementing security development practices to improve software security
    • Training company staff on security best practices
    • Implementing a security patch development and distribution process
    • Performing product design/architecture security reviews
    • Performing source code security audits
    • Performing product penetration tests
    • Performing company network penetration tests
    • Staying up-to-date with new security threats
    • Creating a bug bounty program to reward reported vulnerabilities and clearly defining how vulnerabilities should be reported
    • Implementing a security incident/emergency response team
It is difficult to give advice to end users given that the best solution is just not to buy or use many products because they are insecure by design. At this stage, it’s just matter of being lucky and hoping that you won’t be hacked. Maybe opportunistic vendors could come up with some novel solution such as an IPS/anti* device that will protect all of your devices from attacks. Just pray that the protection device itself is not vulnerable.

Sometimes end users are forced to live with insecure devices since there isn’t any way to turn them off or not to use them. These include devices provided by TV cable companies, electricity and gas companies, public services companies, governments, etc. These companies and the government should take responsibility for deploying secure products.

This is not BS–in a couple of days we will be releasing some of the extensive research I mentioned and on which this blog post is based.


I intend for this post to be a wake-up call for everyone! I’m really concerned about the current situation. In the meantime, I will use the term INTERNET-of-THREATS (not Internet-of-Things). Maybe this new buzzword will make us more conscious of the situation. If it doesn’t, then at least I have tried.