Containers Aren’t New, But Ecosystem Growth Has Driven Development

January 21, 2016

By Thomas Campbell, Container World 2016


Containers are getting a fair bit of hype at the moment, and February 2016 will see the first ever event dedicated to both the business and technical advantages of containers take place in Silicon Valley in the US.

Here, Container World talks to Kyle Anderson, who is the lead developer for Yelp, to learn about the company’s use of containers, and whether containers will ultimately live up to all the hype.

What special demands does Yelp’s business put on its internal computing?
Kyle Anderson: I wouldn't say they are very special. In some sense our computing demands are boring. We need standard things like capacity, scaling, and speed. But boring doesn't quite cut it, and if you can turn your boring compute needs into something that is a cut above the status quo, it can become a business advantage.

And what was the background to building your own container-based PaaS? What was the decision-making process there?
KA: Building our own container-based PaaS came from a vision that things could be better if they were in containers and could be scheduled on-demand.

Ideas started bubbling internally until we decided to "just build it" with manager support. We knew that containers were going to be the future, not VMs (virtual machines). At the same time, we evaluated what was out there, wrote down what we wanted in a PaaS, and saw the gap. The decision-making process was internal to the team, as most engineers at Yelp are trusted to make their own technical decisions.

How did you come to make the decision to open-source it?
KA: Many engineers have the desire to open-source things, often simply because they are proud of their work and want to share it with their peers.

At the same time, management likes open source because it increases brand awareness and serves as a recruiting tool. It was a natural progression for us. I tried to emphasize that it needed to work for Yelp first, and after one and a half years in production, we were confident that it was a good time to announce it.

There’s a lot of hype around containers, with some even suggesting this could be the biggest change in computing since client-server architecture. Where do you stand on its wider significance?
KA: Saying it's the biggest change in computing since client-server architecture is very exaggerated. I am very anti-hype. Containers are not new; they just have enough of an ecosystem built up around them now that they have become a viable option for the community at large.

Container World is taking place on February 16-18, 2016 at the Santa Clara Convention Center, CA, USA. Visit www.containervent.com to register for your pass.

 

What Is Data Deduplication and Who Cares?

January 19, 2016

By Rachel Holdgrafer, Business Content Strategist, Code42

Data deduplication is a critical component of managing the size (and cost) of a continuously growing data store, and one you will hear about when you research endpoint backup. Intelligent compression, or "single-instance storage," eliminates redundant data by storing one copy of a file and referencing subsequent instances of the file back to the saved copy.

There is some misunderstanding of deduplication in the marketplace even among analysts, in part because vocal endpoint backup vendors have positioned deduplication capabilities around the concept of upload speed and cost of storage rather than security and speed to recovery.

What is data deduplication?
Data deduplication is a process by which an enterprise eliminates redundant data within a data set and stores only one instance of a unique piece of data. Data deduplication can be completed at the file level or at the data block level and can occur on either the endpoint device or the server. Each of these variables plays a role in how deduplication works and its overall efficiency, but the biggest question for most folks is, "Does data deduplication matter, and is it a differentiator I should care about?"

If you are considering a robust and scalable enterprise endpoint backup solution, you can count on the fact that the software uses some sort of data deduplication process. Some solutions use global deduplication, others local deduplication and some use a combination of the two.

Local deduplication happens on the endpoint before data is sent to the server. Duplicate data is removed from the endpoint and then clean data is stored in a unique data set sorted by user archive on the server. Each data set is encrypted with a unique encryption key.

Global deduplication sends all of the data on an endpoint to the server. Every block of data is compared to the data index on the server and new data blocks are indexed and stored. All but one identical block of data is removed from the data store and duplicate data is replaced with a redirect to the unique data file. Since multiple users must be able to access any particular data block, data is encrypted using a common encryption key across all sets.
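To make the mechanics concrete, here is a minimal Python sketch of block-level deduplication against a shared index, in the spirit of the global approach described above. The block size, hashing scheme and class name are illustrative assumptions, not a description of how any particular vendor implements it.

import hashlib

BLOCK_SIZE = 4096  # illustrative block size, not a vendor default

class DedupStore:
    """Toy global deduplication store: each unique block is kept once,
    and files are recorded as ordered lists of block hashes (references)."""

    def __init__(self):
        self.blocks = {}   # SHA-256 hex digest -> raw block bytes
        self.files = {}    # file name -> ordered list of block digests

    def put(self, name, data):
        digests = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # store a block only the first time it is seen
            digests.append(digest)
        self.files[name] = digests

    def get(self, name):
        return b"".join(self.blocks[d] for d in self.files[name])

store = DedupStore()
store.put("report_v1.bin", b"hello world" * 1000)
store.put("report_v2.bin", b"hello world" * 1000)  # identical content adds no new blocks
print(len(store.blocks), store.get("report_v2.bin") == b"hello world" * 1000)

Local deduplication follows roughly the same pattern, except that the index is scoped to the endpoint or user archive and only deduplicated data ever leaves the device.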

Regardless of the deduplication method used, the actual process should happen silently in the background, causing no slow-down or perceived impairment for the end user.

So, should I care about global deduplication?
In short, not as much as some vendors might want you to care. Data deduplication—whether global or local—is largely considered table stakes in the world of enterprise endpoint backup. There are instances where each type may be beneficial—the key is to understand how each type affects your stored data, security requirements and restore times.

Privileged-Account Attacks Are Behind Every Major Cyber Crime

January 14, 2016

By Susan Richardson, Manager/Content Strategy, Code42

It's unsettling: the people most accountable for an organization's security are those most likely to compromise it. Privileged user accounts—IT administrators, application developers and C-suite executives—have been the cause of high-profile breaches over the past few years. Some cases involve intentional actions by the privileged users themselves, such as Edward Snowden's NSA leaks and the South Korean Credit Bureau breach that exposed the personal information of almost half of all South Koreans. In other cases, cyber criminals steal or hack all-access credentials, as was the case with the 2013 Target and the 2014 Home Depot breaches. Regardless of the cause, studies of major breaches find that 100 percent of cyber crime attacks exploit privileged credentials.

Two factors make privileged-account attacks particularly devastating. First, the wide-ranging access granted by privileged credentials (dubbed "the keys to the kingdom" by Dell Security Executive Director John Milburn), whether acquired through insider threat or theft of credentials by an outside party, allows a perpetrator to move horizontally and often vertically through an organization's data infrastructure, accessing sensitive information and installing malware to wreak further damage.

Second, privileged credential attacks make harmful activity harder to detect and address. For IT administrators and other privileged technical staff, accessing sensitive areas of a network doesn't trigger red flags—it looks like everyday activity. Identifying suspicious action from executive accounts is also a challenge, as these individuals' activities often fall outside the view of traditional data security.

A 2014 Ponemon Institute survey on insider threats reports that sixty-nine percent of IT security professionals feel they lack sufficient contextual information to identify suspicious activity from privileged accounts. So what can an organization do to mitigate the threat posed by privileged users?

Start by tightening and standardizing control over privileged user credentials. The Ponemon Institute’s Privileged User Abuse & The Insider Threat report found that forty-nine percent of respondents do not have officially defined policies for assigning privileged user access. This can lead to over-privileging—where users are granted greater access than is critically necessary for their job functions—and makes it extremely difficult to ensure accountability for all activity.

Carefully consider the level of access that is necessary for privileged users. If executives truly require all-access credentials, create an accurate log of which individuals possess which privileged credentials.

Make privileged user activities completely transparent to data security personnel. Enterprise-wide visibility into privileged user activities—whether on a server, in the cloud or on an endpoint device—is critical to establishing regular activity patterns, quickly identifying abnormal and suspicious activities, and determining context and intent in the event of a privileged account breach.

Increase in Federal Government Cyber Attacks Lays Groundwork for 2016

January 11, 2016

By John Sellers, Vice President/Federal, Lancope

At the end of last year, we looked back and said that 2014 was the year of high profile cyber attacks on the private sector. Target, Michaels, Sony and several healthcare companies and universities were plastered all over the news headlines. So did it get any better this year? In retrospect, 2015 was the year that government agencies were targeted. From the theft of more than 21 million records of government employees and their families from the Office of Personnel Management to breaches at the IRS, Defense Department, and the Census Bureau, both military and civilian agencies suffered significant intrusions.

Following the discovery of the OPM breach, and later revelations regarding its size and duration, President Obama ordered the federal government to undertake a 30-day Cyber Sprint to improve network defenses. Federal CIO Tony Scott directed agencies to take several steps to improve their cybersecurity postures, most notably to accelerate the use of multi-factor authentication, especially for privileged users. Though short-term in execution, the sprint has resulted in agencies stepping up their implementation of multi-factor authentication – with long-term benefits – and tightening up their standards for users' access to different parts of their networks.

Another area of emphasis was, and remains, improved overall situational awareness within agencies’ networks – using tools such as dashboards to identify abnormal or unexpected behaviors by an asset within the network. For instance, in the OPM breach, important data was exfiltrated out of the network by a resource that never should have been communicating with the data repository.

With all of these incidents, agencies started to take a deeper look at threats, with specific attention to the insider threat. They began to reassess their misconceptions about what constitutes an insider threat. It's easy to view Edward Snowden as an insider threat; it is harder, but necessary, to recognize that any employee action that causes loss of data or degradation of network performance, even if taken innocently, is also an insider threat.

All of these lay the groundwork for 2016.

First, there is every reason to believe there will be more breaches. Intrusions set a record in 2014 – which was promptly obliterated by the pace of intrusions in 2015. The United States and China may have come to an agreement on cyber theft of intellectual property, but there are other players interested in that type of information, and cyber espionage, whether by nation-states or non-state actors, will continue to accelerate.

The government is doing what it can to address fundamental cyber hygiene as quickly as possible, but these problems grew over time and it will take time to fix them. For many years, organizations focused on building bigger (fire)walls, fortifying the perimeters of their networks, but that can only go so far before the walls themselves cause performance degradation. It’s fair to say that organizations have prioritized network availability over security.

As part of resetting that tradeoff, and expanding on the idea of situational awareness, I see the emergence of "context-aware security." It is not sufficient to be able to see what's happening on a network; it is important to know what normal, everyday activity looks like on that network, in order to identify anomalous behavior, whether by a device or a user.
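As a toy illustration of what "knowing normal" looks like in code, the sketch below flags a host whose outbound traffic deviates sharply from its own history. The numbers, threshold and function name are hypothetical; real context-aware tools build far richer behavioral models than a single z-score.

from statistics import mean, stdev

def is_anomalous(history_bytes, observed_bytes, threshold=3.0):
    """Flag an observation that sits more than `threshold` standard
    deviations away from the host's historical baseline."""
    mu, sigma = mean(history_bytes), stdev(history_bytes)
    if sigma == 0:
        return observed_bytes != mu
    return abs(observed_bytes - mu) / sigma > threshold

nightly_bytes_out = [120_000, 95_000, 130_000, 110_000, 105_000]  # hypothetical baseline for one host
print(is_anomalous(nightly_bytes_out, 4_800_000))  # an exfiltration-sized spike -> True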

The application of the concept of risk management to data resources will continue. Agencies have realized that all data are not created equal – the “crown jewels,” the databases with the information critical to meeting agencies’ missions, need the greatest protection. The containerization of these data assets will reinforce the categorization of users and devices that are allowed access.

The normalization of cloud services within government agencies will lead to another security development – ongoing discussions about how to enforce policy and monitor security in an asset not actually owned and controlled by the government. FedRAMP has done a lot in this regard, but it does not require visibility at the transaction level – identifying who is making a request and where that request is going inside the cloud.

Software-defined networks will continue to spread, as they provide both affordability and flexibility in configuration and management. But has there been enough threat modeling of SDNs to understand what their potential vulnerabilities may be? There should be concern that attackers may figure out those weaknesses for us, and attention should be paid to finding them before they are targeted.

Another trend in government IT that raises security implications in 2016 is the rapid growth of the Internet of Things. This reinforces the need for context-aware security; the proliferation of devices, the explosion of data, makes it imperative to have a better understanding of “normal” network behavior. With IoT, the stakes become very high – whether it’s driverless cars or automated power systems, intrusions could put many lives physically at risk.

A final observation for the New Year: Agencies will continue to be hamstrung by procurement rules written to buy tanks and aircraft, commodity products and services. Between FedRAMP and FITARA, the government is doing what it can to address fundamental flaws in its purchases of IT systems, software, and services. More reform is needed, even if it is directed solely at IT security products and services – until the rules are changed for IT security solution procurements, the government won’t be able to keep up with the changing threat landscape.

Cost of Data Breach, Loss and Remediation

January 7, 2016

By Rachel Holdgrafer, Business Content Strategist, Code42

The Ponemon Institute's 2015 Cost of Data Breach Study: Global Analysis reported that fallout and clean up efforts associated with a data breach cost companies across the globe $3.79 million on average. In the United States, organizations paid an average of $6.53 million per instance. This includes the cost of the data loss itself, impact on the company's reputation, lost business and actual remediation costs. Breach hits organizations hard and squarely in the wallet. And year-over-year growth indicates the breach trend is heating up.

Estimating the outlay an organization can expect following a data breach is not a simple calculation. Factors affecting the per-record-cost of the breach include the cause of the breach, where it occurred and the organization’s industry or category. Organizations must also factor in the inevitable cost of lost business following breach.

Breach origin affects cost-per-record
In 2015, the average data breach in the United States exposed 28,070 records. If the breach was caused by human error or negligence, global average cost per record reached $134 or $3.76 million per breach. System glitches cost a global average of $142 per lost or breached record (or $3.9 million per breach) and malicious attacks, whether from inside or outside the organization, caused the most damage at a global average of $170 per record or $4.77 million per breach. This is up from $159 per record in 2014.
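The per-breach figures follow directly from multiplying the per-record costs by the average breach size. A quick sanity check of the arithmetic, using the figures quoted above:

records = 28_070  # average number of records exposed per US data breach in 2015

global_cost_per_record = {
    "human error or negligence": 134,
    "system glitch": 142,
    "malicious attack": 170,
}

for cause, per_record in global_cost_per_record.items():
    total = records * per_record
    print(f"{cause}: ${total:,} (about ${total / 1e6:.2f} million per breach)")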

World trends and per capita costs
The per capita cost associated with a data breach varies based on where the breach occurred. Factors that increase per-capita cost include third party involvement, lost or stolen device value, time to identify and notify, and cost of consultants hired to mitigate breach damages. Factors that reduce the per-capita cost of a data breach include the use of an incident response team, the presence of a CISO at the organization, extensive use of encryption, employee training, business continuity management involvement, board-level involvement and insurance protection.

The United States continues to lead the pack at $217 per capita with Germany a close second at $211 per capita. Conversely, the cost of breach per capita is cheapest in India and Brazil at $56 and $78 respectively.

Industry (dis)advantage
Remember, the estimates above are averages. Depending on its industry sector, some companies face much higher financial consequences when a data breach occurs. For example, data privacy and security are heavily regulated in health care and education organizations to protect the personal information of patients and students. Breaches in these industries reach $363 and $300 per record respectively, while breached transportation and public sector records cost just $121 and $68 per exposed record. Clearly, not all data are created equal.

Lost business
Loss of business in the wake of a data breach is often overlooked. Suffering a data breach may result in an abnormally high amount of customer turnover and diminished trust. Moreover, customers (and prospects) will view the organization with suspicion after news of the data breach is announced. To overcome objections and win new customers, internal teams may incur additional costs to increase marketing and customer acquisition activities.

The cost attributed to lost business is significant. In 2015, an organization could expect to fork over a global average of $1.57 million in lost business alone, up from $1.33 million in 2014.

Reality check
Data breach is a very real threat. Ponemon reports that as of 2014, sixty percent of companies experienced more than one data breach in the previous two years. Organizations that aren’t worried about (or protected from) data breach because they’ve “never had one before,” are increasingly vulnerable to financial risk. Savvy organizations work to limit the risk and put a strategy in place to mitigate damage when breach occurs.

To learn more about how endpoint backup helps organizations recover quickly from data breach or loss and save money in the process, download the white paper, Protecting Data in the Age of Employee Churn.


Five Ways Your Employees Sidestep Information Security Policies

December 29, 2015

By Susan Richardson, Manager/Content Strategy, Code42

A good employee finds ways to overcome roadblocks and get the job done. But in the case of enterprise IT security, good employees may be your biggest threat. In fact, a recent Dell survey found that nearly seventy percent of IT professionals believe employee workarounds are the greatest risk to their organizations' security.

We’ve all been there: juggling numerous log-in credentials, following tedious document transfer policies, struggling with subpar app functionality—all the while knowing there’s a better way. IT security policies have a knack for getting in the way of getting the job done. Dell also found that ninety-one percent of workers feel their work productivity is negatively impacted by IT security measures. So what are some of the most common workarounds used by imaginative, driven but often password-fatigued employees?

Easy-to-remember passwords. The average person today has twenty-five personal and professional digital access points. Changing those twenty-five passwords every ninety days, as recommended, results in creating and recalling 125 passwords each year. It's no wonder people use easy-to-remember passwords, and unfortunate that simple passwords negate much of the security benefit of password-based authentication. One 2015 study found that seventy-three percent of online accounts are guarded by duplicated passwords—that is, the same key unlocks many different doors. Another study found that even those who try to be clever by using unique passwords are unlikely to beat the hackers: 1 in 2 passwords follow one of thirteen predictable (read: hackable) patterns. And finally, to skirt the password-reset problem altogether, some savvy users simply call their help desk to claim a forgotten password. The IT-driven reset often overrides the regular password reset requirements, meaning employees can continually recycle the same password. Thanks to this workaround, TeleSign found that 1 in 2 people are using passwords that are at least five years old.

Tricking the session time-out. Most systems and applications have automatic session time-out features, based on a defined idle period. But many organizations take this security feature a step further, using proximity detectors that time out a user’s session as soon as they step out of range. However, many users “beat” this security feature by placing a piece of tape on the detector, or by placing a cup over the detector. When they do step away from their desks, their devices remain completely unsecured and vulnerable.

Transferring documents outside the secure network. The mobile workforce demands anytime-anywhere access to their documents and data. Most organizations have strict protocols on accessing data through secure network connections, such as a virtual private network (VPN). But many mobile workers aim to streamline their productivity by circumventing these protocols: emailing sensitive documents to themselves, storing files in a personal Dropbox account or other public cloud, and even taking photos/screenshots with a smartphone and texting these images.

Intentionally disabling security features. One of the most popular workarounds is also the most straightforward. Where possible, users will simply turn off security features that hinder their productivity. This is especially true for BYOD workplaces, where employees have greater control over the features, functionalities and settings of their endpoint devices.

The Post-It Note Pandemic. The most common workaround is also very simple. A survey by Meldium found that most people record their passwords somewhere—whether in a spreadsheet containing all their log-in credentials, on their smartphones, or on a piece of paper, such as a trusty Post-It Note™—likely affixed to the very device it is intended to secure.

So, what’s an IT administrator to do with all these well-intentioned, hard-working, security risk takers? Most experts agree that communication is key. IT security policies should avoid edicts without explanation, leaving the end user with productivity loss and no apparent upside. Instead, many organizations are implementing more rigorous IT security training for all employees, showing them specifically how security protocols protect against data leakage, data breaches and other threats, highlighting how workarounds put data (and their jobs) at risk, and keeping IT security top-of-mind with regular communications and meetings with staff.

Download the executive brief, Protecting Data in the Age of Employee Churn, to learn more about how endpoint backup can mitigate the risks associated with insider threat.

A Perspective on the Next Big Data Breach

December 23, 2015

By Kevin Beaver, Guest Blogger, Lancope

In looking at the headlines and breach databases, there haven't been any spectacular, high-visibility incidents in recent weeks. It's almost as if the criminals are lurking in the weeds, waiting to launch their next attack during the busy, upcoming holiday season. After all, the media tends to sensationalize such breaches given the timing, and that's part of the payoff for those with ill intent. Whether the next big breach will impact consumers, corporate intellectual property or national security, no one really knows. It may be that we witness all of the above before year's end. One thing's for sure: the next big data breach will be predictable.

Once the dust settles and the incident response team members, investigators and lawyers have done their work and had their say, I can foresee how it’s all going to go down. It’s not at all unlike what happened a couple of years ago with the crippling snowstorms that we experienced in my hometown of Atlanta:

  • There’s an impending threat that most people are aware of. Some argue that threats are evolving. I’m not convinced that’s true. I think the technologies and techniques the threats use against us are maturing, but the threats themselves – criminal hackers, malicious insiders, unaware users, etc. – have been the same since the beginning.
  • People will get caught “off-guard” and get themselves (and their organizations) into a pickle.
  • The subsequent impact will be a lot worse than expected, or assumed.
  • Key individuals will ponder the situation and try to figure out who’s to blame.
  • Management will vow to never let it happen again, including but not limited to, providing short-term political and budgetary support for much-needed security initiatives.
  • Things will go back to normal – the typical daily routine will set back in and then months, perhaps years, will go by. Either the same people will forget the pain of what transpired or new people will be in charge and then, all of a sudden, out of nowhere – it’ll happen again.

With practically all data breaches, there are no surprises. There's really nothing new. It's the same story that's repeated time and again. Comedian Groucho Marx was quoted as saying, "Politics is the art of looking for trouble, finding it everywhere, misdiagnosing it and then applying the wrong remedies." In most cases, the same can be said for information security. There's a lot of talk. Some tangible action (often wheel spinning and going through the motions). There are even policies and contracts that are signed and audits that come up clean. Yet, history repeats itself.

As businessman Warren Buffett once said, there seems to be some perverse human characteristic that likes to make easy things difficult. I know it’s not truly “easy” to manage an overall information security program. I don’t envy CISOs and others in charge of this business function. However, knowing what we know today, it is easy to not repeat the mistakes of others. It’s also easy to become complacent. That’s where you have to be really careful. Too many people feel like they’ve “made it” – that they’ve got everything in place in order to be successful. Then they end up relaxing too much and letting their guard down. Then they become vulnerable again. It’s a vicious, yet predictable, cycle that leads to breach after breach after breach.

When all is said and done, your primary goal should be to determine what the very worst thing is that could happen on your network and then go about doing whatever it takes to make sure that worst thing doesn’t happen. That’s how you’ll prevent the next data breach from happening to your organization. Let the criminals go pick on someone else.

Kevin Beaver is an information security consultant, expert witness and professional speaker with Atlanta-based Principle Logic, LLC.

Code42 CSO says, “Beware the data-stealing Grinch”

December 22, 2015

By Rick Orloff, Chief Security Officer, Code42

Historically, corporations viewed security as an overhead expense required to meet regulatory controls and audits. As we head into a new year, we know breaches are inevitable and questions about security and data protection are being asked at a higher level. Boards of directors and C-level executives want situational awareness. They want to know, as much as they can, how effective their security programs are and how they compare to peer group programs.

Companies are learning that their security tech stack should enable business functions, not restrict them. Companies are focusing on securing many different layers of their corporate infrastructure, but the real focus is on the data (e.g., customer PII, HIPAA-regulated health records, financial records and intellectual property). In today's workplace, a company's most critical data isn't living on a desktop connected to a server—it's living on laptops, tablets, third-party applications and mobile devices. Many of those devices spend less than half of their time in the office and represent the disappearing network edge, which can mean an increased risk of data loss. Now and into 2016, the data living on endpoint devices has become a central pillar of a company's security strategy.

But data protection isn’t just for companies, especially this time of year. We should all follow these four tips to protect our data and ourselves during the busy holiday shopping season:

TIP 1: Don’t shop online using borrowed or public computers, such as those at a cyber cafe. A borrowed computer may be infected and could be recording all of your information.

TIP 2: Public Wi-Fi spots have significant security risks and should be avoided when possible. You’re much safer using your own Wi-Fi or cellular connection.

TIP 3: Protect your passwords—and your data. Do not reuse passwords for multiple accounts. Your email password is the most important password you have. If a hacker can access your email, he or she can simply go to your bank’s website and request a password reset, and quickly gain access to your personal information and bank account.

TIP 4: Do not use your ATM card for any shopping. If you're the victim of fraud, you often don't know until all of the cash has been drained from your account. It's much better to use a credit card as a security buffer. If there is fraud, credit card issuers typically reverse charges in minutes, but it's not always the same situation with an ATM card.

How can people check to make sure they are going to a reputable website versus a fake one?
Customers should not provide their personal information to e-commerce sites with which they are not familiar. Secure sites use Secure Sockets Layer (SSL) and depict a “lock image” in or near their website address. As a precaution, it’s also best to always make sure antivirus software is updated.
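For readers who want to go a step beyond looking for the lock icon, here is a small Python sketch that checks whether a site presents a certificate that validates against the system trust store. The hostname is just an example, and a valid certificate only means the connection is encrypted to the named site, not that the merchant behind it is trustworthy.

import socket
import ssl

def presents_valid_certificate(hostname, port=443, timeout=5.0):
    """Return True if an HTTPS connection succeeds with Python's default
    certificate-chain and hostname verification."""
    context = ssl.create_default_context()
    try:
        with socket.create_connection((hostname, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                return tls.getpeercert() is not None
    except (ssl.SSLError, OSError):
        return False

print(presents_valid_certificate("www.example.com"))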

To learn more about how endpoint backup can help your organization protect its data, download the ebook, Backup & Beyond.

Predicting Cyber Security Trends in 2016

December 21, 2015

By TK Keanini, Chief Technology Officer, Lancope

One of my annual rituals is to take stock of the cyber security industry and determine what trends and challenges we are likely to see in the coming year. In the ever-evolving cyberspace, technology changes on a daily basis, and attackers are always there to take advantage of it.

But before we get into what is coming, I’d like to look back on my predictions for 2015 and see how clear my crystal ball was.

2015: Three out of four

Last year, I predicted four major cyber security trends would rise to prominence – or continue rising – in 2015: Muleware, re-authentication exploitation, ransomware and targeted extortionware.

Three out of the four came true, with muleware being the odd one out because it is difficult to track. That said, there were some rumblings of hotel staff physically delivering exploits to laptops left in the rooms of certain persons of interest.

Re-authentication exploitation remains popular as more attackers realize a compromised email account can facilitate the theft of many different kinds of accounts for other websites. Once an attacker controls your email account, he can begin the "forgot password" process on a website and steal the password before you notice. We need to stop looking at password authentication as a single point in time and instead treat it as an entire lifecycle. You could have the strongest password system in the world, but if the re-authentication process is weak, then the attacker has the upper hand.

Ransomware continues to thrive in the current environment and has expanded from only Windows to Apple, Android and Linux. These attacks are countered with proper backups, which are cheaper and easier than ever, but organizations are still failing to back up their data. This method has proved to be lucrative for attackers, and as long as people are still vulnerable to it, ransomware will become even more popular.

Targeted extortionware seeks to steal sensitive data about a person and threaten to publish the data publicly if the victim doesn’t pay up. Everyone has something they would like to keep secret, and some are undoubtedly willing to pay for it. Events like the breach at adult matchmaking site Ashley Madison led to cases of extortionware, and this trend is likely to continue in 2016.

What to expect in 2016

If 2014 was the “Year of the Data Breach,” then 2015 is on track to match it. We saw insurance companies, dating sites, U.S. federal agencies, surveillance technology companies and more fall victim to attacks this year, and there are no reasons to believe it is going to slow down in 2016.

Cracking as a service
Encryption has always been a moving target. As technology becomes more advanced, encryption has to evolve with it or else it becomes too easy to crack. Certain trends such as Bitcoin mining have already led to large farms of compute clusters that could be set up for cryptanalysis without a lot of effort. Like any other software-as-a-service offering, it could be as simple as setting up an account. You could submit a key hash with some metadata and within a few minutes – maybe even seconds – a clear-text WEP key is delivered. The service could accept different hashes and ciphertext. Charging per compute cycle would make it an elastic business. A development such as this would require everyone to utilize longer key lengths or risk compromise.
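To see why key length is the lever that matters here, a back-of-the-envelope sketch is below. The guess rate is a purely hypothetical figure for a rented compute farm; the point is the exponential gap between short WEP-era keys and modern 128-bit keys.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(key_bits, guesses_per_second):
    """Worst-case time to try every key in a keyspace of 2**key_bits."""
    return (2 ** key_bits) / guesses_per_second / SECONDS_PER_YEAR

rate = 1e12  # hypothetical: one trillion guesses per second from a rented cluster
for bits in (40, 64, 104, 128):
    print(f"{bits}-bit key: about {years_to_exhaust(bits, rate):.3g} years to exhaust")

At that rate a 40-bit key falls in roughly a second, while a 128-bit key remains far out of reach, which is the sense in which longer key lengths are the defense against rentable cracking capacity.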

DNA breach
Every year, more and more sensitive data is stored on Internet-connected machines, and health data in particular is on the rise. Millions of people use DNA services that track an individual's genetic history or search for markers of disease, and it is only a matter of time until a DNA repository is compromised. Unlike a credit card number or an account password, health information cannot be changed, which means that once it is compromised, it is compromised forever. This makes it an exceptionally juicy target for attackers. A breach like this could affect millions, and compensation would be impossible.

Attack on the overlay network
As more and more organizations rush to develop and implement software-defined networking (SDN), there is widespread adoption of container platforms like Docker. In the case of Docker, VXLAN tagging facilitates an overlay network that defines the structure of the system of applications. This could have severe security implications if there is no effective entity authenticating and checking the tags. Without adequate authentication, attackers could impersonate or abuse a tag, giving them privileged access to the system and data stored within.

VXLAN is only one example of overlay technology, and frankly, there has not been enough threat modeling to determine how vulnerable it is to attack. Like all new technologies, if we don’t give enough thought to security during development, attackers will discover the vulnerabilities for us. There will be exploitation of overlay networks in 2016, and then defenders will be forced to implement security in the middle of a vulnerable and hostile environment.
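The concern is easier to see in the frame format itself. The VXLAN header defined in RFC 7348 is eight bytes: a flags byte, reserved bits and a 24-bit VXLAN Network Identifier (VNI), with no field that authenticates the sender or the tag. The sketch below simply packs that header to show the point; it is an illustration, not an attack tool.

import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header from RFC 7348: flags byte with only
    the I bit set, 24 reserved bits, the 24-bit VNI, then 8 reserved bits.
    Nothing in the header authenticates the sender or the VNI."""
    flags_word = 0x08 << 24            # I flag set, reserved bits zero
    vni_word = (vni & 0xFFFFFF) << 8   # VNI occupies the upper 24 bits of the second word
    return struct.pack("!II", flags_word, vni_word)

print(vxlan_header(5001).hex())  # 0800000000138900 -> the VNI travels in the clear

Absent other controls on the underlay, any host that can reach a VTEP's UDP port can emit frames carrying whatever VNI it chooses, which is exactly the impersonation risk described above.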

Namespace is the new battleground
Software developers are quickly adopting container technology to ensure performance is consistent across different machines and environments. When hypervisor-based virtualization became common, attackers learned how to compromise the hypervisor to gain control of the operating systems on virtual machines. With container technology like Docker, these attacks take aim at namespaces in userland, including networking, processes and filesystem namespaces. In the coming year, there will be attacks originating from malicious containers trying to share the same namespace as legitimate ones. A compromise like this could give attackers complete control of the container and potentially allow them to erase all evidence of the attack.
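On Linux, whether two processes share a namespace comes down to whether their /proc/<pid>/ns/* links point at the same kernel object, and that boundary is what these attacks try to cross. A minimal sketch (inspecting another user's process generally requires root):

import os

def share_namespace(pid_a, pid_b, ns_type="net"):
    """Two processes share a given Linux namespace exactly when their
    /proc/<pid>/ns/<type> symlinks resolve to the same identifier,
    e.g. 'net:[4026531993]'."""
    return os.readlink(f"/proc/{pid_a}/ns/{ns_type}") == os.readlink(f"/proc/{pid_b}/ns/{ns_type}")

# Example: is this process in the same network namespace as PID 1 on the host?
print(share_namespace(os.getpid(), 1, "net"))

A container that ends up sharing the host's network or PID namespace has a far larger attack surface than one that is properly isolated, which is why namespace sharing is worth monitoring.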

There are companies working on cryptographic methods of securing namespaces, but until a major attack on these systems takes place, there won't be a lot of demand for this as a required feature.

New approaches for new technology
Whenever new technologies receive widespread adoption, people often attempt to apply old security principles to them. In some cases that works, but it often creates inefficiencies and vulnerabilities. When virtual machines first became popular, operators would often attempt to patch VMs like they were physical machines. It didn't take long for them to realize it was quicker and easier to just kill the VM and start a new one with up-to-date software.

As we run headlong into new technology and continue to connect more and more sensitive information to the Internet, we must consider the security implications before a breach occurs. Every year attackers develop more ways to monetize and facilitate cybercrimes, and if we fail to evolve with them then we are inviting disaster.

Smart City Security

December 17, 2015

By Brian Russell, Co-Chair CSA IoT Working Group

Gartner defines a smart city as an "urbanized area where multiple sectors cooperate to achieve sustainable outcomes through the analysis of contextual, real time information shared among sector-specific information and operational technology systems," and estimates that 9.7 billion devices will be used within smart cities by the year 2020.

A smart city connects multiple technologies and services together, often in manners that were not previously thought possible. According to Juniper Research, there are five essential components of a smart city: technologies, buildings, utilities, transportation and road infrastructure, and the smart city itself. All of these building blocks are brought together, according to the Intelligent Community Forum (ICF), to "create high quality employment, increase citizen population and become great places to live and work."

There are myriad use cases for smart cities. City Pulse provides a great starting point for defining some of these. In the near future, citizens will benefit from improved service delivery as cities enable capabilities such as smart waste management, pollution sensors and smart transportation systems. Cities will also be able to stand up improved security and safety capabilities – from managing crisis situations using coordinated aerial and ground robotic tools, to monitoring seniors to identify elevated stress levels (e.g., potential falls or worse) in their homes. New services, both public and private, will likely be stood up to leverage these new capabilities.

This smart city ecosystem is dynamic.  This is true for the devices that will make up the edges of the smart city, as well as the cloud services that will support data processing, analytics and storage.  The data within a smart city is itself dynamic, crossing private and public boundaries, being shared between organizations, being aggregated with other data streams and having metadata attached, throughout its lifetime.  This all creates significant data privacy challenges that must be adequately addressed.    

These complex smart city implementations also introduce challenges to the task of keeping them secure. As an example, services will likely be implemented that ingest data from personal devices (e.g., connected automobiles, heart-rate monitors, etc.), making it important that only permitted data is collected and that citizens opt in. Interfacing with personally owned devices also introduces new attack vectors, requiring that solutions for determining and continuously monitoring the security posture of these devices be designed and used.

City infrastructures will also be updated and extended to support new smart capabilities. There are smart city management solutions that tie together inputs from smart devices and sensors and enable automated workflows. These solutions can be hosted in the cloud and can reach out across the cloud through integration with various web services, creating a rich attack surface that must be evaluated on a regular basis as new inputs and outputs are added. This requires the upkeep of a living security architecture and routine threat modeling activities.  

Understanding the threats facing smart cities and the vulnerabilities being introduced by new smart city technologies requires a collaborative effort between municipalities, technology providers and security researchers. Technology providers would be well served to review secure development guidance from organizations such as the Open Web Application Security Project (OWASP), and smart device vendors should make use of third-party security evaluations from organizations such as builditsecure.ly. Municipalities should look to secure implementation guidance from organizations such as the Securing Smart Cities initiative, as well as the Cloud Security Alliance (CSA).

The CSA Internet of Things (IoT) Working Group (IoTWG) recently teamed up with Securing Smart Cities, to publish a document titled Cyber Security Guidelines for Smart City Technology Adoption. This document is an effort to provide city leaders with the knowledge needed to acquire secure smart city solutions, and includes guidance on technology selection, technology implementation and technology disposal. Download the document.  

The CSA IoTWG will continue to support the Securing Smart Cities initiative in its focus on providing security guidance for smart cities, and we will continue our work on providing security guidance for the IoT as a whole, including recommendations for securing IoT cloud services, research on the uses of blockchain technology to secure the IoT, and guidance on how to design and develop secure IoT components. Keep a lookout for new publications from our working group.

Join the CSA IoTWG.

Brian Russell (twitter: @pbjason9) is Chief Engineer/CyberSecurity for Leidos.