Info security: an eggshell defense or a layer cake strategy

September 2, 2015

By Susan Richardson, Manager/Content Strategy, Code42

Eggshell security describes a practice in which organizations depend on a traditional model of a “hardened outer layer of defenses and a network that is essentially wide open, once the attacker has made it past perimeter defenses.”

In an article published in The Register, a leading global online publication headquartered in London, Trevor Pott describes the four pillars of modern IT security as layers of protection, in place of a brittle, penetrable outer shell protecting the interior.

Eggshell computing is a fantastically stupid concept, Pott says, yet our entire industry is addicted to it. We focus on the “bad guys” battering down the WAN with port scans and spam. We ignore the insider threats from people downloading malware, being malicious or even just Oopsie McFumbleFingers YOLOing the delete key.

Prevention is only the first layer of security surrounding the network. It includes firewalls, patches, security access lists, two-factor authentication and other technology designed to prevent security compromises.

Detection is the second layer of defense: it includes real-time monitoring and periodic scanning for signs of a breach. Intrusion detection systems, mail gateways that scan for credit card numbers moving through email, and auditing systems that scan logs make up this layer.
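To make the detection layer concrete, here is a minimal Python sketch of the kind of scanning described above: mail or log lines checked for card-number patterns and repeated failed logins. The patterns, thresholds and log format are assumptions for illustration, not anything prescribed in Pott's article.

```python
# A minimal sketch of the detection layer: scan log or mail text for
# indicators such as card-number-shaped strings and repeated failed logins.
# Patterns and thresholds are illustrative assumptions, not a production detector.
import re
from collections import Counter

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")          # naive card-number shape
FAILED_LOGIN = re.compile(r"authentication failure for (\S+)")

def scan_lines(lines, failed_login_threshold=5):
    """Return a list of (severity, message) findings for a batch of log lines."""
    findings = []
    failures = Counter()
    for line in lines:
        if CARD_PATTERN.search(line):
            findings.append(("high", f"possible card number in transit: {line.strip()}"))
        for user in FAILED_LOGIN.findall(line):
            failures[user] += 1
    for user, count in failures.items():
        if count >= failed_login_threshold:
            findings.append(("medium", f"{count} failed logins for {user}"))
    return findings

if __name__ == "__main__":
    sample = [
        "Jan 1 10:00:01 mailgw relay: body contains 4111 1111 1111 1111",
        "Jan 1 10:00:02 sshd: authentication failure for alice",
    ] + ["Jan 1 10:00:03 sshd: authentication failure for bob"] * 6
    for severity, message in scan_lines(sample):
        print(severity, message)
```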

Mitigation is the third layer. This is a series of practices in which the idea of compromise is accepted as part of doing business. Thus, an organization designs a network so that a compromise in one system will not result in a compromise of the entire network.

Because an incident is inevitable, incident response rounds out the layered security methodology.

Accepting that your network will inevitably be compromised, what do you do about it? How do you prevent a malware infection, external malicious actor, or internal threat from escalating its beachhead into a network-wide compromise?

The ability to respond to the inevitable by reloading from clean backups, learning via forensic analysis and returning to work from compromised systems (thereby assuring business continuity) isn’t giving up the fight; it’s understanding that the enemy will penetrate (or is already inside)—but recovery is always within reach.




M&A Concern: Is your data walking out the door with employees?

August 25, 2015

By Susan Richardson, Manager/Content Strategy, Code42

If you’re at one of the 40,000+ companies a year worldwide that announce a merger or acquisition, your biggest worry may not be combining IT systems. It may be all those employees walking out the door with your data.

Layoffs and voluntary departures are a given after a merger or acquisition. That means stolen data is a given, too: Half of departing employees keep confidential corporate data when they leave organizations, according to a recent study. And 42% believe it’s their right. The BYOD trend just adds insult to injury: departing employees leave with fully stocked devices they own and operate.

So what are employees taking? Some pilfered data is innocuous and already in the public realm. But some of it is classified. A partner at a law firm that specializes in labor and employment law says 90% of the data losses he sees involve customer lists. Not just names and addresses, but confidential information such as buying habits, contract terms and potential deals.

Other classified information could include credit card information, health information, financial records, software code, email lists, strategic plans, proprietary formulas, databases and employee records.

To avoid data breaches by departing employees—and the risk of operational, financial and reputation damage—security experts recommend three key steps:

  1. Educate employees: Make it very clear to employees that taking confidential information is wrong. Your security awareness training should include a detailed section on intellectual property theft.
  2. Enforce and clarify non-disclosure agreements: In nearly half of insider theft cases there were IP agreements in place, but the employee either misunderstood them or they weren’t enforced, according to the study. Start by including stronger, more specific language in employment agreements. Then make sure employees are aware that policy violations will be enforced and that theft of company information will have negative consequences—to them and any future employer who benefits from the stolen data. Just as importantly, make sure exit interviews include focused conversations around the employee’s continued responsibility to protect confidential information and return all company property and information—including company data stored on personal devices.
  3. Monitor technology to catch breaches early: By monitoring all the data that moves with your employees—on any device and in the cloud—you can quickly identify and rectify any inappropriate access and use of confidential data. (A minimal sketch of this kind of monitoring follows this list.)
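As a rough illustration of step 3, the sketch below flags unusually large data movement by employees known to be leaving. The event format, watch list and threshold are assumptions for the example; commercial endpoint monitoring and backup products expose far richer telemetry.

```python
# A minimal sketch of monitoring data movement by departing employees.
# The event format, watch list and threshold are illustrative assumptions.
from collections import defaultdict

DEPARTING = {"jsmith", "mlee"}          # employees in their notice period (assumed)
THRESHOLD_BYTES = 500 * 1024 * 1024     # flag more than ~500 MB moved per day

def flag_departing_transfers(events):
    """events: iterable of dicts like {'user': 'jsmith', 'bytes': 1024, 'dest': 'usb'}."""
    totals = defaultdict(int)
    for event in events:
        if event["user"] in DEPARTING:
            totals[event["user"]] += event["bytes"]
    return {user: total for user, total in totals.items() if total > THRESHOLD_BYTES}

if __name__ == "__main__":
    sample = [
        {"user": "jsmith", "bytes": 300 * 1024 * 1024, "dest": "usb"},
        {"user": "jsmith", "bytes": 400 * 1024 * 1024, "dest": "personal-cloud"},
        {"user": "alee", "bytes": 900 * 1024 * 1024, "dest": "usb"},
    ]
    print(flag_departing_transfers(sample))   # {'jsmith': 734003200}
```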

A good endpoint backup system, one that can be rapidly deployed to the acquired company via a single platform and managed from one console, enables companies to track and audit the content passing through a device. So if there is an insider theft, you have the ability to build a legal case and recover the data. An added benefit? A good endpoint backup system lessens the temptation to steal intellectual property.

The Cloud and Cybersecurity

August 20, 2015

By Vibhav Agarwal, Senior Manager of Product Marketing, MetricStream

Tomorrow’s digital enterprise is at war today. War not only with external cybersecurity hackers and viruses, but also within the organization itself – a conclusion based on my discussions with information security managers and cloud architects around the world. While most executives understand the importance of a business driven, risk management-focused cybersecurity model, they do not address cybersecurity as an organizational issue, but more as a compliance or IT checklist issue.

As business models transform into the modern digital enterprises of the future, we see shifts in other areas as well. This is the age of the customer, and in a digital world, customer service or dis-service can be decided by one successful phishing attempt on an organization’s website. As recent events have proven, a successful cyber-attack can not only bring an organization to its knees in minutes, but also make getting back up quickly nearly impossible.

Furthermore, as business leaders lean more and more on the cloud as a default choice for newer, faster systems of engagement with customers, new complexities come into the picture. Speed and customer-centric front-end application characteristics like zero downtime, instant cross-channel functionality deployment, and real-time performance management make the cloud an ideal environment. But cloud and cybersecurity – how do we take care of that?

There are five key things that every IT Manager and Architect should think about as they aspire to be the CISO of tomorrow’s leading digital enterprise:

  1. It’s a Business Problem: As custodians of sensitive customer information and business value delivery, the CISOs of tomorrow should understand the importance of keeping data safe and secure. CISOs should ensure that they are part of a core team looking at the organizational risk appetite, which includes aspects like loss of IP, customer information loss, business operation disruption, and more. The CISO should present the organizational cybersecurity risk in the context of business by correlating IT assets and their residual scores with their business importance (a minimal scoring sketch follows this list). The trade-offs of newer cybersecurity investments versus the status quo need to be examined from a more strategic and organizational perspective, rather than a mere annual investment or upgrade perspective.
  2. The First C of an Effective Cloud Strategy Is Controls: The focus needs to be on controls, not on cost. If the CISO of tomorrow is not able to effectively implement controls with regards to data segregation, data security and infrastructure security, then the cost of keeping the data in the cloud can be prohibitive. Incorporating the right set of controls into your organization’s cloud deployments from the start and establishing a sustainable monitoring mechanism is key to ensuring that cloud investments have a positive trade-off from a total cost of ownership perspective.
  3. Effective Governance and Reporting Is Not an Afterthought: Keeping business stakeholders informed on IT policies and controls from the start, especially those critical to business operations and cybersecurity, is important. The CISO of tomorrow should put in place a granular governance and reporting mechanism that encapsulates not only the organizational IT assets and ecosystem, but also cloud deployments. This system should handle all risk and compliance reporting-related requirements and their correlation with business operations in order to make sense to business heads.
  4. Is the Business Continuity Plan in Place: Cyber attack planning and response is one of the biggest challenges for the CISO of tomorrow. With cloud-based infrastructure, the problem gets even more complicated. Having a clear incident response strategy and manual, a well-defined business impact analysis, and a mass notification and tracking mechanism are just some of the aspects that will be highly critical for ensuring that business disruptions are handled in a tightly coordinated manner. Again, having business context is important to achieve this.
  5. Should We Consider Cyber-Insurance: Indemnification against cyber attacks and the resulting loss of reputation, data and revenue is going to become a trend fairly soon. The CISOs of tomorrow should proactively monitor the need for, and requirements of, cyber insurance, and counsel business stakeholders appropriately. This will be an important hedging strategy to minimize possible financial losses from lawsuits, business disruptions and data losses.
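As a rough illustration of the business-context point in item 1, the sketch below ranks IT assets by residual risk weighted by business importance. The scoring scale and sample assets are invented for the example and are not part of any particular framework.

```python
# A minimal sketch of presenting cyber risk in business terms: weight each
# asset's residual risk score by its business importance and rank the result.
# Scales and sample assets are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    residual_risk: float        # e.g. 0-10 from a vulnerability/risk assessment
    business_importance: float  # e.g. 0-10 agreed with business stakeholders

def business_weighted_risk(assets):
    """Return (score, asset) pairs ranked by business-weighted residual risk."""
    scored = [(a.residual_risk * a.business_importance, a) for a in assets]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

if __name__ == "__main__":
    portfolio = [
        Asset("customer-db", residual_risk=6.0, business_importance=9.5),
        Asset("intranet-wiki", residual_risk=8.0, business_importance=2.0),
        Asset("payments-api", residual_risk=4.0, business_importance=10.0),
    ]
    for score, asset in business_weighted_risk(portfolio):
        print(f"{asset.name}: {score:.1f}")
```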

Today, with ubiquitous Internet connectivity, cloud-based IT ecosystems and an ever-evolving cyber-engagement business model, cybersecurity is a growing social and business issue. The CISOs and CIOs of tomorrow need to ensure sustained support and focus from top management if they want to succeed in their cyber-fortification efforts. They also need to broaden their horizons across the business context, financial aspects and wider strategic objectives to ensure that the organization’s data security evolves in step. If the digital enterprise of tomorrow wants to grow and innovate, the question is not “are we doing enough today?” but rather, “are we thinking enough about tomorrow?”

MITRE Matrix: Going on the ATT&CK

August 19, 2015

By TK Keanini, Chief Technology Officer, Lancope

Most cybersecurity work is geared towards finding ways to prevent intrusions – passwords, two-factor authentication, firewalls, to name a few – and to identify the “chinks in the armor” that need to be sealed. The characteristics of malware are shared publicly, to give everyone from system administrators to end users a heads-up to guard against an attack.

Little has been done, however, to identify the characteristics of an adversary after they are already inside a network, where they have ways to hide their presence.

Now the MITRE Corporation is working to remedy that. The government-funded, non-profit research and development center has released ATT&CK™ – the Adversarial Tactics, Techniques & Common Knowledge matrix. We recently sat down with the folks at MITRE to discuss the new matrix and the impact that it will have on the industry.

“There are a lot of new reports [of] new threat action groups…We want to find the commonality across incidents [and] adversary behavior,” Blake Strom told us. Strom is MITRE’s lead cybersecurity engineer heading up the project. “We want to focus on behaviors at a technical level.”

In the Cyber Attack Lifecycle, attention has been paid mostly to the opening rounds – reconnaissance, weaponization, delivery and exploitation. The ATT&CK wiki addresses the later stages – control, execution and maintenance – when the malware is already resident on the network.

According to Strom, ATT&CK further refines these three stages into a collection of nine different tactics: persistence, privilege escalation, credential access, host enumeration, defense evasion, lateral movement, execution, command and control, and exfiltration. Under these categories, the matrix identifies numerous adversarial techniques, such as logon scripts (characterized by their persistence), or network sniffing (a way to gain credential access).
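A toy representation of that structure might look like the following; only the two technique examples (logon scripts and network sniffing) come from the article, and the real matrix lives in the ATT&CK wiki itself.

```python
# A toy representation of the ATT&CK structure described above: tactics as a
# list, techniques mapped to the tactics they support. Only "logon scripts"
# and "network sniffing" are taken from the article; anything else should be
# checked against the ATT&CK wiki rather than treated as authoritative.
ATTACK_TACTICS = [
    "persistence", "privilege escalation", "credential access",
    "host enumeration", "defense evasion", "lateral movement",
    "execution", "command and control", "exfiltration",
]

TECHNIQUES = {
    "logon scripts": ["persistence"],
    "network sniffing": ["credential access"],
}

def tactics_for(technique):
    """Look up which tactics a given technique supports."""
    return TECHNIQUES.get(technique.lower(), [])

if __name__ == "__main__":
    print(tactics_for("Network Sniffing"))   # ['credential access']
```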

“Some techniques require very specific [diagnostic tools], like BIOS implant,” Strom said. “It’s harder to detect because there aren’t a whole lot of tools.” Others might be very intricate, requiring several tools, he said.

A major purpose of developing the matrix is to give cybersecurity professionals signposts for what security tools need to be created, Strom said. System administrators that have seen some kinds of attacks and exploits may not have seen others yet, so the matrix also might provide guidance about what attackers might try in the future.

ATT&CK might also prove useful in addressing insider threats, since the matrix focuses on how attackers perform their actions once inside a system. “There’s some overlap between what an insider could do and what attack vector groups are doing,” Strom said.

As for gathering the information, MITRE invites contributions to the database of tactics and techniques. Strom said the organization is serving as curator of the wiki; contributors can’t modify the ATT&CK matrix on their own, but can submit the information to his group.

Strom said MITRE is working to bring the matrix to the attention of the broader IT community. It was presented at the NSA Information Assurance Symposium at the end of June, and will be presented again at the CyberMaryland conference in October.

How to create the perfect climate for endpoint data migration

August 18, 2015

By Andy Hardy, EMEA Managing Director, Code42

Today’s enterprise organizations face an escalating problem. As the use of laptops, tablets and smartphones continues to grow, the amount of data created and stored on these endpoint devices is increasing at pace.

In fact, typically half of an organization’s business data is now stored on these ‘endpoints,’ and away from the traditional domain of the centralized corporate data center.

As workforce mobility continues to grow, new software, operating system and firmware rollouts have had to keep pace with end-user expectations.

This constant need to upgrade the user experience has become a strategic nightmare for the IT department to manage—especially as valuable enterprise data now shifts from device to device, and platform to platform. Endpoint data is more vulnerable to loss as a result of the frequency of operating system (OS) migrations and technology refreshes.

In fact, IT departments spend significant amounts of time and money each year migrating data onto new devices. On average, organizations typically replace 25% to 30% of their client devices annually.

This means that for routine replacements due to device losses, failure and planned migrations, an organization with 10,000 devices could be faced with 3,000 time-consuming and disruptive migration processes in a mere 12-month period, or roughly eight per day. It becomes easy to see why, when handled incorrectly, data migration is a drain on resources and a waste of time for employees whose devices undergo the process.

Hassle-free data migration from devices is now a top concern for CIOs
It is best practice to ensure files are continuously and automatically backed up prior to undertaking data migration. Continuous, automatic backup eliminates ad hoc backup and saves time and money that would otherwise be associated with replacing workstations and laptops.

Some endpoint data protection platforms allow individuals to easily carry out their own migrations, without IT department intervention. This self-service approach means employees migrate data when it works for them, and are up and running in mere minutes—as opposed to leaving their devices with the IT department for a few days or weeks.

Ease the pressure with worry-free data migration
In order to avoid inefficient and costly data migration processes, companies must first identify their endpoint backup needs and ensure they are met. To do so requires evaluation of a range of factors, including the quantity of data that needs to be stored, the location and duration of data storage. Other critical factors to consider include the time available for a backup, how to preserve the integrity of the stored data, and whether the implemented system is scalable.

In many organizations, requests for file restore assistance may sit unaddressed for several days due to overstretched IT departments or lack of resources. However, the pressure caused by these types of situations can be eased without increasing overhead. Instead, enterprise users should be empowered to quickly and easily restore their own data whenever necessary.

This serves the dual purpose of making the lives of IT teams easier, whilst freeing them up to concentrate on projects that add real business value.

A better path to workstation and laptop migration
So what does an effective, efficient migration process look like? In the past, the IT department would roll out upgrades in waves, which had the tendency to frustrate users. By ensuring files are first automatically and transparently backed up, the most recent version of employees’ work can quickly be restored to new devices. The result is a streamlined, simplified operation where desktop support focuses on the migration process itself rather than troubleshooting backup—all the while keeping the end-user experience intact, and reducing IT costs associated with migration significantly.
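A simplified sketch of that flow, assuming a hypothetical backup_client wrapper around whatever endpoint backup platform is in place, might look like this:

```python
# A simplified sketch of the migration flow described above: confirm the
# endpoint has a sufficiently recent backup before the device swap, then
# restore the newest snapshot to the replacement machine. backup_client and
# its methods are hypothetical stand-ins, not a real product API.
from datetime import datetime, timedelta

MAX_BACKUP_AGE = timedelta(hours=4)

def safe_to_migrate(last_backup_time, now=None):
    """Only proceed when the most recent backup is newer than MAX_BACKUP_AGE."""
    now = now or datetime.utcnow()
    return (now - last_backup_time) <= MAX_BACKUP_AGE

def migrate(user, old_device, new_device, backup_client):
    # backup_client is a hypothetical wrapper around the endpoint backup platform
    last = backup_client.last_backup_time(user, old_device)
    if not safe_to_migrate(last):
        backup_client.run_backup(user, old_device)       # top up before the swap
    snapshot = backup_client.latest_snapshot(user, old_device)
    backup_client.restore(snapshot, target=new_device)   # self-service restore
    return snapshot
```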

It really is crucial to implement a comprehensive endpoint backup solution before undertaking and implementing a data migration process. Without it, your company is wasting time and money, not to mention risking an expensive disaster recovery process.

Correct implementation should protect everyone in the enterprise, regardless of the number of users or their locations. It should run quietly in the background, allowing users to continue to work, and it should be transparent, with users unaware that backup is occurring. Also, the knowledge that restoring a file is only a few clicks away means that users are often happy to restore data themselves; a ‘golden win’ for the IT department.

Trusting the Cloud: A Deeper Look at Cloud Computing Market Maturity

August 12, 2015

By Frank Guanco, Research Project Manager, CSA

Due to its exponential growth in recent years, cloud computing is no longer considered an emerging technology. Cloud computing, however, cannot yet be considered a mature and stable technology. Cloud computing comes with both the benefits and the drawbacks of innovation. To better understand the complexity of cloud computing, the Cloud Security Alliance (CSA) and ISACA recently released the results of a study examining cloud market maturity through four lenses: cloud use and satisfaction level, expected growth, cloud-adoption drivers, and limitations to cloud adoption.

The study determined that the increased rate of cloud adoption is the result of perceived market maturity and the number of offerings available to implement, integrate and manage cloud services. Cloud adoption is no longer thought of as just an IT decision; it’s a business decision. Cloud has become a critical part of a company’s landscape and a cost-effective way to create more agile IT resources and support the growth of a company’s core business.

Cloud Computing Maturity Stage
Cloud computing is still in a growing phase. This growth stage is characterized by the significant adoption, rapid growth and innovation of products offered and used, clear definitions of cloud computing, the integration of cloud into core business activities, a clear ROI and examples of successful usage. With roles and responsibilities still somewhat unclear, especially in the areas of data ownership and security and compliance requirements, cloud computing has yet to reach its market growth peak.

Cloud Adoption and Growth
How does cloud computing continue to mature? Security and privacy continue to be the main inhibitors of cloud adoption because of insufficient transparency into cloud-provider security. Cloud providers do not supply cloud users with information about the security that is implemented to protect cloud-user assets. Cloud users need to trust the operations and understand any risk. Providing transparency into the system of internal controls gives users this much needed trust.

Companies are experimenting with cloud computing and trying to determine how cloud fits into their business strategy. For some, it is clear that cloud can provide new process models that can transform the business and add to their competitive advantage. By adopting cloud-based applications to support the business, Software as a Service (SaaS) users can channel resources into the development of their core competencies. Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) adoptions enable businesses to experiment with new technologies and new services that would be expensive to implement in-house. IaaS and PaaS also allow companies to adapt to rapid changes in market demand, because they create a completely new, faster and cheaper offering.

User Satisfaction
According to the ISACA/CSA respondents, the level of satisfaction with cloud services is on the rise. Cloud services are now commonly being used to meet business as usual (BAU) and strategic goals, with the expectation that they will be more important for BAU than strategic plans in the future.

It’s not perfect yet, but the level of satisfaction with cloud services and deployment models is expected to increase as the market matures and vendors define standards to minimize the complexity around cloud adoption and management. The increase of cloud service brokers and integrators is helping businesses to integrate applications, data and shared storage in a more efficient way, making ongoing maintenance much easier.

Moving Past the Challenges
The ISACA/CSA study found that the most significant cloud concerns involve security and international data privacy requirements, data custodianship, legal and contractual issues, provider control over information, and regulatory compliance. Both cloud providers and cloud users have a role in moving past cloud concerns. Cloud providers need to demonstrate their capabilities to deliver services in a secure and reliable manner. Companies must understand their own accountability for security and compliance and their responsibility for implementing the necessary controls to protect their assets.

Gaining Maturity

The decision to invest in cloud products and services needs to be a strategic one. Top management and business leaders need to be involved throughout a cloud product’s life cycle. Any cloud-specific risk should be treated as a business risk, which requires management to understand cloud benefits and challenges well enough to address it. The need remains for better explanations of the benefits that cloud can bring to an organization and how cloud computing can fit into the overall core strategy of a business.


To read the entire “Cloud Computing Market Maturity” white paper as well as the study results, please click here.
To learn more about CSA, visit
More information on ISACA may be found at

What is Quantum Key Distribution?

August 11, 2015

By Frank Guanco, Research Project Manager, CSA

Following this year’s RSA Conference, the Cloud Security Alliance formed a new working group called the Quantum-Safe Security Working Group (QSSWG). The QSSWG recently published a paper entitled “What is Quantum-Safe Computing” to help raise awareness and promote the early adoption of technologies to protect data in the cloud in preparation for commercially available quantum computers, which should theoretically be able to crack RSA and ECC encryption algorithms.

As a follow-up to this paper, the QSSWG has recently published a new paper titled “What is Quantum Key Distribution,” which addresses the issues around sharing and securing encryption keys in a quantum world. The new position paper provides an overview of key distribution in general, examines some of the current approaches to and existing challenges of key distribution, and provides a brief overview of how Quantum Key Distribution (QKD) works in the real world. Finally, the paper looks at how QKD has evolved from a largely experimental field into a solid commercial proposition, and what the road ahead for QKD might look like. We welcome your thoughts on this latest research, and hope you find it valuable.

Are endpoints the axis of evil or the catalyst of creation?

August 11, 2015

By Dave Payne, Vice President of Systems Engineering, Code42

If security pros had their way, they’d make laptops so secure they’d be virtually unusable. Protecting against every imaginable attack–not to mention the fallibility of the human connected to the laptop–is a battle we keep losing. Seventy percent of successful breaches happen at the endpoint. So it’s either keep layering the security stack or abolish laptops altogether—because they’re counterintuitive to a secure enterprise.

On the flip side, the workforce views endpoint devices as a marvelous, immutable extension of self: the computers they carry are magical devices that transform mere mortals into digital superhumans—giving them speed, power, boundless knowledge and connection. Take away that muscular machine, and employees will rebel.

Are endpoints awesome or evil?
I look at the conundrum between IT and the workforce as the classic good vs. evil story. The looming threats are disorienting, but if IT takes the right approach, they can give “analog” humans what they want AND protect the enterprise too.

The first step is accepting reality: the Haddon matrix theory, the most commonly used paradigm in the injury prevention field, says you plan for a disaster by planning for three phases of the disaster – pre-disaster, disaster, post-disaster. The presumption is that disaster is inevitable.

How does this translate in IT? Through acceptance that the world is inherently dangerous, by blocking and tackling to address known issues, and planning for pre-disaster, disaster and post-disaster to limit risk.

Survive in the wild with a simple plan
My session at the Gartner Catalyst Conference 2015—called Mitigate Risk Without Multiplying the Tech Stack—is about the first commandment of IT: thou shalt have a copy of data in an independent system in a separate location. But more than that, it’s about utilizing the backup agent already on employee laptops for additional tasks. Once the data is stored, IT can use it to rapidly remediate and limit risk following a breach, protect against insider threats, and even add backup sets from third-party clouds—where employees sync and share data—to a centralized platform where all the data can be protected and utilized to recover from any type of data loss.

Data security is a combination of protection, detection and response. As Bruce Schneier says,

You need prevention to defend against low-focus attacks and to make targeted attacks harder. You need detection to spot the attackers who inevitably get through. And you need response to minimize the damage, restore security and manage the fallout.

What I tell IT and infosec pros is this: focus on what you can control and leverage what you have. Instead of deploying a new agent every time you need to block a behavior or protect against a threat, wrap protection around the data with real-time, continuous data capture.
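As a stdlib-only illustration of continuous data capture, the sketch below periodically hashes files under a directory and reports anything created, changed or deleted. A real endpoint agent captures file content on change events rather than polling hashes; this only shows the idea.

```python
# A minimal, stdlib-only sketch of continuous data capture: periodically hash
# files under a directory and record any change so a record exists the moment
# data is modified. Illustrative only; real agents are event-driven and
# capture content, not just digests.
import hashlib
import os
import time

def snapshot(root):
    """Map each file path under root to a SHA-256 digest of its contents."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as handle:
                    digests[path] = hashlib.sha256(handle.read()).hexdigest()
            except OSError:
                continue            # file vanished or unreadable; skip it
    return digests

def watch(root, interval_seconds=30):
    """Print created/changed/deleted files each polling interval."""
    previous = snapshot(root)
    while True:
        time.sleep(interval_seconds)
        current = snapshot(root)
        for path, digest in current.items():
            if path not in previous:
                print("created:", path)
            elif previous[path] != digest:
                print("changed:", path)
        for path in previous.keys() - current.keys():
            print("deleted:", path)
        previous = current
```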

With this approach, you give employees their magical machines while staying focused on data recovery, visibility and forensics, as well as security and analytics. Now instead of good vs. evil, it’s win/win.

Private cloud deployments don’t own the monopoly on data security

August 4, 2015

By Aimee Simpson, Integrated Marketing Manager, Code42

A recent Cloud Security Alliance (CSA) survey shows 73 percent of respondents cited security as a top challenge to cloud adoption for the enterprise.

For this reason, the enterprise majority still requires on-premises, private cloud deployments to achieve data security goals. But should the storage location itself be the primary concern?

Don’t mistake control for security
In the May 7 Forbes article entitled “Why Cloud Security and Privacy Fears are Completely Misguided,” writer Marc Clark argues that it’s time to stop assuming on-premises deployments are the most secure cloud architecture available. Clark believes IT/IS leaders have confused on-premises access and control of data with true data center security.

When it comes to security, says Clark, the major cloud providers undergo rigorous audits to prove controls and policies meet compliance and security certifications. By contrast, most on-premises data centers do not undergo these same audits. Furthermore, corporate data centers generally do not receive as much budget and attention as cloud providers give to their data centers.

As Clark explains, this “makes perfect sense:”

If a retailer has a data breach, maybe some people don’t shop there for a few weeks or months. Or maybe customers start paying in cash more than by debit/credit card. So although security is important to these types of companies, the fact is that until they have a breach that costs them WAY more than they would have ever paid for better security, they typically aren’t putting the money and resources needed to really stay ahead in the security game. The insurance is considered more expensive than the risk. But for cloud providers, their product IS the cloud—not hammers or hobby crafts or paper towels. If a cloud provider has a security breach, trust is lost in their core product, full stop. And it is hard to recover that trust. Therefore, securing their product—the cloud—should and in most cases does get the money and resources it needs.

Don’t fit a cloud vendor, pick a vendor that fits you
Security is critical, but as Clark explains, it’s possible to find it with the right cloud provider.

Ultimately you have to feel as though your cloud provider will take care of your data as well as, or better than, you will. And this is a question of security, not one of control. It’s time to stop assuming that the cloud is a less safe place to put your data than in an on-premises system. That ship (of excuses) has sailed.

It’s after questions of security are answered that control can and should be addressed.

Enterprises should partner with a vendor that, rightfully, doesn’t lessen control of data. At the most basic level, this can be accomplished by keeping encryption keys on-premises regardless of cloud architecture. Such a deployment provides IT with a new realm of benefits like on-demand storage scalability, reduced hardware management and no storage/network provisioning—all while rendering data unreadable to unauthorized people and agencies.
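A minimal sketch of that arrangement, using the third-party Python cryptography package and a placeholder upload step, might look like this: data is encrypted with a key that never leaves the premises, so only ciphertext reaches the cloud.

```python
# A minimal sketch of client-side encryption with on-premises keys: whatever
# lands in the cloud is unreadable without the locally held key. Uses the
# third-party "cryptography" package; the upload step is a placeholder, not a
# real provider API.
from cryptography.fernet import Fernet

def load_or_create_key(path="onprem.key"):
    """Key material stays on local disk (or, in practice, an on-prem HSM/KMS)."""
    try:
        with open(path, "rb") as handle:
            return handle.read()
    except FileNotFoundError:
        key = Fernet.generate_key()
        with open(path, "wb") as handle:
            handle.write(key)
        return key

def encrypt_for_cloud(plaintext: bytes, key: bytes) -> bytes:
    return Fernet(key).encrypt(plaintext)

def decrypt_from_cloud(ciphertext: bytes, key: bytes) -> bytes:
    return Fernet(key).decrypt(ciphertext)

if __name__ == "__main__":
    key = load_or_create_key()
    blob = encrypt_for_cloud(b"quarterly forecast", key)
    # upload_to_cloud(blob)  # placeholder: only ciphertext ever leaves the building
    assert decrypt_from_cloud(blob, key) == b"quarterly forecast"
```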

Effective Access Control with Active Segmentation

July 30, 2015

By Scott Block, Senior Product Marketing Manager, Lancope

As the threat landscape has evolved to include adversaries with deep pockets, immense resources and plenty of time to compromise their intended target, security professionals have been struggling to stave off data breaches. We’ve all heard it ad nauseam – it’s not a matter of if your network will be compromised, but when.

Since many companies have built up their perimeter defenses to massive levels, attackers have doubled down on social engineering. Phishing and malware-laden spam are designed to fool company employees into divulging login information or compromising their machine. According to the security consulting company Mandiant, 100 percent of data breaches the company has studied involved stolen access credentials.

Since threat actors have become so good at circumventing traditional defenses, we cannot afford to have only a single point of failure. Without proper internal security, attackers are given free rein of the network as soon as they gain access to it.

Instead, attackers should encounter significant obstacles between the point of compromise and the sensitive data they are after. One way to accomplish this is with network segmentation.

Keep your hands to yourself
In an open network without segmentation, everyone can touch everything. There is nothing separating Sales from Legal, or Marketing from Engineering. Even third-party vendors may get in on the action.

The problem with this scenario is that it leaves the data door wide open for anyone with access credentials. In a few hours, a malicious insider could survey the network, collect everything of value and make off with the goods before security personnel get wind of anything out of the ordinary.

What makes this problem even more frustrating is that there is no reason everyone on the network should be able to touch every resource. Engineers don’t need financial records to perform their job, and accountants don’t need proprietary product specifications to do theirs.

By simply cordoning off user groups and only allowing access to necessary resources, you can drastically reduce the potential damage an attacker could inflict on the organization. Instead of nabbing the crown jewels, the thief will have to settle for something from the souvenir shop. Additionally, the more time the attacker spends trying to navigate and survey your network, the more time you have to find them and throw them out, preventing even the slightest loss of data in the process.

How it works
It is best to think of a segmented network as a collection of zones. Groups of users and groups of resources are defined and categorized, and users are only able to “see” the zones appropriate to their role. In practice, this is usually accomplished by crafting access policies and using switches, virtual local area networks (VLANs) and access control lists to enforce them.

While this is all well and good, segmentation can quickly become a headache in large corporate environments. Network expansion, users numbering in the thousands and the introduction of the cloud can disrupt existing segmentation policies and make it difficult to maintain efficacy. Each point of enforcement could contain hundreds of individual policies. As the network grows in users and assets, segmentation policies can quickly become outdated and ineffective.

Retaining segmentation integrity is an important security function in today’s world of advanced threats and high-profile data breaches. To properly protect themselves, organizations need to constantly maintain segmentation, adding new policies and adjusting existing ones as network needs change.

One way to tackle the challenges of traditional access control is with software-defined segmentation, which abstracts policies away from IP addresses and instead bases them on user identity or role. This allows for much more effective and manageable segmentation that can easily adapt to changes in the network topology.
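A minimal sketch of a role-based segmentation policy, with invented roles and zones, could be as simple as the following lookup; the point is that the policy references roles and zones, not IP addresses.

```python
# A minimal sketch of role-based (software-defined) segmentation: policies are
# expressed against roles and resource zones rather than IP addresses. The
# roles, zones and policy table are invented for illustration.
ZONE_ACCESS = {
    "engineering": {"build-servers", "source-control"},
    "finance": {"erp", "financial-records"},
    "sales": {"crm"},
}

def is_allowed(user_role: str, resource_zone: str) -> bool:
    """Allow a connection only if the user's role is mapped to the zone."""
    return resource_zone in ZONE_ACCESS.get(user_role, set())

if __name__ == "__main__":
    print(is_allowed("engineering", "source-control"))    # True
    print(is_allowed("engineering", "financial-records")) # False: deny and log
```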

Active segmentation for effective access control
When you couple software-defined segmentation with an intelligent planning and implementation methodology, you get active segmentation. This approach to segmentation allows network operators to effectively cordon off critical network assets and limit access appropriately with minimal disruption to normal business functions.

When implemented correctly, active segmentation is a cyclical process of:

  • Identifying and classifying all network assets based on role or function
  • Understanding user behavior and interactions on the network
  • Logically designing access policies
  • Enforcing those policies
  • Continuously evaluating policy effectiveness
  • Adjusting policies where necessary



Network visibility enables active segmentation
One of the cornerstones of active segmentation is comprehensive network visibility. Understanding how your network works on a daily basis and what resources users are accessing as part of their role is paramount to designing an adequate policy schema.

Leveraging NetFlow and other forms of network metadata with advanced tools like Lancope’s StealthWatch® System provides the information needed to understand what users are accessing and how they behave when operating on the network. This end-to-end visibility allows administrators to group network hosts and observe their interactions to determine the best way to craft segmentation policies without accidentally restricting access to resources by people who need it.

After the segmentation policies have been implemented, the visibility allows security personnel to monitor the effectiveness of the policies by observing access patterns to critical network assets. Additionally, the network insight quickly highlights new hosts and traffic on the network, which can help assign segmentation policies to them. This drastically reduces the amount of time and effort required to ensure segmentation policies are keeping pace with the overall growth of the enterprise network.
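As an illustration, the sketch below aggregates simplified flow records into group-to-zone traffic totals and flags flows that cross a boundary the policy does not allow. The record fields, host-to-group mapping and policy table are assumptions for the example; real NetFlow/IPFIX telemetry is far richer.

```python
# A minimal sketch of using flow records to validate segmentation: aggregate
# who talks to what, then flag flows that cross a zone boundary the policy
# does not allow. The mappings and policy table are illustrative assumptions.
from collections import Counter

HOST_GROUP = {"10.1.1.5": "engineering", "10.2.2.9": "finance"}
ASSET_ZONE = {"10.9.0.10": "financial-records", "10.8.0.4": "source-control"}
ALLOWED = {("engineering", "source-control"), ("finance", "financial-records")}

def audit_flows(flows):
    """flows: iterable of dicts like {'src': ip, 'dst': ip, 'bytes': n}."""
    observed = Counter()
    violations = []
    for flow in flows:
        group = HOST_GROUP.get(flow["src"], "unknown")
        zone = ASSET_ZONE.get(flow["dst"], "unknown")
        observed[(group, zone)] += flow["bytes"]
        if (group, zone) not in ALLOWED:
            violations.append(flow)
    return observed, violations

if __name__ == "__main__":
    sample = [
        {"src": "10.1.1.5", "dst": "10.8.0.4", "bytes": 12000},
        {"src": "10.1.1.5", "dst": "10.9.0.10", "bytes": 800},   # policy violation
    ]
    totals, bad = audit_flows(sample)
    print(totals)
    print("violations:", bad)
```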

In short, active segmentation is the process of logically designing policies based on network data and constantly keeping an eye on network traffic trends to make sure access controls are utilized effectively and intelligently to obstruct attackers without impeding normal business functions. With the right tools and management, organizations can minimize the headaches and time involved with network segmentation while significantly improving their overall cybersecurity posture.
