Sunday, June 6, 2010

Adapting a Security Program to an Evolving Threat Landscape

Security programs (i.e. processes and procedures, not applications) must be able to evolve and adapt in order to survive an evolutionary arms race.  Programs as they are implemented today are largely unable to survive in a diverse and evolving threat environment.  One prediction of the Red Queen hypothesis is that, over time, the fitness of an entity is reduced as selection pressures eliminate unsuccessful attacks (i.e. selection works against the unsuccessful and promotes the successful).

Security programs need to evolve in order to account for how an evolutionary arms race will affect the enterprise.  These changes will need to be more than simply employing a technology which is biologically inspired or employs an evolutionary learning algorithm.  There are a number of different ways in which a security program can be adapted to increase the likelihood of surviving a diverse threat environment.  Aspects of a security program that can be adapted to an environment in which the threat is evolving include:
  • Risk Management - Moving from a "risk management" based methodology to a methodology that understands the threat environment of the enterprise. The methodology should be updated to include processes that expect that the threat will adapt to the countermeasures that are implemented, and eventually bypass them.
  • Security Testing - Moving from a "compliance checking" based testing methodology to one that mirrors how adversaries are actually attacking the enterprise.
  • System Development Life Cycle - Moving from implementing security at the end of the Software Development Life Cycle (SDLC) to implementing it throughout the SDLC. Beyond that, program management and the developers should understand that security requirements may change during development as the threat environment changes.
  • Attack Research - Monitoring and researching attacks, along with providing information about potential attacks as they are discovered.
The "classical" risk management process should evolve to account for the evolutionary arms races that it exists within information security.  At a minimum, the "classical" risk management process will need to be adapted to include researching the effort that it will take to bypass implemented countermeasures, and ensuring that processes and procedures are developed to detect these compromises.  As an example, an application is determined to be vulnerable to a number of injection attacks, so the enterprise determines that it would be appropriate (and most likely less costly) to implement a Web Application Firewall (WAF).  In the process of determining if the WAF as implemented is effective, the enterprise should include technical evaluations to determine if the WAF will actually protect the application.  As another example, an enterprise may understand that anti-virus is largely ineffective and determine that white-listing executables is an appropriate counter measure to the threat. Attackers and malware have had the ability to make use of DLL injection, and perform entirely memory-only operations for several years.  Executable white-listing does not stop DLL injection and will not defend against memory-only operations.  Executable white-listing will not protect an operating system, if an attacker is able to compromise the hardware.  Beyond implementing application white-listing, the enterprise must know how an attacker will adapt to the new counter measures and it will need to develop and implement methods for dealing with these attacks.  They will need to develop methods for detecting the next round of attacks which include memory-only and hardware.

Depending upon the industry in which the enterprise participates, it may be subject to a wide variety of requirements.  Federal entities are required to be compliant with the Federal Information Security Management Act (FISMA).  Part of this compliance process includes implementing the National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53 security controls according to the system's security impact level.  It is expected that most systems will fall into the 'Low' baseline.  Security controls like Information Input Validation (SI-10) and Error Handling (SI-11) are not required at this level.  The lack of input validation can lead to successful attacks and compromises resulting from buffer overflows, Cross Site Scripting (XSS), SQL Injection (SQLi), OS command injection, LDAP injection (LDAPi), etc.  Buffer overflows are common in desktop applications and operating systems, while XSS and SQLi are some of the most common causes of web application compromises.  Information Input Validation was only finalized as a requirement in Revision 3 of SP 800-53.  Requirements should be updated and applied in order to mitigate threats that are present and active in the environment; implementation of a control that prevents active exploitation should not be held off until a system meets a higher impact level.
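
To make the input validation point concrete, the sketch below contrasts a query built by string concatenation, which a classic SQLi probe subverts, with a parameterized query that treats the same input strictly as data.  The table, column, and data are invented purely for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "x' OR '1'='1"  # a classic SQLi probe

# Vulnerable: the input is concatenated into the SQL statement, so the
# quote characters alter the query's logic and every row is returned.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safer: a parameterized query binds the input as a value, not as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print("concatenated query returned:", unsafe)   # leaks the admin row
print("parameterized query returned:", safe)    # returns nothing
```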

Another reason that the "classical" risk management methodology must be adapted is that it does not account for the operational strategies that threats are employing in the environment.  In the recommended standard for assessing and mitigating risk, as described in the Official (ISC)2 Guide to the Certified Information Systems Security Professional (CISSP) Common Body of Knowledge (CBK), controls are selected by weighing the value of the information being protected against the cost of the control that is going to be implemented.  This is the value of the information to the enterprise if it is lost, stolen, or otherwise exposed.  The process does not take into account the value of the system in relation to other internal or external entities.  This methodology falls apart if the attacker does not care about the value of the information on the system, but is more interested in the business relationships of the enterprise.  An attacker may be willing to spend significant resources to compromise the enterprise in order to attack an external entity.
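
For reference, the sketch below works through the classic quantitative calculation that this methodology rests on; the asset value, exposure factor, occurrence rate, and control cost are invented solely for illustration.

```python
# Classic quantitative risk terms (all values illustrative only):
asset_value = 500_000        # value of the information to the enterprise ($)
exposure_factor = 0.4        # fraction of the asset lost in a single incident
occurrence_rate = 0.5        # expected incidents per year (ARO)
control_cost = 60_000        # annual cost of the proposed countermeasure ($)

single_loss_expectancy = asset_value * exposure_factor             # SLE
annual_loss_expectancy = single_loss_expectancy * occurrence_rate  # ALE

# Textbook rule: implement the control if it costs less than the loss it removes.
print(f"SLE = ${single_loss_expectancy:,.0f}")
print(f"ALE = ${annual_loss_expectancy:,.0f}")
print("Implement control" if control_cost < annual_loss_expectancy else "Accept risk")
```

Note that nothing in this calculation captures an attacker who values the enterprise's business relationships rather than the information itself, which is exactly the gap described above.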

In addition to the "classical" risk management methodology, the methods used for testing should be adaptable.  Simply stated, there is more to security testing then performing a "compliance" check of the system or clicking a button to generate a security report.  Although compliance checks are required, they do very little in terms of aiding an enterprise in understanding their risk posture.  One could assume that simply subjecting the enterprise to "internal predation" by using testers would be of benefit.  Security Test and Evaluation (ST&E), Compliance Scanning, Independent Verification and Validation (IV&V), and Penetration Testing are different types of testing that most enterprises use to evaluate a system.  Of those, Penetration Testing is potentially the one that offers to most similarities to how an attacker works to compromise a system.  It can be implemented blindly such that penetration testers are permitted to attack the enterprise as though an attacker would.  There are a number of issues with simply assuming that a penetration tester will mimic an attacker:
  • Skill/Resource Levels - The penetration tester may or may not have the resources or skill to mimic an attacker.
  • Test Methodology - The penetration tester may not be permitted to attack the enterprise using the methods that an attacker is using (or even be aware of the methods that attackers have used to successfully exploit the enterprise).
  • Test Duration - An enterprise is not going to be willing to wait months or years for a penetration test to conclude.  Most enterprises do not have the resources to waste on an extended penetration test. 
  • Test Depth and Breadth - Most importantly, a penetration tester should be reporting multiple attack vectors, while an attacker only needs one.
When a penetration tester is attacking the enterprise, they will be using common assessment tools such as Nessus, the Metasploit Framework, CORE Impact Professional, etc.  Attackers in the wild are not using these tools to compromise an enterprise.  Attackers have custom tools which they have written, or they have access to a small community which provides limited access to specialized tools and resources.  In order to provide a more thorough simulation of an attack against the enterprise, penetration testers should have the ability to employ custom tools during the attack.  Penetration testers should not be the only individuals within the enterprise testing the effectiveness of countermeasures.  Internal research staff should be conducting exercises to determine that effectiveness as well.  For example, a researcher should be collecting malware samples and periodically testing whether the enterprise's malware solution can routinely detect and mitigate the threat.  They should also be able to modify samples, test them with evasion/obfuscation techniques, and report on the continued effectiveness of the solutions; a minimal sketch of such an exercise follows.
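
The sketch below drops the harmless EICAR test string and two trivially transformed variants on disk and then checks whether the endpoint protection product has removed them.  The file names, the 30-second wait, and the assumption that detection results in the file being quarantined or deleted are all placeholders; a real exercise would use actual samples and proper obfuscation tooling inside an isolated lab.

```python
import base64
import os
import time

# The EICAR test string: a harmless, industry-standard stand-in for malware.
EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

# Variants representing trivially "obfuscated" samples.
variants = {
    "plain.com": EICAR.encode(),
    "base64.txt": base64.b64encode(EICAR.encode()),
    "reversed.txt": EICAR[::-1].encode(),
}

for name, payload in variants.items():
    with open(name, "wb") as handle:
        handle.write(payload)

# Assumption: the on-access scanner quarantines or deletes detected files
# within a short window; adjust the wait for the product being evaluated.
time.sleep(30)

for name in variants:
    status = "missed (still on disk)" if os.path.exists(name) else "detected/removed"
    print(f"{name}: {status}")
```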

Some enterprises do not think there is any value in allowing penetration testers to use attacks which are commonly seen against the enterprise.  If they are not allowing some types of attacks, they should at a minimum be providing some type of mitigation to protect the enterprise.  For example, most enterprises are subjected to phishing and spear phishing, but they do not allow penetration testers to use this methodology during a test.  If the enterprise does not already have a program which actively phishes the enterprise, it is leaving an actively exploited vulnerability unmitigated.  Similarly, enterprises do not commonly allow clients and end points to be targeted during a penetration test, despite client-side attacks, in which a user visits a website and the browser is exploited and/or hijacked, being cited as a source of enterprise compromises.

Enterprises also do not have the luxury of allowing penetration tests to run over months and years, even though attackers operate against the enterprise for exactly those extended periods.  An enterprise could allow a penetration tester an extended period to attack the enterprise, but it would be more beneficial to actively engage the penetration tester during the exercise.  As an example, instead of requiring a penetration tester to acquire credentials via a brute force attack against the system, they should be provided credentials for the test.  Or, during the test, they should have limited access to the source code of an application to help them refine their attacks.  Requiring a tester to brute force an attack vector wastes both parties' time and unnecessarily delays the point at which exploitable vulnerabilities can be identified, reported, and mitigated.

Lastly, as far as testing is concerned, an attacker only needs to find one exploitable vector, while a penetration tester should find and report as many exploitable vulnerabilities as possible.  A test should not conclude when a single exploitable vector is found; the penetration tester should continue to find as many exploitable vectors as possible.  Although testing can provide valuable information about the exploitability of vulnerabilities within a system, it typically only occurs at the end of the SDLC.  Correcting vulnerabilities at the end of development, just prior to an application being placed into production, can be expensive in time and resources.

The integration of security into the SDLC needs to be adapted to ensure that systems placed in production are capable of defending themselves against the threats inherent in the environment.  Integrating security from an evolutionary point of view into the SDLC is more than just adding static source code analysis or application/protocol fuzzing to the unit tests.  It is also more than just selecting an appropriate set of security requirements at the time requirements are collected.  Between the time an application is designed and the time it is deployed into production, the threat environment can drastically change.  It should be understood that the security requirements levied against a program can change throughout the SDLC as new threats adapt and evolve in the environment.  Beyond ensuring that program management and the developers understand that requirements will change, the application should be designed to withstand attacks that the enterprise has seen or will likely see during its operational phase.  This means investigating and tracking the attacks that are actually occurring in the environment, and understanding what other attacks will occur in the near future.
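
As a small example of the kind of fuzzing that can be folded into unit tests, the sketch below hammers a hypothetical parse_record() function with random byte strings and asserts that it either succeeds or fails with its documented exceptions, never crashing with anything else.  The function and its error contract are assumptions made purely for illustration.

```python
import random
import unittest

def parse_record(data: bytes):
    """Hypothetical parser under test: expects 'key=value' encoded as ASCII."""
    text = data.decode("ascii")           # may raise UnicodeDecodeError
    key, _, value = text.partition("=")
    if not key or not value:
        raise ValueError("malformed record")
    return key, value

class FuzzParseRecord(unittest.TestCase):
    def test_random_inputs_fail_cleanly(self):
        rng = random.Random(1234)          # fixed seed keeps the test reproducible
        for _ in range(1000):
            length = rng.randint(0, 64)
            blob = bytes(rng.randrange(256) for _ in range(length))
            try:
                parse_record(blob)
            except (ValueError, UnicodeDecodeError):
                pass                       # documented failure modes are acceptable
            # Any other exception propagates and fails the test.

if __name__ == "__main__":
    unittest.main()
```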

In addition to monitoring attacks that the enterprise is actually encountering, the enterprise should also monitor what attacks are being used against other enterprises in the same industry, and what researchers are presenting.  Only by observing what is actually occurring in the environment can the enterprise understand how to adapt and modify its security requirements as the threat environment changes.  When monitoring the research community, it should be understood that it can take between 5 and 10 years from the publication of an attack before the attack becomes commonplace, although it can occur more quickly.  Attackers are not going to publish new attacks before they use them; researchers, by contrast, will publish theirs.  In some cases attackers will adopt these new attacks, while in other cases a researcher will publish details about a previously undisclosed attack that only attackers have been employing.  By watching what researchers are publishing, strategies can be planned and implemented to mitigate attacks before they become commonplace.

Implementing a security program that is able to adapt and respond to the evolving threat landscape requires more than selecting vendor solutions built on "leap ahead" or evolutionary algorithms.  It requires that the way in which the security program operates be adaptable.  In summary, some of the necessary adaptations range from modifying the risk management methodology that is employed, to adopting a more predatory approach to security testing, to modifying the SDLC (and the expectations placed on developers), to watching (and participating in) active security research.

Tuesday, May 18, 2010

Why Apply Evolutionary Biology to Information Security?

Very few security professionals will agree that the situation within information security is globally improving.  There may be local pockets in which an organization is able to maintain a strong security posture.  Problems discovered over a decade ago (e.g. Buffer Overflows, Cross Site Scripting, etc.) still persist, and are consistently rated as being some of the most dangerous programming flaws (see the OWASP Top 10 and the CWE/SANS Top 25).  The state of cybersecurity is severe enough that some professionals are seeking solutions for financial institutions which assume that the clients with which they conduct business transactions are compromised.  Given that some estimates find that well over 90% of the systems on the Internet are not fully patched, and a significant percentage of the systems on the Internet are compromised with at least one form of malware, this is a reasonable approach.

Events like these can be considered signs that efforts in the area of information security are failing.  There can be many reasons that an entity fails in a game; one possible reason is that the rules of the game are not understood.  If the rules of the game are not understood, it can be difficult at best to consistently play the game well, especially if the rules are stacked against you.  Lately there have been a number of organizations looking to implement "game changing" strategies.  Again, changing the rules requires that the rules are understood.

Most fields of science have one or more major theories which are used to explain observable phenomena and provide a basis for testing and interacting with the world.  Physics has the Theory of General Relativity and the Standard Model, Chemistry has the Periodic Table and Quantum Mechanics, and Biology has Genetics and Evolution.  Despite drawing from several scientific fields of study such as Mathematics, Linguistics, and Solid State Physics, with the more recent introduction of Psychology and Economics, information security lacks a framework that provides predictive and testable hypotheses.

Some institutions have recognized that simply teaching computer science provides too narrow a focus for their curriculum, and have reorganized their departments to apply a broader, interdisciplinary approach to their studies and moved into the field of Informatics; Bioinformatics and Security Informatics are specific examples of the resulting reorganization.  There are already attempts to apply the concepts of biology to information security: building automated immune systems, predicting computer virus outbreaks with models similar to those used for their biological analogues, and implementing programs with evolutionary algorithms to facilitate machine learning.
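
As one small illustration of the epidemiological modeling mentioned above, the sketch below steps through a basic SIR (susceptible-infected-recovered) model, the family of models borrowed from biology for worm outbreak prediction.  The infection and cleanup rates are invented for illustration; real studies fit them to observed outbreak data.

```python
# Minimal SIR model of a worm outbreak, discretized by day.
beta = 0.35    # infection rate: spread per infected host per day (illustrative)
gamma = 0.10   # recovery rate: fraction of infected hosts cleaned per day (illustrative)

s, i, r = 0.99, 0.01, 0.0   # fractions of the host population (sums to 1.0)

for day in range(1, 61):
    new_infections = beta * s * i
    new_recoveries = gamma * i
    s -= new_infections
    i += new_infections - new_recoveries
    r += new_recoveries
    if day % 10 == 0:
        print(f"day {day:2d}: susceptible={s:.3f} infected={i:.3f} recovered={r:.3f}")
```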

The hypothesis being presented is that information security is an evolutionary system, similar to those occurring naturally, and can be modeled and explained by the field of evolutionary biology.  Specifically, some of the situations occurring in the field (e.g. malware) can be understood as an evolutionary arms race.  Evolutionary biology has existed for 150 years and has been able to provide an understanding of one of the most complicated natural systems in existence: life.  Some of the frameworks within evolutionary biology can be directly applied, while others may need to be modified or replaced, and others may prove not to apply at all.  Applying evolutionary biology could provide a richer understanding of the rules by which the game is being played.  Once the rules are understood, it should be possible to understand where and how they can be modified to change the game in a meaningful and substantive way.