Sunday, June 6, 2010

Adapting a Security Program to an Evolving Threat Landscape

Security programs (i.e. processes and procedures, not applications) must evolve and adapt in order to survive an evolutionary arms race.  Programs as they are implemented today are largely unable to survive in the diverse and evolving threat environment.  One prediction of the Red Queen hypothesis is that, over time, an entity's fitness declines relative to its co-evolving adversaries, because selection pressure eliminates those attacks which are unsuccessful (i.e. selection works against the unsuccessful and promotes the successful).

Security programs need to evolve in order to account for how an evolutionary arms race will affect the enterprise.  These changes will need to be more than simply employing a technology which is biologically inspired or employs an evolutionary learning algorithm.  There are a number of ways in which a security program can be adapted to increase the likelihood of surviving a diverse threat environment.  Aspects of a security program that can be adapted to an environment in which the threat is evolving include:
  • Risk Management - Moving from a "risk management" based methodology to a methodology that understands the threat environment of the enterprise. The methodology should be updated to include processes that expect that the threat will adapt to the countermeasures that are implemented, and eventually bypass them.
  • Security Testing - Moving from a "compliance checking" based testing methodology to one that mirrors how adversaries are actually attacking the enterprise.
  • System Development Life Cycle - Moving from implementing security at the end of the Software Development Life-cycle (SDLC) to implementing it throughout the SDLC. Beyond that, program management and the developers should understand that security requirements may change during development as the threat environment changes.
  • Attack Research - Monitoring and researching attacks, along with providing information about potential attacks as they are discovered.
The "classical" risk management process should evolve to account for the evolutionary arms races that exist within information security.  At a minimum, it will need to be adapted to include researching the effort required to bypass implemented countermeasures, and to ensure that processes and procedures are developed to detect the resulting compromises.  As an example, an application is determined to be vulnerable to a number of injection attacks, so the enterprise determines that it would be appropriate (and most likely less costly) to implement a Web Application Firewall (WAF).  In determining whether the WAF as implemented is effective, the enterprise should include technical evaluations to verify that the WAF will actually protect the application.  As another example, an enterprise may understand that anti-virus is largely ineffective and determine that white-listing executables is an appropriate countermeasure to the threat.  Attackers and malware have had the ability to perform DLL injection and entirely memory-only operations for several years.  Executable white-listing does not stop DLL injection, will not defend against memory-only operations, and will not protect an operating system if an attacker is able to compromise the hardware.  Beyond implementing application white-listing, the enterprise must understand how an attacker will adapt to the new countermeasures, and it will need to develop and implement methods for detecting the next round of attacks, including memory-only operations and hardware compromise.
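The WAF evaluation described above can be sketched as a simple probe harness: send known injection payloads through the WAF and record which ones it fails to block. Everything here is hypothetical; the payload list, the "blocked" heuristic, and the stubbed responder stand in for a real HTTP client pointed at the protected application.

```python
# Sketch: probe whether a WAF actually blocks common injection payloads,
# rather than assuming its mere presence equals protection.
# The payloads and the "blocked" heuristic below are illustrative.

INJECTION_PAYLOADS = [
    "' OR '1'='1",                        # classic SQL injection
    "<script>alert(1)</script>",          # reflected XSS probe
    "; cat /etc/passwd",                  # OS command injection
    "*)(uid=*))(|(uid=*",                 # LDAP injection
]

def waf_blocks(payload, send):
    """send(payload) -> (status_code, body); treat 403 or a block page as blocked."""
    status, body = send(payload)
    return status == 403 or "request blocked" in body.lower()

def evaluate_waf(send):
    """Return the payloads the WAF failed to block -- each one is a finding."""
    return [p for p in INJECTION_PAYLOADS if not waf_blocks(p, send)]

# Stubbed responder standing in for a real HTTP request to the application:
# this pretend WAF only catches quote-based SQL injection strings.
def fake_send(payload):
    if "'" in payload:
        return 403, "Request blocked by WAF"
    return 200, "<html>normal page</html>"

missed = evaluate_waf(fake_send)
print(missed)  # payloads the WAF let through
```

In a real evaluation `fake_send` would be replaced with an HTTP request against a test instance of the application, and the payload list would be drawn from attacks actually observed against the enterprise.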

Depending upon the industry in which the enterprise participates, it may be subject to a wide variety of requirements.  Federal entities are required to be compliant with the Federal Information Security Management Act (FISMA).  Part of this compliance process includes implementing the National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53 security controls according to the system's security impact level.  It is expected that most systems will be part of the 'Low' baseline.  Security controls like Information Input Validation (SI-10) and Error Handling (SI-11) are not required at this level.  The lack of input validation can lead to successful attacks and compromises resulting from buffer overflows, Cross-Site Scripting (XSS), SQL Injection (SQLi), OS command injection, LDAP injection (LDAPi), etc. Buffer overflows are common in desktop applications and operating systems, while XSS and SQLi are some of the most common causes of web application compromises.  Information Input Validation was only finalized as a requirement in Revision 3 of SP 800-53.  Requirements should be updated and applied in order to mitigate threats that are present and active in the environment.  Implementation of a security control that prevents threats from actively exploiting systems should not be held off until a system meets a higher impact level.
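The intent of a control like SI-10 can be illustrated with allow-list input validation: accept only values matching an explicit pattern, instead of trying to enumerate every dangerous character. The field names and patterns below are hypothetical examples, not part of any standard.

```python
import re

# Sketch of allow-list input validation (the intent of SP 800-53 SI-10):
# each field accepts only input matching an explicit pattern.
# Field names and patterns are illustrative assumptions.

FIELD_PATTERNS = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,32}$"),
    "zip_code": re.compile(r"^\d{5}(-\d{4})?$"),
}

def validate(field, value):
    """Return True only if the value matches the field's allow-list pattern."""
    pattern = FIELD_PATTERNS.get(field)
    return bool(pattern and pattern.fullmatch(value))

print(validate("username", "alice_01"))                     # True
print(validate("username", "alice'; DROP TABLE users;--"))  # False
print(validate("zip_code", "20850-1234"))                   # True
```

The allow-list approach rejects the SQL injection string not because it recognizes the attack, but because the input simply is not a valid username; that is what makes it robust against attack techniques that did not exist when the pattern was written.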

Another reason that the "classical" risk management methodology must be adapted is that it does not account for the operational strategies that threats are employing in the environment.  Under the recommended standard for assessing and mitigating risk, as described in the Official (ISC)2 Guide to the Certified Information Systems Security Professional (CISSP) Common Body of Knowledge (CBK), controls are implemented by weighing the value of the information being protected against the cost of the control to be implemented.  This value is the cost to the enterprise if the information is lost, stolen or otherwise exposed.  The process does not take into account the value of the system in relationship to other internal or external entities.  The methodology falls apart if the attacker does not care about the value of the information on the system, but is instead interested in the business relationships of the enterprise.  An attacker may be willing to spend significant resources to compromise the enterprise in order to attack an external entity.
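One common formulation of this cost-versus-value calculation is annualized loss expectancy (ALE); a minimal sketch of it, with entirely hypothetical dollar figures and frequencies, shows both how the classical justification works and where it breaks down.

```python
# Sketch of a classical quantitative risk calculation: a control is
# justified when its cost is less than the loss it is expected to prevent.
# All dollar figures and frequencies below are hypothetical.

def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    """ALE = SLE * ARO, where SLE = asset value * exposure factor."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate

ale_before = annualized_loss_expectancy(asset_value=500_000,
                                        exposure_factor=0.4,
                                        annual_rate=0.5)    # without control
ale_after = annualized_loss_expectancy(asset_value=500_000,
                                       exposure_factor=0.4,
                                       annual_rate=0.05)    # with control
control_cost = 30_000  # hypothetical annual cost of the countermeasure

# The control is "worth it" under the classical model:
justified = (ale_before - ale_after) > control_cost
print(ale_before, ale_after, justified)

# The model's blind spot: if the attacker's real target is a business
# partner reachable *through* this system, the asset_value input badly
# understates what the enterprise actually stands to lose.
```

The final comment is the point of the paragraph above: the formula is only as good as its inputs, and the value of a system as a stepping stone to other entities never appears in them.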

In addition to the "classical" risk management methodology, the methods used for testing should be adaptable.  Simply stated, there is more to security testing than performing a "compliance" check of the system or clicking a button to generate a security report.  Although compliance checks are required, they do very little to aid an enterprise in understanding its risk posture.  One could assume that simply subjecting the enterprise to "internal predation" by using testers would be of benefit.  Security Test and Evaluation (ST&E), Compliance Scanning, Independent Verification and Validation (IV&V), and Penetration Testing are the types of testing most enterprises use to evaluate a system.  Of those, Penetration Testing is potentially the one that most closely resembles how an attacker works to compromise a system.  It can be implemented blindly, such that penetration testers are permitted to attack the enterprise as an attacker would.  There are, however, a number of issues with simply assuming that a penetration tester will mimic an attacker:
  • Skill/Resource Levels - The penetration tester may or may not have the resources or skill to mimic an attacker.
  • Test Methodology - The penetration tester may not be permitted to attack the enterprise using the methods that an attacker is using (or even be aware of the methods that attackers have used to successfully exploit the enterprise).
  • Test Duration - An enterprise is not going to be willing to wait months or years for a penetration test to conclude.  Most enterprises do not have the resources to waste on an extended penetration test. 
  • Test Depth and Breadth - Most importantly, a penetration tester should be reporting multiple attack vectors, while an attacker only needs one.
When a penetration tester is attacking the enterprise, they will be using common assessment tools such as Nessus, the Metasploit Framework, CORE IMPACT Professional, etc.  Attackers in the wild are not using these tools to compromise an enterprise.  Attackers have custom tools which they have written, or they have access to a small community which provides limited access to specialized tools and resources.  In order to provide a more thorough simulation of an attack against the enterprise, penetration testers should have the ability to use and employ custom tools during the attack.  Penetration testers should not be the only individuals within the enterprise testing the effectiveness of countermeasures.  Internal research staff should be conducting exercises to determine that effectiveness as well.  For example, a researcher should be collecting malware samples and periodically testing whether the enterprise's malware solution can routinely detect and mitigate the threat.  They should also be able to modify samples, test with evasion/obfuscation techniques, and report on the continued effectiveness of solutions.
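The kind of internal exercise described above can be sketched without any real malware by using the standard EICAR anti-virus test string: apply trivial transformations to a known sample and check whether a purely signature-based detector still flags it. The hash-lookup "detector" here is a stand-in for a real engine, not a claim about any product.

```python
import base64
import hashlib

# Sketch: mutate a harmless detection-test sample (the EICAR string) and
# check whether an exact-hash "signature" detector still catches it.
# The detector is a simplified stand-in, not a real AV engine.

EICAR = (b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS"
         b"-TEST-FILE!$H+H*")

KNOWN_BAD_HASHES = {hashlib.sha256(EICAR).hexdigest()}

def hash_detector(sample: bytes) -> bool:
    """Simulated signature match: exact SHA-256 lookup only."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_HASHES

variants = {
    "original": EICAR,
    "appended_byte": EICAR + b"\x00",           # one trailing byte
    "base64_wrapped": base64.b64encode(EICAR),  # trivial re-encoding
}

results = {name: hash_detector(sample) for name, sample in variants.items()}
print(results)  # only the unmodified sample is caught
```

Even these one-line transformations defeat an exact-match signature, which is why the researcher's job is to keep re-testing deployed solutions against mutated samples rather than trusting a one-time evaluation.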

Some enterprises do not see any value in allowing penetration testers to use attacks which are commonly seen against the enterprise.  If they do not allow some types of attacks, they should at minimum be providing some type of mitigation to protect the enterprise.  For example, most enterprises are subjected to phishing and spear phishing, but they do not allow penetration testers to use this methodology during a test.  If the enterprise does not already have a program which actively phishes its own users, it is leaving an actively exploited vulnerability unmitigated.  Similarly, enterprises do not commonly allow clients and end points to be targeted during a penetration test, despite client-side attacks (a user visits a website and the browser is exploited and/or hijacked) being cited as a common source of enterprise compromises.

Enterprises also do not have the luxury of allowing penetration tests to run for months or years, despite attackers being able to attack the enterprise for extended periods.  An enterprise could allow a penetration tester an extended period to attack the enterprise, but it would be more beneficial to actively engage the penetration tester during the exercise.  As an example, instead of requiring a penetration tester to acquire credentials via a brute-force attack against the system, they should be provided credentials for the test.  Or, during the test, they should have limited access to the source code of an application to help them refine their attacks.  Requiring a tester to brute force an attack vector wastes both parties' time, and unnecessarily increases the time before exploitable vulnerabilities can be identified, reported, and mitigated.

Lastly, as far as testing is concerned, an attacker only needs to find one exploitable vector, while a penetration tester should find and report as many as possible.  A test should not conclude when a single exploitable vector is found; the penetration tester should continue to find as many exploitable vectors as possible.  Although testing can provide valuable information about the exploitability of vulnerabilities within a system, it typically only occurs at the end of the SDLC.  Correcting vulnerabilities at the end of development, just prior to an application being placed into production, can be expensive in time and resources.

The integration of security into the SDLC needs to be adapted to ensure that systems placed in production are capable of defending themselves against the threats inherent in the environment.  Integrating security from an evolutionary point of view into the SDLC is more than just adding static source code analysis or application/protocol fuzzing to the unit tests.  It is also more than just selecting an appropriate set of security requirements at the time requirements are collected.  Between the time an application is designed and the time it is deployed into production, the threat environment can drastically change.  It should be understood that security requirements levied against a program can change throughout the SDLC as new threats adapt and evolve in the environment.  Beyond ensuring that program management and the developers understand that requirements will change, the application should be designed to withstand attacks that the enterprise has seen or will likely see during its operational phase.  This means investigating and tracking the attacks that are actually occurring in the environment, and understanding what other attacks will occur in the near future.
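Folding even simple fuzzing into the unit tests is one concrete way to move security testing earlier in the SDLC. A minimal sketch, in which `parse_length_prefixed` is a hypothetical function under test: throw random byte strings at the parser and count any failure that is not a clean, expected rejection.

```python
import random

# Sketch: fuzzing as a unit test.  parse_length_prefixed() is a toy
# function standing in for real input-handling code under development.

def parse_length_prefixed(data: bytes) -> bytes:
    """Toy parser: first byte is a length, followed by that many bytes."""
    if not data:
        raise ValueError("empty input")
    n = data[0]
    if len(data) - 1 < n:
        raise ValueError("truncated payload")
    return data[1:1 + n]

def fuzz(parser, iterations=1000, seed=1):
    """Feed random byte strings to the parser.  Clean ValueError rejections
    are fine; any other exception is a robustness bug worth filing."""
    rng = random.Random(seed)  # fixed seed so failures are reproducible
    failures = 0
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 32)))
        try:
            parser(blob)
        except ValueError:
            pass             # expected rejection of malformed input
        except Exception:
            failures += 1    # unexpected crash -- a finding
    return failures

print(fuzz(parse_length_prefixed))  # 0 if the parser only fails cleanly
```

Run as part of the unit-test suite, a check like this catches a class of input-handling bugs on every build, rather than waiting for a penetration test at the end of development.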

In addition to monitoring attacks that the enterprise is actually encountering, the enterprise should also monitor what attacks are being used against other enterprises in the same industry, and what researchers are presenting.  Only by observing what is actually occurring in the environment can the enterprise understand how to adapt and modify security requirements as the threat environment changes.  When monitoring the research community, it should be understood that it can take between 5 and 10 years from the publication of an attack until the attack becomes commonplace, although it can occur more quickly.  Attackers are not going to publish new attacks before they use them; by contrast, researchers will publish theirs.  In some cases attackers will adopt these new attacks, and in other cases a researcher will publish details about a previously undisclosed attack that only attackers have been employing.  By watching what researchers are publishing, strategies can be planned and implemented to mitigate attacks before they become commonplace.

Implementing a security program that is able to adapt and respond to the evolving threat landscape requires more than selecting vendor solutions that use "leap ahead" or evolutionary algorithms.  It requires that the way in which a security program operates be adaptable.  In summary, the necessary adaptations range from modifying the risk management methodology that is employed, to adopting a more predatory approach to security testing, to modifying the SDLC (and the expectations of developers), to watching (and participating in) active security research.
