Software Security: An Emerging Problem In Software Engineering

The objective of this research paper is to discuss software security as a problem in software engineering. Addressing this problem leads to a new constructive approach to the modeling, specification and analysis of application-specific security requirements. The approach is based on a framework we developed before for generating and resolving obstacles to requirements achievement. Our framework considers intentional obstacles set up by attackers to break security goals. Attack trees are derived systematically through anti-goal refinement until leaf nodes are reached that are either software vulnerabilities observable by the attacker or anti-requirements implementable by this attacker. New security requirements are then obtained as countermeasures by application of threat resolution operators to the anti-requirements and vulnerabilities revealed by the analysis. The paper also introduces formal epistemic specification constructs and patterns that may be used to support a formal derivation and analysis process. The method is illustrated on a web-based system for which subtle attacks have been reported recently.

1.    Security Guarantees:
At the bottom, the crypto layer offers solid and well-established techniques for basic services such as encryption/decryption or signature; the state of the art on this layer tells us precisely what can be guaranteed, what cannot, and which problems are still left open [5]. Above the crypto layer, the security protocol layer offers a wide range of standard procedures for services such as secure communication, authentication or key exchange; the state of the art on this layer provides us with specific logics and formal techniques for verifying security protocols in order to point out errors or hidden assumptions [6,7,8]. Above the protocol layer, the system layer provides standard services, implemented in some programming language, such as remote file access; services like SSH, SSL or S-HTTP and language technologies like Authenticode, ActiveX or Java provide some level of security but are subject to multiple types of attacks such as denial of service or spoofing. Above the system layer, the application layer offers services such as web-based banking operations that must implement application-specific security requirements in terms of primitives from lower layers. The state of the art in security engineering at the application layer is much more limited [9]. This paper focuses on security engineering at the application layer exclusively. A necessary condition for application software to be secure is obviously that all application-specific security requirements be met by the software. Such requirements must therefore be engineered with great care. They need to be explicit, precise, adequate, non-conflicting with other requirements and complete. In particular, application-specific requirements should anticipate application-specific attack scenarios such as, e.g., attacks on a web-based banking application that may result in disclosure of sensitive information about bank accounts or in fraudulent money transfers.
Security requirements engineering must thus address a broader system perspective where the software environment is explicitly modeled and analyzed as well.
This paper presents some ongoing work on the security requirements engineering side of this project. We start by briefly reviewing in Section 2 some critical (meta-) requirements on development processes for high assurance systems (including security-critical ones). Then we address the problem of specifying security-related requirements and eliciting them in a goal-oriented requirements engineering framework (Section 3). Section 4 introduces security specification patterns together with some epistemic extensions to our real-time temporal logic needed to formalize them. Section 5 shows how our techniques for generating and resolving obstacles to goal achievement are extended to integrate malicious obstacles set up by attackers who want to break security goals. Section 6 addresses the generation of countermeasures. Section 7 then illustrates our method on the engineering of security requirements for web-based banking services; we show in particular how a critical episode from a real attack recently reported can be generated formally using our technique. To conclude, we summarize the contribution, compare it with related efforts and discuss various issues left open by our approach. 
2.    Problems & Challenges with High Assurance Systems: High assurance systems are computer systems where compelling evidence is required that the system delivers its services in a manner that satisfies certain critical properties such as safety, security, fault-tolerance and survivability. The elaboration, specification, analysis and documentation of application-specific security requirements is an area that has been left virtually unexplored by requirements engineering research to date (with a very few recent exceptions discussed at the end of this paper). A requirements engineering (RE) technique for security-critical systems should ideally meet the following meta-requirements.
• Early applicability: In view of the criticality of security requirements, the technique should be applicable as early as possible in the RE process, that is, to declarative assertions as they arise from stakeholder interviews and documents.
• Incrementality: The technique should support the intertwining of model building and analysis and therefore allow for reasoning about partial models.
• Analysis of alternatives: The technique should make it possible to represent and assess alternative options so that a “best” route to security can be selected.
• High assurance: The technique should allow for formal analysis when and where needed so that compelling evidence of security assurance can be provided.
• Security-by-construction: To avoid the endless cycle of defect fixes generating new defects, the RE process should be guided so that a satisfactory level of security is guaranteed by construction.
• Separation of concerns: The technique should keep security requirements separate from other types of requirements so as to allow for interaction analysis.
 
This paper presents a method for elaborating security requirements aimed at addressing the above meta requirements. The general idea is to build two models iteratively and concurrently:
• a model of the system-to-be that covers both the software and its environment and inter-relates their goals, agents, objects, operations, requirements and assumptions;
• an anti-model, derived from the model, that exhibits how specifications of model elements could be maliciously threatened, why, and by whom.
In this overall picture, security requirements are elaborated systematically by iterating the following steps:
(a) Instantiate specification patterns associated with property classes such as confidentiality, privacy, integrity, availability, authentication or non-repudiation;
(b) Derive anti-model specifications threatening such specifications;
(c) Derive alternative countermeasures to such threats and define new requirements by selection of alternatives that best meet other quality requirements from the model.
Our method builds on a goal-oriented framework we developed before for generating and resolving obstacles to requirements achievement. Our extension to malicious obstacles allows the obstacle refinement process to be guided by the attacker’s own goals and by the objective of deriving observable vulnerabilities and implementable threats. The extension also includes security-specific operators for resolving malicious obstacles.
3.    Goal-Oriented Requirements Engineering: We briefly recall some of the concepts and techniques involved in a requirements engineering method we have developed to address those requirements before discussing how such concepts and techniques can be extended to engineer security requirements in a more specific way. A goal is a prescriptive statement of intent about some system whose satisfaction in general requires the cooperation of some of the agents forming that system. Agents are active components such as humans, devices, legacy software or software-to-be components that play some role towards goal satisfaction. Some agents thus define the software whereas others define the environment. Goals may refer to services to be provided (functional goals) or to quality of service (non-functional goals). Unlike goals, domain properties are descriptive statements about the environment such as physical laws, organizational norms or policies, etc.
Goals are organized into AND/OR refinement-abstraction structures where higher-level goals are in general strategic, coarse-grained and involve multiple agents whereas lower-level goals are in general technical, fine-grained and involve fewer agents. In such structures, AND-refinement links relate a goal to a set of subgoals (called a refinement), possibly conjoined with domain properties; this means that satisfying all subgoals in the refinement is a sufficient condition in the domain for satisfying the goal. OR-refinement links relate a goal to an alternative set of refinements; this means that satisfying one of the refinements is a sufficient condition in the domain for satisfying the goal. Goal refinement ends when every subgoal is realizable by some individual agent assigned to it, that is, expressible in terms of conditions that are monitorable and controllable by that agent. A requirement is a terminal goal under responsibility of an agent in the software-to-be; an expectation is a terminal goal under responsibility of an agent in the environment (unlike requirements, expectations cannot be enforced by the software-to-be). Goals prescribe intended behaviors; they can be formalized in a real-time temporal logic. Semiformal keywords such as Achieve, Avoid, Maintain are provided to name goals according to the temporal behavior pattern they prescribe, without going into temporal logic details. Softgoals prescribe preferred behaviors; they can be used for selecting preferred alternatives in an AND/OR goal refinement graph [10]. Goals are operationalized into specifications of operations to achieve them; in the specification of an operation, a distinction is made between domain pre- and postconditions that capture what any application of the operation means in the application domain, and required pre-, trigger, and postconditions that capture requirements on the operations that are necessary for achieving the underlying goals. Goals refer to objects that can be incrementally derived from their specifications to produce a structural model of the system (represented by UML class diagrams). Objects have states defined by their attributes and links to other objects; they are passive (entities, associations, events) or active (agents). Agents are related together via their interfaces, made of the object attributes they monitor and control, respectively.
Obstacles were first introduced in [11] as a means for identifying goal violation scenarios. In declarative terms, an obstacle to some goal is a condition whose satisfaction may prevent the goal from being achieved.
An obstacle O is said to obstruct a goal G in some domain characterized by a set of domain properties Dom iff
  {O, Dom} ⊨ ¬G      (obstruction)
  Dom ⊭ ¬O           (domain consistency)
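To make the obstruction and domain-consistency conditions concrete, here is a minimal propositional sketch; the atoms and the toy domain property are invented for illustration (actual KAOS obstacles are first-order, real-time temporal assertions, so this is only a stand-in for the entailment checks):

```python
from itertools import product

def entails(premises, conclusion, n_atoms):
    """True iff every truth assignment satisfying all premises satisfies the conclusion."""
    return all(conclusion(*vals)
               for vals in product([False, True], repeat=n_atoms)
               if all(p(*vals) for p in premises))

# Atoms: g = "goal condition holds", o = "obstacle condition holds".
dom = lambda g, o: (not o) or (not g)   # toy Dom: o => not g
obs = lambda g, o: o                    # obstacle O
# Obstruction: {O, Dom} |= not G
obstructs = entails([obs, dom], lambda g, o: not g, 2)
# Domain consistency: Dom does NOT entail not O
consistent = not entails([dom], lambda g, o: not o, 2)
print(obstructs, consistent)  # True True: O obstructs G and is consistent with Dom
```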
Obstacle analysis consists in taking a pessimistic view of the goals, requirements and expectations being elaborated. The principle is to identify as many ways of obstructing them as possible in order to resolve each such obstruction, when likely and critical, so as to produce more complete requirements for more robust systems. Formal techniques for generation and AND/OR refinement of obstacles are available. The basic technique amounts to a precondition calculus that regresses goal negations ¬G backwards through known domain properties Dom. Formal obstruction patterns may be used as a cheaper alternative to shortcut formal derivations.
Both techniques allow domain properties involved in obstructions to be incrementally elicited as well. Obstacles that appear to be likely and critical need to be resolved once they have been generated. Resolution tactics are available for generating alternative resolutions, notably, goal substitution, agent substitution, goal weakening, goal restoration, obstacle prevention and obstacle mitigation. The selection of preferred alternatives depends on the degree of criticality of the obstacle, its likelihood of occurrence and on high-priority softgoals that may drive the selection. The selected resolution may then be deployed at specification time, resulting in specification transformation, or at runtime through obstacle monitoring [12].
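The regression step described above can be sketched as a rewriting over conjunctions of literals; the representation and the banking-flavored literal names below are simplified stand-ins, not the paper's actual calculus:

```python
# A formula is a frozenset of literals (a conjunction); a literal is (name, polarity).
def regress(target, domain_rules):
    """One regression step: for each domain rule A => b whose concluded
    literal b occurs in the target, replace b by the antecedent conjunction A.
    Each result is a candidate obstacle condition."""
    results = []
    for antecedent, concluded in domain_rules:
        if concluded in target:
            results.append((target - {concluded}) | antecedent)
    return results

# Negated goal not G: the attacker p gets to know the PIN.
neg_goal = frozenset({("KnowsV_p_PIN", True)})
# Toy domain property: Finds(p, x) & Match(x, Acc#) => KnowsV_p(PIN)
dom_rules = [(frozenset({("Finds_p_x", True), ("Match_x_Acc", True)}),
              ("KnowsV_p_PIN", True))]
obstacles = regress(neg_goal, dom_rules)
print(obstacles)  # one candidate obstacle: p finds an x matching the account number
```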
4.    Specification Patterns for Security Goals: The preliminary elicitation of security-related goals is driven by generic specification patterns associated with each specialization of this goal class, namely, confidentiality, integrity, availability and privacy goals. The patterns are expressed in terms of abstractions from the language meta-model and security-specific language constructs. For each subclass, the instantiation of the corresponding specification pattern to “sensitive” objects found in the object model yields corresponding candidates for application-specific security goals (to be refined if necessary). For example, the specification pattern for Confidentiality goals refers to meta-model elements such as Agent and Object (the latter having a generic attribute Info) to capture the definition of confidentiality:
Goal Avoid [SensitiveInfoKnownByUnauthorizedAgent]
FormalSpec ∀ ag: Agent, ob: Object
  ¬ Authorized (ag, ob.Info) ⇒ ¬ KnowsV_ag (ob.Info)
 
In this pattern, the Authorized predicate is generic and has to be instantiated through an application-specific domain definition; e.g., for web-based banking services we would certainly consider the instantiation Object / Account while browsing the object model and might introduce the following instantiating definition:
" ag: Agent, acc: Account
Authorized (ag, acc) º
Owns (ag, acc) Ú Proxy (ag, acc) Ú Manages (ag, acc)
In the pattern above, the epistemic operator KnowsV_ag is a security-specific construct defined by
  KnowsV_ag (v) ≡ ∃ x: Knows_ag (x = v)      (“knows value”)
  Knows_ag (P) ≡ Belief_ag (P) ∧ P           (“knows property”)
 
where the operational semantics of Belief_ag (P) is “P is among the properties stored in the local memory of agent ag”. An agent thus knows a property if that property is found in its local memory and it is indeed the case that the property holds. For web-based banking services, the “sensitive” information about accounts would be the pair of objects (Acc#, PIN), both partOf the aggregation Account, and linked in the object model through the Match association; the instantiation then yields the following confidentiality goal as candidate for inclusion in the goal graph:
Goal Avoid [PaymentMediumKnownBy3rdParty]
FormalSpec ∀ p: Person, acc: Account
  ¬ (Owns (p, acc) ∨ Proxy (p, acc) ∨ Manages (p, acc))
    ⇒ ¬ [ KnowsV_p (acc.Acc#) ∧ KnowsV_p (acc.PIN) ]
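The belief-versus-knowledge distinction behind KnowsV can be made concrete with a small sketch; the agent names, attribute keys and PIN value below are invented for illustration:

```python
class Agent:
    """Minimal epistemic sketch (illustrative, not the paper's machinery):
    Belief_ag(P) holds iff P is stored in ag's local memory; Knows_ag(P)
    additionally requires P to actually hold in the world."""
    def __init__(self, name):
        self.name = name
        self.memory = {}                      # attribute -> believed value

    def believes_value(self, attr):
        return attr in self.memory            # Belief: stored locally

    def knows_value(self, attr, world):
        # KnowsV_ag(v): some x the agent believes equal to v, and it indeed is.
        return attr in self.memory and self.memory[attr] == world.get(attr)

world = {"acc.PIN": "4921"}                   # hypothetical ground truth
ag = Agent("p")
ag.memory["acc.PIN"] = "0000"                 # wrong guess: belief, no knowledge
print(ag.believes_value("acc.PIN"), ag.knows_value("acc.PIN", world))  # True False
ag.memory["acc.PIN"] = "4921"
print(ag.knows_value("acc.PIN", world))       # True: belief matching reality
```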
The same principle can be applied to elicit instantiations from the generic patterns for Privacy, Integrity and Availability goals, namely:
 
Goal Maintain [PrivateInfoKnownOnlyIfPermittedByOwner]
FormalSpec ∀ ag, ag′: Agent, ob: Object
  KnowsV_ag (ob.Info) ∧ OwnedBy (ob.Info, ag′) ∧ ag ≠ ag′
    ⇒ AuthorizedBy (ag, ob.Info, ag′)

Goal Maintain [ObjectInfoChangeOnlyIfCorrectAndByAuthorizedAgent]
FormalSpec ∀ ag: Agent, ob: Object, v: Value
  ob.Info = v ∧ ○ (ob.Info ≠ v) ∧ UnderControl (ob.Info, ag)
    ⇒ Authorized (ag, ob.Info) ∧ ○ Integrity (ob.Info)

Goal Achieve [ObjectInfoUsableWhenNeededAndAuthorized]
FormalSpec ∀ ag: Agent, ob: Object, v: Value
  Needs (ag, ob.Info) ∧ Authorized (ag, ob.Info)
    ⇒ ◇≤d Using (ag, ob.Info)
 
Specifications of application-specific security goal candidates are thus obtained from such specification patterns by (a) instantiating meta-classes such as Object, Agent and generic attributes such as Info to application-specific sensitive classes, attributes and associations in the object model, and (b) specializing predicates such as Authorized, UnderControl, Integrity or Using through substitution by application-specific definitions.
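Step (a) amounts to template substitution over the pattern text; the following sketch uses a simplified ASCII pattern string and binding names as stand-ins for the KAOS meta-model (step (b) would further substitute the Authorized predicate by its domain definition):

```python
# Simplified confidentiality pattern with meta-variable placeholders (illustrative).
CONFIDENTIALITY_PATTERN = (
    "Goal Avoid[{info}KnownByUnauthorizedAgent]\n"
    "FormalSpec forall ag: {agent}: "
    "not {authorized}(ag, {obj}.{info}) => not KnowsV_ag({obj}.{info})"
)

def instantiate(pattern, **bindings):
    """Bind meta-classes and generic attributes to application-specific names."""
    return pattern.format(**bindings)

goal = instantiate(CONFIDENTIALITY_PATTERN,
                   agent="Person", obj="acc", info="PIN",
                   authorized="Authorized")
print(goal)
```

Browsing the object model for “sensitive” classes then yields one candidate goal per binding, to be refined or discarded by the analyst.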
In the context of security it may be worth recalling that “sensitive” objects found in the object model can be either passive (entities, associations, events) or active (agents).

5.    Building Intentional Threat Models: As noted in [Lam00b], obstacle refinement trees for security goals correspond to the popular hazard trees (sometimes called threat trees or attack trees) used frequently for modeling or documenting known hazards or potential attacks [13, 14, 15, 16]. There are two important differences, however.
• Obstacle trees are goal-anchored; the analyst thus knows exactly where to start the analysis from – it is often much easier to concentrate first on what is desired rather than on what is not desired.
• Obstacle refinements are formalized in our real-time temporal logic and can therefore be generated systematically from goal assertions and domain properties using regression techniques or formal refinement patterns.
In the context of security engineering, standard obstacle analysis appears to be too limited for handling malicious obstacles, for the following reasons.
• The goals underlying malicious obstacles are not captured; one can therefore not use them for driving the obstacle refinement process.
• There is no modeling of attacker agents and their capabilities in terms of operations they can perform and objects they can monitor/control; one can therefore not reason about them.
• Software vulnerabilities are not explicit in standard models; one can therefore not reason about them or, better, derive them.
• The outcome of the obstacle likelihood and criticality assessment process may be quite different; for example, standard obstacle analysis for a ground collision avoidance component of an air traffic control system might miss the obstacle of multiple planes crashing into adjacent buildings at almost the same time, or assess it to be extremely unlikely, whereas the explicit incorporation of terrorist agents and their goal of causing major damage to symbols of economic power might result in totally different conclusions.
Richer models should thus be built to capture attackers, their goals and capabilities, the software vulnerabilities they can monitor or control, and attacks that satisfy their goals based on their capabilities and on the system’s vulnerabilities. Let us call such richer models anti-models, and call the intentional obstacles to security goals anti-goals (we want, of course, to distinguish them clearly from the goals the system under consideration should satisfy). Anti-models should lead to the generation of more subtle attacks, the discarding of non-realizable or unlikely ones, and the derivation of more effective, “customized” resolutions. Anti-models can be built systematically as follows.
1. Root anti-goals are obtained just by negation of Confidentiality, Privacy, Integrity and Availability goals instantiated to sensitive objects from the object model as suggested in the previous section.
2. For each anti-goal, potential attacker agents are elicited from questions such as “WHO can benefit from this anti-goal?” (Known attacker taxonomies may help answering such questions.)
3. For each anti-goal and corresponding attacker class(es) identified, the attacker’s higher-level anti-goals are elicited from questions such as “WHY would instances of this attacker class want to achieve this anti-goal?”
4. From the resulting anti-goal graph fragments, a dual AND/OR refinement/abstraction process can start with the aim of reaching terminal anti-goals that are realizable by the attacker agents identified or by attackee software agents. The former are anti-requirements assigned to the attacker whereas the latter are vulnerabilities assigned to the attackee. (This fits the Common Criteria definition of a vulnerability, namely, “a condition of an agent that, in conjunction with a threat, can lead to security requirement violation”.) The AND/OR refinement/abstraction process can be guided by the following techniques:
i. asking “HOW?” and “WHY?” questions about the anti-goals already available,
ii. regressing anti-goal assertions further through available domain properties,
iii. applying formal refinement patterns – notably, the milestone and decomposition-by-case patterns, the resolve-lack-of-monitorability and resolve-lack-of-controllability patterns, and the obstruction patterns mentioned earlier.
5. During anti-goal refinement/abstraction, the corresponding object and agent anti-models are derived from anti-goal formulations; the boundary and monitoring/control interfaces between the anti-machine (under the attacker’s control) and the anti-environment (which includes the software attackee) are thereby derived.
6. Finally, the anti-requirements are AND/OR operationalized in terms of the capabilities of the corresponding attacker agent – which may include blind or intelligent searching, eavesdropping, deciphering, spoofing, etc.
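The AND/OR refinement structure produced by these steps can be sketched as a small tree whose leaves are either anti-requirements (assigned to the attacker) or vulnerabilities (assigned to the attackee); the node names and role assignments below are illustrative, patterned on the banking example of Section 7:

```python
from dataclasses import dataclass, field

@dataclass
class AntiGoal:
    name: str
    kind: str = "AND"                    # "AND" / "OR" refinement of children
    children: list = field(default_factory=list)
    leaf_role: str = None                # "anti-requirement" | "vulnerability"

    def realizable(self):
        """An anti-goal is achievable once refinement bottoms out in leaves
        assigned either to the attacker or to a vulnerable attackee agent."""
        if not self.children:
            return self.leaf_role is not None
        hits = [c.realizable() for c in self.children]
        return all(hits) if self.kind == "AND" else any(hits)

tree = AntiGoal("PaymentMediumKnownBy3rdParty", kind="OR", children=[
    AntiGoal("PIN search for a known Acc#", kind="AND", children=[
        AntiGoal("KnowsV(Acc#)", leaf_role="vulnerability"),
        AntiGoal("Exhaustive PIN guessing", leaf_role="anti-requirement"),
    ]),
    AntiGoal("Acc# search for a fixed PIN", kind="AND", children=[
        AntiGoal("KnowsV(PIN)", leaf_role="anti-requirement"),
        AntiGoal("Unbounded Acc# submissions accepted",
                 leaf_role="vulnerability"),
    ]),
])
print(tree.realizable())   # True: at least one OR branch is fully realizable
```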
The above steps result in a multi-component anti-model relating the attackers, their anti-goals, referenced objects and anti-operations to achieve their anti-goals, to the attackees, their goals, objects, operations and vulnerabilities. If goals are formalized in our real-time temporal logic, the logical models of anti-goals are sets of attack scenarios. Bounded SAT solvers can then be used to generate such scenarios automatically.
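The scenario-generation step can be approximated by brute-force enumeration of bounded action traces, a naive stand-in for the bounded SAT search mentioned above; the account-number space, Match data and attacker knowledge are invented:

```python
from itertools import product

# Hypothetical tiny account-number space and one fixed PIN the attacker holds.
ACCOUNTS = ["00", "01", "10", "11"]
MATCH = {"10": "4921"}        # Match association: Acc# -> PIN (assumed data)
attacker_pin = "4921"

def scenarios(bound):
    """Enumerate bounded sequences of Submit(acc#) actions and keep those
    whose final state satisfies the anti-goal 'attacker knows a matching
    (Acc#, PIN) pair' -- each kept trace is one attack scenario."""
    for trace in product(ACCOUNTS, repeat=bound):
        matching = [a for a in trace if MATCH.get(a) == attacker_pin]
        if matching:
            yield trace, matching[0]

attack = next(scenarios(bound=3))
print(attack)   # first bounded trace realizing the anti-goal
```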
Once rich attack models have been derived systematically in this way, the last (but not least) step is to exploit them to explore alternative resolutions for each derived anti-requirement. A preferred resolution is then selected based on the criticality and likelihood of the corresponding anti-requirement, to produce new security requirements.
 
6.    Generating Countermeasures: Once intentional anti-models have been built systematically in this way, the next step in the model/anti-model building cycle is to consider alternative countermeasures to the various vulnerabilities and anti-requirements found. A preferred countermeasure is then selected based on (a) the severity and likelihood of the corresponding threat, and (b) non-functional goals that have been identified in the primal goal model. The selected countermeasure yields a new security goal to be integrated in the latter model. Alternative countermeasures may be produced systematically using operators similar to those used for resolving obstacles, e.g.,
• Goal substitution: develop an alternative refinement of the parent goal to prevent the original subgoal from being threatened by the anti-goal;
• Agent substitution: replace a vulnerable agent assigned to a threatened goal by a less vulnerable one for the threatening anti-goal;
• Goal weakening: weaken the specification of the goal being threatened so as to make it circumvent the threatening anti-goal;
• Goal restoration: introduce a new goal prescribing appropriate restoration measures in states where the goal has been threatened by the anti-goal;
• Anti-goal mitigation: tolerate the anti-goal but mitigate its effects;
• Anti-goal prevention: add a new goal requiring the anti-goal to be Avoided.
  The above list may be extended with security-specific operators such as the following:
• Protect vulnerability: make the derived vulnerability condition unmonitorable by the attacker.
• Defuse threat: make the derived anti-requirement condition uncontrollable by the attacker.
• Avoid vulnerability: add a new goal requiring the software vulnerability condition to be Avoided.
Once alternative anti-goal resolutions have been produced by application of such operators, a preferred one has to be selected based on how critical the goal being threatened is and on how well the resolution meets other non-functional goals. The NFR qualitative framework may be used here to support the selection process [10]. The new security goal thereby retained has to be AND/OR refined in turn until requirements/expectations are reached. A new model/anti-model building cycle may then be required. For anti-goals that are not too severe, resolutions may be deferred from specification time to run time using anti-goal monitoring and intrusion detection technology [12].
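The selection among alternative countermeasures can be sketched as a qualitative scoring against softgoals, in the spirit of NFR-style evaluation; the softgoal weights, countermeasure names and contribution values below are purely illustrative:

```python
# Hypothetical softgoal weights and per-countermeasure contributions (-1..+1).
softgoal_weights = {"Usability": 0.4, "Cost": 0.2, "Security": 0.4}
alternatives = {
    "Limit PIN attempts":        {"Usability": 0.5, "Cost": 0.8, "Security": 0.6},
    "Limit Acc# submissions":    {"Usability": 0.7, "Cost": 0.7, "Security": 0.9},
    "Two-factor authentication": {"Usability": -0.2, "Cost": -0.5, "Security": 1.0},
}

def score(contrib):
    """Weighted sum of a countermeasure's contributions to each softgoal."""
    return sum(softgoal_weights[s] * v for s, v in contrib.items())

best = max(alternatives, key=lambda a: score(alternatives[a]))
print(best)   # the countermeasure best balancing the weighted softgoals
```

A real NFR analysis is qualitative (satisficing labels propagated through a softgoal graph) rather than numeric; the numbers here merely make the trade-off mechanical.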
7.    Example: Web-Based Banking Services: We illustrate the critical steps (1) and (4) of the above procedure for web-based banking services and then illustrate what the resolutions might be. As a result of step (1) we obtain the following anti-goal by negation of the confidentiality goal Avoid [PaymentMediumKnownBy3rdParty] obtained by instantiation in Section 4:
AntiGoal Achieve [PaymentMediumKnownBy3rdParty]
FormalSpec ◇ ∃ p: Person, acc: Account
  ¬ (Owns (p, acc) ∨ Proxy (p, acc) ∨ Manages (p, acc))
  ∧ KnowsV_p (acc.Acc#) ∧ KnowsV_p (acc.PIN)
 
By asking “what are sufficient conditions for someone unauthorized to know the number and PIN of an account simultaneously?” and using the symmetry and [1:1, 1:N] multiplicity of the Match association between account numbers and PINs in the object model we elicit domain properties such as:
" p: Person, acc: Account
Ø (Owns (p, acc) Ú Proxy (p, acc) Ú Manages (p, acc) )
Ù KnowsVp (acc.Acc#)
Ù ($ x: PIN) (Finds (p, x) Ù Match (x, acc.Acc#)
Þ KnowsVp (acc.Acc#) Ù KnowsVp (acc.PIN)
Ø (Owns (p, acc) Ú Proxy (p, acc) Ú Manages (p, acc) )
Ù KnowsVp (acc.PIN)
Ù ($ y: Acc#) (Finds (p, y) Ù Match (acc.PIN, y)
Þ KnowsVp (acc.Acc#) Ù KnowsVp (acc.PIN)
Regressing the above anti-goal backwards through these domain properties, we obtain an OR-refinement of that anti-goal into two alternative anti-subgoals:
AntiGoal Achieve [PaymentMediumKnownBy3rdPartyFromPinSearching]
FormalSpec ◇ ∃ p: Person, acc: Account
  ¬ (Owns (p, acc) ∨ Proxy (p, acc) ∨ Manages (p, acc))
  ∧ KnowsV_p (acc.Acc#)
  ∧ (∃ x: PIN) [ Finds (p, x) ∧ Match (x, acc.Acc#) ]

AntiGoal Achieve [PaymentMediumKnownBy3rdPartyFromAccountNumberSearching]
FormalSpec ◇ ∃ p: Person, acc: Account
  ¬ (Owns (p, acc) ∨ Proxy (p, acc) ∨ Manages (p, acc))
  ∧ KnowsV_p (acc.PIN)
  ∧ (∃ y: Acc#) [ Finds (p, y) ∧ Match (acc.PIN, y) ]
The standard way of resolving the first, well-known anti-goal is to limit the number of PIN entries (e.g., to three attempts). The second anti-goal is more subtle and corresponds to a sophisticated, real attack reported recently. Resolution through the obstacle prevention operator would produce the new requirement that the possibility of exhaustive search for account numbers matching some fixed PIN should be avoided. No vulnerability anti-goal assigned to the attackee appeared here as a conjunct needed to achieve the higher-level anti-goal. In a similar derivation from a confidentiality requirement about orders and personal information in a web-based CD sales system, we derived the vulnerability found in reported attacks on a well-known CD sales company, namely, that order numbers appear in the page URL shown by the order tracking service.
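The subtlety of the second anti-goal can be seen in a toy simulation: a per-account limit on PIN entries does not block an attacker who fixes one PIN and searches over account numbers, since each account sees only a single attempt (the accounts, PINs and limit below are all invented):

```python
# Hypothetical bank: per-account PIN-attempt counter, but no limit on how many
# different account numbers a client may try.
PINS = {"1001": "1111", "1002": "2222", "1003": "4921"}
attempts = {acc: 0 for acc in PINS}
MAX_PIN_TRIES = 3

def try_login(acc, pin):
    """Reject after MAX_PIN_TRIES wrong PINs on the SAME account."""
    attempts[acc] += 1
    return attempts[acc] <= MAX_PIN_TRIES and PINS.get(acc) == pin

# Anti-requirement: fix one PIN, iterate over account numbers; each account
# is tried only ONCE, so the per-account lockout never triggers.
fixed_pin = "4921"
found = next(acc for acc in PINS if try_login(acc, fixed_pin))
print(found)   # a matching Acc# found without tripping any lockout
```

This is exactly why the prevention requirement must bound searches over account numbers, not just PIN entries.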
8.    Related work: A few proposals have been made recently for extending existing modeling notations to capture attacker features at requirements engineering time -- notably, through misuse cases that complement UML use cases [17], or abuse frames that complement problem frames to define the scope and boundary of anti-requirements on the attacker. Goal-based approaches have already been used to address security concerns at requirements engineering time, notably, through security goal taxonomies. For example, the NFR framework allows known security properties to be organized as AND/OR goal graphs [10]. Anton, Earp and Reese propose a rich taxonomy of privacy goals based on their analysis of 23 internet privacy policies for a variety of companies in the health care industry. The work closest in spirit to ours is [18]; they propose an extension of the i* agent-based requirements engineering framework [10] to identify attackers, analyze vulnerabilities through agent dependency links and suggest countermeasures. The main difference is that they start from agents involved in the system rather than from goals being threatened. In contrast with our approach, they identify insider attackers only, that is, system stakeholders that were identified before in the primal model and might be suspect. The malicious goals owned by such attackers are not modelled explicitly. Their methodology provides no formal techniques for building threat models. The strength of their approach is the propagation of vulnerabilities along agent dependency links. The principle of building a catalogue of known threat tree patterns for documentation and reuse through instantiation is nicely illustrated in [19]. Sheyner et al. use model checking technology to generate and analyze attack graphs.
Given a state machine model N of the network under attack, a model A of the attacker’s capabilities and a desired security property S, their tools produce all possible counterexample scenarios found when trying to verify N, A ⊨ S. Our work may be seen as an “upstream” complement to their approach; it can be applied sooner in the process to point out security problems earlier, at the requirements level, by generating attacks from declarative goal/anti-goal models (as opposed to operational state machine models that need to be appropriately built up to reflect such declarative specifications adequately).
 
9.    Conclusion: Our focus in this paper was on security engineering at the application layer exclusively; compared with the crypto, protocol and system/language layers, the application layer has received much less attention to date. As a prerequisite for security assurance at this layer, analysts must ensure that application-specific security requirements are made explicit, precise, adequate, non-conflicting with other requirements and complete. We presented a requirements engineering method for elaborating security requirements based on the incremental building and specification of two concurrent models: an intentional model of the system-to-be and an intentional anti-model yielding the vulnerabilities and capabilities required for achieving anti-goals that threaten security goals from the original model. The method allows threat trees to be derived systematically through anti-goal refinement until leaf nodes are reached that are either attackee vulnerabilities observable by the attacker or anti-requirements implementable by this attacker. The original model is then enriched with new security requirements derived as countermeasures to the anti-model.
This approach extends our KAOS framework for goal-oriented requirements elaboration by (a) extending the specification language with epistemic constructs for reasoning about the attacker’s knowledge, (b) providing a pattern-based approach for the preliminary formal elicitation of security requirements from generic specifications over security-related meta-variables, (c) introducing dual modeling concepts for richer modeling of the attacker’s universe, and (d) adapting our goal-oriented method to the elaboration of dual anti-models from which more robust security requirements can be derived. In such dual models, goals, requirements, expectations about the environment and software operations now become intentional obstacles to security requirements, implementable anti-requirements, software vulnerabilities and attack primitives, respectively. Our approach to attack tree generation is goal-directed; it requires some preliminary elicitation of application-specific security goals, e.g., through instantiation of generic security requirement specifications to application-specific “sensitive” objects. This appears to us to be the most natural way of coping with security issues at the application level – which is the primary concern of our work. Although the original goal-oriented model building method has been validated extensively (over 25 industrial projects in a wide variety of domains), its transposition to anti-model building has been limited to four case studies so far: the one used for illustration in this paper, a popular smartcard-based payment system used in Belgium, a web-based CD order tracking system and an e-commerce system relying on an independent payment processing agent.
In particular, a similar derivation from a confidentiality goal about order information allowed us to derive vulnerabilities found in reported attacks on a well-known CD sales company, namely, (a) confidential order information was obtainable just by submitting guessable order numbers, and (b) order numbers appeared in the URL of the page displayed by the order tracking service. For the e-commerce system with independent payment processing, the anti-model revealed a man-in-the-middle attack making malicious use of the system's services; this was obtained by anti-goal regression through the system's goals. In the latter case study, the initial anti-goal was not a negated instantiation of a security goal pattern but simply a negated functional goal ("the item is sent but not paid"). This case study also suggested the need for distinguishing vulnerabilities that are more critical to anti-goal achievement than others.
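Vulnerability (a) and one typical countermeasure can be sketched concretely. This is a minimal illustration under our own assumptions, not the countermeasure actually deployed by the company: the function names are hypothetical, and `secrets.token_urlsafe` is just one standard way to make identifiers non-enumerable (it also mitigates (b) somewhat, since a leaked URL no longer reveals a guessable numbering scheme).

```python
import secrets

def guessable_order_id(counter: int) -> str:
    """The vulnerable scheme: sequential numbers an attacker can enumerate."""
    return f"ORD-{counter:06d}"  # ORD-000001, ORD-000002, ... trivially guessable

def unguessable_order_id() -> str:
    """Countermeasure sketch: a high-entropy token (~128 bits) that cannot
    be enumerated by stepping through candidate values."""
    return secrets.token_urlsafe(16)
```

The point for requirements elaboration is that "order numbers shall not be guessable" emerges as a derived security requirement countering the corresponding leaf of the anti-model, rather than being postulated up front.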
This leaves a number of issues still open, e.g.:
• When does the cycle “requirement/anti-requirement/countermeasure” terminate?
• Can we build richer catalogues of threat patterns and corresponding “best” countermeasure patterns?
• To what extent do boundary conditions for conflict among multiple goals play the role of covert channels that can be exploited by attackers to satisfy their anti-goals?
• How do we incorporate trust models in this framework and model trustworthy agents?
• Can we incorporate probabilistic frameworks to reason about partial satisfaction of goals/anti-goals and determine the likelihood of anti-goal occurrences?