Quality Function Deployment

Introduction

Quality Function Deployment was developed by Yoji Akao in Japan in 1966. By 1972 the power of the approach had been well demonstrated at the Mitsubishi Heavy Industries Kobe Shipyard [1], and in 1978 the first book on the subject was published in Japanese; it was later translated into English in 1994 [2].

The QFD methodology can be used for both tangible products and non-tangible services, including manufactured goods, service industry, software products, IT projects, business process development, government, healthcare, environmental initiatives, and many other applications.

What is Quality Function Deployment (QFD)?

Definition

Quality function deployment (QFD) is a “method to transform user demands into design quality, to deploy the functions forming quality, and to deploy methods for achieving the design quality into subsystems and component parts, and ultimately to specific elements of the manufacturing process,” as described by Dr. Yoji Akao, who originally developed QFD.

Moreover, Quality Function Deployment is a systematic approach to design based on a close awareness of customer desires, coupled with the integration of corporate functional groups. It consists of translating customer desires (for example, the ease of writing for a pen) into design characteristics (pen ink viscosity, pressure on the ball-point) for each stage of product development. [1] [2]

Goals

There are 3 main goals in implementing QFD [1]:

  1. Prioritize spoken and unspoken customer wants and needs.
  2. Translate these needs into technical characteristics and specifications.
  3. Build and deliver a quality product or service by focusing everybody toward customer satisfaction.

Usage of QFD

Since its introduction, Quality Function Deployment has helped to transform the way many companies:

  • Plan new products
  • Design product requirements
  • Determine process characteristics
  • Control the manufacturing process
  • Document already existing product specifications
  • Reduce time to market
  • Reduce product development time by 50%

The Quality Function Deployment Process

  • Identify the Customers
  • Determine Customer Requirements/Constraints
  • Prioritize each requirement
  • Competitive Benchmarking
  • Translate Customer Requirements into Measurable Engineering specifications
  • Set Target values for each Engineering Specification

QFD uses some principles from Concurrent Engineering in that cross-functional teams are involved in all phases of product development.  Each of the four phases in a QFD process uses a matrix to translate customer requirements from initial planning stages through production control.

Each phase, or matrix, represents a specific aspect of the product’s requirements. Relationships between elements are evaluated for each phase.  Only the most important aspects of each phase are deployed into the next matrix [1].

  • Phase 1, Product Planning: building the House of Quality. Initiated by the marketing department, Phase 1 is also called The House of Quality. Many organizations get through only this phase of a QFD process. Phase 1 documents customer requirements, warranty data, competitive opportunities, product measurements, competitive product measures, and the technical ability of the organization to meet each customer requirement. Getting good data from the customer in Phase 1 is critical to the success of the entire QFD process.
  • Phase 2, Product Design: Phase 2 is initiated by the engineering department. Product design requires creativity and innovative team ideas. Product concepts (goals and objectives) are created during this phase and part specifications are documented. Parts that are determined to be most important to meeting customer needs are then deployed into process planning, the next phase (Phase 3).
  • Phase 3, Process Planning: Process planning comes next and is owned by manufacturing engineering. During process planning, manufacturing processes are flowcharted and process parameters (or target values) are documented.
  • Phase 4, Process Control: Finally, in production planning, performance indicators are created to monitor the production process, maintenance schedules, and skills training for operators. In this phase, decisions are also made about which processes pose the most risk, and controls are put in place to prevent failures. The quality assurance department, in concert with manufacturing, leads Phase 4.

[Figure: the four QFD phases]

QFD Tools

The House of Quality

The House of Quality is a diagram [3], resembling a house, used for defining the relationship between customer desires and the firm’s/product’s capabilities. It is part of Quality Function Deployment (QFD) and utilizes a planning matrix to relate what the customer wants to how the firm that produces the product is going to meet those wants.

The House of Quality first appeared in 1972 in the design of an oil tanker by Mitsubishi Heavy Industries. Akao has reiterated numerous times that a House of Quality is not QFD; it is just one example of a QFD tool.

[Figure: House of Quality matrix]
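As a rough illustration of the planning-matrix arithmetic behind a House of Quality, the sketch below computes a weighted priority for each technical characteristic (“how”) from customer needs (“whats”), importance weights, and the common 9/3/1 relationship scale. All names and numbers are hypothetical illustration data, not taken from a real study.

```python
# Minimal House of Quality prioritization sketch (hypothetical data).
# Each technical characteristic's priority = sum over customer needs of
# (need importance) x (relationship strength: 9 = strong, 3 = moderate, 1 = weak).

customer_weights = {          # "whats" and their importance (1-5)
    "easy to write": 5,
    "does not smear": 4,
    "long lasting": 3,
}

relationships = {             # "what" -> {"how": relationship strength}
    "easy to write":  {"ink viscosity": 9, "ball-point pressure": 3},
    "does not smear": {"ink viscosity": 3, "drying time": 9},
    "long lasting":   {"ink reservoir volume": 9},
}

technical_priority = {}
for need, weight in customer_weights.items():
    for characteristic, strength in relationships[need].items():
        technical_priority[characteristic] = (
            technical_priority.get(characteristic, 0) + weight * strength
        )

# Rank the "hows" by their weighted importance to the customer "whats".
for characteristic, score in sorted(technical_priority.items(), key=lambda kv: -kv[1]):
    print(f"{characteristic}: {score}")
```

Ranking the “hows” this way is what lets a team decide which engineering characteristics deserve target values and attention first.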

Decision-matrix method

Invented by Stuart Pugh, the decision-matrix method [4], also known as the Pugh method or Pugh concept selection, is a quantitative technique used to rank the multi-dimensional options of an option set. It is frequently used in engineering for making design decisions, but it can also be used to rank investment options, vendor options, product options, or any other set of multidimensional entities.

[Figure: decision matrix]
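A minimal sketch of how a Pugh evaluation can be tallied: each candidate concept is scored against a reference “datum” concept as better (+1), the same (0), or worse (-1) on every criterion, and the scores are summed. The criteria and concepts below are hypothetical.

```python
# Minimal Pugh (decision-matrix) sketch with hypothetical concepts and criteria.
criteria = ["cost", "ease of use", "durability", "weight"]

# Score of each concept relative to the datum: +1 better, 0 same, -1 worse.
concepts = {
    "concept A": {"cost": +1, "ease of use": 0, "durability": -1, "weight": +1},
    "concept B": {"cost": -1, "ease of use": +1, "durability": +1, "weight": 0},
    "concept C": {"cost": 0, "ease of use": 0, "durability": +1, "weight": -1},
}

for name, scores in concepts.items():
    total = sum(scores[c] for c in criteria)
    better = sum(1 for c in criteria if scores[c] > 0)
    worse = sum(1 for c in criteria if scores[c] < 0)
    print(f"{name}: net {total:+d} ({better} better, {worse} worse than the datum)")
```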

Modular Function Deployment

Modular Function Deployment [5] uses QFD to establish customer requirements and to identify important design requirements, with a special emphasis on modularity.

Example of QFD using house of quality

This particular QFD example was created for an imaginary Chocolate Chip Cookie Manufacturer (a.k.a. a “Bakery”). The example maps customer requirements to parts/materials to be purchased in order to meet and/or exceed the customer expectations. (The prioritization comes into play when assuming the limited availability of funds for making purchases.) [6]

The example is accessible at: http://www.qfdonline.com/qfd-tutorials/house-of-quality-qfd-example/


Findings of the example:

  • The QFD ends with HOQ #3. This is due primarily to the fact that all of its parts/materials are purchased rather than manufactured. Had a different product been chosen, a fourth HOQ could have been added that mapped parts/materials attributes to processes and/or initiatives for manufacturing the parts that meet those specifications.
  • The “Weight” requirement (column #4) in HOQ #1 may not be a valuable requirement. You can tell that this requirement is suspect by the fact that its “Max Relationship Value in Column” is only 1. (Note: the template auto-highlights warning values).
  • The “Weight” requirement (row #4) in HOQ #2 is not being addressed. Similarly, “Tensile Ultimate Strength” (Row #3) and “Size (diameter)” (Row #5) are not being substantially addressed. (Note their “Max Relationship Value in Row” values).
  • HOQ #3 has examples of both of the issues described in the first two findings above.

 

References

[1] Sullivan, 1986.

[2] Mizuno and Akao, 1994.

[3] I. R. Institute, “Quality Function Deployment,” Creative Industries Research Institute.

[4] Wikipedia, “Quality function deployment,” Wikipedia, [Online]. Available: http://en.wikipedia.org/wiki/Quality_function_deployment. [Accessed 7 1 2012].

[5] Wikipedia, “House of Quality,” Wikipedia, [Online]. Available: http://en.wikipedia.org/wiki/House_of_Quality. [Accessed 1 7 2012].

[6] Wikipedia, “Decision matrix method,” Wikipedia, [Online]. Available: http://en.wikipedia.org/wiki/Decision-matrix_method. [Accessed 1 7 2012].

[7] Wikipedia, “Modular Function Deployment,” Wikipedia, [Online]. Available: http://en.wikipedia.org/wiki/Modular_Function_Deployment. [Accessed 1 7 2012].

[8] Q. Online, “House of Quality (QFD) Example,” QFD Online, [Online]. Available: http://www.qfdonline.com/qfd-tutorials/house-of-quality-qfd-example/. [Accessed 4 7 2012].

 

Difference between Software Architecture, Software Structure, and Software Design

Introduction

Over the past 10 to 15 years, software architecture has become widespread in the software engineering community, to the extent that there are now many career positions for software architects, such as Technical Architect and Chief Architect. The term architecture is also used in many other domains, for example to describe the internal structure of a microprocessor or the structure of machines or networks. For that reason, trying to find one widely accepted definition of software architecture is not easy, and this issue is acknowledged in many books when their authors start defining software architecture.

“Trying to define a term such as software architecture is always a potentially dangerous activity. There really is no widely accepted definition by the industry.”

(Gorton, 2011)

This post highlights the differences between software architecture, software design, and software structure, and the interrelations among them.

Software Architecture vs. Software Structure

Software architecture

The most widely accepted definition comes from work done by the Software Architecture group of the Software Engineering Institute (SEI) at Carnegie Mellon University in Pittsburgh.

“The architecture of a software-intensive system is the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them.”

(Bass, Clements, & Kazman, 2003)

Software Structure

Software structure can be confusing: it may seem that there is no difference between it and software architecture, and there is no solid definition of system structure either. Software structures come in two types, static and dynamic.

“The static structures of a software system define its internal design-time elements and their arrangement”

“The dynamic structures of a software system define its runtime elements and their interactions“

(Bass, Clements, & Kazman, 2003)

The following analogy with the human body illustrates what software structure means. The neurologist, the orthopedist, the hematologist, and the dermatologist all take a different perspective on the structure of a human body. Ophthalmologists, cardiologists, and podiatrists concentrate on subsystems. The kinesiologist and psychiatrist are concerned with different aspects of the entire arrangement’s behavior. Although these perspectives are pictured differently and have totally different properties, all are related; together they describe the architecture of the human body. (Maya & Merson, 2008)

A similar example is the engineering structure of a simple house. Some structures are made for electrical purposes, others for sanitation, and others for load-bearing. Each represents a different structure with a specific role and purpose. All of these structures must be consistent with each other to create the architecture, and no single one of them can represent the architecture on its own.

The same holds for software: it contains different views for various purposes and stakeholders, and each view is a structure. Developers need a software structure to help them implement the requirements. System integrators need to know the interrelations between components, their properties, and their behavior. Testers need to know the inputs and the expected outputs. The paragraph quoted below completes the relation between the structures and the architecture.

“None of these structures alone is the architecture, although they all convey architectural information. The architecture consists of these structures as well as many others. This example shows that since architecture can comprise more than one kind of structure, there is more than one kind of element (e.g., implementation unit and processes), more than one kind of interaction among elements (e.g., subdivision and synchronization), and even more than one context (e.g., development time versus runtime). By intention, the definition does not specify what the architectural elements and relationships are.”

(Maya & Merson, 2008)

 

It is worth mentioning that structures carry a more detailed level of information about each component; for example, the structure of the deployment infrastructure may include many details about processing power, storage, and shared memory. This detailed information may not appear at the architecture level.

It can be argued that a structure is architecture, since it describes the elements and components from the perspective of the stakeholders whose goals it serves. In other words, it is architecture from their own perspective.

“If structure is important to achieve your system’s goals, that structure is architectural. But designers of elements, or subsystems, that you assign may have to introduce structure of their own to meet their goals, in which case such structures are architectural: to them but not to you.”

(Clements, et al., 2010)

Software Architecture vs. Software Design

Software Design

The term design is used in many fields and has no single widely accepted definition. In practice, however, design means making a plan for the software development activity so that it accomplishes its requirements and the related quality attributes.

Architecture is design, but not all design is architecture; this concept is explained in (Clements, et al., 2010). The architecture is the set of major design decisions that affect the software and its quality attributes, while the design decisions left to downstream developers and designers constitute nonarchitectural design. Those decisions can be delegated because they do not affect the overall decisions that the architect has made and documented in the software architecture.

For example, in a service-oriented architecture the architect is interested in defining the main services and components and their connections with each other, but not in how each of these services will be implemented, since that can be left to nonarchitectural design methods and implementations. Defining the interfaces between the components and the data exchanged between them, however, is more important and cannot be left to element-level design decisions, because the software components and their quality depend on these main decisions.

It is worth mentioning a mistaken view: that architecture focuses only on a high-level, conceptual framework and is followed by a step of detailed design, that an architecture document should be limited to a certain number of pages (50 pages, say), and that it is just a small set of only the big decisions. The authors of (Clements, et al., 2010) advise readers to stamp out these thoughts and to drop the term “detailed design” in favor of “nonarchitectural design,” since architecture may be detailed or high level depending on which decisions are globally significant.

To summarize, architecture is design, but not all design is architectural. The architect draws the boundary between architectural and nonarchitectural design by making those decisions that need to be bound in order for the system to meet its development, behavioral, and quality goals. All other decisions can be left to downstream designers and implementers. Decisions are architectural or not, according to context. If structure is important to achieve your system’s goals, that structure is architectural. But designers of elements, or subsystems, that you assign may have to introduce structure of their own to meet their goals, in which case such structures are architectural: to them but not to you.

 

And (repeat after me) we all promise to stop using the phrase “detailed design.” Try “nonarchitectural design” instead.

(Clements, et al., 2010)

This perspective is not shared by the author of (Budgen, 2003), who presents design as a phase of the software lifecycle that follows the architecture phase and is concerned with deeply detailed descriptions of the system elements:

Architectural design. Concerned with the overall form of solution to be adopted (for example, whether to use a distributed system, what the major elements will be and how they will interact).

 

Detailed design. Developing descriptions of the elements identified in the previous phase, and obviously involving interaction with this. There are also clear feedback loops to earlier phases if the designers are to ensure that the behavior of their solution is to meet the requirements.

(Budgen, 2003)

Conclusion

To sum up, it can be argued that the three terms, software architecture, software design, and software structure, have neither widely agreed definitions nor a sharp difference between them; they take on different meanings according to their purposes. In my own view, software architecture and software design are the same concept; they differ only in the level of detail that needs to be shared with stakeholders, based on the globally significant architectural decisions.

In addition, a software structure characterizes a set of elements, their interactions, and their behavior from a particular view of the software at a particular depth, and software architecture is a group of these structures, organized in a manner that fulfills the software requirements and quality attributes. The structure behind a specific view is also architecture in the sense that it describes the elements, their properties, and their behavior needed to meet that view’s goal for its stakeholder.

Above all, in order to draw a meaningful distinction between the three terms, we have to decide which viewpoint we are looking from.

Bibliography

“SEI”, S. E. (2011). Software Engineering Institute. (Carnegie Mellon University) Retrieved 10 7, 2011, from http://www.sei.cmu.edu/architecture/start/definitions.cfm

Bass, L., Clements, P., & Kazman, R. (2003). Software Architecture in Practice (2nd ed.). Addison-Wesley.

Budgen, D. (2003). SOFTWARE DESIGN (2nd ed.). Addison Wesley.

Clements, P., Bachmann, F., Bass, L., Garlan, D., Ivers, J., Little, R., . . . Stafford, J. (2010). Documenting Software Architectures (2nd ed.). Addison-Wesley.

Garlan, D., & Shaw, M. (1993). An Introduction to Software Architecture. In Advances in Software Engineering and Knowledge Engineering, Volume I. World Scientific.

Design. (n.d.). Retrieved October 14, 2011, from Wikipedia: http://en.wikipedia.org/wiki/Design

Gorton, I. (2011). Understanding Software Architecture. In I. Gorton, Essential Software Architecture (2nd ed., p. 2). Springer.

Maya, L. D., & Merson, P. (2008). Documentation Roadmap and Overview. Retrieved October 7, 2011, from Software Architecture Document: https://wiki.sei.cmu.edu/sad/index.php/Documentation_Roadmap_and_Overview

Black Box Security Analysis and Test Techniques

Black box techniques are the only techniques available for analyzing and testing nondevelopmental binary executables without first decompiling or disassembling them. Black box tests are not limited in utility to COTS and other executable packages: they are equally valuable for testing compiled custom-developed and open source code, enabling the tester to observe the software’s actual behaviors during execution and compare them with behaviors that could only be speculated upon based on extrapolation from indicators in the source code. Black box testing also allows for examination of the software’s interactions with external entities (environment, users, attackers), a type of examination that is impossible in white box analyses and tests (one exception is the detection of malicious code). On the other hand, because black box testing can only observe the software as it runs and “from the outside in,” it also provides an incomplete picture.

For this reason, both white and black box testing should be used together, the former during the coding and unit testing phase to eliminate as many problems as possible from the source code before it is compiled, and the latter later in the integration and assembly and system testing phases to detect the types of byzantine faults and complex vulnerabilities that only emerge as a result of runtime interactions of components with external entities. Specific types of black box tests include:

Binary Security Analysis

This technique examines the binary machine code of an application for vulnerabilities. Binary security analysis tools usually occur in one of two forms. In the first form, the analysis tool monitors the binary as it executes, and may inject malicious input to simulate attack patterns intended to subvert or sabotage the binary’s execution, in order to determine from the software’s response whether the attack pattern was successful. This form of binary analysis is commonly used by web application vulnerability scanners. The second form of binary analysis tool models the binary executable (or some aspect of it) and then scans the model for potential vulnerabilities. For example, the tool may model the data flow of an application to determine whether it validates input before processing it and returning a result. This second form of binary analysis tool is most often used in Java bytecode scanners to generate a structured format of the Java program that is often easier to analyze than the original Java source code.
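As a very rough, hedged sketch of the second form described above, the snippet below builds a trivial “model” of a binary (just its embedded printable strings, in the spirit of the Unix strings utility) and scans that model for indicators that would merit closer review. Real binary analyzers model control and data flow; the target path and the indicator list here are hypothetical.

```python
# Crude binary "model" sketch: extract printable strings and flag suspicious ones.
# The indicator list is illustrative only; real tools go far deeper than strings.
import re
import sys

SUSPICIOUS = [b"password", b"strcpy", b"system", b"DEBUG", b"http://"]

def extract_strings(data: bytes, min_len: int = 4):
    # Printable ASCII runs, the same idea as the Unix 'strings' utility.
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)

def scan_binary(path: str) -> None:
    data = open(path, "rb").read()
    for s in extract_strings(data):
        for indicator in SUSPICIOUS:
            if indicator in s:
                print(f"{path}: indicator {indicator!r} found in string {s[:60]!r}")

if __name__ == "__main__":
    for target in sys.argv[1:]:      # e.g., python scan.py ./some_binary
        scan_binary(target)
```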

Software Penetration Testing

This applies a testing technique long used in network security testing to the software components of a system, or to the software-intensive system as a whole. Just as network penetration testing requires testers to have extensive network security expertise, software penetration testing requires testers who are experts in the security of software and applications. The focus is on determining whether intra- or inter-component vulnerabilities are exposed to external access, and whether they can be exploited to compromise the software, its data, or its environment and resources. Penetration testing can be more extensive in its coverage, and can also test for more complex problems, than other, less sophisticated (and less costly) black box security tests such as fault injection, fuzzing, and vulnerability scanning. The penetration tester acts, in essence, as an “ethical hacker.” As with network penetration testing, the effectiveness of software penetration tests is necessarily constrained by the amount of time, resources, stamina, and imagination available to the testers.

Fault Injection of Binary Executable

This technique was originally developed by the software safety community to reveal safety-threatening faults undetectable through traditional testing techniques. Safety fault injection induces stresses in the software, creates interoperability problems among components, and simulates faults in the execution environment. Security fault injection uses a similar approach to simulate the types of faults and anomalies that would result from attack patterns or execution of malicious logic, and from unintentional faults that make the software vulnerable. Fault injection as an adjunct to penetration testing enables the tester to focus in more detail on the software’s specific behaviors in response to attack patterns. Runtime fault injection involves data perturbation. The tester modifies the data passed by the execution environment to the software, or by one software component to another. Environment faults in particular have proven useful to simulate because they are the most likely to reflect real-world attack scenarios. However, injected faults should not be limited to those that simulate real-world attacks. To get the most complete understanding of all of the software’s possible behaviors and states, the tester should also inject faults that simulate highly unlikely, even “impossible,” conditions. As noted earlier, because of the complexity of the fault injection testing process, it tends to be used only for software that requires very high confidence or assurance.
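A minimal sketch of the data-perturbation idea: a hypothetical component is handed deliberately corrupted versions of otherwise valid environment data, and the tester records how each injected fault surfaces. The component, the fault types, and the sample data are all invented for illustration.

```python
# Security fault injection by data perturbation (hypothetical component and data).
import random

def parse_config(text: str) -> dict:
    """Hypothetical component under test: parses 'key=value' lines."""
    entries = {}
    for line in text.splitlines():
        key, value = line.split("=", 1)   # will raise on malformed input
        entries[key.strip()] = value.strip()
    return entries

def perturb(data: str) -> str:
    """Simulate an environment fault: truncate, duplicate, or corrupt the data."""
    fault = random.choice(["truncate", "duplicate", "corrupt"])
    if fault == "truncate":
        return data[: len(data) // 2]
    if fault == "duplicate":
        return data + data
    return data.replace("=", "\x00")      # corrupt the delimiter

valid = "host=example.com\nport=8080\ntimeout=30"
for i in range(5):
    injected = perturb(valid)
    try:
        parse_config(injected)
        print(f"run {i}: component accepted perturbed input")
    except Exception as exc:              # record how the injected fault surfaces
        print(f"run {i}: fault surfaced as {type(exc).__name__}: {exc}")
```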

Fuzz Testing

Like fault injection, fuzz testing involves the input of invalid data via the software’s environment or an external process. In the case of fuzz testing, however, the input data is random (to the extent that computer-generated data can be truly random): it is generated by tools called fuzzers, which usually work by copying and corrupting valid input data. Many fuzzers are written to be used on specific programs or applications and are not easily adaptable. Their specificity to a single target, however, enables them to help reveal security vulnerabilities that more generic tools cannot.
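A hedged sketch of the copy-and-corrupt approach described above: known-good input is read, a few bytes are randomly overwritten, and the result is fed to a target program while watching for crashes. The sample file name and the ./target_app binary are hypothetical, and a real fuzzer would add far better instrumentation and triage.

```python
# Minimal mutation fuzzer sketch (hypothetical target and sample input file).
import random
import subprocess

def mutate(data: bytes, flips: int = 8) -> bytes:
    buf = bytearray(data)
    for _ in range(flips):
        pos = random.randrange(len(buf))
        buf[pos] = random.randrange(256)          # corrupt one byte
    return bytes(buf)

valid_sample = open("sample_input.dat", "rb").read()   # hypothetical known-good input

for i in range(100):
    fuzzed = mutate(valid_sample)
    # Run the (hypothetical) target binary with the corrupted input on stdin.
    result = subprocess.run(["./target_app"], input=fuzzed,
                            capture_output=True, timeout=5)
    if result.returncode < 0:   # on POSIX, a negative return code means killed by a signal (crash)
        print(f"iteration {i}: target crashed with signal {-result.returncode}")
        open(f"crash_{i}.dat", "wb").write(fuzzed)      # keep the crashing input for triage
```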

Byte Code, Assembler Code, and Binary Code Scanning

This is comparable to source code scanning but targets the software’s uninterpreted byte code, assembler code, or compiled binary executable before it is installed and executed. There are no security-specific byte code or binary code scanners. However, a handful of such tools do include searches for certain security-relevant errors and defects; see http://samate.nist.gov/index.php/Byte_Code_Scanners for a fairly comprehensive listing.

Automated Vulnerability Scanning

Automated vulnerability scanning of operating system and application level software involves the use of commercial or open source scanning tools that observe executing software systems for behaviors associated with attack patterns that target specific known vulnerabilities. Like virus scanners, vulnerability scanners rely on a repository of “signatures,” in this case indicating recognizable vulnerabilities. Like automated code review tools, many vulnerability scanners attempt to provide some mechanism for aggregating vulnerabilities, but they are still unable to detect complex vulnerabilities or vulnerabilities exposed only as a result of unpredictable (combinations of) attack patterns. In addition to signature-based scanning, most vulnerability scanners attempt to simulate the reconnaissance attack patterns used by attackers to “probe” software for exposed, exploitable vulnerabilities.

Vulnerability scanners can be either network-based or host-based. Network-based scanners target the software from a remote platform across the network, while host-based scanners must be installed on the same host as the target. Host-based scanners generally perform more sophisticated analyses, such as verification of secure configurations, while network-based scanners more accurately simulate attacks that originate outside of the targeted system (i.e., the majority of attacks in most environments). Vulnerability scanning is fully automated, and the tools typically have the high false positive rates that typify most pattern-matching tools, as well as the high false-negative rates that plague other signature-based tools. It is up to the tester to both configure and calibrate the scanner to minimize both false positives and false negatives to the greatest possible extent, and to meaningfully interpret the results to identify real vulnerabilities and weaknesses. As with virus scanners and intrusion detection systems, the signature repositories of vulnerability scanners need to be updated frequently. For testers who wish to write their own exploits, the open source Metasploit Project http://www.metasploit.com publishes black hat information and tools for use by penetration testers, intrusion detection system signature developers, and researchers. The disclaimer on the Metasploit website is careful to state:

“This site was created to fill the gaps in the information publicly available on various exploitation techniques and to create a useful resource for exploit developers. The tools and information on this site are provided for legal security research and testing purposes only.”

Black Box Security Testing

Black box testing is generally used when the tester has limited knowledge of the system under test or when access to the source code is not available. Within the security test arena, black box testing is normally associated with activities that occur during the pre-deployment test phase (system test) or on a periodic basis after the system has been deployed.

Black box security tests are conducted to identify and resolve potential security vulnerabilities before deployment, or to periodically identify and resolve security issues within deployed systems. They can also be used as a “badness-ometer” [McGraw 04] to give an organization some idea of how bad the security of its system is. From a business perspective, organizations conduct black box security tests to conform to regulatory requirements, to protect confidential and proprietary information, and to protect the organization’s brand and reputation.

Fortunately, a significant number of black box test tools focus on application security. These tools concentrate on security-related issues including, but not limited to, the following (a minimal probe sketch follows the list):

  • Input checking and validation
  • SQL injection attacks
  • Injection flaws
  • Session management issues
  • Cross-site scripting attacks
  • Buffer overflow vulnerabilities
  • Directory traversal attacks
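The sketch below is a deliberately tiny black box probe for two of the issue classes above (SQL injection and reflected cross-site scripting). The target URL and parameter name are hypothetical, the payload list is far smaller than any real scanner’s, and the detection heuristics are crude; it only illustrates the outside-in nature of these tests. It assumes the third-party requests library.

```python
# Tiny black box probe sketch (hypothetical target; needs the 'requests' package).
import requests

TARGET = "http://test.example.com/search"   # hypothetical endpoint under test
PARAM = "q"

payloads = {
    "sql error-based": "' OR '1'='1",
    "reflected xss":   "<script>alert(1)</script>",
}

for name, payload in payloads.items():
    response = requests.get(TARGET, params={PARAM: payload}, timeout=10)
    body = response.text.lower()
    if name.startswith("sql") and ("sql syntax" in body or "odbc" in body):
        print(f"possible SQL injection: error message reflected for payload {payload!r}")
    elif name == "reflected xss" and payload.lower() in body:
        print("possible reflected XSS: payload echoed unencoded in the response")
```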

Benefits and Limitations of Black Box Testing

As previously discussed, black box tests are generally conducted when the tester has limited knowledge of the system under test or when access to the source code is not available. On its own, black box testing is not a suitable alternative to security activities throughout the software development life cycle, such as the development of security-based requirements, risk assessments, security-based architectures, white box security tests, and code reviews. However, when used to complement these activities, or to test third-party applications or security-specific subsystems, black box test activities can provide the development staff with crucial insight into the system’s design and implementation.

Black box tests can help development and security personnel to:

• Identify implementation errors that were not discovered during code reviews, unit tests, or security white box tests
• Discover potential security issues resulting from boundary conditions that were difficult to identify and understand during the design and implementation phases
• Uncover security issues resulting from incorrect product builds (e.g., old or missing modules/files)
• Detect security issues that arise as a result of interaction with underlying environment (e.g., improper configuration files, unhardened OS and applications)

“White Box” Techniques for security testing

“White box” tests and analyses, by contrast with “black box” tests and analyses, are performed on the source code. Specific types of white box analyses and tests include:

Static Analysis

Also known as “code review,” static analysis examines source code before it is compiled to detect coding errors, insecure coding constructs, and other indicators of security vulnerabilities or weaknesses that are detectable at the source code level. Static analyses can be manual or automated. In a manual analysis, the reviewer inspects the source code without the assistance of tools.

In an automated analysis, a tool (or tools) is used to scan the code to locate specific “problem” patterns (text strings) defined to it by the analyst via programming or configuration, which the tool then highlights or flags. This enables the reviewer to narrow the focus of his/her manual code inspection to those areas of the code in which the patterns highlighted or flagged in the scanner’s output appear.
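A minimal sketch of the automated, pattern-driven form of static analysis just described: scan source files for analyst-configured “problem” patterns and flag each hit for manual review. The patterns below are illustrative examples of constructs often flagged in C code, not an authoritative or complete rule set.

```python
# Minimal pattern-based source scanner sketch (illustrative C-oriented rules).
import re
import sys

RULES = {
    r"\bgets\s*\(":    "gets() has no bounds check; use fgets()",
    r"\bstrcpy\s*\(":  "strcpy() may overflow; prefer a bounded copy",
    r"\bsprintf\s*\(": "sprintf() may overflow; prefer snprintf()",
    r"\bsystem\s*\(":  "system() with tainted input allows command injection",
}

def scan(path: str) -> None:
    with open(path, encoding="utf-8", errors="replace") as src:
        for lineno, line in enumerate(src, start=1):
            for pattern, message in RULES.items():
                if re.search(pattern, line):
                    print(f"{path}:{lineno}: {message}")
                    print(f"    {line.rstrip()}")

if __name__ == "__main__":
    for filename in sys.argv[1:]:     # e.g., python scan_src.py module.c
        scan(filename)
```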

Direct Code Analysis

Direct code analysis extends static analysis by using tools that focus not on finding individual errors but on verifying the code’s overall conformance to a set of predefined properties, which can include security properties such as noninterference, separability, persistent_BNDC, noninference, forward correctability, and nondeductibility of outputs.

Property-Based Testing

The purpose of property-based testing is to establish formal validation results through testing. To validate that a program satisfies a property, the property must hold whenever the program is executed. Property-based testing assumes that the specified property captures everything of interest in the program and assumes that the completeness of testing can be measured structurally in terms of source code. The testing only validates the specified property, using the property’s specification to guide dynamic analysis of the program. Information derived from the specification determines which points in the program need to be tested and whether each test execution is correct. A metric known as Iterative Contexts Coverage uses these test execution points to determine when testing is complete. Checking the correctness of each execution together with a description of all the relevant executions results in the validation of the program with respect to the property being tested, thus validating that the final product is free of any flaws specific to that property.
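As a rough, lightweight analogue of driving tests from a stated property rather than from fixed example cases, the sketch below uses the third-party Hypothesis library to check a hypothetical sanitizer against the property that its output never contains angle brackets. This is not the formal property-based testing methodology described above, only an illustration of testing against a property.

```python
# Property-style test sketch using the third-party Hypothesis library.
from hypothesis import given, strategies as st

def sanitize(value: str) -> str:
    """Hypothetical routine: output must never contain angle brackets."""
    return value.replace("<", "&lt;").replace(">", "&gt;")

@given(st.text())
def test_sanitize_never_emits_angle_brackets(value):
    result = sanitize(value)
    assert "<" not in result and ">" not in result

if __name__ == "__main__":
    # Hypothesis generates many inputs and checks the property for each one.
    test_sanitize_never_emits_angle_brackets()
    print("property held for all generated inputs")
```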

Source Code Fault Injection

This is a form of dynamic analysis in which the source code is “instrumented” by inserting changes, after which the instrumented code is compiled and executed so that the tester can observe the changes in state and behavior that emerge when the instrumented portions of code are executed. In this way, the tester can determine and even quantify how the software reacts when it is forced into anomalous states, such as those triggered by intentional faults. This technique has proved particularly useful for detecting the incorrect use of pointers and arrays, and the presence of dangerous calls and race conditions. Fault injection is a complex testing process and thus tends to be limited to code that requires very high assurance.

Fault Propagation Analysis

This involves two techniques for fault injection of source code: extended propagation analysis and interface propagation analysis. The objective is not only to observe individual state changes that result from a given fault but to trace how those state changes propagate throughout a fault tree that has been generated from the program’s source code. Extended propagation analysis entails injecting a fault into the fault tree and then tracing how the fault propagates through the tree. The tester then extrapolates outward to predict the impact a particular fault may have on the behavior of the software module or component, and ultimately the system, as a whole. In interface propagation analysis, the tester perturbs the states that propagate via the interfaces between the module or component and its environment. To do this, the tester injects anomalies into the data feeds between the two levels of components and then watches to see how the resulting faults propagate and whether any new anomalies result. Interface propagation analysis enables the tester to determine how a failure in one component may affect its neighboring components.

Pedigree Analysis

While not a security testing technique in itself, the detection of pedigree indicators in open source code can be helpful in drawing attention to the presence of components that have known vulnerabilities, pinpointing them as high-risk targets in need of additional testing. This is a fairly new area of code analysis that was sparked by concerns regarding open source licensing and intellectual property violations.

Dynamic Analysis of Source Code

Dynamic analysis involves both the source code and the binary executable generated from the source code. The compiled executable is run and “fed” a set of sample inputs while the reviewer monitors and analyzes the data (variables) the program produces as a result. With this better understanding of how the program behaves, the analyst can use a binary-to-source map to trace the location in the source code that corresponds to each point of execution in the running program, and more effectively locate faults, failures, and vulnerabilities. Two concepts used in dynamic analysis are:

  1. Coverage concept analysis
  2. Frequency spectrum analysis

Coverage concept analysis attempts to produce “dynamic control flow invariants” for a set of executions, which can be compared with statically derived invariants in order to identify desirable changes to the test suite that will enable it to produce better test results.

Frequency spectrum analysis counts the number of executions of each path through each function during a single run of the program. The reviewer can then compare and contrast these separate program parts in terms of higher versus lower frequency, the similarity of frequencies, or specific frequencies.

Together, these analyses reveal interactions and dependencies between different parts of the program and allow the developer to look for specific patterns in the program’s execution, such as uncaught exceptions, assert failures, dynamic memory errors, and security problems. A number of dynamic analysis tools have been built to elicit or verify system-specific properties in the source code, including call sequences and data invariants.
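A minimal sketch of the frequency spectrum idea: count how many times each function executes during a single run and then compare the high- and low-frequency parts. The traced program here is a trivial, hypothetical stand-in, and real tooling would record paths rather than just function names.

```python
# Frequency-count sketch using Python's tracing hook (hypothetical traced program).
import sys
from collections import Counter

call_counts = Counter()

def tracer(frame, event, arg):
    if event == "call":
        call_counts[frame.f_code.co_name] += 1
    return None                      # no per-line tracing needed

def helper(n):
    return n * n

def program():
    total = 0
    for i in range(10):
        total += helper(i)
    return total

sys.settrace(tracer)
program()
sys.settrace(None)

for name, count in call_counts.most_common():
    print(f"{name}: executed {count} time(s)")
```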

References

[1] Mano Paul, “Assuring Software Security Through Testing: White, Black and Somewhere in Between,” (ISC)². https://www.isc2.org/uploadedFiles/(ISC)2_Public_Content/Certification_Programs/CSSLP/Software%20Security%20Through%20Testing.pdf

[2] http://www.agitar.com/solutions/why_unit_testing.html

[3] http://www.swsec.com/resources/touchpoints/

[4] State-of-the-Art Report (SOAR), July 31, 2007. Information Assurance Technology Analysis Center (IATAC) / Data and Analysis Center for Software (DACS)

[5] Gu Tian-yang, Shi Yin-sheng, and Fang You-yuan, “Research on Software Security Testing,” World Academy of Science, Engineering and Technology 69, 2010

[6] https://www.owasp.org/index.php/OWASP_Testing_Guide_v3_Table_of_Contents

Choosing the right Software development life cycle model

Selecting a Software Development Life Cycle (SDLC) methodology is a challenging task for many organizations. What tends to make it challenging is the fact that few organizations know what criteria to use in selecting a methodology to add value to the organization. Fewer still understand that a methodology might apply to more than one Lifecycle Model. Before considering a framework for selecting a given SDLC methodology, we need to define the different types and illustrate the advantages and disadvantages of those models (please see Software Development Life Cycle Models and Methodologies).

How to select the right SDLC

Selecting the right SDLC is a process in itself, one that an organization can implement internally or engage a consultant for. There are several steps for getting the selection right:

STEP 1: Learn about the SDLC models

SDLC models differ in their usage, advantages, and disadvantages. In order to select the right one, you must have experience with, and be familiar with, the SDLC models being considered.

STEP 2: Assess the needs of Stakeholders

We must study the business domain, user requirements, business priorities, and technology constraints to be able to evaluate each candidate SDLC against the selection criteria.

STEP 3: Define the criteria

Some of the selection criteria or questions that you may use to select an SDLC are:

  • Is the SDLC appropriate for the size of our team and their skills?
  • Is the SDLC appropriate for the technology selected to implement the solution?
  • Is the SDLC appropriate for the client’s and stakeholders’ needs and priorities?
  • Is the SDLC appropriate for the geographical situation (co-located or geographically dispersed)?
  • Is the SDLC appropriate for the size and complexity of our software?
  • Is the SDLC appropriate for the type of projects we do?
  • Is the SDLC appropriate for our engineering capability?

What are the criteria?

Here are my recommended criteria; what will yours be? (A sketch of turning this table into a weighted score follows below.)

Factors | Waterfall | V-Shaped | Evolutionary Prototyping | Spiral | Iterative and Incremental | Agile Methodologies
Unclear User Requirements | Poor | Poor | Good | Excellent | Good | Excellent
Unfamiliar Technology | Poor | Poor | Excellent | Excellent | Good | Poor
Complex System | Good | Good | Excellent | Excellent | Good | Poor
Reliable System | Good | Good | Poor | Excellent | Good | Good
Short Time Schedule | Poor | Poor | Good | Excellent | Excellent | Excellent
Strong Project Management | Excellent | Excellent | Excellent | Excellent | Excellent | Excellent
Cost Limitation | Poor | Poor | Poor | Poor | Excellent | Excellent
Visibility of Stakeholders | Good | Good | Excellent | Excellent | Good | Excellent
Skills Limitation | Good | Good | Poor | Poor | Good | Poor
Documentation | Excellent | Excellent | Good | Good | Excellent | Poor
Component Reusability | Excellent | Excellent | Poor | Poor | Excellent | Poor
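A hedged sketch of one way to act on the table: convert the Poor/Good/Excellent ratings into numbers and weight them by how much each factor matters to your project. The three factors, the weights, and the subset of models shown are illustrative only; the ratings mirror the table above.

```python
# Weighted SDLC scoring sketch (hypothetical weights; ratings taken from the table).
RATING = {"Poor": 1, "Good": 2, "Excellent": 3}

models = {
    "Waterfall": {"Unclear requirements": "Poor", "Short schedule": "Poor", "Cost limitation": "Poor"},
    "Spiral":    {"Unclear requirements": "Excellent", "Short schedule": "Excellent", "Cost limitation": "Poor"},
    "Agile":     {"Unclear requirements": "Excellent", "Short schedule": "Excellent", "Cost limitation": "Excellent"},
}

# How much each factor matters to this (hypothetical) project, on a 1-5 scale.
weights = {"Unclear requirements": 5, "Short schedule": 3, "Cost limitation": 2}

for model, ratings in models.items():
    score = sum(weights[factor] * RATING[ratings[factor]] for factor in weights)
    print(f"{model}: weighted score {score}")
```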

References

Selecting a Software Development Life Cycle (SDLC) Methodology.(2012, 3 18). Retrieved from http://www.smc-i.com/downloads/sdlc_methodology.pdf

Software Development Life Cycle Models. (2012, 3). Retrieved from Codebetter.com: http://codebetter.com/raymondlewallen/2005/07/13/software-development-life-cycle-models/

Software Development Life Cycle Models and Methodologies

Introduction

The software industry includes many different processes, for example, analysis, development, maintenance and publication of software. This industry also includes software services, such as training, documentation, and consulting.

Our focus here is the software development life cycle (SDLC). Different types of projects have different requirements, so it may be necessary to choose the SDLC phases according to the specific needs of the project. These different requirements and needs give us various software development approaches to choose from during software implementation.

Types of software development life cycle (SDLC) models

Waterfall Model

Description

The waterfall model is a linear sequential flow in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of software implementation. This means that any phase in the development process begins only when the previous phase is complete. The waterfall approach does not define a process for going back to a previous phase to handle changes in requirements. It is the earliest approach used for software development.

The usage

Projects in which the requirements are not expected to change, for example, projects initiated from a request for proposals (RFP).

Advantages and Disadvantages

Advantages:
  • Easy to explain to the users.
  • Structured approach.
  • Stages and activities are well defined.
  • Helps to plan and schedule the project.
  • Verification at each stage ensures early detection of errors/misunderstandings.
  • Each phase has specific deliverables.
Disadvantages:
  • Assumes that the requirements of a system can be frozen.
  • Very difficult to go back to any stage after it has finished.
  • Little flexibility; adjusting scope is difficult and expensive.
  • Costly and requires more time, in addition to the detailed plan.

V-Shaped Model

Description

It is an extension of the waterfall model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase to form the typical V shape. The major difference between the V-shaped model and the waterfall model is the early test planning in the V-shaped model.


The usage

  • The software requirements are clearly defined and known
  • The software development technologies and tools are well known

Advantages and Disadvantages

Advantages:
  • Simple and easy to use.
  • Each phase has specific deliverables.
  • Higher chance of success than the waterfall model, due to the development of test plans early in the life cycle.
  • Works well where requirements are easily understood.
  • Verification and validation of the product in the early stages of product development.
Disadvantages:
  • Very inflexible, like the waterfall model.
  • Little flexibility; adjusting scope is difficult and expensive.
  • Software is developed during the implementation phase, so no early prototypes of the software are produced.
  • The model doesn’t provide a clear path for problems found during testing phases.
  • Costly and requires more time, in addition to a detailed plan.

Prototyping Model

Description

It refers to the activity of creating prototypes of software applications, that is, incomplete versions of the software program being developed. It is an activity that can occur at any point in software development. It is used to visualize some components of the software in order to narrow the gap between the customer requirements and the development team’s understanding of them. It also reduces the iterations that would occur in the waterfall approach and that are hard to implement because of that approach’s inflexibility. When the final prototype is developed, the requirements are considered frozen.

It has some types, such as:

  • Throwaway prototyping: prototypes that are eventually discarded rather than becoming part of the final delivered software.


  • Evolutionary prototyping: prototypes that evolve into the final system through an iterative incorporation of user feedback.


  • Incremental prototyping: The final product is built as separate prototypes. At the end, the separate prototypes are merged in an overall design.


  • Extreme prototyping: used mainly for web applications. It breaks web development down into three phases, each based on the preceding one. The first phase is a static prototype that consists mainly of HTML pages. In the second phase, the screens are programmed and fully functional using a simulated services layer. In the third phase, the services are implemented.

The usage

  • This process can be used with any software development life cycle model, but it is most valuable for systems that need a lot of user interaction. A system with no user interaction, such as one that only performs calculations, does not benefit from prototypes.

Advantages and Disadvantages

Advantages:
  • Reduced time and costs (although this can become a disadvantage if the developer loses time developing the prototypes).
  • Improved and increased user involvement.
Disadvantages:
  • Insufficient analysis.
  • User confusion between the prototype and the finished system.
  • Developer misunderstanding of user objectives.
  • Excessive development time of the prototype.
  • Expense of implementing prototyping.

Spiral Method (SDM)

Description

It combines elements of both design and prototyping-in-stages, in an effort to combine the advantages of top-down and bottom-up concepts. This model of development combines the features of the prototyping model and the waterfall model, and it is favored for large, expensive, and complicated projects. It uses many of the same phases as the waterfall model, in essentially the same order, separated by planning, risk assessment, and the building of prototypes and simulations.


The usage

It is used in large, shrink-wrap applications and systems that are built in small phases or segments.

Advantages and Disadvantages

Advantages:
  • Estimates (e.g., budget and schedule) become more realistic as work progresses, because important issues are discovered earlier.
  • Early involvement of developers.
  • Manages risks and develops the system in phases.
Disadvantages:
  • High cost and time to reach the final product.
  • Needs special skills to evaluate the risks and assumptions.
  • Highly customized, which limits re-usability.

Iterative and Incremental Method

Description

It was developed to overcome the weaknesses of the waterfall model. It starts with initial planning and ends with deployment, with cyclic iterations in between. The basic idea behind this method is to develop a system through repeated cycles (iterative) and in smaller portions at a time (incremental), allowing software developers to take advantage of what was learned during the development of earlier parts or versions of the system.

It consists of a series of mini waterfalls.


The usage

It is used in shrink-wrap applications and large systems that are built in small phases or segments. It can also be used for a system with separable components, for example an ERP system, where we can start with the budget module as a first iteration, then the inventory module, and so forth.

Advantages and Disadvantages

Advantages:
  • Produces business value early in the development life cycle.
  • Better use of scarce resources through proper increment definition.
  • Can accommodate some change requests between increments.
  • More focused on customer value than the linear approaches.
  • Problems can be detected earlier.
Disadvantages:
  • Requires heavy documentation.
  • Follows a defined set of processes.
  • Defines increments based on function and feature dependencies.
  • Requires more customer involvement than the linear approaches.
  • Partitioning the functions and features might be problematic.
  • Integration between iterations can be an issue if it is not considered during development.

Extreme programming (Agile development)

Description

It is based on iterative and incremental development, where requirements and solutions evolve through collaboration between cross-functional teams.


The usage

It can be used with any type of project, but it needs the customer to be involved and interactive. It is also suited to projects where the customer needs some functional requirements ready in less than three weeks.

Advantages and Disadvantages

Advantages:
  • Decreases the time required to make some system features available.
  • Face-to-face communication and continuous input from customer representatives leave no room for guesswork.
  • The end result is high-quality software delivered in the least possible time, and a satisfied customer.
Disadvantages:
  • Scalability can be a problem.
  • Success depends on the customer’s ability to express user needs.
  • Documentation is done at later stages.
  • Reduced reusability of components.
  • Needs special skills in the team.

References

(2012, March). Retrieved from Wikipedia: http://en.wikipedia.org/wiki/Main_Page

(2012, March). Retrieved from Software Developing life cycles: http://www.sdlc.ws

Software Development Life Cycle Models. (2012, 3). Retrieved from Codebetter.com: http://codebetter.com/raymondlewallen/2005/07/13/software-development-life-cycle-models/

Software security testing in SDLC?

When to perform Software security analysis and tests?

Most software security practitioners would agree that the common practice of postponing security analysis and tests until after the software implementation phase, and even until after it has been deployed (i.e., during its acceptance phase), makes it extremely difficult to address, in a cost-effective and timely manner, any vulnerabilities and weaknesses discovered during the analysis and testing process.

Figure [1] illustrates the relation between cost and time in the security testing process: the cost of fixing a software bug may double or triple when testing coverage is deferred beyond its proper time.


Figure [1][i] Security testing cost vs. time – cost of fixing software bugs

Source: OSSTMM – Open Source Security Testing Methodology Manual

Security testing is therefore woven into the software development life cycle to ensure the implementation of security requirements. It is worth mentioning that security testing is not only a phase of the SDLC; it also involves many system components and processes, as illustrated in figure [2] below.


Figure [2]Security in system components

Source: OSSTMM – Open Source Security Testing Methodology Manual

Each component of the system has its own methodologies and techniques for assuring security, but our focus here is the software development life cycle. Figures [3] and [4] below illustrate where security testing appears in the SDLC.


Figure [3] Describes each of the formal methods activities in the diagram, indicating the SDLC phases to which each activity pertains

Source: Information Assurance Technology Analysis Center (IATAC)


Figure [4] Security testing in SDLC – 7 touchpoints

Figure [4] [ii] illustrates the software security touchpoints (a set of best practices) and shows how software practitioners can apply the touchpoints to the various software artifacts produced during software development.

These best practices first appeared as a set in 2004 in IEEE Security & Privacy magazine. Since then, they have been adopted (and in some cases adapted) by the U.S. government in the National Cyber Security Task Force report, by Cigital, by the U.S. Department of Homeland Security, and by Ernst and Young.

Table [1] below maps a range of security reviews, analyses, and tests to the different software life cycle phases, starting with the requirements phase:

Life Cycle Phase | Reviews/Tests
Requirements | Security review of requirements and abuse/misuse cases
Architecture/Product Design | Architectural risk analysis (including external reviews)
Detailed Design | Security review of the design; development of test plans, including security tests
Coding/Unit Testing | Code review (static and dynamic analysis), white box testing
Assembly/Integration Testing | Black box testing (fault injection, fuzz testing)
System Testing | Black box testing, vulnerability scanning
Distribution/Deployment | Penetration testing (by a software testing expert), vulnerability scanning, impact analysis of patches
Maintenance/Support | (Feedback loop into previous phases), impact analysis of patches and updates

Security testing in the software test plan

The security test plan should be included in the overall software test plan, and should define:

  1. Security test cases or scenarios (based on misuse and abuse cases)
  2. Test data, including attack patterns
  3. Test oracle
  4. Test tools (white box and black box, static and dynamic)
  5. Analysis to be performed to interpret, correlate, and synthesize the results from the various tests and outputs from the various tools.

The security test plan should acknowledge that the security assumptions that were valid when the software’s requirements were specified will probably have changed by the time the software is deployed. The threat environment in which the software will actually operate is unlikely to have remained static: new threats and attack patterns are continually emerging, as are new versions of non-developmental components and patches to those components. All these changes have the potential to invalidate at least some of the security assumptions under which the original requirements were specified.


[i] http://www.agitar.com/solutions/why_unit_testing.html

[ii] http://www.swsec.com/resources/touchpoints/

Software Security testing

What did they say about Software security testing?

“Over 70 percent of security vulnerabilities exist at the application layer, not the network layer” Gartner.

“Hacking has moved from a hobbyist pursuit with a goal of notoriety to a criminal pursuit with a goal of money” Counterpane Internet Security.

“64 percent of developers are not confident in their ability to write secure applications” Microsoft Developer Research.

“Losses arising from vulnerable web applications are significant and expensive – up to $60 billion annually” IDC/IBM Systems Sciences Institute.

“If 50 percent of software vulnerabilities were removed prior to production use, enterprise configuration management and incident response costs would be reduced by 75 percent each.” Gartner.

General Statistics

The figures below illustrate that a lack of software security enables data breaches. These breaches are categorized by sector in figure (1) and figure (2).


Figure (1) Data breaches that could lead to identity theft and identities exposed, by sector
Source: Based on data provided by OSF DataLossDB (due to rounding, percentages may not total 100 percent)


Figure (2) Average number of identities exposed per data breach, by notable sector
Source: Based on data provided by OSF DataLossDB

The next figures show the same breaches categorized by cause.


Figure (3) Data breaches that could lead to identity theft and identities exposed, by cause
Source: Based on data provided by OSF DataLossDB (due to rounding, percentages may not total 100 percent)


Figure (4) Average number of identities exposed per data breach, by cause
Source: Based on data provided by OSF DataLossDB

The figure below illustrates the types of information exposed in deliberate breaches.


Figure (5) Type of information exposed in deliberate breaches
Source: Based on data provided by OSF DataLossDB (due to rounding, percentages may not total 100 percent)

The Impact of unsecured application

The impact of an unsecured software application can vary from one organization to another, based on the importance of the system and its related data, as follows:

The potential impact is LOW if:

The loss of confidentiality, integrity, or availability could be expected to have a limited adverse effect on organizational operations, assets, or individuals.

The potential impact is MODERATE if:

The loss of confidentiality, integrity, or availability could be expected to have a serious adverse effect on organizational operations, assets, or individuals.

The potential impact is HIGH if:

The loss of confidentiality, integrity, or availability could be expected to have a severe or catastrophic adverse effect on organizational operations, assets, or individuals.

Types of application need to have security testing

  • Web-applications
  • Applications with sensitive commercial or personal information
  • Payment and statistic systems
  • Applications sensitive to data distortion
  • Social applications
  • Applications with expensive licensing

The need for security testing

It is important to recognize that there are three key quality components to software assurance, as shown in Figure (6): reliability, resiliency, and recoverability.

  • Reliable software is that which functions as needed by the end user.
  • Resilient software is software that is able to withstand the attempts of an attacker to compromise confidentiality, integrity, or availability (CIA).
  • Recoverable software is software that is capable of restoring itself or being restored to expected normal operations when it has failed in its reliability or resiliency.


Figure (6) [i] Software Quality component

Most commonly, when software is said to be of “quality,” it essentially means that the software works as designed and expected. This is primarily a consideration of software functionality, not of its assurance capabilities. Beyond the reliability aspect of software quality, however, it is now imperative also to take the security of the software into account. This two-pronged approach to software quality testing ensures that software is not only reliable but also resilient enough to withstand attacks that impact CIA.

Therefore, security testing is necessary because it has a distinct relationship with software quality. Software may meet quality requirements related to functionality and performance, but that does not necessarily mean the software is secure. The inverse, however, is true.

Software is called secure when it has added resiliency, and it is thus software of higher quality. For example, when the “Add to cart” button on a web page is clicked and the selected product is added to the cart (functionality) in less than the expected two-second requirement (performance), it can be argued that this software has met the reliability quality requirements established by the business; but if the software is not tested for security, there is no guarantee that the product code added to the cart has not been tampered with by an unauthorized user.

Moreover, a poorly architected and implemented web application cannot assure the CIA aspects of software assurance.

This was an introduction to software security testing; later posts illustrate the definition of security testing, its relation to the software development life cycle, and its techniques.

References


[i] Assuring Software Security Through Testing: White, Black and Somewhere in Between, by Mano Paul. https://www.isc2.org/uploadedFiles/(ISC)2_Public_Content/Certification_Programs/CSSLP/Software%20Security%20Through%20Testing.pdf