Difference between Software Architecture, Software Structure, and Software Design

Introduction

Over the past 10-15 years, software architecture has spread widely through the software engineering community, to the extent that there are now many career positions for software architects, such as Technical Architect and Chief Architect. Architecture has also become involved in many different domains; for example, the term is used to describe the internal structure of a microprocessor or the structure of machines or networks. For that matter, finding one widely accepted definition for software architecture is not easy, and many books acknowledge this issue when their authors set out to define the term.

“Trying to define a term such as software architecture is always a potentially dangerous activity. There really is no widely accepted definition by the industry.”

(Gorton, 2011)

This post highlights the differences between software architecture, software design, and software structure, and the interrelations among them.

Software Architecture vs. Software Structure

Software architecture

The most widely accepted definition comes from work done by the Software Architecture group of the Software Engineering Institute (SEI) at Carnegie Mellon University in Pittsburgh.

“The architecture of a software-intensive system is the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them.”

(Bass, Clements, & Kazman, 2003)

Software Structure

Software structure can be a little confusing: it may seem that there is no difference between it and software architecture, and there is no solid definition of system structure either. Software structures come in two types, static and dynamic.

“The static structures of a software system define its internal design-time elements and their arrangement”

“The dynamic structures of a software system define its runtime elements and their interactions“

(Bass, Clements, & Kazman, 2003)

The following human-body analogy illustrates what software structure means. The human body has a single overall structure, yet the neurologist, the orthopedist, the hematologist, and the dermatologist each take a different perspective on it. Ophthalmologists, cardiologists, and podiatrists concentrate on subsystems, and the kinesiologist and psychiatrist are concerned with different aspects of the entire arrangement’s behavior. Although these perspectives are pictured differently and have totally different properties, all of them are related, and together they describe the architecture of the human body. (Maya & Merson, 2008)

The same point can be made with the engineering structure of a simple home. Some structures serve electrical purposes, others sanitation, and others structural support. Each of them represents a different structure with a specific role and a specific purpose. All of these structures must be consistent with one another to create the architecture, and no single one of them can represent the architecture alone.

The same holds for software: it contains different views for various purposes and stakeholders, and each view is a structure. The developer needs a software structure to guide the implementation of the requirements. The system integrator needs to know the interrelations between components and their properties and behavior. The tester needs to know the inputs and expected outputs. The paragraph quoted below completes the relation between the structures and the architecture.

“None of these structures alone is the architecture, although they all convey architectural information. The architecture consists of these structures as well as many others. This example shows that since architecture can comprise more than one kind of structure, there is more than one kind of element (e.g., implementation unit and processes), more than one kind of interaction among elements (e.g., subdivision and synchronization), and even more than one context (e.g., development time versus runtime). By intention, the definition does not specify what the architectural elements and relationships are.”

(Maya & Merson, 2008)


It is worth mentioning that structures carry a more detailed level of information about each component; for example, the structure of the deployment infrastructure may include many details regarding processing power, storage, and shared memory. This detailed information may not be introduced at the level of the architecture.

It can be argued that a structure is architecture, since it describes the elements and components from the perspective of the stakeholders whose goals it achieves. So it is architecture from their own perspective.

“If structure is important to achieve your system’s goals, that structure is architectural. But designers of elements, or subsystems, that you assign may have to introduce structure of their own to meet their goals, in which case such structures are architectural: to them but not to you.”

(Clements, et al., 2010)

Software Architecture vs. Software Design

Software Design

The term design is used in many fields and has no generally accepted definition. However, working experience shows that design means making a plan for the software development activity so that the software accomplishes its requirements and the related quality attributes.

Architecture is mainly design, while not all design is architecture. This concept is explained in (Clements, et al., 2010): the architecture is the set of main design decisions that affect the software and its quality attributes, while any design decision left to downstream developers and designers is nonarchitectural design. These decisions can be left open because they do not affect the overall decisions the architect has made in documenting the software architecture.

For example, in a service-oriented architecture, the architect is interested in defining the main services and components and their connections with each other, but not in how each of these services will be implemented; that can be left to nonarchitectural design methods and implementations. Defining the interfaces between the components and the data exchanged between them, however, is more important and cannot be left to element-level design decisions, because the software components and their quality depend on these main decisions.
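To make the distinction concrete, here is a minimal, hypothetical sketch (the service name, methods, and data fields are illustrative assumptions, not drawn from any cited source). The interface and the shape of the exchanged data are architectural decisions; the class implementing them is nonarchitectural design that downstream developers may change freely.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Order:
    """Data exchanged between components. Fixing these fields is an
    architectural decision, since other services depend on them."""
    order_id: str
    amount: float


class OrderService(ABC):
    """Architectural decision: the service's externally visible interface."""

    @abstractmethod
    def place_order(self, order: Order) -> bool:
        ...


class InMemoryOrderService(OrderService):
    """Nonarchitectural design: one possible implementation, replaceable
    (e.g., by a database-backed service) without touching the architecture."""

    def __init__(self) -> None:
        self._orders: dict[str, Order] = {}

    def place_order(self, order: Order) -> bool:
        self._orders[order.order_id] = order
        return True
```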

It is worth mentioning a mistaken view: that architecture is concerned only with a high-level, conceptual framework, that it is followed by a step of detailed design, that the architecture document should be limited to a small number of pages (say, 50), and that it is just a small set of only big decisions. The authors of (Clements, et al., 2010) advise readers to stamp out these thoughts as nonsense and to abandon the term detailed design in favor of nonarchitectural design, since architecture may be detailed or high level depending on the global decisions.

To summarize:

“Architecture is design, but not all design is architectural. The architect draws the boundary between architectural and nonarchitectural design by making those decisions that need to be bound in order for the system to meet its development, behavioral, and quality goals. All other decisions can be left to downstream designers and implementers. Decisions are architectural or not, according to context. If structure is important to achieve your system’s goals, that structure is architectural. But designers of elements, or subsystems, that you assign may have to introduce structure of their own to meet their goals, in which case such structures are architectural: to them but not to you.

And (repeat after me) we all promise to stop using the phrase ‘detailed design.’ Try ‘nonarchitectural design’ instead.”

(Clements, et al., 2010)

This perspective differs from that of the author of (Budgen, 2003), who presents design as a phase of the software lifecycle that follows the architecture phase and concerns itself with the deep details and descriptions of the system elements.

Architectural design. Concerned with the overall form of solution to be adopted (for example, whether to use a distributed system, what the major elements will be and how they will interact).


Detailed design. Developing descriptions of the elements identified in the previous phase, and obviously involving interaction with this. There are also clear feedback loops to earlier phases if the designers are to ensure that the behavior of their solution is to meet the requirements.

(Budgen, 2003)

Conclusion

To sum up, it can be argued that the three terms, software architecture, software design, and software structure, have no widely agreed definitions and no sharp difference between them. Therefore, they admit many perspectives according to their purposes. From my own perspective, software architecture and software design are the same concept; they differ only in the level of detail that needs to be shared with stakeholders based on the global architectural decisions.

In addition, a software structure characterizes a set of elements and components from a specific viewpoint of the software at a specific depth, and software architecture is a group of these structures, organized in a manner that fulfills the software requirements and quality attributes. Also, the software structure for a specific view is itself architecture, as it describes the elements, their properties, and their behavior to meet that view’s goal for the specific stakeholder.

Above all, in order to draw a meaningful distinction between the three terms, we have to decide which viewpoint we are looking from.

Bibliography

Bass, L., Clements, P., & Kazman, R. (2003). Software Architecture in Practice (2nd ed.). Addison-Wesley.

Budgen, D. (2003). Software Design (2nd ed.). Addison-Wesley.

Clements, P., Bachmann, F., Bass, L., Garlan, D., Ivers, J., Little, R., … Stafford, J. (2010). Documenting Software Architectures (2nd ed.). Addison-Wesley.

Design. (n.d.). In Wikipedia. Retrieved October 14, 2011, from http://en.wikipedia.org/wiki/Design

Garlan, D., & Shaw, M. (1993). An Introduction to Software Architecture. In Advances in Software Engineering and Knowledge Engineering (Vol. I). World Scientific.

Gorton, I. (2011). Understanding Software Architecture. In Essential Software Architecture (2nd ed., p. 2). Springer.

Maya, L. D., & Merson, P. (2008). Documentation Roadmap and Overview. Software Architecture Document. Retrieved October 7, 2011, from https://wiki.sei.cmu.edu/sad/index.php/Documentation_Roadmap_and_Overview

Software Engineering Institute (SEI). (2011). Software architecture definitions. Carnegie Mellon University. Retrieved October 7, 2011, from http://www.sei.cmu.edu/architecture/start/definitions.cfm

Black Box Security Analysis and Test Techniques

Black box techniques are the only techniques available for analyzing and testing nondevelopmental binary executables without first decompiling or disassembling them. Black box tests are not limited in utility to COTS and other executable packages: they are equally valuable for testing compiled custom-developed and open source code, enabling the tester to observe the software’s actual behaviors during execution and compare them with behaviors that could only be speculated upon based on extrapolation from indicators in the source code. Black box testing also allows for examination of the software’s interactions with external entities (environment, users, attackers), a type of examination that is impossible in white box analyses and tests. One exception is the detection of malicious code. On the other hand, because black box testing can only observe the software as it runs and “from the outside in,” it provides an incomplete picture.

For this reason, white and black box testing should be used together: the former during the coding and unit testing phases, to eliminate as many problems as possible from the source code before it is compiled, and the latter during the integration, assembly, and system testing phases, to detect the types of Byzantine faults and complex vulnerabilities that emerge only from the runtime interactions of components with external entities. Specific types of black box tests include:

Binary Security Analysis

This technique examines the binary machine code of an application for vulnerabilities. Binary security analysis tools usually take one of two forms. In the first form, the analysis tool monitors the binary as it executes and may inject malicious input to simulate attack patterns intended to subvert or sabotage the binary’s execution, in order to determine from the software’s response whether the attack pattern was successful. This form of binary analysis is commonly used by web application vulnerability scanners. The second form of binary analysis tool models the binary executable (or some aspect of it) and then scans the model for potential vulnerabilities. For example, the tool may model the data flow of an application to determine whether it validates input before processing it and returning a result. This second form is most often used in Java bytecode scanners, which generate a structured representation of the Java program that is often easier to analyze than the original Java source code.

Software Penetration Testing

Software penetration testing applies a testing technique long used in network security testing to the software components of the system, or to the software-intensive system as a whole. Just as network penetration testing requires testers with extensive network security expertise, software penetration testing requires testers who are experts in the security of software and applications. The focus is on determining whether intra- or inter-component vulnerabilities are exposed to external access, and whether they can be exploited to compromise the software, its data, or its environment and resources. Penetration testing can be more extensive in its coverage, and can test for more complex problems, than other, less sophisticated (and less costly) black box security tests such as fault injection, fuzzing, and vulnerability scanning. The penetration tester acts, in essence, as an “ethical hacker.” As with network penetration testing, the effectiveness of software penetration tests is necessarily constrained by the amount of time, resources, stamina, and imagination available to the testers.

Fault Injection of Binary Executable

This technique was originally developed by the software safety community to reveal safety-threatening faults undetectable through traditional testing techniques. Safety fault injection induces stresses in the software, creates interoperability problems among components, and simulates faults in the execution environment. Security fault injection uses a similar approach to simulate the types of faults and anomalies that would result from attack patterns or execution of malicious logic, and from unintentional faults that make the software vulnerable. Fault injection as an adjunct to penetration testing enables the tester to focus in more detail on the software’s specific behaviors in response to attack patterns. Runtime fault injection involves data perturbation: the tester modifies the data passed by the execution environment to the software, or by one software component to another. Environment faults in particular have proven useful to simulate because they are the most likely to reflect real-world attack scenarios. However, injected faults should not be limited to those that simulate real-world attacks. To get the most complete understanding of all of the software’s possible behaviors and states, the tester should also inject faults that simulate highly unlikely, even “impossible,” conditions. As noted earlier, because of the complexity of the fault injection testing process, it tends to be used only for software that requires very high confidence or assurance.
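The sketch below illustrates runtime fault injection by data perturbation. Everything here is an illustrative assumption: the component under test (`parse_record`) and the perturbations are stand-ins, including the deliberately “impossible” conditions the paragraph recommends exercising.

```python
import copy

def parse_record(record: dict) -> str:
    """Hypothetical component under test."""
    return f"{record['name']}:{int(record['age'])}"

# Perturbations simulating environment and interface faults, including
# "impossible" conditions that should still be exercised.
PERTURBATIONS = [
    lambda r: {},                            # everything missing
    lambda r: {**r, "age": "not-a-number"},  # type corruption
    lambda r: {**r, "age": -(2**31)},        # implausible value
    lambda r: {k: None for k in r},          # null fields
]

valid = {"name": "alice", "age": 30}

for perturb in PERTURBATIONS:
    faulty = perturb(copy.deepcopy(valid))
    try:
        parse_record(faulty)
        print(f"{faulty!r}: accepted (verify this is intended)")
    except Exception as exc:
        # An uncontrolled failure here may indicate a robustness or
        # security weakness in the component's input handling.
        print(f"{faulty!r}: raised {type(exc).__name__}: {exc}")
```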

Fuzz Testing

Like fault injection, fuzz testing involves the input of invalid data via the software’s environment or an external process. In the case of fuzz testing, however, the input data is random (to the extent that computer-generated data can be truly random): it is generated by tools called fuzzers, which usually work by copying and corrupting valid input data. Many fuzzers are written to be used on specific programs or applications and are not easily adaptable. Their specificity to a single target, however, enables them to help reveal security vulnerabilities that more generic tools cannot.
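As a concrete illustration of the copy-and-corrupt strategy just described, here is a minimal mutation fuzzer sketch. The target parser is a hypothetical stand-in with a contrived defect; a real fuzzer would drive an external program and watch for crashes.

```python
import random

def mutate(data: bytes, flips: int = 8) -> bytes:
    """Copy a valid input and corrupt it by overwriting random bytes."""
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def target(data: bytes) -> None:
    """Hypothetical parser standing in for the program under test."""
    header, _, body = data.partition(b":")
    if header == b"LEN":
        int(body[:4])  # blows up when the fuzzer corrupts the length field

seed = b"LEN:0042;payload"  # valid input the fuzzer copies and corrupts
for i in range(1000):
    sample = mutate(seed)
    try:
        target(sample)
    except Exception as exc:
        # Each crash is a candidate robustness or security finding.
        print(f"crash on iteration {i}: {type(exc).__name__} for {sample!r}")
        break
```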

Byte Code, Assembler Code, and Binary Code Scanning

This is comparable to source code scanning but targets the software’s uninterpreted byte code, assembler code, or compiled binary executable before it is installed and executed. There are no security-specific byte code or binary code scanners. However, a handful of such tools do include searches for certain security-relevant errors and defects; see http://samate.nist.gov/index.php/Byte_Code_Scanners for a fairly comprehensive listing.

Automated Vulnerability Scanning

Automated vulnerability scanning of operating system and application level software involves the use of commercial or open source scanning tools that observe executing software systems for behaviors associated with attack patterns that target specific known vulnerabilities. Like virus scanners, vulnerability scanners rely on a repository of “signatures,” in this case indicating recognizable vulnerabilities. As with automated code review tools, many vulnerability scanners attempt to provide some mechanism for aggregating vulnerabilities, but they are still unable to detect complex vulnerabilities or vulnerabilities exposed only as a result of unpredictable (combinations of) attack patterns. In addition to signature-based scanning, most vulnerability scanners attempt to simulate the reconnaissance attack patterns used by attackers to “probe” software for exposed, exploitable vulnerabilities.
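The heart of such a scanner can be sketched in a few lines: match what the target reveals about itself against a repository of vulnerability signatures. The signature list, identifiers, and banner below are invented for illustration; real scanners ship large, frequently updated repositories and add active probing.

```python
import re

# Hypothetical signature repository: pattern -> known vulnerability.
SIGNATURES = {
    r"ExampleHTTPd/1\.[01]\b": "EX-001: remote buffer overflow in versions 1.0-1.1",
    r"X-Powered-By: ExampleCMS 2\.3": "EX-002: default administrative credentials",
}

def scan_banner(banner: str) -> list[str]:
    """Return the known vulnerabilities whose signatures match the banner."""
    return [vuln for pattern, vuln in SIGNATURES.items()
            if re.search(pattern, banner)]

# Response headers as they might be captured from a probed service.
banner = "Server: ExampleHTTPd/1.1\r\nX-Powered-By: ExampleCMS 2.3"
for finding in scan_banner(banner):
    # Signature matches are only candidates: the tester must still triage
    # them to weed out the false positives discussed below.
    print("potential vulnerability:", finding)
```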

Vulnerability scanners can be either network-based or host-based. Network-based scanners target the software from a remote platform across the network, while host-based scanners must be installed on the same host as the target. Host-based scanners generally perform more sophisticated analyses, such as verification of secure configurations, while network-based scanners more accurately simulate attacks that originate outside of the targeted system (i.e., the majority of attacks in most environments). Vulnerability scanning is fully automated, and the tools typically have the high false positive rates that typify most pattern-matching tools, as well as the high false negative rates that plague other signature-based tools. It is up to the tester to configure and calibrate the scanner to minimize both false positives and false negatives to the greatest possible extent, and to meaningfully interpret the results to identify real vulnerabilities and weaknesses. As with virus scanners and intrusion detection systems, the signature repositories of vulnerability scanners need to be updated frequently. For testers who wish to write their own exploits, the open source Metasploit Project (http://www.metasploit.com) publishes black hat information and tools for use by penetration testers, intrusion detection system signature developers, and researchers. The disclaimer on the Metasploit website is careful to state:

“This site was created to fill the gaps in the information publicly available on various exploitation techniques and to create a useful resource for exploit developers. The tools and information on this site are provided for legal security research and testing purposes only.”

Black Box Security Testing

Black box testing is generally used when the tester has limited knowledge of the system under test or when access to the source code is not available. Within the security test arena, black box testing is normally associated with activities that occur during the pre-deployment test phase (system test) or on a periodic basis after the system has been deployed.

Black box security tests are conducted to identify and resolve potential security vulnerabilities before deployment, or to periodically identify and resolve security issues within deployed systems. They can also be used as a “badness-ometer” [McGraw 04] to give an organization some idea of how bad the security of its system is. From a business perspective, organizations conduct black box security tests to conform to regulatory requirements, protect confidential and proprietary information, and protect the organization’s brand and reputation.

Fortunately, a significant number of black box test tools focus on application security issues, including but not limited to:

  • Input checking and validation
  • SQL injection attacks
  • Injection flaws
  • Session management issues
  • Cross-site scripting attacks
  • Buffer overflow vulnerabilities
  • Directory traversal attacks

Benefits and Limitations of Black Box Testing

As previously discussed, black box tests are generally conducted when the tester has limited knowledge of the system under test or when access to the source code is not available. On its own, black box testing is not a suitable substitute for security activities throughout the software development life cycle, such as the development of security-based requirements, risk assessments, security-based architectures, white box security tests, and code reviews. However, when used to complement these activities, or to test third-party applications or security-specific subsystems, black box test activities can give the development staff crucial insight into the system’s design and implementation.

Black box tests can help development and security personnel to:

• Identify implementation errors that were not discovered during code reviews, unit tests, or security white box tests
• Discover potential security issues resulting from boundary conditions that were difficult to identify and understand during the design and implementation phases
• Uncover security issues resulting from incorrect product builds (e.g., old or missing modules/files)
• Detect security issues that arise as a result of interaction with underlying environment (e.g., improper configuration files, unhardened OS and applications)

White Box Techniques for Security Testing

“White box” tests and analyses, by contrast with “black box” tests and analyses, are performed on the source code. Specific types of white box analyses and tests include:

Static Analysis

Also known as “code review,” static analysis examines source code before it is compiled in order to detect coding errors, insecure coding constructs, and other indicators of security vulnerabilities or weaknesses that are detectable at the source code level. Static analyses can be manual or automated. In a manual analysis, the reviewer inspects the source code without the assistance of tools.

In an automated analysis, a tool (or tools) scans the code to locate specific “problem” patterns (text strings) that the analyst defines for it via programming or configuration, and then highlights or flags them. This enables the reviewer to narrow the focus of his/her manual code inspection to those areas of the code where the highlighted or flagged patterns appear in the scanner’s output.
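A toy version of that scanning step appears below. The pattern set is an illustrative assumption (a few classically dangerous C constructs); production tools use far richer rule sets and semantic analysis rather than plain text matching.

```python
import re
import sys

# Illustrative "problem" patterns an analyst might configure.
PATTERNS = {
    r"\bgets\s*\(": "gets() is unbounded; prefer fgets()",
    r"\bstrcpy\s*\(": "strcpy() may overflow; prefer a bounded copy",
    r"\bsystem\s*\(": "system() with tainted input enables command injection",
}

def scan(path: str) -> None:
    """Flag source lines matching any configured pattern for manual review."""
    with open(path, encoding="utf-8", errors="replace") as src:
        for lineno, line in enumerate(src, start=1):
            for pattern, advice in PATTERNS.items():
                if re.search(pattern, line):
                    print(f"{path}:{lineno}: {advice}")
                    print(f"    {line.rstrip()}")

if __name__ == "__main__":
    for path in sys.argv[1:]:  # e.g., python scan.py module.c
        scan(path)
```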

Direct Code Analysis

Direct code analysis extends static analysis by using tools that focus not on finding individual errors but on verifying the code’s overall conformance to a set of predefined properties, which can include security properties such as noninterference and separability, persistent BNDC, noninference, forward correctability, and nondeducibility of outputs.

Property-Based Testing

The purpose of property-based testing is to establish formal validation results through testing. To validate that a program satisfies a property, the property must hold whenever the program is executed. Property-based testing assumes that the specified property captures everything of interest in the program and assumes that the completeness of testing can be measured structurally in terms of source code. The testing only validates the specified property, using the property’s specification to guide dynamic analysis of the program. Information derived from the specification determines which points in the program need to be tested and whether each test execution is correct. A metric known as Iterative Contexts Coverage uses these test execution points to determine when testing is complete. Checking the correctness of each execution together with a description of all the relevant executions results in the validation of the program with respect to the property being tested, thus validating that the final product is free of any flaws specific to that property.
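The formal technique above relies on specification-guided instrumentation and coverage metrics, but its core idea, checking that a specified property holds on every observed execution, can be sketched simply. The function under test and the property here are illustrative assumptions, not the formal machinery the paragraph describes.

```python
import random

def dedupe(items: list[int]) -> list[int]:
    """Function under test (illustrative)."""
    return list(dict.fromkeys(items))

def property_holds(inp: list[int], out: list[int]) -> bool:
    """Specified property: no duplicates in the output, and the output
    contains exactly the values that appeared in the input."""
    return len(out) == len(set(out)) and set(out) == set(inp)

# Drive many executions and check the property on each one; a single
# violation falsifies the program with respect to the property.
for _ in range(10_000):
    inp = [random.randrange(10) for _ in range(random.randrange(20))]
    assert property_holds(inp, dedupe(inp)), f"property violated for {inp!r}"

print("property held on all observed executions")
```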

Source Code Fault Injection

This is a form of dynamic analysis in which the source code is “instrumented” by inserting changes, then compiling and executing the instrumented code to observe the changes in state and behavior that emerge when the instrumented portions of code are executed. In this way, the tester can determine and even quantify how the software reacts when it is forced into anomalous states, such as those triggered by intentional faults. This technique has proved particularly useful for detecting the incorrect use of pointers and arrays, and the presence of dangerous calls and race conditions. Fault injection is a complex testing process and thus tends to be limited to code that requires very high assurance.

Fault Propagation Analysis

This involves two techniques for fault injection of source code: extended propagation analysis and interface propagation analysis. The objective is not only to observe individual state changes that result from a given fault but to trace how those state changes propagate throughout a fault tree that has been generated from the program’s source code. Extended propagation analysis entails injecting a fault into the fault tree and then tracing how the fault propagates through the tree. The tester then extrapolates outward to predict the impact a particular fault may have on the behavior of the software module or component, and ultimately the system, as a whole. In interface propagation analysis, the tester perturbs the states that propagate via the interfaces between the module or component and its environment. To do this, the tester injects anomalies into the data feeds between the two levels of components and then watches to see how the resulting faults propagate and whether any new anomalies result. Interface propagation analysis enables the tester to determine how a failure in one component may affect its neighboring components.

Pedigree Analysis

While not a security testing technique in itself, the detection of pedigree indicators in open source code can be helpful in drawing attention to the presence of components that have known vulnerabilities, pinpointing them as high-risk targets in need of additional testing. This is a fairly new area of code analysis that was sparked by concerns regarding open source licensing and intellectual property violations.

Dynamic Analysis of Source Code

Dynamic analysis involves both the source code and the binary executable generated from it. The compiled executable is run and “fed” a set of sample inputs while the reviewer monitors and analyzes the data (variables) the program produces as a result. With this better understanding of how the program behaves, the analyst can use a binary-to-source map to trace the location in the source code that corresponds to each point of execution in the running program, and more effectively locate faults, failures, and vulnerabilities. Two concepts in dynamic analysis are:

  1. Coverage concept analysis
  2. Frequency spectrum analysis

Coverage concept analysis attempts to produce “dynamic control flow invariants” for a set of executions, which can be compared with statically derived invariants in order to identify desirable changes to the test suite that will enable it to produce better test results.

Frequency spectrum analysis counts the number of executions of each path through each function during a single run of the program. The reviewer can then compare and contrast these separate program parts in terms of higher versus lower frequency, the similarity of frequencies, or specific frequencies.

Together, these analyses reveal interactions between different parts of the program and dependencies among those parts, and they allow the developer to look for specific patterns in the program’s execution, such as uncaught exceptions, assert failures, dynamic memory errors, and security problems. A number of dynamic analysis tools have been built to elicit or verify system-specific properties in the source code, including call sequences and data invariants.
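In the spirit of frequency spectrum analysis, the sketch below counts how often each function executes during a single run. The traced program is an illustrative stand-in; real tools map such counts back to source locations.

```python
import sys
from collections import Counter

call_counts = Counter()

def tracer(frame, event, arg):
    # The interpreter invokes this on every function call; count each one.
    if event == "call":
        call_counts[frame.f_code.co_name] += 1
    return None  # no per-line tracing needed for frequency counts

def helper(n):
    return n * 2

def main():
    return sum(helper(i) for i in range(100))

sys.settrace(tracer)
main()
sys.settrace(None)

# The reviewer compares program parts by execution frequency.
for name, count in call_counts.most_common():
    print(f"{name}: {count}")
```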

References

Paul, M. Assuring Software Security Through Testing: White, Black and Somewhere in Between. (ISC)². https://www.isc2.org/uploadedFiles/(ISC)2_Public_Content/Certification_Programs/CSSLP/Software%20Security%20Through%20Testing.pdf

Why Unit Testing. Agitar. http://www.agitar.com/solutions/why_unit_testing.html

Software Security Touchpoints. http://www.swsec.com/resources/touchpoints/

State-of-the-Art Report (SOAR). (2007, July 31). Information Assurance Technology Analysis Center (IATAC) / Data and Analysis Center for Software (DACS).

Gu Tian-yang, Shi Yin-sheng, & Fang You-yuan. (2010). Research on Software Security Testing. World Academy of Science, Engineering and Technology, 69.

OWASP Testing Guide v3: Table of Contents. https://www.owasp.org/index.php/OWASP_Testing_Guide_v3_Table_of_Contents

Choosing the right Software development life cycle model

Selecting a Software Development Life Cycle (SDLC) methodology is a challenging task for many organizations. What tends to make it challenging is that few organizations know what criteria to use in selecting a methodology that will add value to the organization, and fewer still understand that a methodology might apply to more than one lifecycle model. Before considering a framework for selecting a given SDLC methodology, we need to define the different types and illustrate the advantages and disadvantages of those models (please see Software Development Life Cycle Models and Methodologies).

How to select the right SDLC

Selecting the right SDLC is a process in itself that an organization can implement internally or bring in consultants for. The following steps lead to the right selection:

STEP 1: Learn about the SDLC Models

SDLC models differ in their usage, advantages, and disadvantages. In order to select the right SDLC, one must have experience with, and be familiar with, the SDLC models from which the choice will be made.

STEP 2: Assess the needs of Stakeholders

We must study the business domain, user requirements, business priorities, and technology constraints to be able to choose the right SDLC against the selection criteria.

STEP 3: Define the criteria

Some of the selection criteria or questions that you may use to select an SDLC are:

  • Is the SDLC appropriate for the size of our team and their skills?
  • Is the SDLC appropriate for the technology selected to implement the solution?
  • Is the SDLC appropriate for the clients’ and stakeholders’ needs and priorities?
  • Is the SDLC appropriate for the geographical situation (co-located or geographically dispersed team)?
  • Is the SDLC appropriate for the size and complexity of our software?
  • Is the SDLC appropriate for the type of projects we do?
  • Is the SDLC appropriate for our engineering capability?

What are the criteria?

Here are my recommended criteria; what will yours be? The table below rates each lifecycle model against each factor.

| Factor | Waterfall | V-Shaped | Evolutionary Prototyping | Spiral | Iterative and Incremental | Agile Methodologies |
|---|---|---|---|---|---|---|
| Unclear user requirements | Poor | Poor | Good | Excellent | Good | Excellent |
| Unfamiliar technology | Poor | Poor | Excellent | Excellent | Good | Poor |
| Complex system | Good | Good | Excellent | Excellent | Good | Poor |
| Reliable system | Good | Good | Poor | Excellent | Good | Good |
| Short time schedule | Poor | Poor | Good | Excellent | Excellent | Excellent |
| Strong project management | Excellent | Excellent | Excellent | Excellent | Excellent | Excellent |
| Cost limitation | Poor | Poor | Poor | Poor | Excellent | Excellent |
| Visibility of stakeholders | Good | Good | Excellent | Excellent | Good | Excellent |
| Skills limitation | Good | Good | Poor | Poor | Good | Poor |
| Documentation | Excellent | Excellent | Good | Good | Excellent | Poor |
| Component reusability | Excellent | Excellent | Poor | Poor | Excellent | Poor |
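One hedged way to operationalize this matrix is a simple weighted score: map the ratings to numbers, weight each factor by how much it matters to your project, and total the results. The subset of factors, the numeric mapping, and the weights below are illustrative assumptions, not recommendations.

```python
# Map the table's ratings to numeric scores (an illustrative assumption).
RATING = {"Poor": 1, "Good": 2, "Excellent": 3}

# A subset of the matrix above; extend with the remaining factors and models.
MATRIX = {
    "Waterfall": {"Unclear user requirements": "Poor",
                  "Short time schedule": "Poor",
                  "Documentation": "Excellent"},
    "Spiral":    {"Unclear user requirements": "Excellent",
                  "Short time schedule": "Excellent",
                  "Documentation": "Good"},
    "Agile":     {"Unclear user requirements": "Excellent",
                  "Short time schedule": "Excellent",
                  "Documentation": "Poor"},
}

# How much each factor matters to this particular project (illustrative).
WEIGHTS = {"Unclear user requirements": 3,
           "Short time schedule": 2,
           "Documentation": 1}

def score(model: str) -> int:
    """Weighted sum of a model's ratings over the project's priorities."""
    return sum(WEIGHTS[factor] * RATING[rating]
               for factor, rating in MATRIX[model].items())

for model in sorted(MATRIX, key=score, reverse=True):
    print(f"{model}: {score(model)}")
```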

References

Selecting a Software Development Life Cycle (SDLC) Methodology. (2012, March 18). Retrieved from http://www.smc-i.com/downloads/sdlc_methodology.pdf

Software Development Life Cycle Models. (2012, March). Retrieved from CodeBetter.com: http://codebetter.com/raymondlewallen/2005/07/13/software-development-life-cycle-models/