
Secure Design Best Practices

This chapter gives basic best practices for secure software design. Assessing how security is addressed in the design of a product is an important step in ensuring that the product reaches the intended security level, and it can be done at various points in the product lifecycle.

Information Technology (IT) commonly refers to the management of computer equipment, networks, software, and systems at the company level. IT is crucial for enabling people and machines to communicate and exchange information, both internally and externally, typically over the Internet.

Industrial Control Systems (ICS) are critical infrastructures and are converging with IT environments. Therefore, applying security measures at different layers (by adopting a Defense in Depth approach, see https://csrc.nist.gov/glossary/term/defense_in_depth) is paramount to protect such operational environments against possible threats.

Processes shall be employed to ensure that secure design best practices are documented and applied to the design process at the software level (see ABB Cyber Security Standards). These practices shall be periodically reviewed and updated.

If a system or component is targeting security certification, this chapter is not enough: the requirements in IEC 62443-4-1, IEC 62443-3-3, and IEC 62443-4-2 (see https://go.insideplus.abb.com/tools-and-services/abb-standards/global-standards/iec) must be used as input for product requirements and security design (see the International Cyber Security Standards in References).

Security Architecture and Design Review

Architecture descriptions shall be reviewed as described in the Architecture Review Guideline.

Software Design descriptions shall be reviewed. The roles according to the RACI for 'Detailed Design' as described on the Process page for SW Development shall be represented in the review.

The review shall check at least the following points:

  • Does the design follow/cover the intended security part of the architecture?
  • Does the design consider security best practices?
  • Does the design implement or follow the applicable mitigations described in the threat model?

Defense in Depth

The software architecture and design shall be made with the concept of "Defense in Depth" in mind (see the IEC 62443-4-1 "SD-2 - Defense in depth design" requirement).

This means, for example, that:

  • Each layer provides additional defense mechanisms.
  • Any layer can be compromised; therefore, secure design principles are applied to each layer.
  • The objective is to reduce the attack surface of the subsequent layers.

Example:

  • The TCP/IP stack could check for invalid packets.
  • The HTTP server could authenticate input.
  • Another layer could validate that the input is authorized before it is processed.
  • Audit logs shall be produced to detect administrative changes.
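The layered checks above can be sketched as follows. This is a minimal illustration with hypothetical function names, header names, and an in-memory audit log, not a real product interface:

```python
# Minimal sketch of layered defenses: each layer performs its own check,
# so compromising one layer does not bypass the others.
audit_log = []  # stands in for a real audit trail

def check_packet(raw: bytes) -> bytes:
    """Transport layer: reject obviously invalid packets."""
    if not raw or len(raw) > 4096:
        raise ValueError("invalid packet")
    return raw

def authenticate(headers: dict) -> str:
    """HTTP layer: require an authenticated user (token check is stubbed)."""
    user = headers.get("X-User")
    if not user:
        raise PermissionError("not authenticated")
    return user

def authorize(user: str, action: str, acl: dict) -> None:
    """Application layer: validate that the input is authorized."""
    if action not in acl.get(user, set()):
        raise PermissionError("not authorized")

def handle(raw: bytes, headers: dict, action: str, acl: dict) -> str:
    check_packet(raw)
    user = authenticate(headers)
    authorize(user, action, acl)
    audit_log.append((user, action))  # administrative changes are logged
    return f"{user} performed {action}"
```

A request only reaches the application logic after all preceding layers have passed, and each layer can be tested and hardened independently.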

Least privilege and functionality

The principle of least privilege (PoLP) means that the system shall grant users/software only the privileges necessary to perform the intended operation. By allowing a user/application only the minimum level of permissions or access needed, privileged access to sensitive data is reduced and critical assets are better protected. The benefits of this principle are a reduced attack surface of the product/system and a limited spread of malware over the network if a machine gets locally compromised.

The principle of least functionality means that the system shall only provide the minimum/base functionality and services by default.

Designing the product to execute with only the privileges and functionality it needs means that most of the applications/services on the underlying system should be given low privileges. This reduces the risk of damage to the system, since causing such damage would require an unauthorized escalation of privileges.
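As a rough illustration (the role and permission names below are hypothetical), least privilege can be expressed as a default-deny permission table that grants each role only what it needs:

```python
# Hypothetical sketch: each role is granted only the permissions needed
# for its intended operation; everything else is denied by default.
ROLE_PERMISSIONS = {
    "viewer":   {"read"},
    "operator": {"read", "operate"},
    "admin":    {"read", "configure", "manage_users"},
}

def is_allowed(role: str, permission: str) -> bool:
    # Default-deny: unknown roles and unlisted permissions get nothing.
    return permission in ROLE_PERMISSIONS.get(role, set())
```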

Need-to-Know

The ‘Need-to-Know’ principle (see Need-to-Know and Least Privilege principles at Cyber Management Alliance) is a special case of the principle of least privilege. This principle means that the system shall only give data/information access to the extent that is needed for the current user role. Sensitive information that is only relevant for other users shall be restricted to such authorized users and hidden/protected for the current user.

From isms.online: "The need-to-know principle can be enforced with user access controls and authorization procedures and its objective is to ensure that only authorized individuals gain access to information or systems necessary to undertake their duties."
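The principle can be illustrated by filtering records per role, so that sensitive fields are hidden from roles that have no need to know them (field and role names below are hypothetical):

```python
# Hypothetical sketch: return only the fields the current role needs;
# unknown roles see nothing at all.
VISIBLE_FIELDS = {
    "operator": {"tag", "value"},
    "admin":    {"tag", "value", "owner", "audit_id"},
}

def filter_record(record: dict, role: str) -> dict:
    allowed = VISIBLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}
```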

Separation of Duties

The principle means that critical activities shall be separated and assigned to different user roles to reduce the possibility that one single role has access privileges that can compromise the integrity of the entire process. For example, a person that defines the user permissions (e.g., a system administrator) should not be allowed to operate the plant.
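A minimal sketch of the idea, assuming a hypothetical change-approval workflow: a critical activity requires two different people in two different roles, so no single role can complete it alone.

```python
# Hypothetical sketch: the person who requests a change must not also
# approve it, and approval requires a distinct 'approver' role.
def approve_change(requested_by: str, approved_by: str, roles: dict) -> bool:
    if requested_by == approved_by:
        return False  # self-approval violates separation of duties
    return roles.get(approved_by) == "approver"
```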

Audit Logging

It should not be possible (e.g., for an attacker) to make changes anonymously or unnoticed in the system/product. Logging of changes is an important part of system defense-in-depth. The system shall support audit logging and generate audit events for at least:

  • Configuration changes
  • Backup and restore actions
  • Enabling/disabling functions/ports/interfaces
  • Successful log-in and failed log-in attempts

and shall protect them both at rest and in transit (audit events shall be read-only).
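As an illustration (the event categories and structure below are assumptions, not a product schema), audit events can be generated as structured records in an append-only log:

```python
# Hypothetical sketch: a structured, append-only audit log. Callers can
# record events and read them back, but cannot modify existing entries.
import json
import time

class AuditLog:
    def __init__(self):
        self._events = []  # append-only in this sketch

    def record(self, category: str, user: str, detail: str) -> None:
        self._events.append(json.dumps({
            "ts": time.time(),     # timestamp for forensics
            "category": category,  # e.g. "config-change", "login-failure"
            "user": user,
            "detail": detail,
        }))

    def events(self):
        return tuple(self._events)  # read-only view for consumers
```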

Note: the IEC 62443-3-3 and IEC 62443-4-2 standards describe several categories relevant to security (access control, request errors, operating system events...) for which audit records should be generated, what information the individual audit records shall include and a reference to the audit storage capacity (according to commonly recognized recommendations for log management and system configuration: see https://csrc.nist.gov/Projects/log-management).

Trust boundaries as part of the design

The ABB Architecture & Software Development teams are responsible for describing and analyzing proper boundaries within the system/product or even recommending them (see the Architecture process).

Boundaries are used to compartmentalize and isolate SW modules, so that cross-component communication, information leakage, and control cannot occur without proper authentication and/or authorization.

It shall be clearly documented what products, components, or sub-components are inside or outside a trust boundary. Threat models, attack surface analysis or any diagram for design shall show what other components/products/users the product communicates with (i.e., the product interface shall be clear). It shall also be clear if data is protected or not when crossing a trust boundary.

No debug ports

From Wikipedia: “A debug port is a chip-level diagnostic interface (like a computer port) included in an integrated circuit to aid design, fabrication, development, and debugging. A debug port is not necessary for end-use function and is often hidden or disabled in finished products.”

ABB highly recommends removing debug ports, headers, and traces used during development from production circuit boards, or documenting their presence and the need to protect them from unauthorized access.

No hardcoded credentials or backdoor accounts

From https://www.beyondtrust.com/resources/glossary/hardcoded-embedded-passwords : “Hardcoded passwords, also often referred to as embedded credentials, are plain text passwords or other secrets in source code. Password hardcoding refers to the practice of embedding plain text (non-encrypted) passwords and other secrets (SSH Keys, DevOps secrets, etc.) into the source code. Default, hardcoded passwords may be used across many of the same devices, applications, and systems, which helps simplify set up at scale, but at the same time, poses considerable cybersecurity risk.”

From https://www.mpirical.com/glossary/backdoor-accounts : “These are secret accounts installed on a machine to allow users, usually developers or administrators, to gain access to resources while bypassing usual authentication procedures. Backdoor accounts, if discovered, can be used by a malicious hacker to gain access to the machine or software.”

An ABB product or system must not have any hardcoded credentials or backdoor log-ins/accounts; all access shall be handled by the ordinary log-in/user account management intended for the end-user.
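A common way to avoid hardcoded credentials is to read secrets from the environment or a secret store at runtime, failing closed if none is provided. A minimal sketch, assuming a hypothetical DB_PASSWORD variable:

```python
# Hypothetical sketch: the secret is supplied at deployment time via the
# environment (or a secret store) instead of being embedded in source code.
import os

def get_db_password() -> str:
    password = os.environ.get("DB_PASSWORD")  # hypothetical variable name
    if password is None:
        # Fail closed: refuse to start rather than fall back to a default.
        raise RuntimeError("DB_PASSWORD is not set; refusing to start")
    return password
```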

Economy of mechanism

The likelihood of vulnerabilities increases with the complexity of the software's architectural design and code. Therefore, keeping the software design and implementation details simple reduces the attack surface of the software.

Keep the design as simple and small as possible.

Attack surface reduction

As part of the ABB Cyber Security Standards, we rely on Threat Models and the Attack Surface & Criticality Analyses (see 3BSE092114 Security Analysis Guideline for details) to analyze possible threats and remove or reduce the attack surface.

A particular analysis is done for the external interfaces of the product, since any interface that accepts external input or exposes programmatic functionality (e.g., a remotely accessible API) provides an entry point for an attacker to change or acquire a program control path, inject commands or arbitrary code for execution, or alter product data.

Input validation is important for safeguarding the interfaces of the product/system. A common way to attack a system is to supply invalid data, out-of-range data, escape characters, malformed data, etc., in order to crash or disturb the normal execution of the product/system.

See Input Validation in the Architectural Security Best Practices page.
See Data Validation Issues on the SW Development Security Best Practices page.

Input validation

All system/product interfaces shall have an input validation layer. The input validation shall only accept the minimum set of valid (e.g., valid range, syntax, and size) data/characters/message types etc. needed for the function. All other data shall be ignored/filtered/logged.

Input validation applies to all system/product interfaces. Some examples:

  • User Interface
  • Communication ports/protocols
  • Parsers
  • System/Product API
  • File input
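As an illustration (the tag syntax and value range below are assumptions, not product requirements), an input validation layer accepts only the minimum valid set and rejects everything else:

```python
# Hypothetical sketch: validate syntax, size, and range before the input
# reaches any application logic; anything else is rejected.
import re

TAG_PATTERN = re.compile(r"^[A-Za-z][A-Za-z0-9_]{0,31}$")  # assumed syntax

def validate_setpoint(tag: str, value: str) -> tuple:
    if not TAG_PATTERN.fullmatch(tag):
        raise ValueError(f"invalid tag syntax: {tag!r}")
    number = float(value)  # raises ValueError on malformed input
    if not (0.0 <= number <= 100.0):  # assumed valid range
        raise ValueError(f"value out of range: {number}")
    return tag, number
```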

Hardening

The principle of hardening is to keep everything closed/disabled by default, and only open/enable what is used. This applies to ports, interfaces, and software, but also covers malware prevention solutions, security policies, and network configuration. See References for details.
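The closed-by-default idea can be sketched as a configuration where every service starts disabled and only known services can be explicitly enabled (the service names below are hypothetical):

```python
# Hypothetical sketch: everything disabled by default; only known services
# can be enabled, and unknown services are silently ignored (stay off).
DEFAULT_CONFIG = {
    "ssh": False,      # remote shell disabled by default
    "web_ui": False,   # web interface disabled by default
    "opc_ua": False,   # protocol endpoint disabled by default
}

def effective_config(overrides: dict) -> dict:
    config = dict(DEFAULT_CONFIG)
    for key, value in overrides.items():
        if key in config:          # unknown services cannot be opened
            config[key] = bool(value)
    return config
```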

Mobile Code

From NIST: “Mobile code is software programs or parts of programs obtained from remote information systems, transmitted across a network, and executed on a local information system without explicit installation or execution by the recipient.”

Avoid designs that introduce mobile code (e.g., web technology with VBScript, JavaScript, Flash animations etc.).

If mobile code is received, for example from a web ‘server’ as part of your web ‘client’ application, consider the following requirements (see the IEC 62443-4-2 standard, https://go.insideplus.abb.com/tools-and-services/abb-standards/global-standards/iec):

  • Preventing the execution of mobile code
  • Requiring proper authentication and authorization for the origin of the code
  • Restricting mobile code transfer
  • Monitoring the use of mobile code

Secure communications

For ABB product/system internal communications, it is mandatory to use secure communications. If a communication protocol exists in both a secure and non-secure variant, the secure variant shall be chosen.

For example:

  • HTTPS instead of HTTP
  • SFTP instead of FTP
  • Enable security in OPC UA

If the product/system internal communication is based on a standard protocol that does not exist in a secure variant, then it is allowed to be used if it is part of the Default Exception list (see https://go.insideplus.abb.com/corporate-functions/research-and-development/cyber-security/standards/exceptions ).

The product/system can support non-secure communication to external devices. However, the communication must be disabled by default, and the end-user documentation shall state the risks of using the non-secure communication. If possible, the product/system shall also have a graphical indication/warning when non-secure communication is enabled.
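A minimal sketch of preferring the secure protocol variant, with the non-secure variant kept disabled unless explicitly enabled by the end user (the protocol mapping is illustrative):

```python
# Hypothetical sketch: when a secure variant exists, select it; the
# non-secure variant is only used if explicitly enabled.
SECURE_VARIANT = {"http": "https", "ftp": "sftp"}  # illustrative mapping

def select_protocol(requested: str, allow_insecure: bool = False) -> str:
    proto = requested.lower()
    if proto in SECURE_VARIANT and not allow_insecure:
        return SECURE_VARIANT[proto]  # upgrade to the secure variant
    # Either no secure variant exists, or non-secure communication was
    # explicitly enabled; a real product should also warn the user.
    return proto
```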

Using proven secure components/designs

During the design phase, the most suitable communication protocols, cryptographic algorithms, and authentication methods for fulfilling the security requirements shall be selected (see also using secure design patterns). Cryptographic algorithms shall be chosen based on commonly accepted security industry recommendations and guidelines (e.g., as recommended by NIST or defined in international standards).

Using secure design patterns

According to Wikipedia, “In software engineering, a software design pattern is a general, reusable solution to a commonly occurring problem within a given context in software design. Design patterns may be viewed as a structured approach to computer programming intermediate between the levels of a programming paradigm and a concrete algorithm.”

The Software Engineering Institute (SEI, https://www.sei.cmu.edu ) is one of many available public resources where you can find proper guidelines according to your implementation design (e.g., 'Secure Factory', 'Secure Chain of Responsibility', 'Secure Visitor' … secure versions of classic and well-known SW design patterns).

Containerization and Orchestration

From Red Hat: “Containerization is the packaging together of software code with all its necessary components like libraries, frameworks, and other dependencies so that they are isolated in their own "container". Container orchestration is about the automation of the deployment, management, scaling, and networking of containers.”

ABB released an internal guideline (Guideline on Containers and Container Orchestration Security) based on past experience assessing these technologies and on incident post-mortem analyses. It has subsequently been aligned with the public documents NSA/CISA Kubernetes Hardening Guidance (August 2022) and NIST Special Publication 800-190, Application Container Security Guide.

After an introduction to the technology, the guideline enumerates threats according to CISA, followed by risks and countermeasures according to NIST. The last sections contain tips and recommendations based on experience, with a more practical approach.

General design considerations

General considerations for both architectural and SW design perspectives:

  • Built-in security
    Choose new technologies with security built in (instead of older technology plus compensating measures).

  • Shared memory or file sharing
    Shared memory, or in general any software technique to share information, is an efficient means of passing data between programs but at the same time difficult to make secure with proper authorization mechanisms. Avoid designs that depend on shared memory.

  • Portable storage
    Avoid designs that depend on Portable storage (SD cards, USB disks etc.). The properties that make these devices portable also make them vulnerable to losses of physical control and network security breaches. Therefore they may be disabled in Industrial Control Systems.

Architectural Security Best Practices

This section organizes weaknesses according to common architectural security tactics. It is intended to assist architects in identifying potential mistakes that can be made when designing software.

See Architectural Security Best Practices.

Software Development Security Best Practices

This section organizes weaknesses around concepts that are frequently used or encountered in software development. This includes most of the aspects of the software development lifecycle including both architecture and implementation. Accordingly, this view can align closely with the perspectives of architects, developers, educators, and assessment vendors.

See SW Development Security Best Practices.

References

Owner: Cyber Security Team