# |
ID |
PAPER |
TAXONOMY |
LEVEL 0 |
LEVEL 1 |
LEVEL 2 |
LEVEL 3 |
LEVEL 4 |
METRIC |
KEY WORD |
DEFINITION |
SCALE |
SCOPE |
AUTOMATION |
MEASUREMENT |
WHAT? |
USABLE |
REASON |
|
|
|
1 |
2 |
|
YES |
Vulnerability metrics |
Measuring User Vulnerabilities |
Phishing Susceptibility |
- |
- |
False positives (FP) |
USER |
Percentage of genuine emails flagged as phishing emails |
RATIO |
USER |
|
DYNAMIC |
- |
NO |
Phishing affects only people. It exploits a user vulnerability. This evaluates the user, not the device |
|
HERE, THE ROWS WITH NO INFORMATION WERE DELETED |
|
2 |
2 |
|
YES |
Vulnerability metrics |
Measuring User Vulnerabilities |
Phishing Susceptibility |
- |
- |
False negatives (FN) |
USER |
Percentage of phishing emails detected as genuine emails |
RATIO |
USER |
|
DYNAMIC |
- |
NO |
Phishing affects only people. It exploits a user vulnerability. This evaluates the user, not the device |
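Both rates above can be computed from labelled classification outcomes; a minimal sketch in Python (function and variable names are mine, not from the paper):

```python
def phishing_rates(labels, predictions):
    """False-positive and false-negative rates for a phishing
    classifier (or a user acting as one).
    labels/predictions: 1 = phishing, 0 = genuine."""
    genuine = [p for l, p in zip(labels, predictions) if l == 0]
    phishing = [p for l, p in zip(labels, predictions) if l == 1]
    fp_rate = sum(genuine) / len(genuine)        # genuine flagged as phishing
    fn_rate = phishing.count(0) / len(phishing)  # phishing passed as genuine
    return fp_rate, fn_rate

# Example: 4 genuine emails (one flagged) and 4 phishing emails (one missed)
fp, fn = phishing_rates([0, 0, 0, 0, 1, 1, 1, 1],
                        [1, 0, 0, 0, 1, 1, 1, 0])
# fp = 0.25, fn = 0.25
```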
|
HERE, THE ROWS WITH "SIMILAR TO X" WERE DELETED |
|
3 |
2 |
|
YES |
Vulnerability metrics |
Measuring User Vulnerabilities |
Malware Susceptibility |
- |
- |
User’s online behavior |
USER |
|
RATIO |
USER |
|
DYNAMIC |
- |
NO |
Phishing affects only people. It exploits a user vulnerability. This evaluates the user, not the device. This is only a suggestion. Not defined |
|
* |
This information was inferred by me |
4 |
2 |
|
YES |
Vulnerability metrics |
Measuring User Vulnerabilities |
Malware Susceptibility |
- |
- |
Users’ susceptibility to attacks |
USER |
|
RATIO |
USER |
|
UNKNOWN |
- |
NO |
This evaluates the user, not the device. This is only a suggestion. Not defined |
|
HW + SW |
DEVICE |
5 |
2 |
|
YES |
Vulnerability metrics |
Measuring User Vulnerabilities |
Password Vulnerabilities |
- |
- |
Entropy |
PASSWORD |
|
|
USER |
AUTOMATIC |
STATIC |
- |
YES |
This could be used with keys instead of passwords to measure randomness |
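For a uniformly random secret, entropy is H = L · log2(N) for length L over an alphabet of N symbols, which applies equally to passwords and to keys; a small sketch (the alphabet sizes are illustrative):

```python
import math

def secret_entropy_bits(length, alphabet_size):
    """Entropy in bits of a secret of `length` symbols drawn
    uniformly at random from an alphabet of `alphabet_size` symbols."""
    return length * math.log2(alphabet_size)

# An 8-character password over the 94 printable ASCII symbols
# versus a 128-bit random key:
print(round(secret_entropy_bits(8, 94), 1))   # 52.4 bits
print(round(secret_entropy_bits(128, 2), 1))  # 128.0 bits
```

Note that this assumes uniform random choice; human-chosen passwords have far less entropy than the formula suggests.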
|
|
|
6 |
2 |
|
YES |
Vulnerability metrics |
Measuring User Vulnerabilities |
Password Vulnerabilities |
- |
- |
Password guessability (statistical / parameterized) |
PASSWORD |
This metric measures the guessability of a set of passwords (rather than of individual passwords) by an “idealized” attacker under the assumption of a perfect guess / This metric measures the number of guesses an attacker needs to make via a particular cracking algorithm (i.e., a particular threat model) and training data before recovering a particular password. Different password-cracking algorithms lead to different results |
RATIO / ABSOLUTE |
USER |
AUTOMATIC |
STATIC |
- |
NO |
Passwords are for people. Devices use keys (which are much larger) |
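The statistical variant can be sketched as ranking passwords by frequency in training data and reporting the rank at which an idealized attacker, guessing in descending frequency order, would recover the password (the tiny training list is invented for illustration):

```python
from collections import Counter

def guess_number(password, training_passwords):
    """Rank of `password` when an idealized attacker guesses
    candidates in descending order of training-data frequency.
    Returns None if the password never appears in the training data."""
    ranked = [p for p, _ in Counter(training_passwords).most_common()]
    return ranked.index(password) + 1 if password in ranked else None

training = ["123456", "123456", "123456", "password", "password", "qwerty"]
print(guess_number("password", training))     # 2
print(guess_number("Tr0ub4dor&3", training))  # None (beyond the model)
```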
|
|
|
7 |
2 |
|
YES |
Vulnerability metrics |
Measuring User Vulnerabilities |
Password Vulnerabilities |
- |
- |
Password strength meter |
PASSWORD |
This metric measures the password strength via an estimated effort to guess the password, while considering factors such as the character set (e.g., special symbols are required or not), password length, whitespace permissibility, password entropy, and blacklisted passwords being prevented or not. One variant metric is adaptive password strength, which is used to improve the accuracy of strength estimation |
ORDINAL |
USER |
AUTOMATIC |
STATIC |
- |
NO |
Passwords are for people. Devices use keys (which are much larger) |
|
|
|
8 |
2 |
|
YES |
Vulnerability metrics |
Measuring Interface-Induced Vulnerabilities |
- |
- |
- |
Attack surface |
SURFACE |
It aims to measure the ways in which an attacker can compromise targeted software |
ABSOLUTE |
DEVICE |
|
STATIC |
DEVICE |
YES |
|
|
|
|
9 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Temporal Attributes of Vulnerabilities |
Evolution of vulnerabilities |
- |
Historical vulnerability metric |
VULNERABILITY |
It measures the degree that a system is vulnerable (i.e., frequency of vulnerabilities) in the past |
RATIO / ABSOLUTE / DISTRIBUTION |
DEVICE |
|
STATIC |
|
YES |
This could be part of a checklist of tasks, to check whether any vulnerability affects the device |
|
|
|
10 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Temporal Attributes of Vulnerabilities |
Evolution of vulnerabilities |
- |
Historically exploited vulnerability metric |
VULNERABILITY |
It measures the number of vulnerabilities exploited in the past |
RATIO / ABSOLUTE / DISTRIBUTION |
DEVICE |
|
STATIC |
|
YES |
This could be part of a checklist of tasks, to check whether any vulnerability affects the device |
|
|
|
11 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Temporal Attributes of Vulnerabilities |
Evolution of vulnerabilities |
- |
Future vulnerability metric |
VULNERABILITY |
It measures the number of vulnerabilities that will be discovered during a future period of time |
RATIO / ABSOLUTE / DISTRIBUTION |
DEVICE |
|
STATIC |
|
NO |
If this has to do with statistics and probabilities, I think it is better not to use it |
|
|
|
12 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Temporal Attributes of Vulnerabilities |
Evolution of vulnerabilities |
- |
Future exploited vulnerability metric |
VULNERABILITY |
It measures the number of vulnerabilities that will be exploited during a future period of time |
RATIO / ABSOLUTE / DISTRIBUTION |
DEVICE |
|
STATIC |
|
NO |
If this has to do with statistics and probabilities, I think it is better not to use it |
|
|
|
13 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Temporal Attributes of Vulnerabilities |
Evolution of vulnerabilities |
- |
Tendency-to-be-exploited metric |
|
It measures the tendency that a vulnerability may be exploited, where the “tendency” may be derived from information sources such as posts on Twitter before vulnerability disclosures |
RATIO / ABSOLUTE / DISTRIBUTION |
DEVICE |
|
STATIC |
|
¿? |
This research work might not be part of a security evaluation, but could be applied if so |
|
|
|
14 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Temporal Attributes of Vulnerabilities |
Vulnerability lifetime |
- |
Vulnerability lifetime |
VULNERABILITY |
Client-end vulnerabilities / Server-end vulnerabilities / Cloud-end vulnerabilities |
ABSOLUTE |
NETWORK |
|
STATIC |
|
NO |
Clearly related to networks |
|
|
|
15 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of Individual Software Vulnerabilities |
CVSS Base |
Exploitability Metrics |
Attack vector |
|
Describes whether a vulnerability can be exploited remotely, adjacently, locally, or physically (i.e., attacker must have physical access to the computer) |
NOMINAL |
SOFTWARE |
|
STATIC |
|
YES |
For calculating new vulnerabilities, or to know each dimension of known vulnerabilities |
|
|
|
16 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of Individual Software Vulnerabilities |
CVSS Base |
Exploitability Metrics |
Attack complexity |
|
Describes the conditions that must hold before an attacker can exploit the vulnerability, such as low or high |
ORDINAL |
SOFTWARE |
|
STATIC |
|
YES |
For calculating new vulnerabilities, or to know each dimension of known vulnerabilities |
|
|
|
17 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of Individual Software Vulnerabilities |
CVSS Base |
Exploitability Metrics |
Privilege required |
|
Describes the level of privileges that an attacker must have in order to exploit a vulnerability, such as none, low, or high |
ORDINAL |
SOFTWARE |
|
STATIC |
|
YES |
For calculating new vulnerabilities, or to know each dimension of known vulnerabilities |
|
|
|
18 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of Individual Software Vulnerabilities |
CVSS Base |
Exploitability Metrics |
User interaction |
|
Describes whether the exploitation of a vulnerability requires the participation of a user (other than the attacker), such as none or required |
NOMINAL |
SOFTWARE |
|
STATIC |
|
YES |
For calculating new vulnerabilities, or to know each dimension of known vulnerabilities |
|
|
|
19 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of Individual Software Vulnerabilities |
CVSS Base |
Exploitability Metrics |
Authorization scope |
|
Describes whether or not a vulnerability has an impact on resources beyond its privileges (e.g., sandbox or virtual machine), such as unchanged or changed |
NOMINAL |
SOFTWARE |
|
STATIC |
|
YES |
For calculating new vulnerabilities, or to know each dimension of known vulnerabilities |
|
|
|
20 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of Individual Software Vulnerabilities |
CVSS Base |
Impact Metrics |
Confidentiality impact |
CONFIDENTIALITY |
The impact of a successful exploitation of a vulnerability in terms of confidentiality such as none, low, high |
ORDINAL |
SOFTWARE |
|
STATIC |
|
YES |
For calculating new vulnerabilities, or to know each dimension of known vulnerabilities |
|
|
|
21 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of Individual Software Vulnerabilities |
CVSS Base |
Impact Metrics |
Integrity impact |
|
The impact of a successful exploitation of a vulnerability in terms of integrity such as none, low, high |
ORDINAL |
SOFTWARE |
|
STATIC |
|
YES |
For calculating new vulnerabilities, or to know each dimension of known vulnerabilities |
|
|
|
22 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of Individual Software Vulnerabilities |
CVSS Base |
Impact Metrics |
Availability impact |
AVAILABILITY |
The impact of a successful exploitation of a vulnerability in terms of availability, such as none, low, high |
ORDINAL |
SOFTWARE |
|
STATIC |
|
YES |
For calculating new vulnerabilities, or to know each dimension of known vulnerabilities |
|
|
|
23 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of Individual Software Vulnerabilities |
CVSS Base |
CVSS Temporal Metrics |
Exploit code maturity |
|
The likelihood a vulnerability can be attacked based on the current exploitation techniques, such as undefined, unproven, proof-of-concept, functional, or high |
ORDINAL |
SOFTWARE |
|
STATIC |
|
YES |
For calculating new vulnerabilities, or to know each dimension of known vulnerabilities |
|
|
|
24 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of Individual Software Vulnerabilities |
CVSS Base |
CVSS Temporal Metrics |
Remediation level |
|
Describes whether or not a remediation method is available for a given vulnerability, such as undefined, unavailable, workaround, temporary fix, or official fix |
NOMINAL |
SOFTWARE |
|
STATIC |
|
YES |
For calculating new vulnerabilities, or to know each dimension of known vulnerabilities |
|
|
|
25 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of Individual Software Vulnerabilities |
CVSS Base |
CVSS Temporal Metrics |
Report confidence |
|
Measures the level of confidence for a given vulnerability as well as known technical details, such as unknown, reasonable, or confirmed |
NOMINAL |
SOFTWARE |
|
STATIC |
|
YES |
For calculating new vulnerabilities, or to know each dimension of known vulnerabilities |
|
|
|
26 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of Individual Software Vulnerabilities |
CVSS Base |
CVSS Environmental Metrics |
Security requirement |
|
Describes environment-dependent security requirements in terms of confidentiality, integrity, and availability, such as not defined, low, medium, or high |
NOMINAL |
SOFTWARE |
|
STATIC |
|
YES |
For calculating new vulnerabilities, or to know each dimension of known vulnerabilities |
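The CVSS dimensions listed above combine into a single numeric score; as a concrete illustration, here is a sketch of the CVSS v3.1 base-score computation restricted to the scope-unchanged case (the weights and equations follow the public CVSS v3.1 specification):

```python
import math

# CVSS v3.1 numeric weights (scope unchanged)
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # attack vector
AC = {"L": 0.77, "H": 0.44}                        # attack complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # privileges required
UI = {"N": 0.85, "R": 0.62}                        # user interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # C/I/A impact values

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score, scope unchanged."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # round UP to one decimal, per the specification
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (critical)
print(base_score("N", "L", "N", "N", "H", "H", "H"))
```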
|
|
|
27 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of a Collection of Vulnerabilities |
Deterministic Severity Metrics (attack graphs) |
Topology metrics |
Depth metric |
|
Ratio of the diameter of a domain-level attack graph over the diameter in the most secure case, implying that the larger the diameter, the more secure the network |
- |
NETWORK |
|
STATIC |
|
NO |
It is related to the topological properties of attack graphs |
|
|
|
28 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of a Collection of Vulnerabilities |
Deterministic Severity Metrics (attack graphs) |
Topology metrics |
Existence, number, and lengths of attack paths metrics |
|
It uses the attributes of attack paths from an initial state to the goal state. These metrics can be used to compare two attacks |
- |
NETWORK |
|
STATIC |
|
NO |
It is related to the topological properties of attack graphs |
|
|
|
29 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of a Collection of Vulnerabilities |
Deterministic Severity Metrics (attack graphs) |
Effort metrics |
Necessary defense metric |
|
It estimates a minimal set of defense countermeasures necessary for thwarting a certain attack |
- |
DEVICE |
|
STATIC |
|
YES |
This relates to the defenses that are necessary to implement so that the system is secure against a certain vulnerability |
|
|
|
30 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of a Collection of Vulnerabilities |
Deterministic Severity Metrics (attack graphs) |
Effort metrics |
Effort-to-security-failure metric |
EFFORT |
It measures an attacker’s effort to reach its goal state |
- |
DEVICE |
|
DYNAMIC* |
|
YES |
Although it might be difficult to estimate, this could be also included in the evaluation process |
|
|
|
31 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of a Collection of Vulnerabilities |
Deterministic Severity Metrics (attack graphs) |
Effort metrics |
Weakest adversary metric |
EFFORT |
It estimates minimum adversary capabilities required to achieve an attack goal |
- |
DEVICE |
|
DYNAMIC* |
|
¿? |
If the effort can be calculated in an objective manner, this could be used in embedded systems |
|
|
|
32 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of a Collection of Vulnerabilities |
Deterministic Severity Metrics (attack graphs) |
Effort metrics |
k-zero-day-safety metric |
|
It measures a number of zero-day vulnerabilities for an attacker to compromise a target |
- |
DEVICE |
|
STATIC |
|
NO |
This has to do with probabilities. It can be applied, but I will not use it |
|
|
|
33 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of a Collection of Vulnerabilities |
Probabilistic Severity Metrics |
Metrics treating CVSS scores as atomic parameters |
Likelihood of exploitation |
|
It measures the probability that an exploit will be executed by an attacker with certain capabilities |
- |
SOFTWARE |
|
STATIC |
|
¿? |
This is related with statistics and risk assessment. It could be applied |
|
|
|
34 |
2 |
|
YES |
Vulnerability metrics |
Measuring Software Vulnerabilities |
Severity of a Collection of Vulnerabilities |
Probabilistic Severity Metrics |
Metrics not treating CVSS scores as atomic parameters |
Effective base metric / scope |
|
Given an attack graph and a subset of exploits, the effective base metric aims to adjust the CVSS base metric by taking the highest value of the ancestors of an exploit to reflect the worst-case scenario, while the effective base score metric is calculated based on the effective base metric |
- |
SOFTWARE |
|
STATIC |
|
- |
- |
|
|
|
35 |
2 |
|
YES |
Defense metrics |
Strength of Preventive Defenses |
Metrics for Blacklisting |
- |
- |
Reaction time metric |
TIME |
It captures the delay between the observation of the malicious entity at time t1 and the blacklisting of the malicious entity at time t2 (i.e., t2 − t1) |
ABSOLUTE |
DEVICE |
|
DYNAMIC |
|
YES |
If the device has any kind of detection mechanism, this could be used. Detection and reaction might have different times |
|
|
|
36 |
2 |
|
YES |
Defense metrics |
Strength of Preventive Defenses |
Metrics for Blacklisting |
- |
- |
Coverage metric |
COVERAGE |
It estimates the portion of blacklisted malicious players |
ORDINAL |
DEVICE |
|
STATIC |
|
YES |
This could be used, but the device has to have a blacklist |
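The two blacklisting metrics above are straightforward to compute once the device exposes its blacklist and detection timestamps; a sketch (function names and the example hostnames are illustrative, not from the paper):

```python
def reaction_time(t_observed, t_blacklisted):
    """Delay between first observation of a malicious entity
    and its appearance on the blacklist (t2 - t1)."""
    return t_blacklisted - t_observed

def coverage(blacklist, known_malicious):
    """Fraction of known malicious entities that are blacklisted."""
    return len(set(blacklist) & set(known_malicious)) / len(set(known_malicious))

print(reaction_time(100, 160))  # 60 time units
print(coverage({"evil.example", "bad.example"},
               {"evil.example", "bad.example", "worse.example"}))  # ~0.667
```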
|
|
|
37 |
2 |
|
YES |
Defense metrics |
Strength of Preventive Defenses |
Metrics for DEP |
- |
- |
NO METRIC IS DEFINED |
|
No metrics have been defined to measure the effectiveness of DEP. The effectiveness of DEP can be measured based on the probability of being compromised by a certain attack A(t) over all possible classes of attacks |
- |
DEVICE |
|
- |
|
- |
No metric is defined |
|
|
|
38 |
2 |
|
YES |
Defense metrics |
Strength of Preventive Defenses |
Metrics for CFI |
- |
- |
Average indirect target reduction |
|
It counts the overall reduction of targets for any indirect control-flow transfer. It measures the overall reduction in terms of the number of targets exploitable by the attacker where smaller targets are more secure |
ABSOLUTE |
UNKNOWN |
|
UNKNOWN |
|
UNKNOWN |
UNKNOWN |
|
|
|
39 |
2 |
|
YES |
Defense metrics |
Strength of Preventive Defenses |
Metrics for CFI |
- |
- |
Average target size |
|
Ratio between the size of the largest target and the number of targets |
- |
UNKNOWN |
|
UNKNOWN |
|
UNKNOWN |
UNKNOWN |
|
|
|
40 |
2 |
|
YES |
Defense metrics |
Strength of Preventive Defenses |
Metrics for CFI |
- |
- |
Evasion resistance |
|
It is measured against control flow bending attacks, reflecting the effort (or premises) that an attacker must make (or satisfy) for evading the CFI scheme |
- |
UNKNOWN |
|
UNKNOWN |
|
UNKNOWN |
UNKNOWN |
|
|
|
41 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Reactive Defenses |
Metrics for Monitoring |
- |
- |
Coverage metric |
COVERAGE |
It measures the fraction of events detectable by a specific sensor deployment, reflecting a defender’s need in monitoring events |
- |
DEVICE |
|
DYNAMIC |
|
UNKNOWN |
UNKNOWN |
|
|
|
42 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Reactive Defenses |
Metrics for Monitoring |
- |
- |
Redundancy metric |
|
It estimates the amount of evidence provided by a specific sensor deployment to detect an event |
- |
DEVICE |
|
DYNAMIC |
|
YES |
This can be applied to devices as a whole, to check any mechanism of intrusion detection (mostly, anti-tampering and software) |
|
|
|
43 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Reactive Defenses |
Metrics for Monitoring |
- |
- |
Confidence metric |
|
It measures how well-deployed sensors detect an event in the presence of compromised sensors |
- |
DEVICE |
|
DYNAMIC |
|
YES |
This can be applied to devices as a whole, to check any mechanism of intrusion detection (mostly, anti-tampering and software) |
|
|
|
44 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Reactive Defenses |
Metrics for Monitoring |
- |
- |
Cost metric |
COST |
It measures the amount of resources consumed by deploying sensors including the cost for operating and maintaining sensors |
- |
ORGANISATION |
|
STATIC |
|
NO |
This relates to the cost of a specific countermeasure. It is more likely to be applied in the early stages of development |
|
|
|
45 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Reactive Defenses |
Metrics for Detection Mechanisms |
Individual Strength of Defense Mechanisms |
- |
Detection time |
TIME |
For instrument-based attack detection, this metric is used to measure the delay between the time t0 at which a compromised computer sends its first scan packet and the time t at which a scan packet is observed by the instrument |
- |
UNKNOWN |
|
UNKNOWN |
|
UNKNOWN |
UNKNOWN |
|
|
|
46 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Reactive Defenses |
Metrics for Detection Mechanisms |
Individual Strength of Defense Mechanisms |
Intrusion detection metrics |
True-positive rate |
INTRUSION |
It is the probability that an intrusion is detected as an attack |
- |
NETWORK |
|
DYNAMIC |
|
NO |
This is clearly a network-oriented metric |
|
|
|
47 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Reactive Defenses |
Metrics for Detection Mechanisms |
Individual Strength of Defense Mechanisms |
Intrusion detection metrics |
False-negative rate |
INTRUSION |
It is the probability that an intrusion is not detected as an attack |
- |
NETWORK |
|
DYNAMIC |
|
NO |
This is clearly a network-oriented metric |
|
|
|
48 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Reactive Defenses |
Metrics for Detection Mechanisms |
Individual Strength of Defense Mechanisms |
Intrusion detection metrics |
True-negative rate |
INTRUSION |
It is the probability that a non-intrusion is not detected as an attack |
- |
NETWORK |
|
DYNAMIC |
|
NO |
This is clearly a network-oriented metric |
|
|
|
49 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Reactive Defenses |
Metrics for Detection Mechanisms |
Individual Strength of Defense Mechanisms |
Intrusion detection metrics |
False-positive rate |
INTRUSION |
Also known as the false alarm rate, it is the probability that a non-intrusion is detected as an attack |
- |
NETWORK |
|
DYNAMIC |
|
NO |
This is clearly a network-oriented metric |
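The four intrusion-detection rates above fall out of a confusion matrix over parallel 0/1 event streams; a minimal sketch (names are mine):

```python
def detection_rates(intrusion, alert):
    """Intrusion-detection rates from parallel 0/1 streams:
    intrusion[k] = 1 if event k was an attack,
    alert[k]     = 1 if the IDS raised an alarm for event k."""
    pairs = list(zip(intrusion, alert))
    pos = [a for i, a in pairs if i == 1]  # alerts on real attacks
    neg = [a for i, a in pairs if i == 0]  # alerts on benign events
    return {
        "tpr": sum(pos) / len(pos),        # attacks detected
        "fnr": pos.count(0) / len(pos),    # attacks missed
        "tnr": neg.count(0) / len(neg),    # benign passed quietly
        "fpr": sum(neg) / len(neg),        # false alarms
    }

rates = detection_rates([1, 1, 1, 0, 0, 0, 0, 0],
                        [1, 1, 0, 0, 0, 0, 0, 1])
# tpr = 2/3, fnr = 1/3, tnr = 0.8, fpr = 0.2
```

Sweeping a detector's threshold and plotting tpr against fpr yields the ROC curve mentioned below.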
|
|
|
50 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Reactive Defenses |
Metrics for Detection Mechanisms |
Individual Strength of Defense Mechanisms |
Intrusion detection metrics |
Intrusion detection capability metric |
INTRUSION |
It is the normalized metric of I(I, O) with respect to H(I), based on the base rate, where I is the input to the IDS as a stream of 0/1 random variables (0 for benign/normal and 1 for malicious/abnormal), O is the output of the IDS as a stream of 0/1 random variables (0 for no alert/normal and 1 for alert/abnormal), H(I) and H(O), respectively, denote the entropy of I and O, and I(I, O) = H(I) − H(I|O) is the mutual information between I and O |
- |
NETWORK |
|
UNKNOWN |
|
NO |
This is clearly a network-oriented metric |
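The capability metric normalizes the mutual information I(I, O) by the input entropy H(I), and can be evaluated directly from the base rate and the detector's true/false-positive rates; a numeric sketch (a perfect detector at base rate 0.5 yields capability 1):

```python
import math

def H(probs):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def detection_capability(base_rate, tpr, fpr):
    """C_ID = I(I, O) / H(I) for a detector with the given
    true/false-positive rates at the given intrusion base rate."""
    b = base_rate
    # joint distribution over (intrusion, alert)
    joint = [b * tpr, b * (1 - tpr),              # I=1: alert / no alert
             (1 - b) * fpr, (1 - b) * (1 - fpr)]  # I=0: alert / no alert
    p_alert = b * tpr + (1 - b) * fpr
    h_i = H([b, 1 - b])
    h_o = H([p_alert, 1 - p_alert])
    mutual = h_i + h_o - H(joint)  # I(I,O) = H(I) + H(O) - H(I,O)
    return mutual / h_i

print(detection_capability(0.5, 1.0, 0.0))  # 1.0 (perfect detector)
print(detection_capability(0.5, 0.5, 0.5))  # 0.0 (output independent of input)
```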
|
|
|
51 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Reactive Defenses |
Metrics for Detection Mechanisms |
Individual Strength of Defense Mechanisms |
Intrusion detection metrics |
Receiver operating characteristic (ROC) |
|
It reflects the dependence of the true-positive rate on the false-positive rate, reflecting a tradeoff between the true-positive and the false-positive rates |
- |
NETWORK |
|
STATIC |
|
NO |
This is clearly a network-oriented metric |
|
|
|
52 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Reactive Defenses |
Metrics for Detection Mechanisms |
Individual Strength of Defense Mechanisms |
Intrusion detection metrics |
Intrusion detection operating characteristic (IDOC) |
INTRUSION |
It describes the dependence of the true-positive rate Pr(A|I) on the Bayesian detection rate Pr(I|A), while accommodating the base rate Pr(I) |
- |
NETWORK |
|
STATIC |
|
NO |
This is clearly a network-oriented metric |
|
|
|
53 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Reactive Defenses |
Metrics for Detection Mechanisms |
Individual Strength of Defense Mechanisms |
Intrusion detection metrics |
Cost metric (damage / response / operational) |
COST |
It includes the damage cost incurred by undetected attacks, the response cost spent on the reaction to detected attacks including both true and false alarms, and the operational cost for running an IDS |
- |
NETWORK |
|
STATIC |
|
NO |
This is clearly a network-oriented metric that aims to translate the impact into money |
|
|
|
54 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Reactive Defenses |
Metrics for Detection Mechanisms |
Relative Strength of Defense Mechanisms |
- |
Relative effectiveness |
|
This metric reflects the strength of a defense tool when employed in addition to other defense tools. A defense tool does not offer any extra strength if it cannot detect any attack undetected by other defense tools in place |
RATIO |
NETWORK |
|
DYNAMIC |
|
NO |
This is clearly a network-oriented metric |
|
|
|
55 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Reactive Defenses |
Metrics for Detection Mechanisms |
Collective Strength of Defense Mechanisms |
- |
Collective effectiveness |
|
This metric measures the collective strength of IDSs and anti-malware programs |
RATIO |
NETWORK |
|
DYNAMIC |
|
NO |
This is clearly a network-oriented metric |
|
|
|
57 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Proactive Defenses |
Address Space Layout Randomization (ASLR) |
- |
- |
ASLR-induced Effective entropy metric |
|
Measure of entropy in user space memory which quantitatively considers an adversary’s ability to leverage low entropy regions of memory via absolute and dynamic inter-section connections |
- |
SOFTWARE |
|
DYNAMIC |
OS |
YES |
If the device runs an OS, this could be applied |
|
|
|
58 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Proactive Defenses |
- |
- |
- |
Moving Target Defense (MTD) |
|
It measures the degree that an enterprise system can tolerate some undesirable security configurations in order to direct the global security state S(t) towards a desired stable system state |
- |
NETWORK |
|
DYNAMIC |
|
NO |
This is only applicable to an enterprise system |
|
|
|
59 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Overall Defenses |
- |
- |
- |
Penetration resistance (PR) |
|
Running a penetration test to estimate the level of effort (e.g., person-day or cost) required for a red team to penetrate into a system |
- |
DEVICE / NETWORK |
|
DYNAMIC |
|
YES |
Penetration testing is a really common way to test this kind of device |
|
|
|
60 |
2 |
|
YES |
Defense metrics |
Measuring the Strength of Overall Defenses |
- |
- |
- |
Network diversity (ND) |
|
It measures the least or average effort an attacker must make to compromise a target entity, based on the causal relationships between resource types considered for inclusion in an attack graph |
- |
NETWORK |
|
DYNAMIC |
|
NO |
This is clearly a network-oriented metric |
|
|
|
61 |
2 |
|
YES |
Attack metrics |
Measuring Zero-Day Attacks |
- |
- |
- |
Lifetime of zero-day attacks |
ZERODAY |
It measures the period of time between when an attack was launched and when the corresponding vulnerability is disclosed to the public |
- |
SOFTWARE / HARDWARE / DEVICE |
|
DYNAMIC |
|
NO |
This can be computed, but it is not feasible for a security evaluation |
|
|
|
62 |
2 |
|
YES |
Attack metrics |
Measuring Zero-Day Attacks |
- |
- |
- |
Victims by zero-day attacks |
ZERODAY |
It measures the number of computers compromised by zero-day attacks |
- |
ORGANISATION |
|
DYNAMIC* |
|
NO |
This is useful for networks or an organisation. It can be computed for a network of devices, but it is not useful for a security evaluation |
|
|
|
63 |
2 |
|
YES |
Attack metrics |
Measuring Zero-Day Attacks |
- |
- |
- |
Susceptibility of a device to zero-day attacks |
ZERODAY |
If we can capture the degree of the susceptibility of a device to zero-day attacks, then we can prioritize resource allocation to the ones requiring higher attention for mitigating the damage |
- |
DEVICE |
|
|
|
|
This metric is proposed in [2] as a future concept, but it is not developed |
|
|
|
64 |
2 |
|
YES |
Attack metrics |
Measuring Targeted Attacks |
- |
- |
- |
Targeted threat index |
|
Level of targeted malware attacks |
- |
NETWORK / ORGANISATION |
|
DYNAMIC |
|
NO |
This is useful for networks or an organisation. It can be computed for a network of devices, but it is not useful for a security evaluation |
|
|
|
65 |
2 |
|
YES |
Attack metrics |
Measuring Botnets |
- |
- |
- |
Botnet size |
|
Number of bots, x, that can be instructed to launch attacks (e.g., distributed denial-of-service attacks) at time t, denoted by y(t). Due to time-zone differences, y(t) is often much smaller than the actual x, as some of the x bots are turned off during night time in their time zones |
- |
NETWORK |
|
STATIC |
|
NO |
This is clearly a network-oriented metric |
|
|
|
66 |
2 |
|
YES |
Attack metrics |
Measuring Botnets |
- |
- |
- |
Network bandwidth |
|
Network bandwidth that a botnet can use to launch denial-of-service attacks |
- |
NETWORK |
|
STATIC / DYNAMIC |
|
NO |
This is clearly a network-oriented metric |
|
|
|
67 |
2 |
|
YES |
Attack metrics |
Measuring Botnets |
- |
- |
- |
Botnet efficiency |
|
Network diameter of the botnet network topology. It measures a botnet’s capability in communicating command-and-control messages and updating bot programs |
- |
NETWORK |
|
¿? |
|
NO |
This is clearly a network-oriented metric |
|
|
|
68 |
2 |
|
YES |
Attack metrics |
Measuring Botnets |
- |
- |
- |
Botnet robustness |
|
Robustness of botnets under random or intelligent disruptions. The idea was derived from the concept of complex network robustness, characterized by the percolation threshold under which a network is disrupted into small components |
- |
NETWORK |
|
¿? |
|
NO |
This is clearly a network-oriented metric |
|
|
|
69 |
2 |
|
YES |
Attack metrics |
Measuring Malware Spreading |
- |
- |
- |
Infection rate |
|
Average number of vulnerable computers that are infected by a compromised computer (per time unit) at the early stage of spreading |
- |
NETWORK |
|
DYNAMIC |
|
NO |
This is useful for networks or an organisation. It can be computed for a network of devices, but it is not useful for a security evaluation |
|
|
|
70 |
2 |
|
YES |
Attack metrics |
Measuring Attack Evasion Techniques |
Adversarial Machine Learning Attacks |
- |
- |
Increased false-positive rate |
|
The strength of attacks as a consequence of applying a certain evasion method |
- |
DEVICE / NETWORK |
|
DYNAMIC |
|
NO |
This can only be calculated if the embedded system has implemented an evasion mechanism against machine learning attacks |
|
|
|
71 |
2 |
|
YES |
Attack metrics |
Measuring Attack Evasion Techniques |
Adversarial Machine Learning Attacks |
- |
- |
Increased false-negative rate |
|
The strength of attacks as a consequence of applying a certain evasion method |
- |
DEVICE / NETWORK |
|
DYNAMIC |
|
NO |
This can only be calculated if the embedded system has implemented an evasion mechanism against machine learning attacks |
|
|
|
72 |
2 |
|
YES |
Attack metrics |
Measuring Attack Evasion Techniques |
Obfuscation Attacks |
- |
- |
Obfuscation prevalence metric |
CODE |
Occurrence of obfuscation in malware samples |
- |
MALWARE |
|
STATIC |
|
NO |
Malware-related |
|
|
|
73 |
2 |
|
YES |
Attack metrics |
Measuring Attack Evasion Techniques |
Obfuscation Attacks |
- |
- |
Structural complexity metric |
|
Runtime complexity of packers in terms of their layers or granularity |
- |
MALWARE |
|
¿? |
|
NO |
Malware-related |
|
|
|
74 |
2 |
|
YES |
Attack metrics |
Measuring Attack Evasion Techniques |
Obfuscation Attacks |
- |
- |
Evasion capability of attacks |
|
It allows comparing the evasion power of two attacks, and also computing the damage caused by evasion attacks |
- |
MALWARE |
|
DYNAMIC |
|
NO |
Proposed, but not defined |
|
|
|
75 |
2 |
|
YES |
Attack metrics |
Measuring Attack Evasion Techniques |
Obfuscation Attacks |
- |
- |
Obfuscation sophistication |
CODE |
In terms of the amount of effort required for unpacking packed malware |
- |
MALWARE |
|
STATIC |
|
NO |
Proposed, but not defined. Malware related |
|
|
|
76 |
2 |
|
YES |
Situation metrics |
Measuring Security State |
Data-Driven State Metrics |
- |
- |
Network maliciousness metric |
|
It estimates the fraction of blacklisted IP addresses in a network |
- |
NETWORK |
|
STATIC / DYNAMIC |
|
NO |
This metric works at network level, for systems in a network. Cannot be applied to embedded systems |
|
|
|
77 |
2 |
|
YES |
Situation metrics |
Measuring Security State |
Data-Driven State Metrics |
- |
- |
Rogue network metric |
|
Population of networks used to launch drive-by download or phishing attacks |
- |
NETWORK |
|
DYNAMIC |
|
NO |
This metric works at network level, for systems in a network. Phishing cannot be applied to embedded systems |
|
|
|
78 |
2 |
|
YES |
Situation metrics |
Measuring Security State |
Data-Driven State Metrics |
- |
- |
ISP badness metric |
|
Quantifies the effect of spam from one ISP or Autonomous System (AS) on the rest of the Internet |
- |
NETWORK |
|
DYNAMIC |
|
NO |
This is clearly a network-oriented metric |
|
|
|
79 |
|
|
YES |
Situation metrics |
Measuring Security State |
Data-Driven State Metrics |
- |
- |
Control-plane reputation metric |
|
Calibrates the maliciousness of attacker-owned (i.e., rather than legitimate but mismanaged/abused) ASs based on their control plane information (e.g., routing behavior), which can achieve an early-detection time of 50–60 days (before these malicious ASs are noticed by other defense means) |
- |
ATTACKER |
|
¿? |
|
NO |
This is related to external systems that could be potential attackers |
|
|
|
80 |
2 |
|
YES |
Situation metrics |
Measuring Security State |
Data-Driven State Metrics |
- |
- |
Cybersecurity posture metric |
|
Measures the dynamic threat imposed by the attacking computers |
- |
ATTACKER |
|
DYNAMIC |
|
NO |
This is related to attacks themselves |
|
|
|
82 |
2 |
|
YES |
Situation metrics |
Measuring Security State |
Model-Driven Metrics |
- |
- |
Probability a computer is compromised at time t |
|
It quantifies the degree of security in enterprise systems by using modeling techniques based on a holistic perspective |
- |
DEVICE / NETWORK |
|
¿? |
|
¿? |
Although this is a network-oriented metric, this might be adapted to be used with an embedded system |
|
|
|
83 |
2 |
|
YES |
Situation metrics |
Measuring Security Incidents |
Frequency of Security Incidents |
- |
- |
Encounter rate |
|
Measures the fraction of computers that encountered some malware or attack during a period of time |
- |
NETWORK |
|
STATIC |
|
NO |
Directly related to the systems in a network |
|
|
|
84 |
2 |
|
YES |
Situation metrics |
Measuring Security Incidents |
Frequency of Security Incidents |
- |
- |
Incident rate |
|
Measures the fraction of computers successfully infected or attacked at least once during a period of time |
- |
NETWORK |
|
STATIC |
|
NO |
Directly related to the systems in a network |
|
|
|
85 |
2 |
|
YES |
Situation metrics |
Measuring Security Incidents |
Frequency of Security Incidents |
- |
- |
Blocking rate |
|
The rate at which an encountered attack is successfully defended against by a deployed defense |
- |
NETWORK |
|
DYNAMIC |
|
¿? |
Although this is a network-oriented metric, this might be adapted to be used with an embedded system (maybe for a penetration test) |
|
|
|
86 |
2 |
|
YES |
Situation metrics |
Measuring Security Incidents |
Frequency of Security Incidents |
- |
- |
Breach frequency metric |
|
How often breaches occur |
- |
NETWORK |
|
STATIC |
|
NO |
Directly related to the systems in a network |
|
|
|
87 |
2 |
|
YES |
Situation metrics |
Measuring Security Incidents |
Frequency of Security Incidents |
- |
- |
Breach size metric |
|
Number of records breached in individual breaches |
- |
NETWORK |
|
STATIC |
|
NO |
Directly related to the systems in a network |
|
|
|
88 |
2 |
|
YES |
Situation metrics |
Measuring Security Incidents |
Frequency of Security Incidents |
- |
- |
Time-between-incidents metric |
TIME |
Period of time between two incidents |
- |
NETWORK |
|
STATIC |
|
NO |
Directly related to the systems in a network |
|
|
|
89 |
2 |
|
YES |
Situation metrics |
Measuring Security Incidents |
Frequency of Security Incidents |
- |
- |
Time-to-first-compromise metric |
TIME |
Estimates the duration of time between when a computer starts to run and when the first malware alarm is triggered on the computer, where the alarm indicates detection rather than infection |
- |
DEVICE |
|
DYNAMIC |
|
YES |
This can be used in embedded systems, but it might not be suitable for a security evaluation (maybe for a penetration test) |
|
|
|
90 |
2 |
|
YES |
Situation metrics |
Measuring Security Incidents |
Damage of Security Incidents |
- |
- |
Delay in incident detection |
TIME |
Time between the occurrence and detection |
- |
ORGANISATION |
|
DYNAMIC |
|
NO |
This metric could be used to transmit the importance of security in the organisation, but cannot be used in embedded systems |
|
|
|
92 |
2 |
|
YES |
Situation metrics |
Measuring Security Investment |
- |
- |
- |
Security spending |
COST |
Percentage of IT budget |
- |
ORGANISATION |
|
STATIC |
|
NO |
This metric shows the investment in security in the organisation. It cannot be applied to embedded systems |
|
|
|
93 |
2 |
|
YES |
Situation metrics |
Measuring Security Investment |
- |
- |
- |
Security budget allocation |
COST |
It estimates how the security budget is allocated to various security activities and resources |
- |
ORGANISATION |
|
STATIC |
|
NO |
This metric shows the investment in security in the organisation. It cannot be applied to embedded systems |
|
|
|
94 |
2 |
|
YES |
Situation metrics |
Measuring Security Investment |
- |
- |
- |
Return on security investment (ROSI) |
COST |
It is a variant of the classic return on investment (ROI) metric, measuring the financial net gain of an investment based on the gain from investment minus the cost of investment |
- |
ORGANISATION |
|
STATIC |
|
NO |
This metric shows the investment in security in the organisation. It cannot be applied to embedded systems |
|
|
|
95 |
2 |
|
YES |
Situation metrics |
Measuring Security Investment |
- |
- |
- |
Net present value |
COST |
It measures the difference between the present economic value of future inflows and the present economic value of outflows with respect to an investment |
- |
ORGANISATION |
|
STATIC |
|
NO |
This metric shows the investment in security in the organisation. It cannot be applied to embedded systems |
|
|
|
96 |
10 |
|
YES |
Host-based |
Without Probability Values |
- |
- |
- |
Attack Impact |
|
It is the quantitative measure of the potential harm caused by an attacker exploiting a vulnerability |
- |
NETWORK |
|
STATIC |
|
|
|
|
|
|
97 |
10 |
|
YES |
Host-based |
Without Probability Values |
- |
- |
- |
Attack Cost |
COST |
It is the cost spent by an attacker to successfully exploit a vulnerability (i.e., security weakness) on a host |
- |
NETWORK |
|
STATIC / DYNAMIC |
|
YES |
This is related to networks, but it could be applicable to devices too |
|
|
|
98 |
10 |
|
YES |
Host-based |
Without Probability Values |
- |
- |
- |
Structural Importance Measure |
|
It is used to qualitatively determine the most critical event (attack, detection or mitigation) in a graphical attack model |
- |
NETWORK |
|
STATIC |
|
NO |
Related to networks |
|
|
|
100 |
10 |
|
YES |
Host-based |
Without Probability Values |
- |
- |
- |
Mean-time-to-compromise (MTTC) |
TIME |
It is used to measure how quickly a network can be penetrated |
- |
NETWORK |
|
DYNAMIC |
|
¿? |
This is not directly applicable, but its equivalent would be a penetration test on an embedded system |
|
|
|
101 |
10 |
|
YES |
Host-based |
Without Probability Values |
- |
- |
- |
Mean-time-to-recovery (MTTR) |
TIME |
It is used to assess the effectiveness of a network in recovering from attack incidents. It is defined as the average amount of time required to restore a system out of an attack state |
- |
NETWORK |
|
DYNAMIC |
|
¿? |
Although this is developed for networks, the same concept can be translated into embedded systems |
|
|
|
104 |
10 |
|
YES |
Host-based |
Without Probability Values |
- |
- |
- |
Return on Investment |
COST |
It measures the financial net gain of an investment based on the gain from investment minus the cost of investment |
- |
NETWORK |
|
STATIC |
|
NO |
Network-related and cost related. Useful for organisation managing |
|
|
|
105 |
10 |
|
YES |
Host-based |
Without Probability Values |
- |
- |
- |
Return on Attack |
|
The gain the attacker expects from a successful attack over the losses sustained due to the countermeasures deployed by the target |
- |
NETWORK |
|
¿? |
|
¿? |
Network-related, but its concept can be translated into embedded systems |
|
|
|
106 |
10 |
|
YES |
Host-based |
With Probability Values |
- |
- |
- |
Common Vulnerability Scoring System (CVSS) - Overall value |
VULNERABILITY |
The overall score is determined by generating a base score and modifying it through the temporal and environmental formulas |
- |
DEVICE |
|
STATIC |
|
YES |
This could be used together with other metrics to obtain a numeric value |
|
|
|
107 |
10 |
|
YES |
Host-based |
With Probability Values |
- |
- |
- |
Probability of Vulnerability Exploited |
VULNERABILITY |
It is used to assess the likelihood of an attacker exploiting a specific vulnerability on a host |
- |
SOFTWARE / HARDWARE / NETWORK |
|
STATIC |
|
¿? |
Network-related metric. This is also related to risk assessment. Can be translated into embedded systems |
|
|
|
108 |
10 |
|
YES |
Host-based |
With Probability Values |
- |
- |
- |
Probability of attack detection |
|
It is used to assess the likelihood of a countermeasure to successfully identify the event of an attack on a target. |
- |
SOFTWARE / HARDWARE / NETWORK |
|
DYNAMIC |
|
¿? |
Network-related metric. This is also related to risk assessment. Can be translated into embedded systems |
|
|
|
109 |
10 |
|
YES |
Host-based |
With Probability Values |
- |
- |
- |
Probability of host compromised |
|
It is used to assess the likelihood of an attacker to successfully compromise a target |
- |
HARDWARE / SOFTWARE / NETWORK |
|
STATIC / DYNAMIC |
|
¿? |
Network-related metric. This is also related to risk assessment. Can be translated into embedded systems |
|
|
|
110 |
10 |
|
YES |
Network-based |
Non-path based |
- |
- |
- |
Network Compromise Percentage |
|
It quantifies the percentage of hosts on the network on which an attacker can obtain a user- or administrator-level privilege |
- |
NETWORK |
|
STATIC / DYNAMIC |
|
NO |
It's about networks, and we can't predict where the device will be used, so this metric is not representative |
|
|
|
112 |
10 |
|
YES |
Network-based |
Non-path based |
- |
- |
- |
Vulnerable Host Percentage |
|
This metric quantifies the percentage of vulnerable hosts on a network. It is used to assess the overall security of a network |
- |
NETWORK |
|
DYNAMIC |
|
NO |
It's about networks, and we can't predict where the device will be used, so this metric is not representative |
|
|
|
113 |
10 |
|
YES |
Network-based |
Path based |
- |
- |
- |
Attack Shortest Path |
PATH |
It is the smallest distance from the attacker to the target. This metric represents the minimum number of hosts an attacker will use to compromise the target host |
- |
NETWORK |
AUTOMATIC* |
DYNAMIC |
|
NO |
If it has to do with paths, it has to do with networks, and we can't assume where this is going to be used |
|
|
|
114 |
10 |
|
YES |
Network-based |
Path based |
- |
- |
- |
Number of Attack Paths |
PATH |
It is the total number of ways an attacker can compromise the target |
- |
NETWORK |
AUTOMATIC* |
DYNAMIC |
|
NO |
If it has to do with paths, it has to do with networks, and we can't assume where this is going to be used |
|
|
|
115 |
10 |
|
YES |
Network-based |
Path based |
- |
- |
- |
Mean of Attack Path Lengths |
PATH |
It is the average of all path lengths. It gives the expected effort that an attacker may use to breach a network policy |
- |
NETWORK |
AUTOMATIC* |
DYNAMIC |
|
NO |
If it has to do with paths, it has to do with networks, and we can't assume where this is going to be used |
|
|
|
116 |
10 |
|
YES |
Network-based |
Path based |
- |
- |
- |
Normalised Mean of Path Lengths |
PATH |
Expected number of exploits an attacker should execute in order to reach the target |
- |
NETWORK |
AUTOMATIC* |
DYNAMIC |
|
NO |
If it has to do with paths, it has to do with networks, and we can't assume where this is going to be used |
|
|
|
117 |
10 |
|
YES |
Network-based |
Path based |
- |
- |
- |
Standard Deviation of Path Lengths |
PATH |
It is used to determine the attack paths of interest |
- |
NETWORK |
AUTOMATIC* |
DYNAMIC |
|
NO |
If it has to do with paths, it has to do with networks, and we can't assume where this is going to be used |
|
|
|
118 |
10 |
|
YES |
Network-based |
Path based |
- |
- |
- |
Mode of Path Lengths |
PATH |
It is the attack path length that occurs most frequently |
- |
NETWORK |
AUTOMATIC* |
DYNAMIC |
|
NO |
If it has to do with paths, it has to do with networks, and we can't assume where this is going to be used |
|
|
|
119 |
10 |
|
YES |
Network-based |
Path based |
- |
- |
- |
Median of Path Lengths |
PATH |
It is used by network administrators to determine how close an attack path length is to the median path length (i.e., the path length at the middle of all the path length values) |
- |
NETWORK |
AUTOMATIC* |
DYNAMIC |
|
NO |
If it has to do with paths, it has to do with networks, and we can't assume where this is going to be used |
|
|
|
120 |
10 |
|
YES |
Network-based |
Path based |
- |
- |
- |
Attack Resistance Metric |
|
It is used to assess the resistance of a network configuration based on the composition of measures of individual exploits. It is also used for assessing and comparing the security of different network configurations |
- |
NETWORK |
|
¿? |
|
NO |
If it has to do with paths, it has to do with networks, and we can't assume where this is going to be used |
|
|
|
121 |
4 |
|
YES |
- |
- |
- |
- |
- |
Mean Time To Failure (MTTF) - Time to compromise |
TIME |
Mean time for a potential intruder to reach the specified target |
- |
NETWORK |
AUTOMATIC |
DYNAMIC |
|
NO |
|
|
|
|
122 |
4 |
|
YES |
- |
- |
- |
- |
- |
Mean Effort To Failure (METF) - Effort to compromise |
|
Mean effort for a potential attacker to reach the specified target |
- |
NETWORK |
SEMI-AUTOMATIC |
DYNAMIC |
|
NO |
|
|
|
|
123 |
4 |
|
YES |
- |
- |
- |
- |
- |
Mean Time to Security Failure (MTTSF) - Time to compromise |
TIME |
Expected amount of time attackers need to successfully complete an attack process |
- |
NETWORK |
SEMI-AUTOMATIC |
STATIC |
|
NO |
|
|
|
|
132 |
4 |
|
YES |
- |
- |
- |
- |
- |
Network compromise percentage |
|
|
|
NETWORK |
AUTOMATIC |
DYNAMIC |
|
NO |
If we are evaluating a device, it makes no sense to talk about networks |
|
|
|
133 |
4 |
|
YES |
- |
- |
- |
- |
- |
Compromise probability |
|
|
|
NETWORK |
SEMI-AUTOMATIC |
DYNAMIC |
|
NO |
|
|
|
|
135 |
4 |
|
YES |
- |
- |
- |
- |
- |
Expected difficulty |
|
|
|
NETWORK |
AUTOMATIC |
STATIC |
|
NO |
|
|
|
|
136 |
4 |
|
YES |
- |
- |
- |
- |
- |
Percentage of distinct hosts (d1-Diversity) |
|
|
|
NETWORK |
SEMI-AUTOMATIC |
STATIC |
|
NO |
This is related to servers in a network and cannot be applied to a single device |
|
|
|
137 |
4 |
|
YES |
- |
- |
- |
- |
- |
Number of bits leaked (mean privacy) |
DATA |
|
|
NETWORK |
SEMI-AUTOMATIC |
STATIC |
|
¿? |
If adapted, this metric can be used to test memory related vulnerabilities |
|
|
|
139 |
5 |
|
NO |
- |
- |
- |
- |
- |
rav |
OPSEC |
It is a scale measurement of the attack surface, the amount of uncontrolled interactions with a target, which is calculated by the quantitative balance between operations, limitations, and controls |
- |
|
|
|
|
|
|
|
|
|
141 |
11 |
|
YES |
Churn |
- |
- |
- |
- |
Churn |
CODE |
Count of Source Lines of Code that have been added or changed in a component since the previous revision of the software |
- |
SOFTWARE |
- |
- |
|
- |
|
|
|
|
142 |
11 |
|
YES |
Churn |
- |
- |
- |
- |
Frequency |
CODE |
Number of times that a binary was edited during its development cycle |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
NO |
This is more related to the development of the software |
|
|
|
143 |
11 |
|
YES |
Churn |
- |
- |
- |
- |
Lines Changed |
CODE |
The cumulated number of code lines changed since the creation of a file |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
NO |
This is more related to the development of the software |
|
|
|
144 |
11 |
|
YES |
Churn |
- |
- |
- |
- |
Lines new |
CODE |
The cumulated number of new code lines since the creation of a file |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
NO |
This is more related to the development of the software |
|
|
|
146 |
11 |
|
YES |
Churn |
- |
- |
- |
- |
Number of commits |
CODE |
Counting the commits a given developer made |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
NO |
This is more related to the development of the software |
|
|
|
148 |
11 |
|
YES |
Churn |
- |
- |
- |
- |
Relative churn |
CODE |
Churn normalized by parameters such as lines of code or file counts |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
|
|
|
|
|
149 |
11 |
|
YES |
Churn |
- |
- |
- |
- |
Repeat frequency |
CODE |
The number of consecutive edits that are performed on a binary |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
|
|
|
|
|
150 |
11 |
|
YES |
Churn |
- |
- |
- |
- |
Total churn |
CODE |
The total added, modified, and deleted lines of code of a binary during the development |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
|
|
|
|
|
155 |
11 |
|
YES |
Complexity* |
- |
- |
- |
- |
Cyclomatic number |
CODE |
M = E − N + P, where E = the number of edges of the graph. N = the number of nodes of the graph. P = the number of connected components. |
|
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
158 |
11 |
|
YES |
Complexity |
- |
- |
- |
- |
Fan in (FI) |
COUPLING |
Number of inputs a function uses. Inputs include parameters and global variables that are used (read) in the function / Number of functions calling a function |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
159 |
11 |
|
YES |
Complexity |
- |
- |
- |
- |
Fan out (FO) |
COUPLING |
Number of outputs that are set. The outputs can be parameters or global variables (modified) / Number of functions called by a function |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
162 |
11 |
|
YES |
Complexity |
- |
- |
- |
- |
Max fan in |
CODE |
The largest number of inputs to a function such as parameters and global variables |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
163 |
11 |
|
YES |
Complexity |
- |
- |
- |
- |
Max fan out |
CODE |
The largest number of assignments to the parameters used to call a function or to global variables |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
164 |
11 |
|
YES |
Complexity |
- |
- |
- |
- |
MaxMaxNesting |
CODE |
The maximum of MaxNesting in a file |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
165 |
11 |
|
YES |
Complexity |
- |
- |
- |
- |
MaxNesting |
CODE |
Maximum nesting level of control constructs such as if or while statements in a function |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
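The MaxNesting definition in the row above can be sketched as a small static-analysis pass. This is a hypothetical illustration using Python's stdlib ast module (the tools surveyed typically target C-like source); the control constructs counted and the sample function f are assumptions for the example.

```python
# Sketch: MaxNesting = deepest nesting level of control constructs
# within a function, computed over a Python AST for illustration.
import ast

# Constructs treated as "control constructs" in this sketch.
CONTROL = (ast.If, ast.For, ast.While, ast.Try, ast.With)

def max_nesting(node, depth=0):
    """Return the deepest control-construct nesting at or below node."""
    best = depth
    for child in ast.iter_child_nodes(node):
        extra = 1 if isinstance(child, CONTROL) else 0
        best = max(best, max_nesting(child, depth + extra))
    return best

src = """
def f(xs):
    for x in xs:          # depth 1
        if x > 0:         # depth 2
            while x:      # depth 3
                x -= 1
"""
tree = ast.parse(src)
for fn in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
    print(fn.name, max_nesting(fn))  # -> f 3
```

Taking the maximum of this value over all functions in a file gives the MaxMaxNesting metric of row 164, and summing it gives SumMaxNesting.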
166 |
11 |
|
YES |
Complexity |
- |
- |
- |
- |
Nesting complexity |
CODE |
Maximum nesting level of control constructs (if, while, for, switch, etc.) in the function |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
167 |
11 |
|
YES |
Complexity |
- |
- |
- |
- |
Number of children (NOC) |
CODE |
Number of immediate sub-classes of a class or the count of derived classes. If class CA inherits class CB, then CB is the base class and CA is the derived class. In other words, CA is a child of class CB, and CB is the parent of class CA. |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
168 |
11 |
|
YES |
Complexity |
- |
- |
- |
- |
SumCyclomaticStrict |
CODE |
The sum of the strict cyclomatic complexity, where strict cyclomatic complexity is defined as the number of conditional statements in a function |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
169 |
11 |
|
YES |
Complexity |
- |
- |
- |
- |
SumEssential |
CODE |
The sum of essential complexity, where essential complexity is defined as the number of branches after reducing all the programming primitives such as a for loop in a function’s control flow graph into a node iteratively until the graph cannot be reduced any further. Completely well-structured code has essential complexity 1 |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
170 |
11 |
|
YES |
Complexity |
- |
- |
- |
- |
SumFanIn |
CODE |
The sum of FanIn, where FanIn is defined as the number of inputs to a function such as parameters and global variables |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
171 |
11 |
|
YES |
Complexity |
- |
- |
- |
- |
SumFanOut |
CODE |
The sum of FanOut, where FanOut is defined as the number of assignments to the parameters used to call a function or to global variables |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
172 |
11 |
|
YES |
Complexity |
- |
- |
- |
- |
SumMaxNesting |
CODE |
The sum of the MaxNesting in a file, where MaxNesting is defined as the maximum nesting level of control constructs such as if or while statements in a function |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
173 |
11 |
|
YES |
Complexity |
- |
- |
- |
- |
McCabe Cyclomatic Complexity |
CODE |
M = E − N + 2P, where E = the number of edges of the graph. N = the number of nodes of the graph. P = the number of connected components |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
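The McCabe formula in the row above can be computed directly from a control-flow graph. A minimal sketch, assuming the CFG is supplied as an adjacency list (the if/else example graph is hypothetical):

```python
# Sketch: McCabe cyclomatic complexity M = E - N + 2P for a CFG given
# as a dict mapping node -> list of successor nodes.
from collections import deque

def cyclomatic_complexity(cfg):
    nodes = set(cfg) | {s for succs in cfg.values() for s in succs}
    n = len(nodes)                                  # N: nodes
    e = sum(len(succs) for succs in cfg.values())   # E: edges
    # P: weakly connected components, via BFS on the undirected graph.
    undirected = {v: set() for v in nodes}
    for u, succs in cfg.items():
        for v in succs:
            undirected[u].add(v)
            undirected[v].add(u)
    seen, p = set(), 0
    for start in nodes:
        if start in seen:
            continue
        p += 1
        queue = deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            for v in undirected[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return e - n + 2 * p

# if/else diamond: entry -> (then | else) -> exit
cfg = {"entry": ["then", "else"], "then": ["exit"], "else": ["exit"]}
print(cyclomatic_complexity(cfg))  # -> 2
```

For a single connected CFG (P = 1) this reduces to E − N + 2; the cyclomatic number of row 155 (M = E − N + P) differs only in the component term.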
174 |
11 |
|
YES |
Complexity |
- |
- |
- |
- |
Weighted methods per class (WMC) |
CODE |
Number of local methods defined in the class |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
177 |
11 |
|
YES |
Cost |
- |
- |
- |
- |
Remediation Impact (RI) |
|
The remediation impact provides an overview of how much impact the remediation could have on the affected system |
- |
|
|
|
|
NO |
|
|
|
|
180 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Access accuracy |
USER |
Number of correctly configured user accounts, against the overall number of accounts created, including badly configured accounts and hanging accounts |
- |
ORGANISATION |
AUTOMATIC* |
DYNAMIC* |
|
NO |
This is not related to embedded systems |
|
|
|
181 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Administrator and operator logs |
LOG |
Total number of corrective actions taken after a specific event is recorded in the log / total number of events recorded in the log * 100 |
- |
|
|
|
|
|
|
|
|
|
182 |
11 |
X |
YES |
Coverage |
- |
- |
- |
- |
Anomalous session count |
ACCESS |
The first phase derives a SessionTableAccessProfile by correlating application server user log entries for a user session with accessed database tables; the resulting value of SessionTableAccessProfile is represented as a user ID followed by a set of ordered pairs with a table name and a count. The second phase derives the AnomalousSessionCount by counting how many SessionTableAccessCounts don’t fit a predefined user profile. If AnomalousSessionCount is greater than one for any user, especially a privileged user, it could indicate the need for significant refactoring and redesign of the Web application’s persistence layer. This is a clear case in which detection at design time is preferable. |
- |
SOFTWARE |
- |
- |
|
¿? |
If adapted, it could be used |
|
|
|
183 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Approval accuracy |
|
Number of approved provisioning activities, against the overall provisioning activities, including the unauthorized ones |
- |
ORGANISATION |
SEMI-AUTOMATIC* |
DYNAMIC* |
|
NO |
This is not related to embedded systems |
|
|
|
185 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Audit logging |
LOG |
Total number of log files monitored in specific time interval / number of available log files * 100 |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
DYNAMIC* |
|
NO |
Although this has to do with larger systems, embedded systems might not be able to produce log files |
|
|
|
187 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Classified Accessor Attribute Interactions (CAAI) |
CODE |
The ratio of the sum of all interactions between accessors and classified attributes to the possible maximum number of interactions between accessors and classified attributes in the program |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
188 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Classified Attributes Interaction Weight (CAIW) |
CODE |
The ratio of the number of all interactions with classified attributes to the total number of all interactions with all attributes in the program |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
189 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Classified Class Data Accessibility (CCDA) |
CODE |
The ratio of the number of non-private classified class attributes to the number of classified attributes in the program |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
190 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Classified Instance Data Accessibility (CIDA) |
CODE |
The ratio of the number of non-private classified instance attributes to the number of classified attributes in the program |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
191 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Classified Method Extensibility (CME) |
CODE |
The ratio of the number of non-finalised classified methods in the program to the total number of classified methods in the program |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
192 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Classified Methods Weight (CMW) |
CODE |
The ratio of the number of classified methods to the total number of methods in the program |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
193 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Classified Mutator Attribute Interactions (CMAI) |
CODE |
The ratio of the sum of all interactions between mutators and classified attributes to the possible maximum number of interactions between mutators and classified attributes in the program. |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
194 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Classified Operation Accessibility (COA) |
CODE |
The ratio of the number of non-private classified methods to the number of classified methods in the program |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
195 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Classified Writing Methods Proportion (CWMP) |
CODE |
The ratio of the number of methods which write classified attributes to the total number of classified methods in the program |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
196 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Composite-Part Critical Classes (CPCC) |
CODE |
The ratio of the number of critical composed-part classes to the total number of critical classes in the program |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
0 |
|
|
|
|
|
|
|
|
Classes Design Proportion (CDP) |
CODE |
Ratio of the number of critical classes to the total number of classes, from the group of Design Size Metrics. |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
197 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Controls against malicious code |
CODE |
Number of incidents as a result of malware (malicious software) outbreaks on the system / number of detected and blocked malware occurrences * 100 |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
DYNAMIC* |
|
NO |
This metric could be adapted to embedded systems, but it might not make sense in the testing phase |
|
|
|
201 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Critical Class Extensibility (CCE) |
CODE |
The ratio of the number of non-finalised critical classes in the program to the total number of critical classes in the program |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
202 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Critical Design Proportion (CDP) |
CODE |
The ratio of the number of critical classes to the total number of classes in the program |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
203 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Critical Element Ratio |
CODE |
A process may not require all aspects of an object to be instantiated. Critical Elements Ratio = (Critical Data Elements in an Object) / (Total number of elements in the object) |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
204 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Critical Serialized Classes Proportion (CSCP) |
CODE |
The ratio of the number of critical serialized classes to the total number of critical classes in the program |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
205 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Critical Superclasses Inheritance (CSI) |
CODE |
The ratio of the sum of classes which may inherit from each critical superclass to the number of possible inheritances from all critical classes in the program’s inheritance hierarchy |
- |
SOFTWARE |
- |
- |
|
¿? |
|
|
|
|
206 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Critical Superclasses Proportion (CSP) |
CODE |
The ratio of the number of critical superclasses to the total number of critical classes in the program’s inheritance hierarchy |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
207 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Depth of inspection |
CODE |
It calculates the depth of inspection by collecting the statistical data of inspection and testing approaches and finds out the defect capturing capability as a ratio. DI can be measured phase-wise or as a whole before the deployment of the product. # defects captured by inspection process / # defects captured by both inspection and testing approaches |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
NO |
This metric is intended to be used in the design phase. It cannot be applied in a security assessment |
|
|
|
212 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Isolation (PI) |
CODE |
This assesses the level of security isolation between system components. This means getting privileges to a component does not imply accessibility of other co-located components. This metric can be assessed using system architecture and deployment models. Components marked as confidential should not be hosted with nonconfidential (public) components. Methods that are not marked as confidential should not have access to confidential attributes or methods |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
I am not sure if this metric can be applied to embedded software |
|
|
|
214 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Number of Catch Blocks Per Class (NCBC) |
CODE |
It measures the exception handling factor (EHF) in some way. It is defined as the percentage of catch blocks in a class over the theoretical maximum number of catch blocks in the class |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
215 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Percentage of High-, Medium- and Moderate-Risk Software Hazards with Safety Requirements |
|
It reveals whether all high-, medium- and moderate-risk software hazards have resulted in applicable safety requirements through hazard analysis |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
YES |
This is related to software and the design |
|
|
|
218 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Percentage of Software Safety Requirements (PSSR) |
|
Indicates how sufficiently hazard analysis has been performed. (# Software Safety Requirements / # Software Requirements) * 100 |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
YES |
This is related to software and the design |
|
|
|
219 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Percentage of Software Safety Requirements Traceable to Hazards |
|
Ensuring traceability to system hazards or System of Systems hazards increases the validation case. Percentage indicator of traceability of requirements. All derived software safety requirements must be traceable to system hazards or System of Systems hazards (# Traceable Software Safety Requirements / # Software Safety Requirements) * 100 |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
YES |
This is related to software and the design |
|
|
|
246 |
11 |
X |
YES |
Coverage |
- |
- |
- |
- |
PercentValidatedInput |
INPUTS |
To compute this metric, let T equal the number of input forms or interfaces the application exposes (the number of HTML form POSTs, GETs, and so on) and let V equal the number of these interfaces that use input validation mechanisms. The ratio V/T makes a strong statement about the Web application’s vulnerability to exploits from invalid input |
- |
SOFTWARE |
- |
- |
|
YES |
This could be adapted to other pieces of code, rather than just HTML |
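The PercentValidatedInput metric reduces to the ratio V/T defined above; a minimal sketch, assuming the interface counts are gathered beforehand (names are illustrative):

```python
def percent_validated_input(validated: int, total: int) -> float:
    """V/T: share of exposed input interfaces (form POSTs, GETs, ...)
    that apply input validation mechanisms."""
    if total <= 0:
        raise ValueError("application must expose at least one interface")
    return validated / total

# e.g. 4 of 5 exposed interfaces validate their input
print(percent_validated_input(4, 5))  # 0.8
```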
|
|
|
247 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Ratio of Design Decisions (RDD) |
|
Ratio of the number of design decisions related to security to the total number of design decisions of the entire system. The objective is to assess the portion of design dedicated to security |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
This is related to design phase |
|
|
|
248 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Ratio of Design Flaws Related to Security (RDF) |
|
Ratio of design flaws related to security to the total number of design flaws applicable to the whole system |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
This is related to design phase |
|
|
|
249 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Ratio of extension misuse cases extended once to the total number of extension misuse cases |
DESIGN |
Are the functional decompositions between misuse cases correctly handled? 1 - (# Extension Misuse Cases included once / # Extension Misuse Cases) |
- |
SOFTWARE |
MANUAL* |
STATIC* |
|
NO |
This metric is intended to be used in the design phase. It cannot be applied in a security assessment |
|
|
|
250 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Ratio of Implementation Errors That Have an Impact on Security (RSERR) |
|
Ratio of the number of errors related to security to the total number of errors in the implementation of the system (i.e. NERR) |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
This is related to design phase |
|
|
|
251 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Ratio of inclusion misuse cases included once to the total number of inclusion misuse cases |
DESIGN |
Are the functional decompositions between misuse cases correctly handled? (# Inclusion Misuse Cases included once) / (# Inclusion Misuse Cases) |
- |
SOFTWARE |
MANUAL* |
STATIC* |
|
NO |
This metric is intended to be used in the design phase. It cannot be applied in a security assessment |
|
|
|
252 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Ratio of misuse cases used as pre/post conditions of other misuse cases to the total number of misuse cases |
DESIGN |
Are the functional decompositions between misuse cases correctly handled? (# Misuse Cases used as pre/post conditions) / (# Misuse Cases) |
- |
SOFTWARE |
MANUAL* |
STATIC* |
|
NO |
This metric is intended to be used in the design phase. It cannot be applied in a security assessment |
|
|
|
253 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Ratio of Patches Issued to Address Security Vulnerabilities (RP) |
|
Ratio of the number of patches that are issued to address security vulnerabilities to the total number of patches of the system |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
If the design is available, it can be applied |
|
|
|
254 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Ratio of Security Requirements (RSR) |
|
Ratio of the number of requirements that have direct impact on security to the total number of requirements of the system |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
If the design is available, it can be applied |
|
|
|
255 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Ratio of Security Test Cases (RTC) |
|
Number of test cases designed to detect security issues to the total number of test cases of the entire system |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
If the design is available, it can be applied |
|
|
|
256 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Ratio of Security Test Cases that Fail (RTCP) |
|
Number of test cases that detect implementation errors (i.e. the ones that fail) to the total number of test cases, designed specifically to target security issues |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
If the design is available, it can be applied |
|
|
|
258 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Ratio of Software Changes Due to Security Considerations (RSC) |
|
Number of changes in the system triggered by a new set of security requirements. Software changes due to security considerations include patches that are released after a system is delivered, or any other security updates |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
If the design is available, it can be applied |
|
|
|
259 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Ratio of the number of included security requirements to the total number of stated security requirements |
DESIGN |
How vulnerable is the application based on the stated security requirements? 1 - ( (# Stated Security Requirements - # Excluded Stated Security Requirements) / # Stated Security Requirements ) |
- |
SOFTWARE |
MANUAL* |
STATIC* |
|
NO |
This metric is intended to be used in the design phase. It cannot be applied in a security assessment |
|
|
|
260 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Ratio of the number of misuse cases that do not threaten the application to the total number of misuse cases |
DESIGN |
Do the misuse cases correctly represent the application vulnerabilities and are they consistent with application security use cases? (# Non-Threatening Misuse Cases) / (# Misuse Cases) |
- |
SOFTWARE |
MANUAL* |
STATIC* |
|
NO |
This metric is intended to be used in the design phase. It cannot be applied in a security assessment |
|
|
|
261 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Ratio of the Number of Omitted Exceptions (ROEX) |
|
Ratio of the number of omitted exceptions (i.e. Noex) to the total number of exceptions that are related to security |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
If the software is available, it can be applied. It can also be applied to embedded software |
|
|
|
262 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Ratio of the Number of Omitted Security Requirements (ROSR) |
|
Ratio of the number of security requirements that have not been considered during the analysis phase (i.e. NOSR) to the total number of security requirements identified during the analysis phase (NSR) |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
If the design is available, it can be applied |
|
|
|
263 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Ratio of the number of the base misuse cases associated to one misuser to the total number of base misuse cases |
DESIGN |
Are the misusers presented and handled correctly in the misuse case model? 1 - (# Base Misuse Cases Associated to one misuser / # Misuse Cases) |
- |
SOFTWARE |
MANUAL* |
STATIC* |
|
NO |
This metric is intended to be used in the design phase. It cannot be applied in a security assessment |
|
|
|
264 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Ratio of the number of unmitigated misuse cases that threaten the application to the total number of misuse cases |
DESIGN |
Do the misuse cases correctly represent the application vulnerabilities and are they consistent with application security use cases? (# Unmitigated Misuse Cases) / (# Misuse Cases) |
- |
SOFTWARE |
MANUAL* |
STATIC* |
|
NO |
This metric is intended to be used in the design phase. It cannot be applied in a security assessment |
|
|
|
269 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Software Hazard Analysis Depth |
|
Hazardous software, or safety-critical software, allocated as high- or medium-risk, according to the Software Hazard Criticality Matrix, requires analysis of second- and third-order causal factors (should they exist) |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
YES |
Hazard is similar to risk. |
|
|
|
273 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Unaccessed Assigned Classified Attribute (UACA) |
|
The ratio of the number of classified attributes that are assigned but never used to the total number of classified attributes in the program |
- |
SOFTWARE |
- |
- |
|
- |
Object-oriented programs |
|
|
|
274 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Uncalled Classified Accessor Method (UCAM) |
|
The ratio of the number of classified methods that access a classified attribute but are never called by other methods to the total number of classified methods in the program |
- |
SOFTWARE |
- |
- |
|
- |
Object-oriented programs |
|
|
|
275 |
11 |
|
YES |
Coverage |
- |
- |
- |
- |
Unused Critical Accessor Class (UCAC) |
|
The ratio of the number of classes which contain classified methods that access classified attributes but are never used by other classes to the total number of critical classes in the program |
- |
SOFTWARE |
- |
- |
|
- |
Object-oriented programs |
|
|
|
284 |
11 |
|
YES |
Dependency |
- |
- |
- |
- |
Classified Attributes Inheritance (CAI) |
CODE |
The ratio of the number of classified attributes which can be inherited in a hierarchy to the total number of classified attributes in the program’s inheritance hierarchy |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can also be applied to embedded software |
|
|
|
285 |
11 |
|
YES |
Dependency |
- |
- |
- |
- |
Classified Methods Inheritance (CMI) |
CODE |
The ratio of the number of classified methods which can be inherited in a hierarchy to the total number of classified methods in the program’s inheritance hierarchy |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can also be applied to embedded software |
|
|
|
286 |
11 |
|
YES |
Dependency |
- |
- |
- |
- |
Coupling |
COUPLING |
Measures the following three coupling dimensions between modules: referential dependency, structural dependency, and data integrity dependency |
- |
SOFTWARE |
|
|
|
|
|
|
|
|
288 |
11 |
|
YES |
Dependency |
- |
- |
- |
- |
Coupling Between Object classes (CBOC) |
COUPLING |
Number of other classes coupled to a class C |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can also be applied to embedded software |
|
|
|
290 |
11 |
|
YES |
Dependency |
- |
- |
- |
- |
Coupling Corruption Propagation |
COUPLING |
Coupling corruption propagation measures the total number of methods that could be affected by an erroneous originating method. Coupling Corruption Propagation = number of child methods invoked with parameter(s) based on the parameter(s) of the original invocation |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can also be applied to embedded software |
|
|
|
291 |
11 |
|
YES |
Dependency |
- |
- |
- |
- |
Critical Classes Coupling (CCC) |
CODE |
The ratio of the number of all classes’ links with classified attributes to the total number of possible links with classified attributes in the program |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Object-oriented programs. If the software is available, it can be applied. It can also be applied to embedded software |
|
|
|
292 |
11 |
|
YES |
Dependency |
- |
- |
- |
- |
Depth of Inheritance Tree (DIT) |
|
Maximum depth of the class in the inheritance tree |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can also be applied to embedded software |
|
|
|
301 |
11 |
|
YES |
Dependency |
- |
- |
- |
- |
Lack of cohesion of methods (LCOM) |
|
The LCOM value for a class C is defined as LCOM(C) = (1 − |E(C)| ÷ (|V(C)| × |M(C)|)) × 100%, where V(C) is the set of instance variables, M(C) is the set of instance methods, and E(C) is the set of pairs (v, m) for each instance variable v in V(C) that is used by method m in M(C) |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can also be applied to embedded software |
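The LCOM formula above can be sketched directly in Python, assuming the variable/method sets and the usage pairs E(C) have already been extracted from the class (the representation is an assumption for illustration):

```python
def lcom(variables: set, methods: set, uses: set) -> float:
    """LCOM(C) = (1 - |E(C)| / (|V(C)| * |M(C)|)) * 100, where `uses`
    holds the (variable, method) pairs forming E(C)."""
    # keep only pairs that actually reference this class's members
    e = {(v, m) for (v, m) in uses if v in variables and m in methods}
    return (1 - len(e) / (len(variables) * len(methods))) * 100

# two instance variables, two methods, each method using one variable:
# |E| = 2, |V| * |M| = 4, so LCOM = (1 - 0.5) * 100
print(lcom({"a", "b"}, {"m1", "m2"}, {("a", "m1"), ("b", "m2")}))  # 50.0
```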
|
|
|
304 |
11 |
|
YES |
Dependency |
- |
- |
- |
- |
NumCalls |
|
The number of calls to the functions defined in an entity |
- |
SOFTWARE |
|
|
|
YES |
If the software is available, it can be applied. It can also be applied to embedded software |
|
|
|
312 |
11 |
|
YES |
Dependency |
- |
- |
- |
- |
Reflection Package Boolean (RPB) |
CODE |
A boolean value representing whether the Java program imports the reflection package (1) or not (0) |
- |
SOFTWARE |
- |
- |
|
- |
Object-oriented programs |
|
|
|
0 |
|
|
YES |
Dependency |
- |
- |
- |
- |
Vulnerability Propagation (VP) |
CODE |
Vulnerability Propagation (VP) of a class C, denoted by VP(C), is the set of classes in the hierarchy which directly or indirectly inherit class C. The cardinality of VP(C) represents the number of classes which have become vulnerable due to class C. Equivalently, the Vulnerability Propagation of a class C is the total number of classes which directly or indirectly inherit class C |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
This metric can be applied to embedded software |
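VP(C) is the transitive closure of the subclass relation starting from C; a minimal sketch, assuming the hierarchy is available as a mapping from each class to its direct subclasses (an illustrative representation, not from the paper):

```python
def vulnerability_propagation(direct_subclasses: dict, cls: str) -> set:
    """VP(C): every class that directly or indirectly inherits class C.
    `direct_subclasses` maps a class name to its direct subclasses."""
    reached = set()
    stack = list(direct_subclasses.get(cls, []))
    while stack:  # depth-first walk over the inheritance edges
        c = stack.pop()
        if c not in reached:
            reached.add(c)
            stack.extend(direct_subclasses.get(c, []))
    return reached

hierarchy = {"A": ["B", "C"], "B": ["D"]}
print(sorted(vulnerability_propagation(hierarchy, "A")))  # ['B', 'C', 'D']
```

The cardinality of the returned set is the number of classes made vulnerable by `cls`.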
|
|
|
315 |
11 |
|
YES |
Dependency |
- |
- |
- |
- |
Vulnerability Propagation Factor (VPF) |
CODE |
An inheritance hierarchy may contain zero, one, or more vulnerable classes, and a design may contain more than one inheritance hierarchy. Calculating the Vulnerability Propagation Factor (VPF) of a design therefore requires calculating the Vulnerability Propagation due to each vulnerable class in the inheritance hierarchies present in the design. The union of the Vulnerability Propagation sets of all vulnerable classes gives the overall Vulnerability Propagation of the design |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
This metric can be applied to embedded software |
|
|
|
316 |
11 |
|
YES |
Effort |
- |
- |
- |
- |
Access Complexity (AC) |
ACCESS |
This metric measures the complexity of attacks exploiting the vulnerability. An example of a low-complexity vulnerability is a buffer overflow in a web server, which can be exploited at will |
- |
|
|
|
|
|
|
|
|
|
317 |
11 |
|
YES |
Effort |
- |
- |
- |
- |
Access Vector (AV) |
ACCESS |
This metric indicates from where an attacker can exploit the vulnerability |
- |
|
|
|
|
|
|
|
|
|
318 |
11 |
|
YES |
Effort |
- |
- |
- |
- |
Adversary Work Factor |
EFFORT |
Adversary work factor is an informative metric for gauging the relative strengths and weaknesses of modern information systems. However, this metric can be difficult to measure and evaluate |
- |
NETWORK |
MANUAL* |
DYNAMIC* |
|
NO |
This metric has nothing to do with embedded systems and is directly related to networks |
|
|
|
319 |
11 |
|
YES |
Effort |
- |
- |
- |
- |
Attackability |
|
The number of inbound edges of a node in the host-based attack graph represents all the direct attack paths through which other hosts in the considered network can compromise that node. This number can be used to compute the attackability metric of a host in the context of the system under study. The metric is based on intuitive properties derived from common sense; for example, it indicates reduced confidence when more inbound edges exist. Attackability = 1 - (# inbound edges of the node / total inbound edges) |
- |
NETWORK |
AUTOMATIC* |
STATIC* |
|
NO |
This metric has nothing to do with embedded systems and is directly related to networks |
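The attackability formula above is a one-line computation once the attack graph's edge counts are known; a minimal sketch (edge counts assumed given, not derived from a real graph):

```python
def attackability(node_inbound_edges: int, total_inbound_edges: int) -> float:
    """Attackability = 1 - (# inbound edges of the node / total inbound edges).
    Fewer inbound edges relative to the whole graph means higher confidence."""
    if total_inbound_edges <= 0:
        raise ValueError("attack graph must have at least one inbound edge")
    return 1 - node_inbound_edges / total_inbound_edges

# a host with 2 of the graph's 8 inbound edges
print(attackability(2, 8))  # 0.75
```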
|
|
|
320 |
11 |
|
YES |
Effort |
- |
- |
- |
- |
Authentication (AU) |
AUTHENTICATION |
This metric counts how often an attacker must authenticate before the vulnerability can be exploited |
- |
|
|
|
|
|
|
|
|
|
322 |
11 |
|
YES |
Effort |
- |
- |
- |
- |
Damage potential-effort ratio |
|
It indicates the level of damage an attacker can potentially cause to the system and the effort required for the attacker to cause such damage |
- |
DEVICE |
SEMI-AUTOMATIC* |
DYNAMIC* |
|
YES |
It seems that it can be applied to embedded systems, but I am not sure if it is applicable in the evaluation phase |
|
|
|
324 |
11 |
|
YES |
Effort |
- |
- |
- |
- |
ExclusiveExeTime |
|
Execution time for the set of functions, S, defined in an entity excluding the execution time spent by the functions called by the functions in S |
- |
|
|
|
|
|
|
|
|
|
327 |
11 |
|
YES |
Effort |
- |
- |
- |
- |
InclusiveExeTime |
|
Execution time for the set of functions, S, defined in an entity including all the execution time spent by the functions called directly or indirectly by the functions in S |
- |
|
|
|
|
|
|
|
|
|
330 |
11 |
|
YES |
Effort |
- |
- |
- |
- |
Minimal cost of attack |
COST |
Minimal cost of attack represents the minimal cost that the attacker has to pay for the execution of an attack on a system |
- |
NETWORK* |
- |
- |
|
NO |
Formal model. Formal verification. Too much notation. I think this is more related to networks |
|
|
|
331 |
11 |
|
YES |
Effort |
- |
- |
- |
- |
Minimal length of attacks |
|
An intuition behind this metric is the following: the fewer steps an attacker has to make, the simpler it is to execute the attack successfully, and the less secure the system is |
- |
NETWORK* |
- |
- |
|
NO |
Formal model. Formal verification. Too much notation. I think this is more related to networks |
|
|
|
332 |
11 |
|
YES |
Effort |
- |
- |
- |
- |
Protection Rate (PR) |
|
It expresses the efficiency of security mechanisms. Percentage of the known vulnerabilities protected by a given security mechanism. It is based on the Attack Surface metric. PR = (1 - AttackSurfaceEvaluatedSystem / AttackSurfaceReferenceSystem) * 100 |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
DYNAMIC* |
|
¿? |
This metric was developed specifically for the OSGi platform, but I think it can be adapted to embedded systems |
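The PR formula above compares two attack-surface measurements; a minimal sketch, assuming both attack-surface values have been computed elsewhere (the inputs are illustrative placeholders):

```python
def protection_rate(attack_surface_evaluated: float,
                    attack_surface_reference: float) -> float:
    """PR = (1 - AttackSurfaceEvaluatedSystem / AttackSurfaceReferenceSystem) * 100.
    Higher PR means the security mechanism shrinks the attack surface more."""
    if attack_surface_reference <= 0:
        raise ValueError("reference attack surface must be positive")
    return (1 - attack_surface_evaluated / attack_surface_reference) * 100

# mechanism reduces the attack surface from 80 to 20 (arbitrary units)
print(protection_rate(20, 80))  # 75.0
```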
|
|
|
334 |
11 |
|
YES |
Effort |
- |
- |
- |
- |
Side-channel Vulnerability Factor (SVF) |
|
SVF quantifies patterns in attackers’ observations and measures their correlation to the victim’s actual execution patterns, thereby capturing a system’s vulnerability to side-channel attacks. Information leakage is measured by computing the correlation between ground-truth patterns and attacker-observed patterns; this correlation is the Side-channel Vulnerability Factor (SVF). SVF measures the signal-to-noise ratio in an attacker’s observations. While any amount of leakage could compromise a system, a low signal-to-noise ratio means that the attacker must either make do with inaccurate results (and thus make many observations to create an accurate result) or become much more intelligent about recovering the original signal. Phase detection techniques are used to find patterns in both sets of data, and the correlation between actual patterns in the victim and observed patterns is then computed |
- |
HARDWARE |
SEMI-AUTOMATIC* |
DYNAMIC* |
|
YES |
Embedded devices can be vulnerable to side-channel attacks and they can be tested |
|
|
|
335 |
11 |
|
YES |
Effort |
- |
- |
- |
- |
Social Engineering Resistance (SER) |
|
SER = <SERlow, SERhigh>; SERlow = (1 − (P + N − A)/N) ∗ 100; SERhigh = (1 − P/N) ∗ 100; where N is the number of individuals selected or invited to take part in the experiment; A is the number of active participants; P is the total number of valid password/username pairs obtained during the experiment |
- |
USER |
SEMI-AUTOMATIC* |
DYNAMIC* |
|
NO |
Social engineering cannot be applied to a device |
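The SER pair defined above follows directly from the three experiment counts N, A, and P; a minimal sketch of the two formulas (variable names are illustrative):

```python
def ser(n_invited: int, n_active: int, n_pairs: int) -> tuple:
    """SER = <SERlow, SERhigh> where
    SERlow  = (1 - (P + N - A) / N) * 100
    SERhigh = (1 - P / N) * 100
    N = invited individuals, A = active participants,
    P = valid password/username pairs obtained."""
    if n_invited <= 0:
        raise ValueError("at least one individual must be invited")
    ser_low = (1 - (n_pairs + n_invited - n_active) / n_invited) * 100
    ser_high = (1 - n_pairs / n_invited) * 100
    return ser_low, ser_high

# 100 invited, 60 active, 20 credential pairs captured
print(ser(100, 60, 20))  # (40.0, 80.0)
```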
|
|
|
336 |
11 |
|
YES |
Effort |
- |
- |
- |
- |
Structural severity |
RISK |
Measure that uses software attributes to evaluate the risk of an attacker reaching a vulnerability location from attack surface entry points. It is measured based on three values: high (reachable from an entry point), medium (reachable from an entry point with no dangerous system calls), low (not reachable from any entry points) |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
DYNAMIC* |
|
¿? |
I am not sure if this metric can be applied to embedded software |
|
|
|
0 |
11 |
|
YES |
Effort |
- |
- |
- |
- |
Vulnerability Index (VI) |
|
Probability of a component’s vulnerability being exposed in a single execution |
- |
DEVICE |
MANUAL* |
DYNAMIC* |
|
¿? |
It seems that it can be applied to embedded systems, but I am not sure if it is applicable in the evaluation phase |
|
|
|
0 |
11 |
|
YES |
Effort |
- |
- |
- |
- |
Effective Vulnerability Index (EVI) |
|
Relative measure of the impact of a component on the system’s insecurity |
- |
DEVICE |
SEMI-AUTOMATIC* |
DYNAMIC* |
|
¿? |
It seems that it can be applied to embedded systems, but I am not sure if it is applicable in the evaluation phase |
|
|
|
341 |
11 |
|
YES |
Organization |
- |
- |
- |
- |
Depth of Master Ownership |
|
This metric determines the level of ownership of the binary depending on the number of edits done. The organization level of the person whose reporting engineers perform more than 75% of the rolled-up edits is considered the DMO. This metric determines the binary owner based on activity on that binary. The choice of 75% is based on prior historical information on Windows used to quantify ownership |
- |
|
|
|
|
NO |
|
|
|
|
350 |
11 |
|
YES |
Organization |
- |
- |
- |
- |
Edit Frequency |
|
Total number of times the source code that makes up the binary was edited. An edit is when an engineer checks out code from the version control system, alters it, and checks it in again. This is independent of the number of lines of code altered during the edit |
- |
|
|
|
|
NO |
|
|
|
|
351 |
11 |
|
YES |
Organization |
- |
- |
- |
- |
Level of Organizational Code Ownership |
|
The percent of edits from the organization that contains the binary owner (or if there is no owner the percent of edits from the organization that made the majority of the edits to that binary) |
- |
ORGANISATION |
|
|
|
NO |
It is at higher level. It is related to the organisation's security |
|
|
|
354 |
11 |
|
YES |
Organization |
- |
- |
- |
- |
Number of Ex-Engineers |
|
Total number of unique engineers who have touched a binary and have left the company as of the release date of the software system |
- |
ORGANISATION |
|
|
|
NO |
It is at higher level. It is related to the organisation's security |
|
|
|
355 |
11 |
|
YES |
Organization |
- |
- |
- |
- |
Organization Intersection Factor |
|
The number of different organizations that contribute more than 10% of edits, as measured at the level of the overall org owners |
- |
|
|
|
|
NO |
|
|
|
|
356 |
11 |
|
YES |
Organization |
- |
- |
- |
- |
Overall Organization Ownership |
|
This is the ratio of the people at the DMO level making edits to a binary relative to total engineers editing the binary |
- |
ORGANISATION |
|
|
|
NO |
It is at higher level. It is related to the organisation's security |
|
|
|
361 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Classified Attributes Total (CAT) |
|
The total number of classified attributes in the program |
- |
SOFTWARE |
- |
- |
|
- |
Object-oriented programs |
|
|
|
362 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Classified Methods Total (CMT) |
|
The total number of classified methods in the program |
- |
SOFTWARE |
- |
- |
|
- |
Object-oriented programs |
|
|
|
366 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Count of Base Classes (CBC) |
|
Number of base classes |
- |
SOFTWARE |
- |
- |
|
- |
|
|
|
|
367 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Critical Classes Total (CCT) |
|
The total number of critical classes in the program |
- |
SOFTWARE |
- |
- |
|
- |
Object-oriented programs |
|
|
|
376 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Number of Attacks |
|
How many attacks on a system exist. The idea behind this metric is that the more attacks exist on a system, the less secure the system is. This metric is applied for the simplest analysis of attack graphs. The number of attacks can also be used for the analysis of penetration testing results |
- |
NETWORK* |
- |
- |
|
NO |
Formal model. Formal verification. Too much notation. I think this is more related to networks |
|
|
|
378 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Number of Design Decisions Related to Security (NDD) |
|
The proposed metric aims at assessing the number of design decisions that address the security requirements of the system. During the design phase, it is common to end up with multiple solutions to the same problem. Software engineers make many design decisions in order to choose among alternative design solutions. |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
If the design is available, it can be applied |
|
|
|
379 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Number of Developers |
|
|
|
ORGANISATION |
|
|
|
NO |
It has to do with the organisation and the development of the product. It is applied at the beginning of the life-cycle |
|
|
|
380 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Number of elicited security use cases * |
DESIGN |
What is the number of elicited security use cases? # Elicited Security Use Cases |
- |
SOFTWARE |
MANUAL |
STATIC* |
|
NO |
This metric is intended to be used in the design phase. It cannot be applied in a security assessment |
|
|
|
381 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Number of excluded security requirements that ensure session handling * |
DESIGN |
Is session identifier created on server side? Is new session identifier assigned to user on authentication? Is session identifier changed on reauthentication? Is logout option provided for all operations that require authentication? Is session identifier cancelled when authenticated user logs out? Is session identifier killed after a period of time without any actions? Is user’s authenticated session identifier protected via secure data transmission protocol? |
- |
SOFTWARE |
MANUAL |
STATIC* |
|
NO |
This metric is intended to be used in the design phase. It cannot be applied in a security assessment |
|
|
|
382 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Number of excluded security requirements that put the system at risk of possible attacks * |
DESIGN |
Summation of the excluded security requirements that put the system at risk |
- |
SOFTWARE |
MANUAL |
STATIC* |
|
NO |
This metric is intended to be used in the design phase. It cannot be applied in a security assessment |
|
|
|
383 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Number of global variables * |
|
|
|
SOFTWARE |
|
|
|
YES |
If the software is available, it can be applied. It can also be applied to embedded software |
|
|
|
384 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Number of identified misuse cases * |
DESIGN |
What is the number of misuse cases found? # Misuse Cases |
- |
SOFTWARE |
MANUAL |
STATIC* |
|
NO |
This metric is intended to be used in the design phase. It cannot be applied in a security assessment |
|
|
|
390 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Number of return points in the method |
|
|
|
SOFTWARE |
|
|
|
YES |
If the software is available, it can be applied. It can also be applied to embedded software |
|
|
|
391 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Number of Security Algorithms (NSA) |
|
Number of security algorithms that are supported by the application |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
If the design is available, it can be applied, but this metric can also be used in the final stages of the development life-cycle |
|
|
|
392 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Number of Security Incidents Reported (NSR) |
|
Number of incidents related to security that are reported by the users of the system |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
DYNAMIC* |
|
NO |
By definition, the values of this metric are collected by the users of the system, so it cannot be used in the evaluation |
|
|
|
393 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Number of Security Requirements (NSR) * |
|
Number of security requirements identified during the analysis phase of the application |
- |
SOFTWARE |
MANUAL |
STATIC* |
|
¿? |
If the design is available, it can be applied |
|
|
|
394 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Number of sub classes |
|
|
|
SOFTWARE |
|
|
|
YES |
If the software is available, it can be applied. It can also be applied to embedded software |
|
|
|
395 |
11 |
|
YES |
Size |
- |
- |
- |
- |
NumFunctions |
|
|
|
SOFTWARE |
|
|
|
YES |
If the software is available, it can be applied. It can also be applied to embedded software |
|
|
|
400 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Response set for a Class (RFC) |
|
Set of methods that can potentially be executed in response to a message received by an object of that class. RFC is simply the number of methods in the set, including inherited methods |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can also be applied to embedded software |
|
|
|
402 |
11 |
|
YES |
Size |
- |
- |
- |
- |
Stall Ratio |
CODE |
Stall ratio is a measure of how much a program’s progress is impeded by frivolous activities. stall ratio = (lines of non-progressive statements in a loop) / (total lines in the loop) |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can also be applied to embedded software |
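The stall ratio formula above is another line-count quotient; a minimal sketch, assuming the loop's line counts have been classified beforehand (names are illustrative):

```python
def stall_ratio(non_progressive_lines: int, total_loop_lines: int) -> float:
    """stall ratio = (lines of non-progressive statements in a loop)
    / (total lines in the loop)."""
    if total_loop_lines <= 0:
        raise ValueError("loop must contain at least one line")
    return non_progressive_lines / total_loop_lines

# a 10-line loop with 2 lines of frivolous (non-progressive) statements
print(stall_ratio(2, 10))  # 0.2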
|
|
|
409 |
11 |
|
YES |
Strength |
- |
- |
- |
- |
Comment ratio |
|
Ratio of number of comment lines to number of code lines |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
If the software is available, it can be applied. It can also be applied to embedded software |
|
|
|
410 |
11 |
|
YES |
Strength |
- |
- |
- |
- |
CommentDensity |
|
The ratio of lines of comments to lines of code |
- |
|
|
|
|
|
|
|
|
|
411 |
11 |
|
YES |
Strength |
- |
- |
- |
- |
Compartmentalization |
CODE |
Compartmentalization means that systems and their components run in different compartments, isolated from each other, so that a compromise of any of them does not impact the others. This metric can be measured as the number of independent components that do not trust each other (each performs authentication and authorization for requests/calls coming from other system components) that the system relies on to deliver its function. The higher the compartmentalization value, the more secure the system |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
415 |
11 |
|
YES |
Strength |
- |
- |
- |
- |
Defense-In-Depth |
CODE |
This metric verifies that security controls are used at different points in the system chain, including network security, host security, and application security. Components that hold critical data should employ security controls in the network, host, and component layers. To assess this metric we need to capture the system architecture and deployment models as well as the security architecture model. Then we can calculate the ratio of components with critical data that apply the layered-security principle to the number of critical components |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
I am not sure if this metric can be applied to embedded software, or at verification phases |
|
|
|
417 |
11 |
|
YES |
Strength |
- |
- |
- |
- |
Fail Securely |
CODE |
The system does not disclose any data that should not ordinarily be disclosed at system failure. This includes system data as well as data about the system in the case of exceptions. This metric can be evaluated from the security control responses, i.e., how the control behaves in case it fails to operate. From the system architecture perspective, we can assess it as the number of critical attributes and methods that can be accessed in a given component. The smaller the metric value, the more secure the system is likely to be in case of failure |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
YES |
This could be used in evaluation |
|
|
|
419 |
11 |
|
YES |
Strength |
- |
- |
- |
- |
Inspection Performance Metric (IPM) |
TESTER |
It measures the performance of the people involved in the inspection process using the aforementioned parameters: IPM = (# defects captured by the inspection process) / (inspection effort) |
- |
TESTER |
SEMI-AUTOMATIC* |
STATIC* |
|
NO |
This metric is related to the tester |
|
|
|
421 |
11 |
|
YES |
Strength |
- |
- |
- |
- |
Isolation (PI) |
CODE |
This assesses the level of security isolation between system components: obtaining privileges on one component does not imply access to other co-located components. This metric can be assessed using system architecture and deployment models. Components marked as confidential should not be hosted with non-confidential (public) components, and methods that are not marked as confidential should not have access to confidential attributes or methods |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
I am not sure if this metric can be applied to embedded software |
|
|
|
423 |
11 |
|
YES |
Strength |
- |
- |
- |
- |
Least Privilege |
CODE |
This metric states that each component and user should be granted the minimal privileges required to complete their tasks. It can be assessed from two perspectives: from the security-controls perspective, we can review users' granted privileges; from the architectural-analysis perspective, it can be assessed as how far the system is broken down into minimal possible actions, i.e., the number of components that can access critical data. The smaller the value, the more secure the system |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
If the software is available, it can be applied. It can be also applied to embedded software |
|
|
|
431 |
11 |
|
YES |
Strength |
- |
- |
- |
- |
Trustworthiness |
|
Trustworthiness of software means that it is worthy of being trusted to fulfill the requirements that may be needed for a particular software component, application, system, or network. It involves attributes of stability, data security, quality, privacy, safety, and so on. Software trustworthiness is interrelated not only with risk control in the software process, but also with the quality management of the software development process. Furthermore, vision is needed to avoid excessive costs and schedule delays in development and risk-management costs in operation; to improve development efforts; and above all |
- |
SOFTWARE |
- |
- |
|
- |
- |
|
|
|
433 |
11 |
|
YES |
Strength |
- |
- |
- |
- |
Variable Security Vulnerability |
CODE |
The security vulnerability of a variable v@l is determined by the combined security damage it might cause to the overall system security when v@l is attacked. The more security properties that can be left intact, the less vulnerable the variable is; on the other hand, the more security properties are violated, the more critical the variable is. |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
¿? |
This is only applicable if the source code is available |
|
|
|
435 |
11 |
|
YES |
Strength |
- |
- |
- |
- |
Vulnerability Free Days (VFD) |
|
Percent of days in which the vendor’s queue of reported vulnerabilities for the product is empty |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
DYNAMIC* |
|
NO |
This metric is not suitable for the evaluation phase |
|
|
|
437 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Average Active Vulnerabilities per day (AAV) |
|
Median number of software vulnerabilities which are known to the vendor of a particular piece of software but for which no patch has yet been publicly released by the vendor |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
DYNAMIC* |
|
NO |
This metric is not suitable for the evaluation phase |
|
|
|
439 |
11 |
X |
YES |
Weakness |
- |
- |
- |
- |
BrokenAccountCount |
|
Number of accounts that have had no activity for more than 90 days and will never expire. Such accounts represent a clear risk of password compromise and resulting illegal access. |
- |
SOFTWARE |
- |
- |
|
¿? |
|
|
|
|
449 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Faults found during manual inspection |
|
|
|
SOFTWARE |
|
|
|
YES |
It is related to software and it can be applied to embedded software |
|
|
|
452 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Maximal probability of successful attack |
|
The probability of accomplishing an attack successfully is a well-known metric. It describes the most probable way to compromise the system |
- |
NETWORK* |
- |
- |
|
NO |
Formal model. Formal verification. Too much notation. I think this is more related to networks |
|
|
|
453 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Mean Failure Cost |
COST |
Mean Failure Cost (MFC) is used in the operational sense: the lack of security within the system may cause damage in terms of lost productivity, lost business, and lost data resulting from security violations. We represent this loss by a random variable and define MFC as the mean of this random variable. As discussed further, this quantity is not intrinsic to the system, but varies by stakeholder |
- |
ORGANISATION |
- |
- |
|
NO |
This metric is not related to embedded systems, but to organisations |
|
|
|
456 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Monitoring system use |
|
Number of unauthorized attempts to access files or folders, device attachments, and attempts to change security settings |
- |
ORGANISATION |
AUTOMATIC* |
DYNAMIC* |
|
NO |
This can be adapted to be used in embedded systems, but this metric makes no sense in the evaluation phase |
|
|
|
458 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Number of Design Flaws Related to Security (NSDF) |
|
Security-related design flaws occur when software is planned and specified without proper consideration of security requirements and principles. For instance, clear-text passwords are considered as design flaws. Design flaws can be detected using design inspection techniques (e.g., design reviews). Identifying the number of design flaws related to security can help detect security issues earlier in the design process |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
If the design is available, it can be applied |
|
|
|
459 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Number of Exceptions That Have Been Implemented to Handle Execution Failures Related to Security (NEX) |
|
Number of exceptions that have been included in the code to handle possible failures of the system due to an error in a code segment that has an impact on security |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
If the design is available, it can be applied |
|
|
|
460 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Number of excluded security requirements that ensure input/output handling |
DESIGN |
Is a specific encoding scheme defined for all inputs? Is a process of canonicalization applied to all inputs? Is appropriate validation defined and applied to all inputs, in terms of type, length, format/syntax, and range? Is a whitelist filtering approach applied to all inputs? Are all validations performed on both the client and the server side? Is all unsuccessful input handling rejected with an error message? Is all unsuccessful input handling logged? Is output data to the client filtered and encoded? Is output encoding performed on the server side? |
- |
SOFTWARE |
MANUAL |
STATIC* |
|
NO |
This metric is intended to be used in the design phase. It cannot be applied in a security assessment |
|
|
|
462 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Number of Implementation Errors Found in the System (NERR) |
|
Number of implementation errors of the system |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Related to the implementation phase |
|
|
|
463 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Number of Implementation Errors Related to Security (NSERR) |
|
Number of implementation errors of the system that have a direct impact on security. One of the most common security related implementation errors is the buffer overflow |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Related to the implementation phase |
|
|
|
464 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Number of Omitted Exceptions for Handling Execution Failures Related to Security (NOEX) |
|
Number of exceptions that were omitted by software engineers when implementing the system. These exceptions can easily be determined through testing techniques. |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
Related to the implementation phase |
|
|
|
465 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Number of Omitted Security Requirements (NOSR) |
|
Number of requirements that should have been considered when building the application |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
¿? |
If the design is available, it can be applied |
|
|
|
471 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Number of violations of the LP principle |
|
Number of violations of the Least Privilege principle. A component does not adhere to LP if, based upon the permissions attributed as described before, it is capable of executing tasks it is not responsible for |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
YES |
This could be used in embedded systems |
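The LP-violation count above can be sketched as a set difference between granted and required permissions per component. The `granted`/`required` mappings below are hypothetical inputs that a real assessment would derive from the deployed policy and each component's responsibilities.

```python
def lp_violations(granted, required):
    # Count Least Privilege violations: every permission granted to a
    # component beyond what its tasks require is one violation.
    # `granted` and `required` map component name -> set of permissions.
    violations = 0
    for component, perms in granted.items():
        needed = required.get(component, set())
        violations += len(perms - needed)
    return violations
```

For example, a logger granted `{read, write, net}` that only needs `{write}` contributes two violations.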
|
|
|
473 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Percentage Software Hazards (PSH) |
|
Comparing the number of software hazards identified against historical data indicates the sufficiency of the software hazard identification based on the identified hazards. PSH = (# software hazards / # system hazards) * 100 |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
YES |
Hazard is similar to risk. |
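The PSH formula reduces to a guarded percentage; a minimal sketch:

```python
def percentage_software_hazards(software_hazards, system_hazards):
    # PSH = (# software hazards / # system hazards) * 100
    if system_hazards == 0:
        raise ValueError("system hazard count must be positive")
    return software_hazards / system_hazards * 100.0
```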
|
|
|
476 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Remediation Potency (RP) |
|
Remediation potency refers to how effective the remediation is against the vulnerability |
- |
|
|
|
|
NO |
|
|
|
|
477 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Remediation Scheme (RS) |
|
It is used to describe the remediation process |
- |
|
|
|
|
NO |
|
|
|
|
479 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Security of system documentation |
|
(Total number of unauthorized accesses to system documentation / total number of accesses to system documentation) * 100 |
- |
ORGANISATION |
SEMI-AUTOMATIC* |
DYNAMIC* |
|
NO |
This might be useful to measure anything related to social engineering, information gathering, and OSINT |
|
|
|
480 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Static analysis alert count (number of) |
|
|
|
SOFTWARE |
|
|
|
|
|
|
|
|
488 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Vulnerability Density |
CODE |
Number of vulnerabilities per unit size of code. VD = (# vulnerabilities in the system) / (size of the software) |
- |
SOFTWARE |
SEMI-AUTOMATIC* |
STATIC* |
|
YES |
It seems that it can be applied to embedded systems, but I am not sure if it is applicable in the evaluation phase |
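Vulnerability density is conventionally reported as vulnerabilities per unit of code size, often per thousand lines of code (KLOC); a minimal sketch of that ratio, assuming size is supplied in KLOC:

```python
def vulnerability_density(num_vulnerabilities, size_kloc):
    # VD = (# vulnerabilities in the system) / (size of the software),
    # here expressed as vulnerabilities per KLOC.
    if size_kloc <= 0:
        raise ValueError("code size must be positive")
    return num_vulnerabilities / size_kloc
```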
|
|
|
490 |
11 |
X |
YES |
Weakness |
- |
- |
- |
- |
XsiteVulnCount |
|
It is obtained via penetration-testing tools |
- |
SOFTWARE |
- |
- |
|
NO |
It is not clearly defined |
|
|
|
494 |
11 |
|
YES |
Dependency* |
- |
- |
- |
- |
Security metrics of Arithmetic Expression |
|
The security flaws of arithmetic expressions include divide-by-zero, overflow of arithmetic operations, misused data types in arithmetic expressions, and truncation errors in arithmetic operations |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
¿? |
This is not a metric, but a group of them. Individually, they could be used if they are specified. Part of a checklist |
|
|
|
495 |
11 |
|
YES |
Dependency* |
- |
- |
- |
- |
Security Metrics of Array Index |
|
The security flaws of array indexing include index out of range, incorrect index variables, and uncontrollable array indices |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
¿? |
This is not a metric, but a group of them. Individually, they could be used if they are specified. Part of a checklist |
|
|
|
496 |
11 |
|
YES |
Dependency* |
- |
- |
- |
- |
Security Metrics of Component Interfaces |
|
The security flaws of component interface include inconsistent parameter numbers, inconsistent data types, and inconsistent size of data types |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
¿? |
This is not a metric, but a group of them. Individually, they could be used if they are specified. Part of a checklist |
|
|
|
497 |
11 |
|
YES |
Dependency* |
- |
- |
- |
- |
Security Metrics of Control Operation |
|
The security flaws of control operations include infinite loop, deadlock situations and boundary defects |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
¿? |
This is not a metric, but a group of them. Individually, they could be used if they are specified. Part of a checklist |
|
|
|
498 |
11 |
|
YES |
Dependency* |
- |
- |
- |
- |
Security Metrics of I/O Management |
|
Files are an important medium that can store critical information and record log data. However, file privilege definition, file access management, file contents, file organization, and file size are major factors affecting secure software operation |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
¿? |
This is not a metric, but a group of them. Individually, they could be used if they are specified. Part of a checklist |
|
|
|
499 |
11 |
|
YES |
Dependency* |
- |
- |
- |
- |
Security Metrics of Input Format |
|
The security flaws of data format include mismatched data contents, mismatched data types, and mismatched data volume |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
¿? |
This is not a metric, but a group of them. Individually, they could be used if they are specified. Part of a checklist |
|
|
|
500 |
11 |
|
YES |
Dependency* |
- |
- |
- |
- |
Security Metrics of Kernel Operation |
|
A released software system must be adapted to a suitable operating system and environment. The kernel of the operating system and its environment become a critical factor affecting the operating security of the software system. Resource management, execution-file protection, main-memory control, and process scheduling are major tasks of the operating system. Security flaws embedded in these operating-system tasks pose a high security risk to software system operation |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
¿? |
This is not a metric, but a group of them. Individually, they could be used if they are specified. Checklist. Only when there is an OS |
|
|
|
501 |
11 |
|
YES |
Dependency* |
- |
- |
- |
- |
Security Metrics of Network Environment |
|
The network environment is a critical tool in the e-commerce era. However, data transmission management, remote access control, and transmission contents often become the security holes of software system operation |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
¿? |
This is not a metric, but a group of them. Individually, they could be used if they are specified |
|
|
|
502 |
11 |
|
YES |
Dependency* |
- |
- |
- |
- |
Security Metrics of Resources Allocation |
|
The security flaws of resource allocation include exhausted memory, unreleased memory, exhausted resources, and unreleased resources |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
¿? |
This is not a metric, but a group of them. Individually, they could be used if they are specified |
|
|
|
503 |
11 |
|
YES |
Dependency* |
- |
- |
- |
- |
Security Metrics of User Authority |
|
Many users may use the released system to do specific jobs. However, user privilege definition, user password management, and maintenance of detailed user information are important tasks to control and manage user authority. Without user authority control and management, the operation environment of the software system will be at high security risk |
- |
SOFTWARE |
AUTOMATIC* |
STATIC* |
|
¿? |
This is not a metric, but a group of them. Individually, they could be used if they are specified |
|
|
|
526 |
11 |
|
YES |
Weakness |
- |
- |
- |
- |
Kolmogorov Complexity |
|
|
|
SOFTWARE |
|
|
|
|
|
|
|
|