
Thinking About Risk

Stephen D. Gantz, Daniel R. Philpott, in FISMA and the Risk Management Framework, 2013

Risk Tolerance

Risk tolerance is a measure of the level of risk an organization is willing to accept, expressed in either qualitative or quantitative terms and used as a key criterion when making risk-based decisions. Organizations sometimes assign different risk tolerance levels to different types of risk, but if organizations use consistent risk rating or measurement scales, then the same risk tolerance level should apply regardless of the type of risk or its source. Risk tolerance—also sometimes called risk appetite [17] or risk propensity [28]—varies widely among organizations based on many factors, including the relative risk sensitivity of risk managers and other organizational leaders, the organization’s mission, and the nature of its assets, resources, and the operational processes they support. In information security or any other risk management domain, risk managers make decisions based not on the total risk potentially faced by the organization, but on the risk that remains after risk mitigation or other measures intended to reduce risk have been put in place. This remaining risk is termed residual risk and, as illustrated in Figure 3.2, the organizational risk tolerance determines the acceptable level of residual risk, meaning that two or more organizations with different risk tolerances may respond differently to the same risk determination. When the residual risk relevant to a given decision exceeds the risk tolerance, the organization may choose to take additional action to reduce risk or it may opt not to go forward with the risk at the level determined. For organizations seeking consistent risk management, it is essential to both accurately determine risk tolerance and communicate the tolerance level to risk managers and relevant decision-makers throughout the organization.


Figure 3.2. The Risk Level is Determined by Considering Total Risk and the Risk Mitigation or Other Risk-Reducing Actions Taken By the Organization to Arrive at Residual Risk—the Risk That Remains When All Chosen Mitigating Responses are Implemented
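The decision logic described above is simple enough to sketch in code. The following Python fragment is a minimal illustration of the residual-risk rule, not anything from the chapter; the 0-to-10 risk scale, the multiplicative mitigation model, and all names are assumptions.

```python
# Hypothetical illustration of the residual-risk decision rule described
# above; the scale and the mitigation model are assumptions.

RISK_TOLERANCE = 3.0  # acceptable residual risk on an assumed 0-10 scale

def residual_risk(total_risk: float, mitigation_effect: float) -> float:
    """Risk remaining after mitigations; mitigation_effect is in [0, 1]."""
    return total_risk * (1.0 - mitigation_effect)

def decide(total_risk: float, mitigation_effect: float) -> str:
    """Compare residual risk against the organizational tolerance."""
    risk = residual_risk(total_risk, mitigation_effect)
    if risk <= RISK_TOLERANCE:
        return f"accept: residual risk {risk:.1f} is within tolerance"
    return f"mitigate further or decline: residual risk {risk:.1f} exceeds tolerance"

print(decide(total_risk=8.0, mitigation_effect=0.7))  # residual 2.4 -> accept
print(decide(total_risk=8.0, mitigation_effect=0.5))  # residual 4.0 -> exceeds
```

Two organizations facing the same total risk of 8.0 but with tolerances of 3.0 and 5.0 would decide differently on the second case, which is exactly the point made above about differing risk tolerances.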


URL: https://www.sciencedirect.com/science/article/pii/B9781597496414000035

Special Volume: Mathematical Modeling and Numerical Methods in Finance

T. Zariphopoulou, T. Zhou, in Handbook of Numerical Analysis, 2009

Proposition 5.3

Let the local risk tolerance be given by

$$r(x, t; 1, 0) = x, \qquad x > 0.$$

Then, the process

$$U_t^l(x) = \left(\log \frac{x}{Y_t} - \frac{A_t}{2}\right) Z_t, \qquad x > 0$$

is a forward performance. Moreover, the optimal investment strategy and associated wealth processes are given by

$$\tilde{\pi}_t^{*,l} = x\,(m_t + n_t)\exp\!\left(-\frac{1}{2}\int_0^t |\sigma_s n_s|^2\,ds + k_t\right)$$

and

$$\tilde{X}_t^{*,l} = x\exp\!\left(-\frac{1}{2}\int_0^t |\sigma_s n_s|^2\,ds + k_t\right).$$

At the optimum,

$$U_t^l(\tilde{X}_t^{*,l}) = \left(\log x - \int_0^t |\sigma_s n_s|^2\,ds + k_t\right) Z_t$$

with $n_t$ and $k_t$ as in (2.21) and (4.4).

In analogy with Corollary 5.1, we look at the case of no benchmark and no alternative market views.


URL: https://www.sciencedirect.com/science/article/pii/S1570865908000069

Risk management

Julie Carpenter, in Project Management in Libraries, Archives and Museums, 2011

Suitable responses to risk

Having identified the risks, both below and above the risk tolerance line, the project manager needs to take a view on what the response to each risk should be. The range of responses is fairly standard across different project management frameworks and methodologies. Table 4.7 provides a summary.

Table 4.7. Summary of typical risk responses

Avoidance or prevention: Terminate the risk by doing things differently, thus removing it, where it is feasible to do so. For instance, there is a plan to build a new library and archival storage facility on a greenfield site, but there is a risk that the council will refuse planning permission and delay the project. The project partners decide to build on a brownfield site on a former industrial estate; this incurs additional cost in terms of demolishing old buildings and removing hazardous waste.
Mitigation or reduction: Take action to control the risk in some way, where the actions either reduce the likelihood of the risk developing or limit its impact. For instance, the project will not easily be able to attract the required technical staff, so a salary supplement is offered to project staff.
Transference: Move the impact (and ownership) of the risk to a third party, via, for instance, an insurance policy or outsourcing services. For instance, the project board is aware that colleges are the target of an organised gang stealing hardware; a decision is taken to outsource some of the project servers to a hosting company.
Acceptance: Tolerate the risk, perhaps because nothing can be done at a reasonable cost to mitigate it, or because the likelihood and impact of the risk are at an acceptable level.
Contingency: Deal with the risk via contingency rather than by altering the plan; actions are planned and organised to come into force as and when the risk occurs.

Risk response actions may be preventative measures taken as soon as a risk is identified, and not only when a risk actually occurs. The project manager will ensure that actions are recorded in the risk log and the status of the risk is re-assessed once the preventative or mitigating actions are complete.


URL: https://www.sciencedirect.com/science/article/pii/B9781843345664500047

Information Risk Assessment

Timothy Virtue, Justin Rainey, in HCISPP Study Guide, 2015

Risk Tolerance and Uncertainty

Organizations need to determine the levels and types of risk that are acceptable. Risk tolerance is determined as part of the organizational risk management strategy to ensure consistency across the organization. Organizations also provide guidance on how to identify sources of uncertainty when risk factors are assessed, and on how to compensate for incomplete, imperfect, or assumption-dependent estimates, since uncertainty in one or more factors propagates to the resulting evaluation of the level of risk. Consideration of uncertainty is especially important when organizations consider advanced persistent threats (APTs), since assessments of the likelihood of threat event occurrence can have a great degree of uncertainty. To compensate, organizations can take a variety of approaches to determine likelihood, ranging from assuming the worst-case likelihood (certain to happen sometime in the foreseeable future) to assuming that if an event has not been observed, it is unlikely to happen. In determining likelihood, organizations should also consider the probability of an attack being attempted and its probability of success. Organizations also determine what levels of risk (combination of likelihood and impact) indicate that no further analysis of any risk factors is needed.
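As a rough illustration of that reasoning, the sketch below combines the two probabilities and applies the worst-case assumption for APTs; the banding cut-offs, names, and policy are illustrative assumptions, not values from the text.

```python
# Hypothetical sketch of the likelihood determination described above.

def likelihood(p_attempt: float, p_success: float, apt: bool = False) -> float:
    """Overall likelihood that a threat event is attempted and succeeds."""
    if apt:
        # Worst-case assumption for an advanced persistent threat:
        # treat an attempt as certain in the foreseeable future.
        p_attempt = 1.0
    return p_attempt * p_success

def band(p: float) -> str:
    """Map a probability to a qualitative likelihood level (assumed cut-offs)."""
    if p >= 0.8:
        return "very high"
    if p >= 0.5:
        return "high"
    if p >= 0.2:
        return "moderate"
    return "low"

print(band(likelihood(0.3, 0.6)))            # 0.18 -> "low"
print(band(likelihood(0.3, 0.6, apt=True)))  # 0.60 -> "high"
```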


URL: https://www.sciencedirect.com/science/article/pii/B9780128020432000069

Staying ahead on downside risk

Giuliano De Rossi, in Optimizing Optimization, 2010

6.2.2 Modeling EVaR dynamically

Another notable advantage of expectiles is their mathematical tractability, as argued in Newey and Powell (1987). Dynamic expectile models have been derived in Taylor (2008), Kuan et al. (2009), and De Rossi and Harvey (2009). An early example appeared in Granger and Sin (2000). Computationally, simple univariate models compare favorably to GARCH or dynamic quantile models.

The analysis presented in this chapter is based on the dynamic model of De Rossi and Harvey (2009). Intuitively, their estimator produces a curve rather than a single value μ, so that it can adapt to changes in the distribution over time. The two parameters needed for the estimation are ω and q.

The former can be interpreted as a prudentiality level: the lower the value of ω, the greater the risk aversion. Figure 6.1 shows a typical expectile plot for alternative values of ω. By decreasing ω, one focuses on values that are further out in the lower tail, i.e., more severe losses.


Figure 6.1. Time-varying expectiles. The solid black lines represent estimated dynamic expectiles for ω = 0.05, 0.25, 0.5 (mean), 0.75, and 0.95.

By increasing q, we can make the model more flexible in adapting to the observed data. The case q = 0 corresponds to the constant expectile (estimated by the sample expectile). As Figure 6.2 shows, larger values of q produce estimated curves that follow the observations more and more closely.


Figure 6.2. Time-varying expectiles for alternative values of q. The data is a simulated time series. The dotted line represents the sample 5% expectile, which corresponds to the case q = 0.

De Rossi and Harvey (2009) assume that a time series $y_t$, a risk tolerance parameter $0 < \omega < 1$, and a signal-to-noise ratio $q$ are given. They then decompose each observation into its unobservable ω-expectile, $\mu_t(\omega)$, and an error term $\varepsilon_t(\omega)$ whose ω-expectile equals zero:

$$y_t = \mu_t(\omega) + \varepsilon_t(\omega), \qquad \mu_t(\omega) = \mu_{t-1}(\omega) + \eta_t(\omega)$$

The ω-expectile, $\mu_t(\omega)$, is assumed to change over time following a (slowly evolving) random walk driven by a normally distributed error term $\eta_t$ with zero mean. In the special case $\omega = 0.5$, $\mu_t(0.5)$ is just the time-varying mean, and therefore $y_t$ is a random walk plus noise. The signal $\mu_t(0.5)$ can be estimated via the Kalman filter and smoother (KFS).

Equivalently, the problem can be cast in a nonparametric framework. The goal is to find the optimal curve f(t), plotted in Figure 6.1, that fits the observations. It is worth stressing that f(t) is a continuous function, so here the argument t, with a slight abuse of notation, is allowed to be a positive real number such that 0<t<T. The solution minimizes:

$$\int_0^T [f'(t)]^2\,dt + q\sum_{s=1}^{T} \rho_\omega\big(y_s - f(s)\big) \tag{6.3}$$

with respect to the whole function f, within the class of functions having square integrable first derivative. At any point t = 1,...,T, we then set μt(ω) = f(t).

The first term in Equation (6.3) is a roughness penalty. Loosely speaking, the more the curve f(t) wiggles, the higher the penalty. The second term is the objective function for the expectile, which, as noted above, gives asymmetric weights to positive and negative errors.

The constant q represents the relative importance of the expectile criterion. As q grows large, the objective function tends to become influenced less and less by the squared first derivative. In the limit, only the errors yt−μt(ω) matter and the solution becomes μt(ω) = yt, i.e., the estimated expectile coincides with the corresponding observation. As q tends to zero, instead, the integral of the squared derivative is minimized by a straight line. As a result, the solution in the limit is to set all expectiles equal to the historical expectile. The role of q is illustrated in Figure 6.2.

It can be shown that the optimal curve is a piecewise linear spline. Computationally, finding the optimal spline boils down to solving a nonlinear system in μ, the vector of T expectiles. After some tedious algebra, the first-order conditions to minimize Equation (6.3) turn out to be:

$$[\Omega + D(y, \mu)]\,\mu = D(y, \mu)\,y$$

where D(y, μ) is diagonal with element (t, t) given by

$$\big|\omega - I(y_t - \mu_t < 0)\big|$$

and

$$\Omega = q^{-1}\begin{bmatrix} 1 & -1 & 0 & 0 & \cdots & 0 \\ -1 & 2 & -1 & 0 & \cdots & 0 \\ 0 & -1 & 2 & -1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & -1 & 2 & -1 \\ 0 & \cdots & & 0 & -1 & 1 \end{bmatrix}$$

Both $\Omega$ and $D$ are $T \times T$ matrices. Starting with an initial guess, $\mu^{(1)}$, the optimal $\mu$ is found by iterating the equation:

$$\mu^{(i+1)} = \big[\Omega + D(y, \mu^{(i)})\big]^{-1} D(y, \mu^{(i)})\,y$$

until convergence. I define $\hat{D}(y)$ as the matrix $D$ upon convergence of $\mu^{(i)}$ to $\hat{\mu}$. The repeated inversion of the $T \times T$ matrix in the formula can be carried out efficiently by using the KFS. To this end, it is convenient to set up an auxiliary linear state space form at each iteration:

$$y_t = \delta_t + u_t, \qquad \delta_t = \delta_{t-1} + v_t \tag{6.4}$$

where

$$\mathrm{Var}(u_t) = 1\big/\big|\omega - I(y_t - \mu_t^{(i)} < 0)\big|$$

and

$$\mathrm{Var}(v_t) = q$$

The unobservable state δt replaces μt. The model in Equation (6.4) is just a convenient device that can be used to carry out the computation efficiently. It can be shown that the linear KFS applied to Equation (6.4) yields the optimal μ characterized above.
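The iteration is straightforward to prototype. The sketch below is a minimal illustration, not the authors' code: it uses a dense linear solve in place of the more efficient KFS recursion, and the initialization, tolerance, and parameter defaults are assumptions.

```python
import numpy as np

def dynamic_expectile(y, omega=0.05, q=1.0, missing=None,
                      tol=1e-10, max_iter=1000):
    """Time-varying omega-expectile via the fixed point
    mu = [Omega + D(y, mu)]^(-1) D(y, mu) y.
    If `missing` is an index, that observation gets zero weight, so the
    spline interpolates across it (used for cross-validation below)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    # Tridiagonal roughness-penalty matrix, scaled by q^(-1)
    Omega = np.diag(np.r_[1.0, np.full(T - 2, 2.0), 1.0])
    Omega += np.diag(np.full(T - 1, -1.0), 1)
    Omega += np.diag(np.full(T - 1, -1.0), -1)
    Omega /= q
    mu = np.full(T, y.mean())  # initial guess mu^(1)
    for _ in range(max_iter):
        # Diagonal of D: the asymmetric weights |omega - I(y_t - mu_t < 0)|
        w = np.abs(omega - (y - mu < 0).astype(float))
        if missing is not None:
            w[missing] = 0.0  # treat y_missing as unobserved
        D = np.diag(w)
        mu_next = np.linalg.solve(Omega + D, D @ y)
        if np.max(np.abs(mu_next - mu)) < tol:
            return mu_next
        mu = mu_next
    return mu
```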

The parameter q can be estimated from the data by cross-validation. The method is very intuitive: it consists of dropping one observation at a time from the sample and re-estimating the time-varying expectile, each time with one missing observation, T times in total. The T residuals are then used to compute the objective function:

$$CV_\omega(q) = \sum_{t=1}^{T} \rho_\omega\big(y_t - \tilde{\mu}_t^{(-t)}\big)$$

where $\tilde{\mu}_t^{(-t)}$ is the estimated value at time $t$ when $y_t$ is dropped. $CV_\omega(q)$ depends on $q$ through the estimator $\tilde{\mu}_t^{(-t)}$. De Rossi and Harvey (2006) devise a computationally efficient method to minimize $CV_\omega(q)$ with respect to $q$.
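Continuing the sketch above, a brute-force version of the cross-validation criterion simply refits with each observation's weight zeroed; scanning a small grid of q values is an illustrative stand-in for the efficient minimization the authors devise.

```python
import numpy as np  # dynamic_expectile is the sketch defined earlier

def cv_objective(y, omega, q):
    """CV_omega(q) = sum_t rho_omega(y_t - mu_tilde_t^(-t))."""
    total = 0.0
    for t in range(len(y)):
        mu = dynamic_expectile(y, omega, q, missing=t)
        err = y[t] - mu[t]
        total += abs(omega - (err < 0)) * err ** 2  # check function rho_omega
    return total

rng = np.random.default_rng(0)
y = rng.standard_normal(200)
grid = [0.01, 0.1, 1.0, 10.0]
q_hat = min(grid, key=lambda q: cv_objective(y, omega=0.05, q=q))
```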


URL: https://www.sciencedirect.com/science/article/pii/B9780123749529000063

Risk Management Framework

Leighton Johnson, in Security Controls Evaluation, Testing, and Assessment Handbook, 2016

Step 6 – monitoring

The primary goal of this step is to monitor the security controls in the information system on an ongoing basis, including assessing control effectiveness, documenting changes to the system or its environment of operation, conducting security impact analyses of the associated changes, and reporting the security state of the system to designated organizational officials.

The objectives of this step are as follows:

Operate and maintain system security within acceptable risk tolerance.

Update system securely and safely.

Conduct mission successfully.

After an Authorization to Operate (ATO) is granted, ongoing continuous monitoring is performed on all identified security controls as well as the political, legal, and physical environment in which the system operates. Changes to the system or its operational environment are documented and analyzed. The security state of the system is reported to designated officials. Significant changes will cause the system to reenter the security authorization process. Otherwise, the system will continue to be monitored on an ongoing basis in accordance with the organization’s monitoring strategy.

The identified tasks for step 6 are as follows:

1. Determine the security impact of proposed or actual changes to the information system and its environment of operation.

2. Assess the technical, management, and operational security controls employed within and inherited by the information system in accordance with the organization-defined monitoring strategy.

3. Conduct remediation actions based on the results of ongoing monitoring activities, assessment of risk, and outstanding items in the plan of action and milestones.

4. Update the security plan, security assessment report, and plan of action and milestones based on the results of the continuous monitoring process.

5. Report the security status of the information system (including the effectiveness of security controls employed within and inherited by the system) to the authorizing official and other appropriate organizational officials on an ongoing basis in accordance with the monitoring strategy.

6. Review the reported security status of the information system (including the effectiveness of security controls employed within and inherited by the system) on an ongoing basis in accordance with the monitoring strategy to determine whether the risk to organizational operations, organizational assets, individuals, other organizations, or the nation remains acceptable.

7. Implement an information system disposal strategy, when needed, which executes required actions when a system is removed from service.

The guidance from the SP 800-37, rev. 1 gives additional insight to ongoing monitoring: “The authorizing official or designated representative reviews the reported security status of the information system (including the effectiveness of deployed security controls) on an ongoing basis, to determine the current risk to organizational operations and assets, individuals, other organizations, or the Nation. The authorizing official determines, with inputs as appropriate from the authorizing official designated representative, senior information security officer, and the risk executive (function), whether the current risk is acceptable and forwards appropriate direction to the information system owner or common control provider. The use of automated support tools to capture, organize, quantify, visually display, and maintain security status information promotes the concept of near real-time risk management regarding the overall risk posture of the organization. The use of metrics and dashboards increases an organization’s ability to make risk-based decisions by consolidating data from automated tools and providing it to decision makers at different levels within the organization in an easy-to-understand format. The risks being incurred may change over time based on the information provided in the security status reports. Determining how the changing conditions affect the mission or business risks associated with the information system is essential for maintaining adequate security. By carrying out ongoing risk determination and risk acceptance, authorizing officials can maintain the security authorization over time. Formal reauthorization actions, if required, occur only in accordance with federal or organizational policies. The authorizing official conveys updated risk determination and acceptance results to the risk executive (function).”6

The Risk Management Framework Authorization Package, as referenced above, consists of three documents produced during the assessment and authorization process that are required to obtain an ATO for federal systems: the System Security Plan (as defined in SP 800-18), the Security Assessment Report (as defined in SP 800-37 and SP 800-53A), and the POAM (as defined in OMB Memorandum M-02-01). The following diagram shows the RMF steps and the approximate points in the process at which each of these documents is generated:

(Figure: the RMF steps, annotated with where the System Security Plan, Security Assessment Report, and POAM are generated.)

Continuous Monitoring for Current Systems

The objective of a continuous monitoring program is to determine whether the complete set of planned, required, and deployed security controls within an information system or inherited by the system continues to be effective over time in light of the inevitable changes that occur. In 2010, OMB issued guidance to US governmental agencies that continuous monitoring of security controls would now be required for all systems. This began the ongoing effort within each agency to create, develop, and maintain a continuous monitoring program. NIST provided some guidance when it issued SP 800-137, “Information Security Continuous Monitoring (ISCM) for Federal Information Systems and Organizations,” in September 2011.

There are several different ISCM programs currently in deployment in various governmental agencies, including the Continuous Diagnostics and Mitigation (CDM) effort from DHS and the Continuous Monitoring Risk System (CMRS) from DISA in DOD. These efforts will continue to evolve over the next several years. As part of this effort, OMB and NIST are providing guidance and direction on moving to an event-driven authorization process known as Ongoing Authorization (OA) within agencies that have an active ISCM. Recent supplemental guidance to SP 800-37, rev. 1 covers this process, and it is worth the effort to obtain that document and review it for your area of interest.


URL: https://www.sciencedirect.com/science/article/pii/B9780128023242000051

General Team Management

Leighton R. Johnson III, in Computer Incident Response and Forensics Team Management, 2014

Corporate Level Management Considerations

The corporate oversight requirements for SIR&FT Management need to focus on the general corporate culture of risk tolerance for the business activities. The corporate leadership, the officers and the board, have the requirement, under compliance needs and regulations (the GRC domain: Governance, Risk, and Compliance), to set the risk appetite of the organization and to convey it to the rest of the organization via strategy documents, corporate guidance, general communications, and personal leadership. Each Line of Business (LOB) must have some guidance for what areas to address in its policies and procedures. This includes data privacy needs and requirements, which can involve external factors such as health information if the unit deals with HIPAA or other health data, or personal information if the unit deals with Personally Identifiable Information (PII) in its daily actions. If the organization uses Intellectual Property (IP), this needs to be clearly conveyed to the SIR&FT Manager before any occurrence dealing with the IP. The IP requirements are especially important if the organization is subject to, or suspects, industrial espionage, since that is a civil rather than a criminal offense.

The senior corporate management needs to have in place a method of risk review for all potential exposure points for the organization. This construct would include a Risk Review mechanism for threats to the organization and the potential vulnerabilities of the organization and its IT infrastructure. The first part of this methodology needs to look at the assets of the organization and their value to the group. What the assets cost to run, to replace, to redesign, to rebuild, and to retire are all questions to consider for the asset valuation portion of this process. The asset control requirements for the LOB also need to be defined and documented for proper consideration.

Following up the asset review are the three basic areas within any risk assessment process that the organization must perform in order to properly advise the SIR&FT during response events. These three areas (threat assessment, vulnerability assessment, and control analysis) are all areas in which the SIR&FT Manager can advise, participate, or even help create options during the assessment process at a corporate level. The Incident Response team members are often extremely aware of the threat environment of the organization, as they regularly research the current state of threats, malware analysis techniques, and other “hacker”-type activities occurring in the community. The team members and manager can advise the risk assessment team on the validity and viability of the threats as defined during the assessment. The team members, both incident handlers and forensics analysts, can advise on the vulnerability assessment parts of the review. Both areas (Incident Response and Forensics) are often exposed to various vulnerabilities and have a detailed understanding of operating systems, applications, databases, and equipment, gathered during the normal course of their daily job performance. The control analysis portion of the assessment is often where the forensics investigators can excel in providing detailed analysis or inputs to the recommendations of the current and planned controls needed to properly meet the standard security objectives of confidentiality, integrity, and availability.

Once these parts are accomplished, then the organization conducts the business evaluation and impact analysis actions for each area of concern, piece of equipment, or system under review. This process is important for the subsequent Incident Response Plan development, the Forensics Team training efforts, and the overall SIR&FT preparation step as discussed in Sections 3, 10, and 13. Then the corporate security staff, the corporate sponsor, and the IT staff come up with control recommendations and the overall risk determination for the system or network under review. This lays the foundation for the SIR&FT policies and procedures development efforts, as the core strategy is now defined and the Incident Response and Forensics actions to prepare for are now documented.

Corporate reviews of risk assessments conducted by the regular security staff need to be visible to the SIR&FT Manager and personnel. This allows for proper training of the team members during exercises that focus on the most likely scenarios of data breach, insider threat, and externally driven events before an actual response event occurs and the team has to respond. As part of this process, the corporate security plan and policies need to include the various incident response and forensics policies as defined elsewhere in this book. The integration of the team-specific policies allows for understanding by everyone possibly involved in an event during the response activity.

The SIR&FT needs this guidance from the corporate area in order to determine methods of response and priorities of response, based on value to the LOB, during an incident response or forensics investigation.


URL: https://www.sciencedirect.com/science/article/pii/B9781597499965000170

Further Techniques in Decision Analysis

Richard E. Neapolitan, Xia Jiang, in Probabilistic Methods for Financial and Marketing Informatics, 2007

6.1.1 The Exponential Utility Function

The exponential utility function is given by

$$U_r(x) = 1 - e^{-x/r}.$$

In this function the parameter r, called the risk tolerance, determines the degree of risk-aversion modeled by the function. As r becomes smaller, the function models more risk-averse behavior. Figure 6.1(a) shows $U_{500}(x)$, while Figure 6.1(b) shows $U_{1000}(x)$. Notice that both functions are concave (opening downward), and the one in Figure 6.1(b) is closer to being a straight line. The more concave the function is, the more risk-averse is the behavior modeled by the function. To model risk-neutrality (i.e., simply being an expected value maximizer), we would use a straight line instead of the exponential utility function, and to model risk-seeking behavior we would use a convex (opening upward) function. Chapter 5 showed many examples of modeling risk-neutrality. Here, we concentrate on modeling risk-averse behavior.


Figure 6.1. The $U_{500}(x) = 1 - e^{-x/500}$ function is in (a), while the $U_{1000}(x) = 1 - e^{-x/1000}$ function is in (b).

Example 6.1

Suppose Sam is making the decision in Chapter 5, Example 5.1, and Sam decides his risk tolerance r is equal to 500. Then for Sam

$$EU(\text{Buy NASDIP}) = EU(\text{NASDIP}) = .25\,U_{500}(\$500) + .25\,U_{500}(\$1000) + .5\,U_{500}(\$2000)$$
$$= .25\left(1 - e^{-500/500}\right) + .25\left(1 - e^{-1000/500}\right) + .5\left(1 - e^{-2000/500}\right) = .86504.$$

$$EU(\text{Leave \$1000 in bank}) = U_{500}(\$1005) = 1 - e^{-1005/500} = .86601.$$

So Sam decides to leave the money in the bank.

Example 6.2

Suppose Sue is less risk averse than Sam, and she decides that her risk tolerance r equals 1000. If Sue is making the decision in Chapter 5, Example 5.1, for Sue

$$EU(\text{Buy NASDIP}) = EU(\text{NASDIP}) = .25\,U_{1000}(\$500) + .25\,U_{1000}(\$1000) + .5\,U_{1000}(\$2000)$$
$$= .25\left(1 - e^{-500/1000}\right) + .25\left(1 - e^{-1000/1000}\right) + .5\left(1 - e^{-2000/1000}\right) = .68873.$$

$$EU(\text{Leave \$1000 in bank}) = U_{1000}(\$1005) = 1 - e^{-1005/1000} = .63396.$$

So Sue decides to buy NASDIP.
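The arithmetic in Examples 6.1 and 6.2 is easy to verify in a few lines of code. This is a sketch for checking the numbers, not the book's code; only the utility function and the payoff distributions come from the examples.

```python
import math

def u(x: float, r: float) -> float:
    """Exponential utility U_r(x) = 1 - exp(-x/r)."""
    return 1.0 - math.exp(-x / r)

def expected_utility(lottery, r):
    """lottery: list of (probability, payoff) pairs."""
    return sum(p * u(x, r) for p, x in lottery)

nasdip = [(.25, 500), (.25, 1000), (.5, 2000)]
bank = [(1.0, 1005)]

for name, r in [("Sam", 500), ("Sue", 1000)]:
    eu_nasdip = expected_utility(nasdip, r)
    eu_bank = expected_utility(bank, r)
    choice = "buy NASDIP" if eu_nasdip > eu_bank else "leave money in bank"
    print(f"{name}: EU(NASDIP)={eu_nasdip:.5f}, EU(bank)={eu_bank:.5f} -> {choice}")

# Sam: EU(NASDIP)=0.86504, EU(bank)=0.86601 -> leave money in bank
# Sue: EU(NASDIP)=0.68873, EU(bank)=0.63396 -> buy NASDIP
```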

Assessing r

In the previous examples we simply assigned risk tolerances to Sam and Sue. You should be wondering how an individual arrives at her or his personal risk tolerance. Next, we show a method for assessing it.

One way to determine your personal value of r in the exponential utility function is to consider a gamble in which you will win $x with probability .5 and lose $x/2 with probability .5. Your value of r is the largest value of x for which you would choose the lottery over obtaining nothing. This is illustrated in Figure 6.2.


Figure 6.2. You can assess the risk tolerance r by determining the largest value of x for which you would be indifferent between d1 and d2.

Example 6.3

Suppose we are about to toss a fair coin. I would certainly like the gamble in which I win $10 if a heads occurs and lose $5 if a tails occurs. If we increased the amounts to $100 and $50, or even to $1000 and $500, I would still like the gamble. However, if we increased the amounts to $1,000,000 and $500,000, I would no longer like the gamble because I cannot afford a 50% chance of losing $500,000. By going back and forth like this (similar to a binary cut), I can assess my personal value of r. For me r is about equal to 20,000. (Professors do not make all that much money.)

You may inquire as to the justification for using this gamble to assess r. Notice that for any r

$$.5\left(1 - e^{-r/r}\right) + .5\left(1 - e^{-(-r/2)/r}\right) = -.0083,$$

and

$$1 - e^{-0/r} = 0.$$

We see that for a given value of the risk tolerance r, the gamble in which one wins $r with probability .5 and loses −$r/2 with probability .5 has about the same utility as receiving $0 for certain. We can use this fact and then work in reverse to assess r. That is, we determine the value of r for which we are indifferent between this gamble and obtaining nothing.
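A quick numerical check of this indifference point, as a sketch rather than anything from the book:

```python
import math

def gamble_utility(x: float, r: float) -> float:
    """Expected exponential utility of winning $x or losing $x/2,
    each with probability .5, given risk tolerance r."""
    u = lambda v: 1.0 - math.exp(-v / r)
    return .5 * u(x) + .5 * u(-x / 2)

# At x = r, the gamble is nearly indifferent to receiving $0:
print(round(gamble_utility(20_000, r=20_000), 4))  # -0.0083
```

Working in reverse, the largest stake x at which you would still take the gamble is (approximately) your personal r.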

Constant Risk Aversion

Another way to model a decision problem involving money is to consider one's total wealth after the decision is made and the outcomes occur. The next example illustrates this.

Example 6.4

Suppose Joe has an investment opportunity that entails a .4 probability of gaining $4000 and a .6 probability of losing $2500. If we let d1 be the decision alternative to take the investment opportunity and d2 be the decision alternative to reject it (i.e., he receives $0 for certain), then

$$E(d_1) = .4(\$4000) + .6(-\$2500) = \$100, \qquad E(d_2) = \$0.$$

So if Joe were an expected value maximizer, he would choose the investment opportunity.

Suppose next that Joe carefully analyzes his risk tolerance, and he decides that for him r = $5000. Then

$$EU(d_1) = .4\left(1 - e^{-4000/5000}\right) + .6\left(1 - e^{-(-2500)/5000}\right) = -.1690, \qquad EU(d_2) = 1 - e^{-0/5000} = 0.$$

The solved decision tree is shown in Figure 6.3 (a). So given Joe's risk tolerance, he would not choose this risky investment.


Figure 6.3. The solved decision tree for Example 6.4 is shown in (a). Solved decision trees for that same example when we model in terms of total wealth are shown in (b) and (c). The total wealth in (b) is $10,000, whereas in (c) it is $100,000.

Example 6.5

Suppose Joe's current wealth is $10,000, and he has the same investment opportunity as in the previous example. Joe might reason that what really matters is his current wealth after he makes his decision and the outcome is realized. Therefore, he models the problem instance in terms of his final wealth rather than simply the gain or loss from the investment opportunity. Doing this, we have

$$EU(d_1) = .4\left(1 - e^{-(10{,}000+4000)/5000}\right) + .6\left(1 - e^{-(10{,}000-2500)/5000}\right) = .8418, \qquad EU(d_2) = 1 - e^{-10{,}000/5000} = .8647.$$

The solved decision tree is shown in Figure 6.3 (b). The fact that his current wealth is $10,000 does not affect his decision. The decision alternative to do nothing still has greater utility than choosing the investment opportunity.

Example 6.6

Suppose next that Joe rejects the investment opportunity. However, he does well in other investments during the following year, and his total wealth becomes $100,000. Further suppose that he has the exact same investment opportunity he had a year ago. That is, Joe has an investment opportunity that entails a .4 probability of gaining $4000 and a .6 probability of losing $2500. He again models the problem in terms of his final wealth. We then have

$$EU(d_1) = .4\left(1 - e^{-(100{,}000+4000)/5000}\right) + .6\left(1 - e^{-(100{,}000-2500)/5000}\right) = .9999999976$$

$$EU(d_2) = 1 - e^{-100{,}000/5000} = .9999999980.$$

The solved decision tree is shown in Figure 6.3 (c). Although the utility of the investment opportunity is now very close to that of doing nothing, it is still smaller, and he still should choose to do nothing.

It is a property of the exponential utility function that an individual's total wealth cannot affect the decision obtained using the function. A function such as this is called a constant risk-averse utility function. If one uses such a function to model one's risk preferences, one must reevaluate the parameters in the function when one's wealth changes significantly. For example, Joe should reevaluate his risk tolerance r when his total wealth changes from $10,000 to $100,000.

The reason the exponential utility function displays constant risk aversion is that the term for total wealth cancels out of an inequality comparing two utilities. For example, consider again Joe's investment opportunity that entails a .4 probability of gaining $4000 and a .6 probability of losing $2500. Let w be Joe's total wealth. The first inequality in the following sequence of inequalities compares the utility of choosing the investment opportunity to doing nothing when we consider the total wealth w, while the last inequality compares the utility of choosing the investment opportunity to doing nothing when we do not consider total wealth. If you follow the inequalities in sequence, you will see that they are all equivalent to each other. Therefore, consideration of total wealth cannot affect the decision.

$$.4\left(1 - e^{-(w+4000)/5000}\right) + .6\left(1 - e^{-(w-2500)/5000}\right) \ge 1 - e^{-w/5000}$$
$$\Leftrightarrow \quad .4\,e^{-4000/5000}e^{-w/5000} + .6\,e^{2500/5000}e^{-w/5000} \le e^{-w/5000}$$
$$\Leftrightarrow \quad .4\,e^{-4000/5000} + .6\,e^{2500/5000} \le 1$$
$$\Leftrightarrow \quad .4\left(1 - e^{-4000/5000}\right) + .6\left(1 - e^{2500/5000}\right) \ge 0$$
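The cancellation can also be confirmed numerically. The following sketch (ours, not the book's) evaluates both alternatives at several wealth levels and shows that the ordering never changes.

```python
import math

R = 5000.0  # Joe's risk tolerance

def eu_invest(w: float) -> float:
    """EU of the opportunity (.4 chance of +$4000, .6 chance of -$2500)."""
    u = lambda v: 1.0 - math.exp(-v / R)
    return .4 * u(w + 4000) + .6 * u(w - 2500)

def eu_nothing(w: float) -> float:
    """EU of rejecting the opportunity and keeping wealth w."""
    return 1.0 - math.exp(-w / R)

for w in (0.0, 10_000.0, 100_000.0):
    print(w, eu_invest(w) < eu_nothing(w))  # True at every wealth level
```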


URL: https://www.sciencedirect.com/science/article/pii/B9780123704771500232

Risk Profiling

Evan Wheeler, in Security Risk Management, 2011

Assessing Risk Appetite

On day one of any job as a security professional, you should be observing how the organization manages and reacts to risks, and try to gauge the organization's appetite for risk. As a security leader, one of your primary tasks when joining an organization should be to understand your CEO's tolerance for risk exposure. Clearly, some organizations are more sensitive to risks and have more to lose during a security breach than others. Risk management choices are often influenced by the corporate culture and the tone for risk acceptance is usually set from the top down.

Assessing the C-Level

You can categorize an organization as either risk accepting or risk averse, but there is a large continuum of risk tolerance in between on which most organizations' risk appetites lie. Organizations that accept risk are generally those that are in a growth mode, whereas larger, more well-established organizations are typically more averse to risk taking. For example, you might hear large banking institutions' approach to risk management described as “find all the risks, and crush them into oblivion.” This would be characteristic of a risk-averse approach. However, a small start-up company might be willing to operate with a lot more exposure, knowing that they can capitalize on opportunities and have less to lose.

Regardless of whether you approach risk management from a quantitative perspective or focus on qualitative methods, you will discover that risk management can be as much an art as it is a science. This should be no real surprise to anyone who is familiar with similar disciplines such as economics. We like to think of our field as being closer to the science of economics than, say, weather forecasting, but there are the days when we're not quite sure. After all, providing even a 10-day forecast for information security can sometimes feel daunting. The art of risk management becomes most critical when you are trying to gauge the risk tolerance of your executive team. The failures of many young information security officers can be directly attributed to their failure to properly profile their own senior leadership and adapt to the organization's culture.

Sadly, you will hear many horror stories about the security leaders who didn't even attempt to assess the risk appetite of the organization before plowing ahead with security initiatives and “fixes” that weren't at all in line with the executive's priorities. Whether you are interviewing for a senior position on a security team, or you are a new security leader joining an organization, or an existing security officer who has been tasked with building out a risk program, you need to focus on drawing out information from the most senior executives about their priorities and tolerance for risk. The most typical approach is to schedule a few sit-downs with executives to discuss past incidents and current concerns and to talk through how they would want to handle a few likely risk scenarios.

You may be inclined to start the scenario discussion by focusing on a couple of incidents that you foresee occurring at this organization, but it can be more productive to start with a couple of hypothetical risk decisions. Present a few risks to the executive with some trade-offs in terms of cost to mitigate and resource constraints, and listen to how they would want to address these risks. Pay very close attention to the questions they ask to help make their decision because this is a great indicator of how they think and can help you prepare answers when there is a real decision to be made down the road. For example, if you present a sample risk during this conversation and the executive's first question is whether this risk remediation plan can be outsourced to a consulting company, then you now have valuable information about how to present mitigation plans in the future. As much as possible, it is important to listen primarily and to only speak up when you need to move the discussion forward or to better understand their reasoning.

Present several scenarios—this may need to be over the course of several sessions—with each risk having different levels of exposure and focusing on different aspects of the business. That way, you have some information you can use to compare the executive team's priorities relative to one another. Try translating your risk scales into threshold statements that will be meaningful for senior managers. For example, ask the CEO if a risk that will likely cause an outage of a major service for up to four hours in the next year needs to be escalated to the executive level or not. How about an exposure that will likely cause an outage for thirty minutes? You may want to start this process with your direct boss, whether that be the CIO or another C-level executive, and also be sure to engage the CEO if you don't report to him or her directly. Any meetings that you can attend where decisions are being made about priorities or objectives that are being set will also provide you with invaluable insight into the values of the organization that can help you to focus your own efforts.

These sessions don't need to be formal; you may even extend this exercise to include some table-top sessions where you get several leaders in a room to work through a hypothetical scenario. The goal is to consciously spend some time profiling the organization outside of any information-resource-focused assessments on the ground and to gauge how the senior leadership approaches risk decisions.

Setting Risk Thresholds and Determining Tolerance Ranges

As part of the security risk profiling process, each resource needs to have risk sensitivity, tolerance, and threshold values defined. Table 4.4 shows the relationship between sensitivity, tolerance range, and threshold for a basic low-to-high qualitative risk scale.

Table 4.4. Risk-Tolerance Levels

Risk Sensitivity | Risk Tolerance (risk exposure range) | Risk Threshold (risk exposure upper bound)
Low | Negligible to High | High
Moderate | Negligible to Moderate | Moderate
High | Negligible to Low | Low

Think of the risk threshold as the highest level of acceptable risk exposure for that resource. If you remember, back in the beginning of this chapter, we stated that the risk threshold is always inversely proportional to the risk sensitivity level. If you have a high sensitivity to risk, then you should have a low threshold for risk. A resource with a low sensitivity to risk will have a high threshold for risk exposure. The risk threshold is the upper bound for the risk tolerance range. The risk tolerance range defines the lowest and highest levels of acceptable risk for that resource. There is nothing magical about the relationship between sensitivity, tolerance, and thresholds; we use it as a way to inform risk decisions and as a simple criterion for which risks need to be escalated. If a particular exposure is outside the acceptable tolerance range, then this may require an escalation to address the risk exposure. Likewise, you can use this same concept to define threshold risk levels for an environment. For example, when making an implementation decision for a new system, you may determine that it can't be placed into the existing server farm because of the exposures it might introduce to existing systems in that environment. The risk that one system imposes on another is referred to as transitive risk. This should be a primary factor when determining the proper placement and segmentation needed in your environment.

Just because a risk exposure is within an acceptable level, it doesn't mean an organization shouldn't choose to mitigate it further. Similarly, just because a risk is outside the tolerance range, it doesn't mean that it can't be accepted. These levels are merely guidelines for making risk decisions and should be used to determine the level of executive sign-off needed to deviate from recommended tolerance levels. These concepts will be particularly important when we start to discuss policy exceptions and risk acceptance in later chapters.
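One way to operationalize these guidelines is sketched below. The code is a hypothetical illustration: the level ordering and the inverse sensitivity-to-threshold mapping come from Table 4.4, while the structure and names are assumptions.

```python
# Qualitative levels in increasing order of risk exposure
LEVELS = ["negligible", "low", "moderate", "high"]

# Threshold is inversely proportional to sensitivity (Table 4.4)
THRESHOLD_BY_SENSITIVITY = {"low": "high", "moderate": "moderate", "high": "low"}

def needs_escalation(sensitivity: str, exposure: str) -> bool:
    """True when an exposure exceeds the resource's risk threshold,
    i.e., falls outside its tolerance range and may need sign-off."""
    threshold = THRESHOLD_BY_SENSITIVITY[sensitivity]
    return LEVELS.index(exposure) > LEVELS.index(threshold)

print(needs_escalation("high", "moderate"))  # True: exceeds the "low" threshold
print(needs_escalation("low", "moderate"))   # False: within negligible-to-high range
```

As noted above, an exposure outside the range can still be accepted; the check only tells you when executive sign-off should be sought.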


URL: https://www.sciencedirect.com/science/article/pii/B9781597496155000049

The Wheel

Rex Hartson, Partha S. Pyla, in The UX Book, 2012

2.3 Choosing a process instance for your project

Increasingly, the need to rush products to market to beat the competition is shortening development schedules and increasing the number of product versions and updates. Web applications must be developed in “Internet time.” Ponderous processes and methods are abandoned in favor of lightweight, agile, and flexible approaches intended to be more responsive to the market-driven need for short product versioning cycles. Abridged methods notwithstanding, however, knowledge of the rigorous UX process is an essential foundation for all UX practitioners and it is important for understanding what is being abridged or made agile in choices for the shorter methods.

The lifecycle process diagram in Figure 2-2 is responsive to the need for many different kinds of UX processes. Because it is a template, you must instantiate it for each project by choosing the parts that best suit your project parameters. To support each of these activities, the team can pick from a variety of sub-activities, methods, techniques, and the level of rigor and completeness with which these activities are carried out. The resulting instantiation can be a heavyweight, rigorous, and complete process or a lightweight, rapid, and “just enough” process.

That choice of process can always be a point of contention—between academics and practitioners, between sponsor/customer and project team, and among team members within a project. Some say “we always do contextual inquiry” (substitute any UX process activity); they value a thorough process, even if it can sometimes be costly and impractical. Others say “we never do contextual inquiry (or whatever process activity); we just do not have the time”; they value doing it all as fast as possible, even if it can sometimes result in a lower quality product, with the idea of improving the quality in later production releases.

Much has been written about powerful and thorough processes and much has been written about their lightweight and agile counterparts. So how do we talk about UX design processes and make any sense?

2.3.1 Project Parameters: Inputs to Process Choices

In reality there are as many variations of processes as there are projects. How do you decide how much process is right for you? How do you decide the kinds of process to choose to match your project conditions? What guidance is there to help you decide? There are no set rules for making these choices. Each factor is an influence and they all come together to contribute to the choice. The lifecycle template in this chapter and the guidelines for its instantiation are a framework within which to choose the process best for you.

Among the many possible factors you could consider in choosing a process to instantiate the lifecycle template are:

risk tolerance

project goals

project resources

type of system being designed

development organizational culture

stage of progress within project

One of the biggest goal-related factors is risk and the level of aversion to risk in a given project. The less tolerance for risks—of things going wrong, of features or requirements being missed, or of not meeting the needs of users—the more need for rigor and completeness in the process.

Budget and schedule are obvious examples of the kinds of resource limitations that could hinder your process choices. Another important kind of resource is person power. How many people do you have, what project team roles can they fill, and what skills do they bring to the project? Are the types of people you have and are their strengths a good match for this type of project?

Practitioners with extensive experience and maturity are likely to need less of some formal aspects of the rigorous process, such as thorough contextual inquiry or detailed UX goals and targets. For these experienced practitioners, following the process in detail does not add much to what they can accomplish using their already internalized knowledge and honed intuition.

For example, an expert chef has much of his process internalized in his head and does not need to follow a recipe (a kind of process). But even an expert chef needs a recipe for an unfamiliar dish. The recipe helps off-load cognitive complexity so that the chef can focus on the cooking task, one step at a time.

Another project parameter has to do with the demands due to the type of system being designed. Clearly you would not use anything like the same lifecycle to design a personal mp3 music player as you would for a new air traffic control system for the FAA.

Sometimes the organization self-selects the kind of processes it will use based on its own tradition and culture, including how they have operated in the past. For example, the organization's market position and the urgency to rush a product to market can dictate the kind of process they must use.

Also, certain kinds of organizations have their culture so deeply built in that it pre-determines the kinds of projects they can take on. For example, if your organization is an innovation consulting firm such as IDEO, your natural process tools will be predisposed toward ideation and sketching. If your organization is a government contractor, such as Northrop Grumman, your natural process tools will lean more toward a rigorous lifecycle.

Somewhat orthogonal to and overlaid upon the other project parameters is the current stage of progress within the project for which you must choose activities, methods, and techniques. All projects will go through different stages over time. Regardless of process choices based on other project parameters, the appropriateness of a level of rigor and various choices of UX methods and techniques for process activities will change as a project evolves through various stages.

For example, early stages might demand a strong focus on contextual inquiry and analysis but very little on evaluation. Later stages will have an emphasis on evaluation for design refinement. As the stage of progress keeps changing over time, it means that the need to choose a level of rigor and the methods and techniques based on the stage of product evolution is ongoing. As an example, to evaluate an early conceptual design you might choose a quick design review using a walkthrough and later you might choose UX inspection of a low-fidelity prototype or lab-based testing to evaluate a high-fidelity prototype.

2.3.2 Process Parameters: Outputs of Process Choices

Process parameters or process choices include a spectrum from fully rigorous UX processes (Chapters 3 through 17) through rapid and so-called discount methods. Choices also can be made from among a large variety of data collection techniques. Finally, an agile UX process is available as an alternative choice for the entire lifecycle process, a process in which you do a little of each activity at a time in a kind of spiral approach.

2.3.3 Mapping Project Parameters to Process Choices

To summarize, in Figure 2-4 we show the mapping from project parameters to process parameter choices. While there are some general guidelines for making these mapping choices, fine-tuning is the job of project teams, especially the project manager. Much of it is intuitive and straightforward.


Figure 2-4. Mapping project parameters to process parameter choices.

In the process chapters of this book, we present a set of rather rigorous process activities, but we want the reader to understand that we know about real-world constraints within tight development schedules. So, everywhere in this book, it should be understood that we encourage you to tailor your own process to each new project, picking and choosing process activities and techniques for doing them, fitting the process to your needs and constraints.

2.3.4 Choose Wisely

A real-world Web-based B2B software product company in San Francisco had a well-established customer base for their large complex suite of tools. At some point they made major revisions to the product design as part of normal growth of functionality and market focus. Operating under at least what they perceived as extreme pressure to get it to the market in “Internet time,” they released the new version too fast.

The concept was sound, but the design was not well thought through and the resulting poor usability led to a very bad user experience. Because their customers had invested heavily in their original product, they had a somewhat captive market. By and large, users were resilient and grumbled but adapted. However, their reputation for user experience with the product was changing for the worse and new customer business was lagging, finally forcing the company to go back and completely change the design for improved user experience. The immediate reaction from established customers and users was one of betrayal. They had invested the time and energy in adapting to the bad design and now the company changed it on them—again.

Although the new design was better, existing users were mostly concerned at this point about having a new learning curve blocking their productivity once again. This was definitely a defining case of taking longer to do it right vs. taking less time to do it wrong and then taking even longer to fix it. By not using an effective UX process, the company had quickly managed to alienate both their existing and future customer bases. The lesson: If you live by Internet time, you can also crash and burn in Internet time!


URL: https://www.sciencedirect.com/science/article/pii/B9780123852410000026