Psychotherapy: Ethical Issues

J. Younggren, in International Encyclopedia of the Social & Behavioral Sciences, 2001

5 Fidelity

The final principle of ethical decision-making is that of fidelity. Fidelity addresses a person's responsibility to be loyal and truthful in relationships with others. It also includes promise keeping, fulfilling commitments, and trustworthiness (Welfel and Kitchener 1992). In healthcare, however, the implementation of this principle extends beyond the ordinary responsibilities of business or contractual fulfillment to the creation of a relationship based upon trust: the trust the patient has that the professional will always fulfill their professional obligations and operate in the patient's best interests. Some authors have taken the position that fidelity, along with nonmaleficence, is the most legally salient of the moral principles (Bersoff and Koeppl 1993).

As applied to psychotherapy, fidelity forces professionals to orient toward the patient's needs and not their own. Thus, a therapist who allows a sexual relationship to develop in treatment would be violating this principle: such a relationship is clearly designed to meet the therapist's needs and is exploitative. This principle also applies to patient abandonment, since such conduct would violate the trust the patient places in the psychotherapist, trust that the therapist will see the patient through the treatment process. Finally, fidelity also applies to circumstances where professionals allow administrative factors, like benefit limitations, fee capitation, and contractual rebates, to affect their treatment decisions.

Fidelity, as applied to healthcare, is a dynamic process: the way in which a therapist maintains fidelity in treatment changes as treatment changes. Fidelity also blends with the other principles of ethical decision-making, requiring the therapist to be open and honest in the treatment relationship and to ensure that the patient's expectations are consistent with what the therapist can provide. Fidelity requires psychotherapists to make the best choices they can for those they treat.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767013607

Ethics and Professionalism

Christopher A. Hertig, in The Professional Protection Officer, 2010

Publisher Summary

This chapter describes ethics and professionalism, provides a guide to ethical decision making, and examines ethical issues in protection and the reasons for unethical behavior. Ethical and professional conduct by protection officers is necessary for all concerned: the officer, the organization they represent, and the public they protect. Officers who wish to have long and fulfilling careers need to develop an ethical approach; they need to be ethical and to perform in a professional manner at all times. Individuals who are unethical and unprofessional do not have rewarding careers, so protection officers must be equipped with the decision-making skills and the professional knowledge to make the right choices. Basic decision making involves problem solving, which consists of a number of steps: identifying the problem, researching the available options, choosing an option, and implementing the decision. A number of ethical issues are pertinent to the protection of assets, such as violations of privacy, abuse of force in protecting celebrities, and the exploitation of security positions. Some of the more common causes of ethical lapses among protection officers, or any other persons in positions of trust, are taking the path of least resistance and conflicts between full-time and part-time employment. Employers and clients benefit from ethical and professional behavior by being able to trust their protectors and to be assured that those who protect them are acting in their best interest.

URL: https://www.sciencedirect.com/science/article/pii/B9781856177467000456

Emotion, Empathy, and Ethical Thinking in Fable III

Karen Schrier, in Emotions, Technology, and Digital Games, 2016

The Relationships Among Ethics, Emotions, and Empathy

A number of researchers have suggested relationships among emotions, empathy, and ethical decision-making. Krishnakumar and Rymph (2012) explain that many models of ethical decision-making focus on the cognitive, rather than affective, aspects. Yet ethical decision-making and emotions are intertwined, and intense negative or positive emotions can function as obstacles or enhancements to decision-making (Krishnakumar & Rymph, 2012). For example, Krishnakumar and Rymph's (2012) study critiques Rest's (1986) model of ethical decision-making by exposing the importance of emotions, particularly negative emotions such as anger, to the process. Moreover, their results suggest that how a person manages emotions throughout the ethical decision-making process affects how those emotions may help or halt it (Krishnakumar & Rymph, 2012). Similarly, emotions and feelings have been suggested to play a role in general decision-making (e.g., Dunbar, 2005; Etzioni, 1988).

Empathy- and care-related concepts, skills, and thought processes have also been suggested to be related to ethical decision-making. Critiquing Kohlberg’s (1969) six stages of moral development, Gilligan (1982) explains that care is a key component of moral thinking, and that people’s relationships and connections to others affect how they think through ethical choices. Similarly, Noddings (1984, p. 244) posits that morality requires a “sentiment of natural caring” and that caring forms the foundation and motivation for ethical relationships. She argues that discourse around ethics and ethical behavior often centers on logic, mathematical calculations, fairness, objectivity, order, and justice—traits traditionally associated with the masculine—rather than the natural, instinctual, empathetic, emotional—traits traditionally associated with the feminine (Noddings, 2003). Noddings argues that it is in fact caring and wanting to care—being in a caring relationship—that motivates us to act morally, and not mathematical calculations or logic (Noddings, 2003). Other theorists, such as Dewey (2008), blend together the caring and emotional, with reasoning and logical components, when defining ethics.

Jolliffe and Farrington (2006) also posit that empathy is integral to moral development and ethical practice, and they created a basic empathy scale that integrates both the affective and cognitive abilities typically associated with empathy. Mencl and May (2009) empirically tested the extent to which cognitive and affective empathy play a role in ethical decision-making in managerial contexts, and they included empathy as a factor that mediates or facilitates a situation's ethical quality by enhancing one's understanding of the implications of the situation and by influencing moral judgment (Mencl & May, 2009). This resulted in the development of a new framework for ethical decision-making that includes empathy in relation to a managerial context (Mencl & May, 2009). In general, however, empirical research on the relationship between ethics and empathy is sparse, especially in nonmanagement contexts.

While there has not been as much research focused on the interplay among emotion- and empathy-related skills, thought processes, and ethical decision-making, one study suggests that the inclusion of empathy and feelings in ethical decision-making and ethical thinking may even lead to more humane outcomes. Lurie (2004) investigated ethical decisions made in a managerial context, and the results suggested that managers who use emotions effectively make more compassionate decisions and strengthen relationships among people. Overall, these studies suggest relationships among the three elements, though the extent to which they affect each other is less clear.

URL: https://www.sciencedirect.com/science/article/pii/B9780128017388000038

Fieldwork: Ethical Aspects

P. Marshall, in International Encyclopedia of the Social & Behavioral Sciences, 2001

5 Recommendations for Resolving Ethical Dilemmas

Investigators who confront moral dilemmas in the course of conducting fieldwork must consider carefully the full range of issues involved in the problem. A systematic approach to ethical decision-making begins with a robust description of the research dilemma. This would include details about the purpose of the study, the research design, the sponsors of the study, and the individual participants and community involved in the research. The researcher then must consider the cultural and social values represented in the ethical problem. The relevant values of the study participants, the study community, the professional community, and the study sponsors should be outlined. The researcher must determine the principal value conflict. At this point in the process of decision-making, the fieldworker should consider whose values are threatened and who are the most vulnerable to potential harms. This process of reflection facilitates the identification of the key issues involved in the ethical dilemma. When the primary ethical issues are determined, the investigator should outline a full range of strategies and consider the potential risks and benefits associated with each solution. The decision regarding a course of action should maximize respect for the individual and group values identified. The vulnerability of research participants and the communities within which they live should be of paramount importance in resolving the ethical dilemma encountered.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767001777

The Logic of Diagnosis*

Kazem Sadegh-Zadeh, in Philosophy of Medicine, 2011

3.3.2 Case-based reasoning

Case-based reasoning, or CBR for short, is a method for solving a current problem by utilizing experience with previous problems. It is increasingly becoming an important subject of research in medical artificial intelligence and clinical decision-making. Although it is often viewed as a recent brainchild of Janet Kolodner [Kolodner, 1993], it does not represent a novel approach. It is rooted in so-called casuistry, case-based moral judgment, which had its origin in Stoicism and in the writings of Cicero (106-43 BC). Contemporary bioethics research is also devoting attention to casuistry as a method of ethical decision-making (cf. [Boyle, 2004; Strong, 1999]). Casuistry flourished during the fifteenth and sixteenth centuries in the Roman Catholic Church [Jonsen and Toulmin, 1989; Keenan and Shannon, 1995], and it was also used in medicine in the eighteenth and nineteenth centuries, giving rise to the well-known case reports, or casuistics. As a moral-philosophical and medical-casuistic approach, however, it lacked a formal methodology. That methodology was provided by and after Janet Kolodner's pioneering work on CBR.

As a revival of casuistry, CBR is an empirical approach in that it exploits previous experiences to find solutions for present cases. Previous experiences with individual cases are stored in the memory of a CBR system, referred to as its case base. Facing a new problem, e.g., a new patient with a particular medical complaint, the system retrieves similar cases from its case base and adapts them to fit the problem at hand and to provide a solution for it (cf. [Jurisica et al., 1998]). CBR thus rests on the basic axiom that similar problems have similar solutions. This philosophy is reminiscent of homeopathy, which relies on Samuel Hahnemann's esoteric law of similars, i.e., “similia similibus curentur” (1796), like cures like. In order for CBR to be distinguishable from such speculative conceptions, therefore, it must rest on a framework with efficient methods of case representation and a clear notion of case similarity.

Usually CBR is contrasted with so-called ‘model-based reasoning.’ The latter term is inappropriate and ought to be avoided: it is a misleading name for a knowledge-based, or knowledge-guided, approach that uses general scientific knowledge in the premises of arguments, e.g., rule-based clinical expert systems such as Mycin. CBR does not do so, because the knowledge contained in its case base is merely the description of some individual cases without any generalization or statistical analysis. The expertise that is used as ‘knowledge’ in a CBR system simply consists of narratives about specific problems, embodied in a library of single cases, for example, about (i) the patient Hilary, who had had symptoms A, B, and C and had received the drug D to the effect E; (ii) the patient Joseph K, who had had symptoms F, G, and H and had received the drug I to the effect J; and so on. A current case is matched against such exemplars in the case base to make a judgment and decision. How is the comparison to be made and the judgment and decision attained? Put otherwise, how is information on previous cases to be used so as to manage a present case? This is the central methodological question with which CBR is concerned. The first answer it provides is that there must be some similarity between the present case and one or more cases in the case base. Such inter-case similarities are utilized in judging and decision-making by CBR. As an example of CBR we will briefly discuss case-based diagnosis.
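The retrieve-and-reuse cycle described above can be sketched in a few lines. This is a toy illustration only: the symptom sets echo the Hilary and Joseph K exemplars in the text, while the Jaccard similarity measure is an assumption chosen for simplicity, not a method prescribed by any particular CBR system.

```python
# Toy case-based reasoning sketch: retrieve the most similar stored case
# and reuse its treatment as a proposed solution for a new patient.
# The similarity measure (Jaccard overlap of symptom sets) is an
# illustrative assumption, not part of any specific CBR system.

def jaccard(a, b):
    """Similarity of two symptom sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Case base: previous patients, their symptoms, and the treatment given,
# mirroring the Hilary and Joseph K exemplars in the text.
case_base = [
    {"patient": "Hilary",   "symptoms": {"A", "B", "C"}, "treatment": "D"},
    {"patient": "Joseph K", "symptoms": {"F", "G", "H"}, "treatment": "I"},
]

def retrieve(new_symptoms):
    """Return the stored case most similar to the new problem."""
    return max(case_base, key=lambda c: jaccard(c["symptoms"], new_symptoms))

# A new patient presenting with symptoms A, B, and F is matched against
# the case base; the closest exemplar's treatment is proposed for reuse.
best = retrieve({"A", "B", "F"})
print(best["patient"], best["treatment"])  # Hilary's case is the closer match
```

In a fuller CBR system the retrieved solution would then be adapted to the new case and, if successful, retained in the case base; the sketch covers only the retrieval step.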

URL: https://www.sciencedirect.com/science/article/pii/B978044451787650012X

Deciding Machines: Moral-Scene Assessment for Intelligent Systems☆

Ariel M. Greenberg, in Human-Machine Shared Contexts, 2020

6.2 Background

In his short story Runaround, Asimov (1942) asserted his three laws of robotics. Though intended for narrative effect, these laws also serve as a practical starting point when considering equipping machines with the ability to morally reason. Decomposing just the first clause of the first law (“A robot may not injure a human being”) suggests sophisticated perceptual and reasoning capabilities.

Compare the cover of an early printing of Asimov's stories with a more recent rendition. The early cover shows a robot sitting on the ground looking over at a young girl standing next to him; both machine and human seem at ease. A later cover, with ominous shades of red in the background, depicts the robot with its arm raised, as if asserting its dominance. Do we want the caring utopian vision of human-robot interaction depicted in the first image or the domineering picture shown in the second?

6.2.1 Spooner's grudge

In the movie I, Robot (Mark, Davis, Dow, Godfrey, & Proyas, 2004), inspired by Asimov's collection of science fiction stories of the same name, Detective Spooner holds a grudge against robots because one saved him from drowning instead of saving his daughter, having calculated that his odds of survival were better. That action denied Spooner his right to sacrifice his life for the less likely chance that his daughter might be saved.

Though the movie is not of Asimov's canon, it does bring forward the following question: should robots be programmed to be ends-based utilitarians or rights-based Kantians? This controversial question leads us to an uncontroversial insight: a good starting point for robots making ethical decisions is unilateral nonmaleficence, an all-things-considered obligation of robots not to cause harm to humans.

6.2.2 Decomposing “three-laws safe”

As mentioned, Asimov crafted his three laws for narrative effect, not as design guidelines for implementation. That said, the laws nonetheless provide a good basis for unpacking the entailed perceptual and reasoning capabilities required to process the notion of “three-laws safe.” Even just the first clause of the first law, “A robot may not injure a human being,” embeds at least these questions: What is a human being? How is a human being injured? What is injurious to a human being?

Murphy and Woods (2009, p. 14) recognized this basis in their paper presenting the alternative laws of responsible robotics: “With few notable exceptions, there has been relatively little discussion of whether robots, now or in the near future, will have sufficient perceptual and reasoning capabilities to actually follow the laws.”

Intelligent systems may currently possess some degree of agency, but they are not presently capable of appreciating the moral impact of their actions. Such moral foresight is required for an intelligent system to develop action plans that are in accordance with the values (e.g., “three-laws safe”) of the accountable designer. Physical competence is outpacing perceptual competence and reasoning capability, and it is incumbent on the responsible roboticist to make them commensurate for this important purpose.

6.2.3 Pathway to moral action

The development of a pathway to moral action for intelligent systems requires the contributions of many disciplines (Fig. 6.3). The ultimate execution of a moral-laden action is within the realm of the field of robotics and is exemplified by tasks such as gently placing a kitten in a kennel or handing over a firearm in an unthreatening manner. The moral deliberation (or phronesis) to arrive at the appropriate action plan is in the realm of the field of moral philosophy. The presiding heuristic indicates utilitarianism for interacting with animals and Kantianism for interacting with humans. This heuristic might be hard coded into the deciding machine, but in general, which school of moral thought to employ is a matter for philosophical debate and will remain controversial.
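The hard-coded heuristic described above (utilitarianism for interacting with animals, Kantianism for interacting with humans) could be expressed as a trivial dispatch. The entity categories and framework labels below are illustrative assumptions, not part of any actual deciding machine:

```python
# Illustrative sketch of the hard-coded presiding heuristic described in
# the text: select a school of moral thought based on the kind of entity
# the machine is interacting with. The category names and return values
# are assumptions made for illustration only.

def moral_framework(entity_kind: str) -> str:
    """Pick the deliberation framework the heuristic prescribes."""
    if entity_kind == "human":
        return "Kantian (rights-based)"
    if entity_kind == "animal":
        return "utilitarian (ends-based)"
    # The text notes that which school to employ is in general a matter
    # for philosophical debate; anything uncovered is left undecided.
    raise ValueError(f"no heuristic for entity kind: {entity_kind}")

print(moral_framework("human"))   # Kantian (rights-based)
print(moral_framework("animal"))  # utilitarian (ends-based)
```

The point of the sketch is the limitation the text flags: a dispatch like this freezes one contested answer into the machine, which is why the choice of framework remains controversial.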

Fig. 6.3. Pathway to moral action. Some questions along the pathway are incontrovertible; others are thorny. All approaches require some basic fundamental perceptual and reasoning capabilities to characterize in moral terms the scene in which action is taken. Several disciplines come together in MSA.

Uncontroversial, however, is that regardless of which school of moral thought guides the action, each without exception requires some basic fundamental perceptual and reasoning capabilities to characterize in moral terms the scene in which action is taken. This characterization process draws from the realm of moral psychology to discover what we find salient when considering taking an action with moral implications in a scene. The emblematic interrogations of the scene include the following: What are the minds in the scene? What is the relationship of objects in the scene to those minds? We dub the set of foundational capabilities that enable satisfaction of this interrogation Moral-Scene Assessment (MSA). Moral-Scene is hyphenated to indicate that it is a moral scene that is being assessed, as opposed to a scene assessment that is being performed morally. The distinction is important because we make no claim here of how to go about moral action. Rather the claim is about how to go about recognizing when a scene contains moral content.

The discipline of cognitive science lends the means to enumerate the mental faculties to accomplish recognition of moral content. The faculties include mind perception, affordance/danger qualification, stance adoption, and reasoning about harms. These rudiments of a moral deliberation engine and the required inputs to feed it are what will be discussed in the sections that follow.
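One hypothetical way to arrange the four faculties into a pipeline is sketched below. The scene representation, field names, and the simple harm rule are assumptions made for illustration; they are not the MSA implementation described in the chapter.

```python
# Hypothetical sketch of a Moral-Scene Assessment (MSA) pass built from
# the four faculties named in the text. All data fields and the final
# rule are illustrative assumptions, not an actual implementation.

def assess_scene(scene):
    """Run the MSA faculties in sequence and flag moral content."""
    # Mind perception: which entities in the scene have minds?
    minds = [e for e in scene["entities"] if e.get("has_mind")]
    # Affordance/danger qualification: which objects afford harm?
    dangers = [o for o in scene["objects"] if o.get("dangerous")]
    # Stance adoption and reasoning about harms (simplified): the scene
    # contains moral content when a harm-affording object stands in
    # relation to a mind that could be its patient.
    return {"minds": minds, "dangers": dangers,
            "moral_content": bool(minds and dangers)}

scene = {
    "entities": [{"name": "child", "has_mind": True}],
    "objects":  [{"name": "firearm", "dangerous": True}],
}
print(assess_scene(scene)["moral_content"])  # True
```

As in the text, the sketch makes no claim about how to act; it only illustrates recognizing that a scene contains moral content, which is the stated scope of MSA.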

URL: https://www.sciencedirect.com/science/article/pii/B9780128205433000067

Civil Liability of Security Personnel

Charles P. Nemeth J.D., Ph.D., LL.M, in Private Security and the Law (Fourth Edition), 2012

The Police Moonlighter: A Merging of Public and Private Functions

Many occupational activities in private security and public law enforcement blur their once-distinct lines. Examples include a private security officer who has been granted a special commission license or privilege by the state to perform clearly delineated activities. Certain jurisdictions designate individuals as “special policemen” or use other terminology to grant private security personnel public arrest privileges and rights.261 This type of state involvement may meet the burden of 42 U.S.C. §1983's color of state law standard. The fundamental premise behind the legislation is that the claimant must amply demonstrate an affirmative link between the private officer's conduct and the state or other governmental authority that involves itself directly or indirectly in the conduct.262

A classic merger of public and private interest occurs when public police officers moonlight within the security industry.263 The Hallcrest Report II sees significant dual occupational roles in the private sector:

These surveys revealed that 81% of the law enforcement administrators indicated that their department's regulations permit officers to moonlight in private security, while 19% prohibited or severely restricted private security moonlighting. Law enforcement administrators estimated that about 20% of their personnel have regular outside security employment to supplement their police salaries. Nationally, the Hallcrest researchers estimated that at least 150,000 local enforcement officers in the U.S. are regularly engaged in off-duty employment in private security. The three most common methods of obtaining off-duty officers for security work, in rank order, are: (1) the officer is hired and paid directly by the business, (2) the department contracts with the business firm, invoices for the officer's off-duty work, and pays the officer, and (3) off-duty security work is coordinated through a police union or association.264

So common is moonlighting that the Fraternal Order of Police now offers liability coverage for public police officers engaged in this dual role. Visit http://www.foplegal.com/files/Moonlighting_Fillable_Application2.pdf.

The confusion of roles and functions often gives rise to ethical conundrums. What was once clear is a bit gray. Evaluate how moonlighting impacts ethical decision-making in the following questions:

1. Who is liable for a tortfeasor's behavior if the individual is off duty from public policing and working in a private security interest? How does the answer gel with a jurisdiction that requires police to be on call 24 hours per day?

2. What influence does moonlighting have on the efficacy and productivity of police officers?

3. What potential conflict of interest exists?

4. Should an arrest, search, or seizure by a private security officer, working part-time while maintaining full-time public police employment, adhere to the rigorous standards of the Fourth, Fifth, and Fourteenth amendments of the U.S. Constitution?

5. Which standard of constitutional protection should be accorded an appellant in a criminal case who has been victimized by a law enforcement person with both private and public connections?

6. How many hours per week should a publicly employed law enforcement officer be permitted to work in the private security industry?

7. Should a publicly employed police officer be permitted to operate as a private investigator, unrestrained by traditional constitutional protections granted in the public sector?

Others have argued that moonlighting suffers from inherent conflicts and is saddled with legal liability problems.265

Another factor courts weigh is the extent of the economic relationship. Is there a contract for private services? Does the proprietor want public officers to act privately or publicly? In Otani v. City and County of Hawaii,266 the federal district court evaluated the question this way:

Plaintiff is correct in his assertion that “[a] private party may be liable under §1983 if he was a willful participant in joint action with state agents.”267 However, “[a] claim of conspiracy or action in concert requires the allegation of ‘facts showing particularly what a defendant or defendants did to carry the conspiracy into effect, whether such acts fit within the framework of the conspiracy alleged, and whether such acts, in the ordinary course of events, would proximately cause injury to the plaintiff.’”268

As the court explained, “it is possible that [the officer's] actions could have caused Plaintiff to be subjected to a deprivation of her civil rights while Safeway's actions did not; the Court merely holds that, whatever Safeway did, it did under color of state law.”269 To hold Safeway liable for the officer's actions, the plaintiff had to produce some evidence that Safeway “caused her to be subjected to a deprivation of her constitutional rights through its hiring and training policies, or the lack thereof.”270 A court is rightfully satisfied when the contract calls for the hiring of a public police officer to direct traffic at a construction site as a sufficient economic relationship.271

The inherent complexities of moonlighting, from both an economic and a legal point of view, make hard-and-fast rules concerning entanglement difficult to come by. Some cases are easier than others. However, suspects of criminal behavior may be offered a menu of potential causes of action against an officer who is both publicly and privately employed. In Faust v. Mendoza,272 a police officer was caught in an ethical dilemma while representing two employers. The facts were as follows:

At 10 PM on February 9, 1975 during Mardi Gras celebration in the French Quarter of New Orleans, Louisiana, a couple who had been enjoying the festivities and drinking all day stopped at the ice cream parlor in the Royal Sonesta Hotel. Apparently the man, John Faust, rested his head on the parlor's counter and ignored requests that he move. At this point, Officer John Mendoza entered to wait the 45 minutes until 11 PM when he was to begin work as a security guard for the parlor. He was to work until 3 AM in his police uniform at the parlor after completing 11 AM to 11 PM shift on police assignment controlling crowds around the Mardi Gras parades. After Mendoza approached Faust, testimony on what followed conflicts greatly. Although particular details are unclear, it appears that Mendoza struck both Faust and his female companion … Ingrid Pillar, with a billyclub, smashed the ice cream parlor window (either accidentally or by throwing him against it) and arrested Faust and Pillar for assault upon a police officer.273

The court held the police officer accountable. When these dual roles coalesce, some courts suspect a public law enforcement officer's intentional bypass of the more demanding public standards. In Bauman v. State of Indiana,274 the court grappled with a suspect's right to Miranda warnings before a security officer could custodially interrogate. That security guard also happened to be an off-duty police officer. In affirming the convictions, the court did not accept the argument that Miranda rights were necessary because of the guard's public police officer status. The court was perfectly satisfied with the differentiation of occupational roles, holding that the security guard “was not acting in his capacity as a police officer at the time, but rather in his capacity as a private citizen security officer.”275 In Leach v. Penn-Mar Merchants Assoc.,276 a county police officer, simultaneously employed as a security guard, made an arrest at a traffic accident while on security duty. The court construed his traffic altercation to be a public police function distinguishable from his security work. Other cases dealing with the differentiation of authority and the public/private status of law enforcement include City of Grand Rapids v. Frederick Impens277 and Cinestate Inc. v. Robert T. Farrell, Administrator.278

URL: https://www.sciencedirect.com/science/article/pii/B9780123869227000058

Software Piracy

A. Graham Peace, in Encyclopedia of Information Systems, 2003

VIII. Reasons for Piracy

In recent years, there has been a concerted effort by academic researchers to determine the cause of piracy. What makes an individual decide to illegally copy a software package when he would never consider stealing an automobile or committing other forms of crime? It is most likely a combination of factors, including the ease with which software can be pirated, the cost of software, the victimless appearance of the crime, and the unlikelihood of detection. Very few other crimes meet these specifications, although speeding in an automobile is similar, and anyone who has driven on a major freeway knows that a majority of drivers exceed the speed limit. A comprehensive model has yet to be developed, but strides are being made. Most models focus on three different theories: the theory of reasoned action, the theory of planned behavior, and economic utility theory.

VIII.A. Theory of Reasoned Action

The theory of reasoned action (TRA) developed from a stream of research in social psychology that suggests that a person's behavioral intention toward a specific behavior is the major factor in whether or not the individual will carry out the behavior. Behavioral intention is, in turn, predicted by the individual's attitude toward the behavior and subjective norms. The individual's attitude is her perception of the consequences and outcomes of the behavior, on a continuum from positive to negative. If the individual believes that an action will lead to positive results, she will have a positive attitude toward the behavior. This will positively affect intention, which will lead to the committing of the actual behavior.

Subjective norms are the pressures that the individual feels from friends, peers, authority figures, etc., to perform or not perform the behavior in question. This is the individual's perception of the pressures from the social environment and is often referred to as peer norms. Much support has been found for the predictive ability of this theory, although it has also been found lacking in the explanation of ethical decision making in situations involving computer issues.

VIII.B. Theory of Planned Behavior

The theory of planned behavior (TPB) is an extension of the TRA framework. TPB posits that behavior is determined by the intention to perform the behavior, which is predicted by three factors: attitude toward the behavior, subjective norms, and perceived behavioral control (PBC). PBC refers to the perception of the subject as to his ability and opportunity to commit the behavior. A person with a high level of PBC would have confidence in his ability to successfully carry out the action in question. This is an important element, because the Internet provides an easy-to-use platform for obtaining pirated software. It may be that, in the past, potential pirates were dissuaded from illegally copying software by their perceived inability to carry out the act. However, the Internet allows almost any user to find and download pirated software with just the click of a mouse.

Much research has been done to validate TPB empirically, and it has been found to be a good predictor of an individual's behavior, including software piracy.
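TPB's structure, in which intention is predicted by attitude, subjective norms, and PBC, is commonly operationalized as a weighted linear model. The sketch below uses made-up weights and scores purely for illustration; in an actual study the weights would be estimated from survey data via regression.

```python
# Illustrative theory-of-planned-behavior (TPB) sketch: behavioral
# intention as a weighted sum of attitude, subjective norms, and
# perceived behavioral control (PBC). The weights are invented for
# illustration, not estimates from any empirical study.

def intention(attitude, norms, pbc, weights=(0.4, 0.3, 0.3)):
    """Predict intention on a 0..1 scale from the three TPB factors."""
    w_a, w_n, w_p = weights
    return w_a * attitude + w_n * norms + w_p * pbc

# A user with a favorable attitude toward piracy (0.8), permissive peer
# norms (0.7), and high confidence in obtaining software online (0.9)
# yields a high predicted intention.
high = intention(0.8, 0.7, 0.9)

# The same attitude and norms with low perceived ability (0.1, as for a
# pre-Internet user) lower the prediction, matching the dissuasion
# effect described in the text.
low = intention(0.8, 0.7, 0.1)
print(round(high, 2), round(low, 2))  # 0.8 0.56
```

The comparison illustrates the chapter's point about the Internet: raising PBC alone, with attitude and norms held fixed, raises predicted intention.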

VIII.C. Expected Utility Theory

Economic issues, such as costs and benefits, are also commonly claimed to be factors in a person's decision-making process. For example, lack of financial resources has been cited as a reason for software piracy behavior. Expected utility theory (EUT) posits that, when faced with an array of risky decisions, an individual will choose the course of action that maximizes the utility (benefits minus costs) to that individual. Different variants of this model exist, but each supposes that a rational individual will analyze the benefits and costs involved. Where probabilities exist, the individual will factor the probability of each outcome into the decision-making process.

In most cases, computer-using professionals faced with a situation in which software can be used have three possible courses of action: purchase the software, do without the software, or pirate the software. It is possible to describe these choices in terms of EUT. To do so, it is necessary to determine the costs and benefits involved. Some benefit will most likely be gained through the use of the software; however, a purchase cost is involved if the software is legally obtained. The expected utility of purchasing the software is the expected benefit gained from its use, less the expected cost of the software.

In the case of software piracy, costs result not from purchasing the software but from the punishment level and the probability that the punishment will be incurred. The expected utility of pirating is the expected benefit gained from pirating less the expected cost (calculated using the punishment probability and punishment level). The individual will pirate the software when the expected utility of pirating is greater than the expected utility of not pirating. If the individual perceives that the chances of a significant punishment are low, then it is more likely that the individual will pirate the software. This may be a partial explanation of why piracy rates are so much higher in areas of the world such as Asia and Eastern Europe, where the chances of punishment are remote, than in the United States, where punishment is both more likely and more severe.
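The EUT decision rule described above can be made concrete with a small numerical sketch. All monetary values and probabilities here are hypothetical; EUT itself specifies only the comparison of expected utilities, not these particular numbers.

```python
# Illustrative sketch of the EUT comparison described above.
# All values are hypothetical assumptions chosen for exposition.

def expected_utility_purchase(benefit, price):
    """Utility of buying: benefit from use minus the purchase cost."""
    return benefit - price

def expected_utility_pirate(benefit, punishment_prob, punishment_level):
    """Utility of pirating: benefit minus the expected punishment cost."""
    return benefit - punishment_prob * punishment_level

benefit = 100.0   # value gained from using the software
price = 80.0      # legal purchase price

low_risk  = expected_utility_pirate(benefit, 0.01, 500.0)  # remote chance of punishment
high_risk = expected_utility_pirate(benefit, 0.30, 500.0)  # likely punishment

# The model predicts piracy only when it beats the best legal option.
print(low_risk  > expected_utility_purchase(benefit, price))  # True: 95 > 20
print(high_risk > expected_utility_purchase(benefit, price))  # False: -50 < 20
```

Under these assumed values, raising the perceived probability of punishment from 1% to 30% is enough to flip the predicted decision, which mirrors the regional argument made above.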

A key aspect of economic utility is the price paid for the software. This is commonly noted as a factor in the decision to pirate. Ram Gopal and Lawrence Sanders have developed an interesting stream of research into software piracy issues, yielding several useful results regarding pricing issues. Among their findings, Gopal and Sanders discovered that piracy rates are related to per capita GNP. The lower a country's per capita GNP, the higher the level of piracy. This lends credence to the argument that price is a factor, because a lower per capita GNP would translate to relatively higher software costs, if prices are held constant throughout the world. As stated in Section X, this has led to suggestions that software companies utilize price discrimination as a tool for reducing piracy. This would involve basing software prices partially on the ability of the potential purchaser to afford the cost.
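The relative-cost argument behind Gopal and Sanders's GNP finding can be illustrated with a one-line calculation. The figures below are hypothetical, not data from their studies; the point is only that a world-uniform price consumes a larger share of income where per capita GNP is lower.

```python
# Hypothetical illustration of the relative-cost argument: a price held
# constant worldwide weighs more heavily where per capita GNP is lower.

def relative_cost(price, per_capita_gnp):
    """Software price as a fraction of annual per capita income."""
    return price / per_capita_gnp

# Same nominal price, very different burden (figures are illustrative):
print(relative_cost(300, 60000))  # ~0.005 of income in a high-GNP country
print(relative_cost(300, 3000))   # ~0.1 of income in a low-GNP country
```

This is the intuition behind the price-discrimination proposal mentioned above: letting the price vary with local purchasing power would narrow the gap in relative cost.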

VIII.D. Comprehensive Theory of Piracy Behavior

Each of the above theories is useful in predicting the piracy behavior of an individual. Recent research has focused on combining these and other factors into a comprehensive model of behavior. Evidence that all of the factors discussed above influence the decision whether or not to pirate has been found, although different models provide different levels of importance for each factor. Punishment and the probability of punishment are consistently found to be deterrents to undesirable behavior. The relatively high level of piracy in the world may be due to the fact that people perceive the chance of prosecution and punishment as minimal. This would fit with the EUT model. Similarly, as posited by TRA and TPB, peer norms and attitude toward piracy have been demonstrated to be significant influences on a person's decision to pirate. If an individual works in an organization where piracy is accepted, she is more likely to commit the crime. The attitudes toward the behavior may be related to the ethical outlook of the individual. If the person perceives software piracy to be unethical, she will have a more negative attitude toward piracy and will be less likely to carry out an act of piracy.

Much research remains to be done in the development of a useful predictive model of piracy behavior. However, the work to date does give some indication as to what can be done to deter individuals from illegally copying software. Detection and punishment are useful deterrents, as is education about the immoral and damaging nature of the act. Influencing an individual's peer group and changing the culture of an organization to be more antipiracy in nature are also important factors in controlling the problem, and discriminatory pricing practices may be a potential antipiracy tool. Each of these is discussed further in Section X.


URL: https://www.sciencedirect.com/science/article/pii/B0122272404001623

Review of Anderson and Anderson's Machine Ethics

Cory Siler, in Artificial Intelligence, 2015

The Machine Ethics anthology, edited by Michael Anderson and Susan Leigh Anderson, is a collection of essays dedicated to the challenge of instilling ethical principles into intelligent machines; in contrast with the more traditional area of computer ethics, it focuses not on programmer or user decisions but on the ethical decision-making capacity of the machines themselves. As an overview of a field in its infancy, Machine Ethics is not meant to be an in-depth reference book; it largely takes Gips’s approach [8], “to raise questions rather than answer them.” Nor is it tailored to be the primary text for any well-established course, although I’d love to see an innovative professor spin a special-topics course around it, and I can imagine individual essays being used as readings for cognitive science, philosophy, and “soft” engineering courses. In any case, the volume is a thought-provoking introduction to the field of machine ethics, and I recommend it to students and researchers outside of the field who are looking to broaden their interests.


URL: https://www.sciencedirect.com/science/article/pii/S0004370215001368

A systematic literature review on online assessment security: Current challenges and integrity strategies

Manika Garg, Anita Goel, in Computers & Security, 2022

5 Concluding remarks

This study presents a systematic literature review of the security and integrity of online assessments. We examined 56 studies published from 2016 to 2021 to identify reasons for student engagement in dishonest behaviors, types of dishonesty, integrity strategies, and the role of machine learning within those strategies. We found that multiple online environmental factors influence students' ethical decision-making behavior, and that while traditional non-technological methods are still in use, students regularly devise new technology-enabled cheating methods to compromise online academic integrity. We identified integrity strategies that serve two purposes: dishonesty prevention and dishonesty detection. We proposed an Academic Dishonesty Mitigation Plan (ADMP) that provides a comprehensive approach for effective mitigation of dishonesty under all scenarios. The ADMP encompasses multiple strategies based on varying factors and necessitates the involvement of the major stakeholders (platform owners, institutions, teachers, and students) to establish a secure online assessment system. The study also examines the role of machine learning in the proposed mitigation approaches, which can serve as a guide for machine learning researchers seeking automated solutions for establishing integrity in online learning environments. Overall, this study helps the academic community gain a holistic understanding of online assessment security. Minimizing academic dishonesty will enhance the credibility of online education, thereby gaining the faith of employers in online graduates.



URL: https://www.sciencedirect.com/science/article/pii/S0167404821003680

What is integrity in decision making?

Ethical principles relate to the way decision makers conduct themselves. This may include:

• Acting in the public interest;
• Impartiality, honesty and fairness;
• Diligence, consistency and timeliness; and
• Respect for the interests, rights and safety of others.

Which of the following are factors in the ethical decision making model?

Ethical decision-making is based on core character values like trustworthiness, respect, responsibility, fairness, caring, and good citizenship. Ethical decisions generate ethical behaviors and provide a foundation for good business practices. See a model for making ethical decisions.

Why is integrity important in decision making?

Integrity in decision making is crucial to good governance and sustaining public trust. The community needs to have confidence in the decisions made by public officers. The integrity in decision making framework outlined below is designed to assist public officers when making decisions.

Which one of the following is an organizational factor that can affect ethical decision making by an individual?

Ethical climate is one of the most important organizational factors, which appears to have a significant impact on the ethical decisions of employees at their place of work.