Decision Making and Reasoning: Evaluate the effect of bias in decision making in any applied area of psychological practice, experience or performance

Forensic psychologists are responsible for performing impartial mental health evaluations and providing expert witness testimony, producing an objective report of their findings. The decisions they make and the evidence they provide can be instrumental to a case, so it is vital that the information they provide is accurate.

Although psychological experts are aware of possible cognitive biases which may affect their work, many forensic psychologists believe that they are able to mitigate such biases and practise objectively (Neal & Brodsky, 2014). Despite this belief, they are not immune from such cognitive fallibility; many thought processes occur without proper awareness of them. Daniel Kahneman (2011) refers to thought in terms of two systems: the conscious and the automatic. Implicit bias (explicit bias may also be an issue within forensic psychology, but it is of less cognitive interest and so will not be discussed further) concerns the flaws of the automatic system, which generalises and may neglect potentially important information. This essay will critically evaluate how such thinking can affect supposedly impartial decision making, focusing closely on how mental shortcomings might lead to cognitive biases which can have a negative impact on objectivity.

Humans have limited processing abilities; the brain is equipped with a number of cognitive shortcuts to help navigate the vast amount of information available to us.


The automatic system of the mind might categorise situations based on previous interactions and memories in order to make sense of new information. Many of these "rules of thumb", known as heuristics (Tversky & Kahneman, 1974), are useful tools and can be considered adequate in many situations. However, it is important to note that they are only estimations of a situation and, if taken as fact, can lead to vital information being overlooked. Although it may be tempting to assume that forensic evaluators do not encounter the same failures in processing as members of the general public, experts face many of the same biases; often it is only their judgement and approach to risk that differ (Slovic, Finucane, Peters, & MacGregor, 2007).

Decision making is based on a number of unconscious assumptions (Bargh & Morsella, 2008), and it is relevant to discuss how these heuristics might affect the outcome of a forensic evaluation in which an evaluator places more confidence in their reasoning than is appropriate due to biased cognition.

Tversky and Kahneman (1974) described the illusion of validity: individuals select the outcome most representative of the input without proper regard for the factors which may limit predictive accuracy. Under this representativeness heuristic, the confidence in an individual's prediction or decision often depends primarily on the degree of representativeness associated with a particular case. This can be observed clearly in base rate neglect (Koehler, 1996): the tendency to judge the probability of a scenario from a specific case without taking into consideration its prevalence in society.

For example, an individual is likely to categorise another based on personality traits alone, even when the base rate probability of the chosen category is negligible. Though this seems like a problem that statistics can solve, and one which should not trouble trained professionals, it has been observed in real cases. In the John Hinckley trial, the defence expert witness testified that because Mr Hinckley had a brain anomaly that affected 1 in 3 schizophrenics and fewer than 1 in 50 of the "normal" population, he was likely to have schizophrenia. To his mind, the fact that the brain anomaly was so much more prevalent amongst the schizophrenic population was powerful evidence that Mr Hinckley had the condition. In fact, given that the rate of schizophrenia in the general population is approximately 0.5%, his conclusion is not correct. In a sample of 10,000 people (of whom approximately 50 would be schizophrenic), using the probabilities above, you would expect to find 216 people with this particular brain anomaly, only 17 of whom would have schizophrenia and 199 of whom would not (Neal & Grisso, 2014). Thus, drawing a conclusion by neglecting base rate probabilities in favour of representativeness can lead to serious error.
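To make the arithmetic explicit, the short Python sketch below reproduces the calculation. It is illustrative only: the 0.5% base rate and the 1/3 and 1/50 anomaly rates are simply the figures quoted above.

```python
# Base rate calculation for the Hinckley example (figures as quoted above).
population = 10_000
base_rate = 0.005            # prevalence of schizophrenia in the general population

schizophrenic = population * base_rate          # 50 people
non_schizophrenic = population - schizophrenic  # 9,950 people

anomaly_given_schiz = 1 / 3    # anomaly rate among schizophrenics
anomaly_given_normal = 1 / 50  # anomaly rate among the "normal" population

true_positives = schizophrenic * anomaly_given_schiz        # ~17 people
false_positives = non_schizophrenic * anomaly_given_normal  # 199 people
total_with_anomaly = true_positives + false_positives       # ~216 people

# Probability of schizophrenia GIVEN the anomaly: roughly 7.7%, far from "likely".
posterior = true_positives / total_with_anomaly
print(f"P(schizophrenia | anomaly) = {posterior:.3f}")
```

In other words, even a feature 16 times more common among schizophrenics than in the general population leaves the diagnosis improbable, because the non-schizophrenic group is so much larger.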

It can be challenging to rely simply on statistics when real-life cases are presented and feel significant, but in making decisions of such magnitude it is imperative to look at the statistical facts rather than basing a conclusion on whether the individual fits the description. There is a salient connection between the subject and the diagnosis: they fit qualitatively, and because humans find it difficult to judge probability on a large scale, the simplicity of this connection is easier to process. Another way in which a conclusion may be reached is by assessing the availability of associated situations and outcomes (Tversky & Kahneman, 1973).

The availability heuristic rests on the assumption that the more likely an event, the more easily instances of it come to mind; ease of recall is therefore used as a proxy for probability. In practice, that ease depends on the salience of associated situations and the availability of additional information, which are drawn on to inform an opinion on a current case. This process is flawed, though, as more salient events are not necessarily more probable.

Salience can arise from the recency of the associated event, how often it is presented to us (in the news, for example), and how much of an impact it has had. Reading about the probability of a car accident, for example, does not hold as much significance as witnessing one first hand; although witnessing an accident does not objectively increase the likelihood of one occurring, the event is more readily available and so appears more likely to the witness (Tversky & Kahneman, 1974). In the case of the forensic evaluator, previous assessments may affect present decisions. For instance, if a previous diagnosis has resulted in an outcome which has negatively affected the evaluator, a similar case may elicit negative emotions and, although they will try to guard against it, may cause the assessor to act in a subjective manner. Availability may also play a role in future risk assessment (Neal & Grisso, 2014). When it is the evaluator's responsibility to predict the likelihood of a particular individual reoffending, incorrectly clearing an individual who goes on to reoffend is a far more salient mistake than predicting reoffending that never occurs. Even if the base rate is low, evaluators may neglect to apply the relevant probabilities and be reluctant to class a defendant as low risk, given the personal consequences should their prediction turn out to be incorrect.

When approaching a case, it is also important to be aware that first impressions count: the anchoring effect inflates the significance of initial information, to the point that subsequent information may be largely discounted (Kahneman, 2011).

Once an individual has begun to form a hypothesis, they tend to find it difficult to revise their opinion adequately when opposing information comes to light (Mannes & Moore, 2013). It is therefore clear that in decision making, the order in which information is presented carries considerable weight. An evaluator might hear a compelling account of events from the first witness and later hear a different, equally convincing story, but because a hypothesis has already been formed, the new information is not given the same significance. Evaluators must be careful not to come to a decision prematurely and risk fitting subsequent information around their narrative. Confirmation bias may occur when completing an assessment if the evaluator brings preconceptions to the case and a hypothesis has been established prematurely.

In this case an evaluator might selectively gather information which supports the hypothesis while discarding any evidence which might discredit it (Neal & Grisso, 2014). It could be argued that this suggests an explicit bias, in which the examiner is motivated to actively verify their hypothesis, but Evans (1989) posited that individuals confirm not out of choice, but because they lack an understanding of how to falsify. Wason's (1968) selection task demonstrated this with a card task in which participants were asked to select which cards to turn over to test the validity of a rule. Participants consistently failed to check the card that could prove the rule false, failing to see that the true test of a rule is that it cannot be disproved. For instance, an evaluator may attribute a symptom to an incorrect diagnosis and therefore stop searching for conflicting information, instead building a portfolio of information to support the solution they think they have found. They may not be able to see what alternative possibilities are available, and each additional piece of supporting evidence will appear to strengthen their case, when in fact the true test of strength is whether or not the hypothesis can be disproved. Lack of time may contribute to this, as a conclusion will eventually need to be reached and not every possibility can be investigated. Another possibility is that the perceived accuracy of a diagnosis increases with the number of people who support it, so assessors may look to each other for validation rather than performing complete evaluations themselves.
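The logic of the selection task can be made concrete. The sketch below uses the classic letter/number version of the task (an illustrative assumption; Wason's original materials varied): the rule is "if a card shows a vowel, its other side shows an even number", and the only cards worth turning are those whose hidden side could falsify the rule.

```python
# Wason selection task (illustrative): each card has a letter on one side and
# a number on the other. Rule under test: "if a card shows a vowel, then the
# other side shows an even number."
VOWELS = set("AEIOU")

def can_falsify(visible_face: str) -> bool:
    """A card is worth turning only if its hidden side could break the rule."""
    if visible_face.isalpha():
        # A visible vowel could hide an odd number -> must check.
        # A visible consonant can never violate the rule -> skip.
        return visible_face.upper() in VOWELS
    # A visible odd number could hide a vowel -> must check.
    # A visible even number can never violate the rule -> skip.
    return int(visible_face) % 2 == 1

cards = ["E", "K", "4", "7"]
print([card for card in cards if can_falsify(card)])
# -> ['E', '7']; most participants instead choose 'E' and '4', seeking
#    confirmation rather than falsification.
```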

This misplaced confidence can prove problematic and could have a major impact on the outcome of a case (Richards, Geiger, & Tussey, 2015). Overconfidence is the most pervasive of the issues facing decision making, as misplaced certainty in a diagnosis can lead to negligence and a lack of proper safeguards. Oskamp (1965) showed that an increase in confidence does not signify accuracy and does not provide evidence of valid judgement; in fact, in many cases, individuals with higher levels of expertise exhibit less confidence in their conclusions (Tversky & Kahneman, 1974). Overconfidence can lead to diagnostic errors, primarily because of an inability to identify alternative hypotheses (Richards et al., 2015). It is therefore essential to consider all other possibilities and maintain constant awareness of the limitations of one's own experience and knowledge when coming to a conclusion regarding an assessment.

Whilst there are a number of other heuristics and biases which could affect the overall accuracy and objectivity of an evaluation, the few outlined here clearly show the detrimental effect such biases can have. As I have demonstrated, the inability to process vast amounts of data may lead to shortcuts being taken and important information being overlooked. If an evaluator allows emotional responses to an event, or initial impressions of a crime, to hold significance, their decision-making processes on the case can be seriously affected (Neal & Brodsky, 2016). It is vital that emotion stay detached from evaluations; this may prove particularly challenging if the event has significance for the evaluator's life, beliefs, and morals.

Personality and opinion should not play a part in these evaluations, but it has been shown that substantial variance in forensic evaluations stems from characteristic differences between evaluators (e.g. Miller et al., 2011). When information is limited, decisions have to be made under relative uncertainty, which leaves an evaluator open to the risk of bias. With ample time, this may be mitigated by in-depth scrutiny and by ensuring that opposing hypotheses are kept in mind throughout the case, to prevent confirmation bias and a theory being formed before the relevant information has been obtained. As human beings, forensic psychologists' minds will be fallible, and whilst many of the biases discussed here can be minimised through careful reflection on the facts pertaining to a case, constant scrutiny of one's own work, identifying and using relevant base rates, and minimising the role of memory as much as possible, it cannot be guaranteed that bias will not affect the decision-making process.

References

Bargh, J. and Morsella, E. (2008). The unconscious mind. Perspectives on Psychological Science, 3(1), 73-79.

Evans, J. (1989). Bias in human reasoning: Causes and consequences. Hove: Lawrence Erlbaum Associates.

Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.

Mannes, A. and Moore, D. (2013). A behavioral demonstration of overconfidence in judgment. Psychological Science, 24(7), 1190-1197.

Miller, A. K., Rufino, K. A., Boccaccini, M. T., Jackson, R. L., and Murrie, D. C. (2011). On individual differences in person perception: Raters' personality traits relate to their Psychopathy Checklist-Revised scoring tendencies. Assessment, 18, 253-260.

Neal, T. and Brodsky, S. (2016). Forensic psychologists' perceptions of bias and potential correction strategies in forensic mental health evaluations. Psychology, Public Policy, and Law, 22(1), 58-76.

Neal, T. and Grisso, T. (2014). The cognitive underpinnings of bias in forensic mental health evaluations. Psychology, Public Policy, and Law, 20(2), 200-211.

Oskamp, S. (1965). Overconfidence in case-study judgments. Journal of Consulting Psychology, 29(3), 261-265.

Richards, P., Geiger, J. and Tussey, C. (2015). The Dirty Dozen: 12 sources of bias in forensic neuropsychology with ways to mitigate. Psychological Injury and Law, 8(4), 265-280.

Slovic, P., Finucane, M., Peters, E. and MacGregor, D. (2007). The affect heuristic. European Journal of Operational Research, 177(3), 1333-1352.

Tversky, A. and Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207-232.

Tversky, A. and Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.

Wason, P. C. (1968). Reasoning about a rule. Quarterly Journal of Experimental Psychology, 20, 273-281.
