Determining Cognitive Functioning of Individual

04 Apr 2018


Serial assessment in neuropsychology is necessary to make inferences regarding an individual’s level of functioning, i.e. to determine whether there has been ‘real’ improvement or decline, outside of measurement error, normal variation and clinically insignificant change [1]. A number of psychometric methods have been developed in order to interpret changes in test scores over repeated occasions of assessment. The associated problems and processes that are involved in delineating observed scores into their subcomponents of measurement error and true scores are complex and problematic [1].

An understanding of issues pertaining to measurement error, such as the standard error of measurement (SEM), is crucial to the accurate interpretation of neuropsychological test results and change scores. The SEM refers to the total error variance of a set of obtained scores, where the obtained scores are an unbiased estimate of an individual's true score [2]. It is the standard deviation (SD) that an individual's test scores would show had the specified test been undertaken multiple times, and is calculated by multiplying the baseline SD of a measure by the square root of one minus the measure's reliability coefficient [3]. The SEM is inversely related to a test's reliability: larger SEMs reflect less reliable tests, and therefore denote diminished accuracy in the measure taken and the scores obtained [1]. This leads to greater variability within a test battery, and any interpretation of results in such a case should therefore be undertaken with considerable caution [4].
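The SEM calculation described above can be sketched in Python; the SD and reliability figures below are illustrative values, not drawn from any particular test:

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """SEM = SD * sqrt(1 - r_xx): larger for less reliable tests."""
    return sd * math.sqrt(1.0 - reliability)

# Illustrative values: an IQ-style metric (SD = 15) with reliability 0.91
sem = standard_error_of_measurement(15.0, 0.91)
print(round(sem, 2))  # 4.5
```

Note how the inverse relationship with reliability falls out of the formula: lowering the reliability coefficient raises the SEM.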

SEMs are useful in preventing the unwarranted attachment of significant meaning to between-score differences. That is, SEMs and their corresponding confidence intervals may overlap, indicating that some of the observed score difference may actually be attributable to measurement error [1]. However, whilst the SEM is useful for estimating the degree of measurement error, it is not a suitable predictive measure, as it is based on a distribution that presumes knowledge of the true score, which remains unknown because no test has perfect reliability. As such, the standard error of estimate (SEE) may be the more appropriate method for such purposes [2]. The SEE is a regression-based statistic that measures the dispersion of predicted scores [5]. It reflects the SD of true scores when the observed score is held constant, and is the statistic from which confidence intervals should be constructed [2].
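The estimated-true-score logic behind the SEE can be sketched as follows. The formulas are standard classical test theory; the mean, SD, reliability and observed-score values are illustrative assumptions:

```python
import math

def estimated_true_score(observed: float, mean: float, reliability: float) -> float:
    """Regress the observed score toward the group mean: T' = M + r_xx * (X - M)."""
    return mean + reliability * (observed - mean)

def standard_error_of_estimate(sd: float, reliability: float) -> float:
    """SEE = SD * sqrt(r_xx * (1 - r_xx)): the SD of true scores
    for a fixed observed score."""
    return sd * math.sqrt(reliability * (1.0 - reliability))

# Illustrative: observed score 120 on a scale with M = 100, SD = 15, r_xx = 0.91
t_hat = estimated_true_score(120, 100, 0.91)   # 118.2
see = standard_error_of_estimate(15, 0.91)
ci_95 = (t_hat - 1.96 * see, t_hat + 1.96 * see)
print(round(ci_95[0], 1), round(ci_95[1], 1))
```

The interval is centred on the estimated true score rather than the observed score, which is what distinguishes SEE-based intervals from naive SEM-based ones.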

The construction of confidence intervals is closely related to a test’s reliability. More reliable tests, in terms of internal consistency, represent homogeneity within the test itself. Thus, the associated confidence intervals will encompass a more narrow range of scores, with the resulting estimate being more precise [2]. It is therefore necessary to consider a test’s reliability coefficient, as below a certain point, the utility of a test is compromised [2]. Furthermore, as the reliability of a test is the single largest factor in determining the degree of change needed to occur over time from which the observed difference can be deemed to reflect actual change, using tests with high reliability coefficients is of paramount importance [6].
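The narrowing of confidence intervals with increasing reliability can be shown numerically; the SD and the reliability coefficients below are assumed for illustration:

```python
import math

def ci_width_95(sd: float, reliability: float) -> float:
    """Width of a 95% confidence interval built from the SEE,
    where SEE = SD * sqrt(r_xx * (1 - r_xx))."""
    see = sd * math.sqrt(reliability * (1.0 - reliability))
    return 2 * 1.96 * see

for r in (0.70, 0.80, 0.90):
    print(f"r_xx = {r:.2f}: 95% CI width = {ci_width_95(15, r):.1f} points")
```

Across the range of reliabilities seen in practice, each step up in reliability shrinks the interval and so sharpens the estimate.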

The consideration of measurement error in neuropsychological test results may also incorporate the assessment of observed score differences in terms of clinical significance. Clinically significant change can be interpreted on the basis of whether an individual’s change in test performance over two occasions reflects sufficient improvement, so that the individual has shifted classification categories, for example from ‘impaired’ to ‘normal’ [6]. Therefore, if a change is to be considered clinically significant, the tests being used to assess observed score differences need to be reliable.

However, interpreting clinically significant change may also be problematic. Whilst there may be a considerable observed change in test scores from one measurement occasion to the next, if the starting point is at the extreme low end of a category, and the end point is at the extreme high end of a category, then an individual’s classification will not change and clinically significant improvement will not be deemed to have occurred [6]. This is a problematic interpretation as these changes may well have had important functional consequences for the individual that underwent assessment, and thus it is important to employ sensible clinical judgement [6].
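The category-shift criterion, and the edge case just described, can be sketched with an illustrative cutoff; the boundary value of 85 is an assumption, not taken from any normed test:

```python
CUTOFF = 85.0  # illustrative 'impaired' / 'normal' boundary

def category(score: float) -> str:
    return "impaired" if score < CUTOFF else "normal"

def category_shift(before: float, after: float) -> bool:
    """Clinically significant under the category-shift criterion:
    change counts only if the classification label changes."""
    return category(before) != category(after)

print(category_shift(70, 84))  # 14-point gain within 'impaired': False
print(category_shift(84, 86))  # 2-point gain crossing the cutoff: True
```

The second case registers as clinically significant while the much larger first gain does not, which is exactly the interpretive problem the criterion raises.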

Caution also needs to be applied to the interpretation of statistically reliable change, to avoid the implication that it represents real change. In reality, the observed change may instead reflect measurement error [6]. Statistically meaningful differences may also be a common occurrence within a particular population [7], but these are not necessarily clinically significant differences. Whilst neuropsychological test interpretation must consider, amongst other things, base rates of expected differences and abnormalities, the number of measures in a battery must also be taken into account, as abnormal performance on a proportion of subtests within a battery should be regarded as psychometrically normal [4].

A number of methods for calculation of reliable change have been proposed, adopted and further modified. These methods are usually given the designation of Reliable Change Index (RCI), and are used to estimate the effect of error variance on test score accuracy [6]. The value of the RCI is used to indicate the probability of the difference between two observed scores being the result of measurement error, and thus if the resulting probability is low, the difference is likely due to factors external to the test itself [1].

The notion of reliable change originated in classical test theory, with the standard error of the difference used as the criterion for determining whether an observed difference is credible under the null hypothesis of no real change [8]. However, the original, unmodified classical approach assumes that there are no practice effects. Certain subsequent variations have aimed to account for practice effects in one of two ways: either by a simple adaptation of the Jacobson and Truax approach (a widely used, simplified version of the classical approach, known as the JT index), or by estimating true change using a regression equation, with the latter being the favoured alternative in this context [8]. The regression-based approach does not require the test scores at each time point to have equal variance, and thus accommodates practice effects [6].
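The classical (Jacobson and Truax) index can be sketched as follows. It assumes no practice effects, and the baseline score, retest score, SD and reliability are illustrative values:

```python
import math

def jt_rci(score_1: float, score_2: float, sd: float, reliability: float) -> float:
    """JT index: observed difference divided by the standard error of the
    difference, where S_diff = sqrt(2) * SEM and SEM = SD * sqrt(1 - r_xx)."""
    sem = sd * math.sqrt(1.0 - reliability)
    s_diff = math.sqrt(2.0) * sem
    return (score_2 - score_1) / s_diff

# Illustrative: baseline 90, retest 105, SD = 15, r_xx = 0.91
rci = jt_rci(90, 105, 15, 0.91)
print(abs(rci) > 1.96)  # exceeds the 95% criterion -> change deemed reliable
```

An index beyond ±1.96 means a difference this large would be improbable if measurement error alone were at work, which is the probability statement described above.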

There are many further approaches to the calculation of RCIs, with no real consensus about which method is superior and should represent the 'gold standard' approach [8]. Furthermore, whilst RCI methods do have a number of advantageous features, there are still inherent limitations, such as real change remaining undetected if it falls below the RCI threshold [6]. Additionally, whilst reliable change methodology adjusted for practice effects has the potential to reduce measurement error and improve clinical judgement, it utilises a constant value (the group mean) and so does not take into account the full range of possible practice effects, nor does it traditionally account for regression to the mean, so that error estimates are not proportional to the extremity of observed changes [1]. However, this methodology does at least provide a systematic and potentially empirically valid approach to the assessment of real change [6]. In contrast, whilst regression methods also have their own inherent limitations, such as greater utility in larger sample sizes, these are considered less extensive than those of RCI methodology [1].
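The constant-value adjustment described above can be sketched as a small extension of the classical index; all figures, including the 5-point group-mean practice effect, are assumed for illustration:

```python
import math

def practice_adjusted_rci(score_1: float, score_2: float, sd: float,
                          reliability: float, mean_practice: float) -> float:
    """Subtract the group-mean practice effect from the observed difference
    before scaling by S_diff = sqrt(2) * SEM."""
    sem = sd * math.sqrt(1.0 - reliability)
    s_diff = math.sqrt(2.0) * sem
    return ((score_2 - score_1) - mean_practice) / s_diff

# Illustrative: baseline 90, retest 105, SD = 15, r_xx = 0.91,
# assumed group-mean practice gain of 5 points
rci_adj = practice_adjusted_rci(90, 105, 15, 0.91, 5.0)
print(abs(rci_adj) > 1.96)  # the adjusted index no longer reaches the criterion
```

Because the same constant is subtracted for everyone, the adjustment cannot reflect individual variation in practice effects, which is precisely the limitation noted above.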

The methods discussed thus far are primarily distribution-based approaches, meaning that they express observed change in a standardised format. A primary disadvantage of this type of approach is that it yields purely statistical measurements which do not reveal the clinical significance of any observed change [9]. Alternative approaches include the use of reference states to estimate the minimal important difference or change, which refers to the smallest change in health status that the patient is able to perceive and that is considered clinically relevant [3]. However, these approaches have their own inherent limitations, with direct and subjective patient involvement in the change assessment process increasing the complexity of the measurement [3].

As the determination of an individual’s current cognitive functioning, as well as whether this functioning has improved or declined since prior assessment, is fundamental to the efficacy of clinical neuropsychology, the ability to reliably determine change via comparison of test scores is crucial [6]. However, as has been outlined above, the approaches involved in this determination are varied in their efficacy, and come with inherent limitations. As such, when considering the clinical significance of test results, a patient’s performance needs to be interpreted contextually, taking into account relevant behavioural, medical and historical information, as psychometric variability alone is not sufficient [4]. Furthermore, examination of the functional outcomes of any measured change is crucial, as this is of at least equivalent importance in determining whether improvement or decline has taken place [6].


1. Brooks, B.L., et al., Developments in neuropsychological assessment: Refining psychometric and clinical interpretive methods. Canadian Psychology/Psychologie canadienne, 2009. 50(3): p. 196.

2. Charter, R.A., Revisiting the standard errors of measurement, estimate, and prediction and their application to test scores. Perceptual and Motor Skills, 1996. 82(3c): p. 1139-1144.

3. Rejas, J., A. Pardo, and M.Á. Ruiz, Standard error of measurement as a valid alternative to minimally important difference for evaluating the magnitude of changes in patient-reported outcomes measures. Journal of clinical epidemiology, 2008. 61(4): p. 350-356.

4. Binder, L.M., G.L. Iverson, and B.L. Brooks, To err is human: “Abnormal” neuropsychological scores and variability are common in healthy adults. Archives of Clinical Neuropsychology, 2009. 24(1): p. 31-46.

5. McHugh, M.L., Standard error: meaning and interpretation. Biochemia Medica, 2008. 18(1): p. 7-13.

6. Perdices, M., How do you know whether your patient is getting better (or worse)? A user's guide. Brain Impairment, 2005. 6(3): p. 219-226.

7. Crawford, J.R., P.H. Garthwaite, and C.B. Gault, Estimating the percentage of the population with abnormally low scores (or abnormally large score differences) on standardized neuropsychological test batteries: a generic method with applications. Neuropsychology, 2007. 21(4): p. 419.

8. Maassen, G.H., E. Bossema, and N. Brand, Reliable change and practice effects: Outcomes of various indices compared. Journal of clinical and experimental neuropsychology, 2009. 31(3): p. 339-352.

9. Ostelo, R.W., et al., Interpreting change scores for pain and functional status in low back pain: towards international consensus regarding minimal important change. Spine, 2008. 33(1): p. 90-94.
