In this video, we're going to be talking about reliability and validity and how these concepts apply to our psychometric assessments. So first up, reliability is the degree to which an assessment provides consistent scores across time and place. So, if we have a survey and we give it to participants in January and then again in December, are we going to get consistent scores? If people take our survey in the lab versus at school versus at home, are we going to get consistent scores? That's the idea behind reliability.
Now one important type of reliability that you may need to know is something called test-retest reliability, which, as I just alluded to, is a measure of the consistency of a test over time. For example, if we were to develop a new survey, we may have people come into our lab on a Monday, take the survey, come back into our lab on Friday, and take the survey again. What we are looking for, ideally, is that their scores are the same at both time points. This would be evidence of good test-retest reliability. So that is reliability. Now, validity is essentially the overall appropriateness or accuracy of a conclusion.
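The Monday/Friday example can be sketched numerically. In practice, test-retest reliability is typically quantified as the Pearson correlation between the two administrations. All the scores below are made up for illustration, and the correlation is computed from scratch so nothing beyond plain Python is needed:

```python
# Hypothetical survey scores for five participants who took the same
# survey on Monday and again on Friday (made-up data for illustration).
monday = [12, 18, 25, 30, 22]
friday = [13, 17, 26, 29, 23]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# A coefficient near 1 means participants' scores stayed consistent
# from Monday to Friday, i.e. good test-retest reliability.
r = pearson(monday, friday)
print(f"test-retest reliability: r = {r:.2f}")
```

A commonly cited rule of thumb is that an r of roughly .80 or above indicates acceptable test-retest reliability, though exact cutoffs vary by field and by what the test is used for.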
And I do apologize for how vague that definition is, but there are many different types of validity, so the overarching definition tends to be a bit vague because it has to encapsulate so many different things. You don't have to know all the different types of validity at this level, but one that I do recommend you be aware of is something called construct validity. So, construct validity is the degree to which an assessment accurately measures what it is intending to measure. So for example, if you were trying to create an assessment for depression, you want to make sure that it actually measures depression and not, say, anxiety or psychosis symptoms. Right?
That's very important. So one way that you can assess the construct validity of a measure is by thinking about other variables that should be theoretically related to your construct and checking whether they actually are. So for example, if we were developing a new survey to measure extroversion, how much a person feels energized by being in social situations, theoretically, that should be related to how many hours people spend socializing. So what we would hope to see is that our extroversion survey is positively correlated with how many hours people spend socializing. And this would give us evidence that we have good construct validity.
We do seem to be measuring what we are intending to measure. You can also do the complete opposite: think of a theoretically unrelated construct and check that it does not relate to your measure. So for example, our extroversion survey should not be related to something like reading speed. Right? That has nothing to do with extroversion. So what you would hope is that there is no correlation between your survey and this variable, and that would also give you evidence of good construct validity. So that is just one way to assess the construct validity of a measure. So, broad strokes here: reliability tells you the degree to which an assessment is going to give you consistent scores.
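Both checks just described, correlating with a variable that should be related and with one that should not, can be sketched the same way. (These two kinds of evidence are often called convergent and discriminant validity, respectively.) All the data below is hypothetical, chosen only to illustrate the expected pattern:

```python
# Hypothetical data for eight participants (made up for illustration).
extroversion  = [10, 25, 15, 30, 20, 35, 12, 28]          # survey scores
hours_social  = [2, 6, 3, 7, 5, 8, 2, 6]                  # hours/week socializing
reading_speed = [210, 180, 250, 200, 190, 230, 220, 185]  # words per minute

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Convergent check: a theoretically related variable SHOULD correlate.
convergent = pearson(extroversion, hours_social)
# Discriminant check: an unrelated variable should NOT correlate.
discriminant = pearson(extroversion, reading_speed)

print(f"extroversion vs. hours socializing: r = {convergent:.2f}")
print(f"extroversion vs. reading speed:     r = {discriminant:.2f}")
```

Seeing a clearly positive correlation in the first check and a near-zero one in the second is the pattern that supports good construct validity; the opposite pattern would suggest the survey is measuring something other than extroversion.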
Right? And then validity gets more at whether we are assessing this variable in an accurate way. Can we really trust this measure to be assessing this variable? That is the idea there. Alright. So that is how reliability and validity apply to psychometric assessments, and I will see you guys in the next one.
Bye bye.