How to ensure validity in research

How can validity be improved?

The validity of research findings is influenced by a range of factors, including the choice of sample, researcher bias and the design of the research tools. The table below compares the factors influencing validity in qualitative and quantitative research contexts (Cohen et al., 2011; Winter, 2000):

Qualitative research                     | Quantitative research
Researcher bias / objectivity / honesty  | Appropriate statistical analysis of the data
Design of research tools                 | Design of research tools
Sample selection                         | Sample selection
The use of triangulation                 | Sample size

Validity should be viewed as a continuum: it is possible to improve the validity of the findings within a study, but 100% validity can never be achieved. A wide range of forms of validity have been identified; exploring them in depth is beyond the scope of this guide (see Cohen et al., 2011 for more detail).

The chosen methodology needs to be appropriate for the research questions being investigated, and this in turn shapes your choice of research methods. The design of the instruments used for data collection is critical to achieving a high level of validity. For example, it is important to be aware of the potential for researcher bias to affect the design of the instruments, and to consider how effectively the instruments will collect data that answers the research questions and is representative of the sample.

It is also necessary to consider validity at the stages that follow research design. At the implementation stage, when you begin to carry out the research in practice, consider ways to reduce the impact of the Hawthorne effect (participants changing their behaviour because they know they are being observed). Finally, at the data analysis stage it is important to avoid researcher bias and to be rigorous in analysing the data, either through appropriate statistical approaches for quantitative data or careful coding of qualitative data.
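To make the quantitative side of that concrete, here is a minimal sketch (in Python, assuming SciPy is available) of one such statistical approach, an independent-samples t-test comparing two groups; the scores are invented purely for illustration:

```python
# A minimal sketch of one "appropriate statistical approach": an
# independent-samples t-test on two groups' scores (hypothetical data).
from scipy import stats

group_a = [3.1, 3.8, 2.9, 4.0, 3.5, 3.2]  # e.g. scores under condition A
group_b = [4.2, 4.5, 3.9, 4.8, 4.1, 4.4]  # e.g. scores under condition B

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p suggests a real difference
```

Which test is "appropriate" depends, of course, on your data and design; the point is simply that the analysis should be chosen and applied rigorously rather than ad hoc.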

How can reliability be improved?

In qualitative research, reliability can be evaluated through:

  • respondent validation, which can involve the researcher taking their interpretation of the data back to the individuals involved in the research and asking them to evaluate the extent to which it represents their interpretations and views;

  • exploration of inter-rater reliability by getting different researchers to interpret the same data.

In quantitative research, the level of reliability can be evaluated through:

  • calculation of the level of inter-rater agreement;

  • calculation of internal consistency, for example by including two differently worded questions with the same focus and checking that responses to them agree (both calculations are sketched below).
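Both checks can be computed directly. Below is a minimal, illustrative sketch in plain Python: Cohen's kappa for inter-rater agreement and Cronbach's alpha for internal consistency. The ratings and survey scores are invented for demonstration, and in practice you would normally use an established statistics package:

```python
# Illustrative reliability calculations with made-up data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Inter-rater agreement, corrected for agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

def cronbachs_alpha(items):
    """Internal consistency of a set of survey items.
    `items` is a list of columns, one per question, each a list of scores."""
    k = len(items)
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(row) for row in zip(*items)]  # per-respondent total scores
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Two raters coding the same ten responses (hypothetical data):
print(cohens_kappa(list("AABBCABCAB"), list("AABBCBBCAB")))  # ~0.84
# Three questions with the same focus, five respondents (hypothetical data):
print(cronbachs_alpha([[4, 5, 3, 4, 2], [4, 4, 3, 5, 2], [5, 5, 2, 4, 3]]))  # ~0.89
```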

Validity within quantitative research is a measure of how accurately the study answers the questions and hypotheses it was commissioned to answer. For research to be deemed credible, and to leave no uncertainty about the integrity of the data, it is essential to achieve high validity.

In summary, research isn’t helpful at all when it doesn’t answer the questions you intend it to! In fact, it’s an absolute waste of time and budget if this is the case.

Of course, there are ways to avoid this and ensure your quantitative research gets the thumbs up from both the wider industry you operate in and the stakeholders commissioning or approving the project. Keep scrolling to read our advice.

💡 Research method

This one is fundamental to securing valid results, as it sets the tone for the entire project. The research method you select needs to accurately reflect the type, format and depth of data you need to capture in order to suitably answer your questions.

As an example, if you are running research with participants who are less digitally confident, I would advise against incorporating complex question types, such as large grids, into your survey. Chances are the participant will get to this type of question and:

+ struggle and feel frustrated
+ input dud data just to get it over with
+ skip it entirely

None of these potential outcomes are ideal, and all severely affect the validity of the overall results.

💡 Question content

It sounds obvious, but question type and wording truly steer the validity of quantitative research. Quantitative research is usually unmoderated, so if your questions are ambiguous or do not accurately reflect what you intend to ask, there is no opportunity to provide further explanation or for participants to ask questions.

Questions must be straightforward, free of jargon, and must mean the same thing to everyone who reads them. Getting people who are entirely removed from your research to test the survey is a great safeguard – it also allows you to check that their responses do indeed answer or test the underlying hypothesis.

At PFR, as part of our remote unmoderated task service, we regularly offer our clients the chance to test their surveys or card sorts with a small number of participants before sending them out to a larger group.

💡 Avoiding bias and leading the participants

This is about approaching your quantitative research from an entirely objective and unassuming standpoint – which can be really challenging, since unintentional bias is often a problem in quantitative studies. For example, asking a participant how frequently they bank online assumes that they bank online at all: whilst this is common, they may in fact prefer banking in branch or by telephone.

To avoid guiding participants, you should camouflage the true intent of your questions, particularly when asking about brand loyalty. This can be done by simply asking what experience they have had with multiple brands, or by asking about general purchasing habits. Again, if your questionnaire is designed in a way that encourages participants to respond in a certain manner, your results are more likely to be invalid.

💡 Sample size and type

This focuses on whether the group taking part in your research is representative of your users, and whether you have enough responses to provide sound answers to your questions. Quantitative research is usually done on a large scale for good reason: with too few participants, you run the risk of narrow results that damage the overall validity of your study.
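As a rough illustration of what "enough responses" means, here is the standard sample-size calculation for estimating a proportion. The 95% confidence level (z = 1.96), 5% margin of error and 50% assumed proportion are conventional defaults, not figures from this article:

```python
# A minimal sketch (standard library only) of the usual sample-size
# calculation for a proportion, with an optional finite population correction.
import math

def required_sample_size(margin_of_error=0.05, z=1.96, p=0.5, population=None):
    """Smallest n giving the desired margin of error at the chosen confidence."""
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population is not None:  # finite population correction
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(required_sample_size())                 # 385 respondents
print(required_sample_size(population=2000))  # 323 for a user base of 2,000
```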

When asked about the biggest challenges faced in quantitative research, 37% of UX practitioners interviewed by the Nielsen Norman Group said that recruiting large samples of participants was the most difficult task of all.

At People for Research, we have clients who come to us with varying degrees of experience of quantitative studies, and even the most experienced benefit from our consultancy on securing valid data. We are trained by the Market Research Society in best practice and understand the importance of capturing actionable insights, so our full support is included in the service when you partner with us.

If you would like to find out more about our in-house participant recruitment service for user research or usability testing, get in touch on 0117 921 0008.

At People for Research, we recruit participants for UX and usability testing and market research. We work with award-winning UX agencies across the UK and partner with a number of end clients who are leading the way with in-house user experience and insight.

In quantitative research, reliability refers to the consistency of certain measurements, and validity to whether these measurements “measure what they are supposed to measure”. Things are slightly different, however, in qualitative research.

Reliability in qualitative studies is mostly a matter of “being thorough, careful and honest in carrying out the research” (Robson, 2002: 176). In qualitative interviews, this relates to a number of practical aspects of the interviewing process, including the wording of the interview questions, establishing rapport with the interviewees and considering the ‘power relationship’ between the interviewer and the participant (e.g. Breakwell, 2000; Cohen et al., 2007; Silverman, 1993).

What seems more relevant when discussing qualitative studies is their validity, which is most often addressed with regard to three common threats to validity in qualitative research, namely researcher bias, reactivity and respondent bias (Lincoln and Guba, 1985).

Researcher bias refers to any negative influence of the researcher’s knowledge or assumptions on the study, including their influence on its design, analysis or even sampling strategy. Reactivity, in turn, refers to the possible influence of the researcher himself/herself on the studied situation and people. Respondent bias refers to a situation where respondents do not provide honest responses, for whatever reason: they may perceive a given topic as a threat, or they may wish to ‘please’ the researcher with responses they believe are desirable.

Robson (2002) suggested a number of strategies aimed at addressing these threats to validity, namely prolonged involvement, triangulation, peer debriefing, member checking, negative case analysis and keeping an audit trail.

So, what are these strategies and how can you apply them in your research?

Prolonged involvement refers to the length of time of the researcher’s involvement in the study, including involvement with the environment and the studied participants. It may be granted, for example, by the duration of the study, or by the researcher belonging to the studied community (e.g. a student investigating other students’ experiences). Being a member of this community, or even being a friend to your participants (see my blog post on the ethics of researching friends), can be a great advantage: it increases the level of trust between you, the researcher, and the participants, and it reduces the possible threats of reactivity and respondent bias. It may, however, pose a threat in the form of researcher bias, stemming from your, and the participants’, assumptions of similarity and presuppositions about shared experiences. For example, participants may not say something in an interview because they assume that both of you know it anyway, and this way you may miss some valuable data for your study.

Triangulation may refer to triangulation of data, through using different instruments of data collection; methodological triangulation, through employing a mixed-methods approach; and theory triangulation, through comparing different theories and perspectives with your own developing “theory” or through drawing on a number of different fields of study.

Peer debriefing and support is really an element of your student experience at university throughout the process of the study. Various opportunities to present and discuss your research at its different stages, either at internally organised events at your university (e.g. student presentations, workshops) or at external conferences (which I strongly suggest you start attending), will provide you with valuable feedback, criticism and suggestions for improvement. These events are invaluable in helping you to assess the study from a more objective and critical perspective and to recognise and address its limitations. This input from other people thus helps to reduce the researcher bias.

Member checking, or testing the emerging findings with the research participants in order to increase their validity, may take various forms in your study. It may involve, for example, regular contact with the participants throughout the period of data collection and analysis, verifying certain interpretations and themes resulting from the analysis of the data (Curtin and Fossey, 2007). As a way of controlling the influence of your knowledge and assumptions on the emerging interpretations, if you are not clear about something a participant said or wrote, you may send them a request to verify either what they meant or the interpretation you made based on it. Secondly, it is common to hold a follow-up “validation interview”, which is in itself a tool for validating your findings and verifying whether they apply to individual participants (Buchbinder, 2011), in order to identify outlying, or negative, cases and to re-evaluate your understanding of a given concept (see further below). Finally, member checking, in its most commonly adopted form, may be carried out by sending the interview transcripts to the participants and asking them to read them and provide any necessary comments or corrections (Carlson, 2010).

Negative case analysis is the process of analysing ‘cases’, or sets of data collected from a single participant, that do not match the patterns emerging from the rest of the data. Whenever an emerging explanation of a phenomenon you are investigating does not seem applicable to one, or a small number, of the participants, you should carry out a new line of analysis aimed at understanding the source of this discrepancy. Although you may be tempted to ignore these cases for fear of having to do extra work, it should become your habit to explore them in detail: the strategy of negative case analysis, especially when combined with member checking, is a valuable way of reducing researcher bias.
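If your coding is stored digitally, discrepant cases can even be flagged semi-automatically. The sketch below is purely illustrative (the participants, codes and threshold are invented): it marks participants whose coded themes overlap least with the consensus set as candidates for negative case analysis:

```python
# A hedged sketch of flagging potential negative cases: participants whose
# coded themes overlap least with the consensus codes. Data are invented.
from collections import Counter

coded = {
    "P1": {"autonomy", "trust", "workload"},
    "P2": {"autonomy", "trust"},
    "P3": {"autonomy", "workload", "trust"},
    "P4": {"isolation", "technology"},  # a potential negative case
}

# Consensus = codes applied to at least half of the participants.
counts = Counter(code for codes in coded.values() for code in codes)
consensus = {c for c, n in counts.items() if n >= len(coded) / 2}

for participant, codes in coded.items():
    overlap = len(codes & consensus) / len(codes | consensus)  # Jaccard similarity
    if overlap < 0.3:
        print(f"{participant}: low overlap ({overlap:.2f}) - examine as a negative case")
```

Any case flagged this way still needs the detailed, interpretive follow-up described above; the code only points you at where to look.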

Finally, the notion of keeping an audit trail refers to monitoring and keeping a record of all research-related activities and data, including the raw interview and journal data, the audio recordings, the researcher’s diary (see this post about recommended software for a researcher’s diary) and the coding book.
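For digitally kept records, an audit trail can be as simple as an append-only, timestamped log. The following sketch illustrates one possible format; the file name and fields are assumptions for illustration, not a prescribed standard:

```python
# A minimal sketch of a digital audit trail: append-only, timestamped
# JSON-lines records of research activities (file name and fields assumed).
import json
from datetime import datetime, timezone

def log_activity(activity, detail, path="audit_trail.jsonl"):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "activity": activity,
        "detail": detail,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_activity("interview", "Recorded and transcribed interview with participant P7")
log_activity("coding", "Added code 'peer support' to coding book, v0.4")
```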

If you adopt the above strategies skilfully, you are likely to minimise the threats to the validity of your study.

Don’t forget to look at the resources in the reference list below if you would like to read more on this topic!

References

Breakwell, G. M. (2000). Interviewing. In Breakwell, G.M., Hammond, S. & Fife-Shaw, C. (eds.) Research Methods in Psychology. 2nd Ed. London: Sage.

Buchbinder, E. (2011). Beyond Checking: Experiences of the Validation Interview. Qualitative Social Work, 10 (1), 106-122.

Carlson, J.A. (2010). Avoiding Traps in Member Checking. The Qualitative Report, 15 (5), 1102-1113.

Cohen, L., Manion, L., & Morrison, K. (2007). Research Methods in Education. 6th Ed. London: Routledge.

Curtin, M., & Fossey, E. (2007). Appraising the trustworthiness of qualitative studies: Guidelines for occupational therapists. Australian Occupational Therapy Journal, 54, 88-94.

Lincoln, Y. S. & Guba, E. G. (1985). Naturalistic Inquiry. Newbury Park, CA: SAGE.

Robson, C. (2002). Real world research: a resource for social scientists and practitioner-researchers. Oxford, UK: Blackwell Publishers.

Silverman, D. (1993). Interpreting Qualitative Data. London: Sage.