What is the most used research method in psychology?

A wide range of research methods are used in psychology. These methods vary by the sources from which information is obtained, how that information is sampled, and the types of instruments that are used in data collection. Methods also vary by whether they collect qualitative data, quantitative data or both.

Qualitative psychological research is research whose findings are not arrived at by statistical or other quantitative procedures. Quantitative psychological research is research whose findings result from mathematical modeling and statistical estimation or inference. Since qualitative information can itself be coded and analysed statistically, the distinction relates to method rather than to the topic studied.

There are three main types of psychological research:

  • Correlational research
  • Descriptive research
  • Experimental research

The following are common research designs and data collection methods:

  • Archival research
  • Case study – Although case studies are often included in 'research methods' pages, they are actually not a single research method. Case study methodology involves using a body of different research methods (e.g. interview, observation, self-report questionnaire). Researchers interpret what the data together mean for the area of study. So, case studies are a methodology, not a method.
  • Computer simulation (modeling)
  • Event sampling methodology, also referred to as experience sampling methodology (ESM), diary study, or ecological momentary assessment (EMA)
  • Experiment, often with separate treatment and control groups (see scientific control and design of experiments). See Experimental psychology for many details.
  • Field experiment
  • Interview, can be structured or unstructured.
  • Meta-analysis
  • Neuroimaging and other psychophysiological methods
  • Observational study, can be naturalistic (see natural experiment), participant or controlled.
  • Program evaluation
  • Quasi-experiment
  • Self-report inventory
  • Survey, often with a random sample (see survey sampling)
  • Twin study
  • Ethnography
  • Focus groups

Research designs vary according to the period(s) of time over which data are collected:

  • Retrospective cohort study: Subjects are chosen, then data are collected on their past experiences.
  • Prospective cohort study: Subjects are recruited prior to the proposed independent effects being administered or occurring.
  • Cross-sectional study, in which a population is sampled on all proposed measures at one point in time.
  • Longitudinal study, in which subjects are studied at multiple time points: May address the cohort effect and indicate causal directions of effects.
  • Cross-sequential study, in which groups of different ages are studied at multiple time points; combines cross-sectional and longitudinal designs

Research in psychology has been conducted with both animals and human subjects:

  • Animal study
  • Human subject research


Retrieved from "https://en.wikipedia.org/w/index.php?title=List_of_psychological_research_methods&oldid=1080142250"

By Dr. Saul McLeod, updated 2022

The aim of the study is a statement of what the researcher intends to investigate.

The hypothesis of the study is a precise, testable prediction derived from psychological theory, which can be supported or refuted by some kind of investigation, usually an experiment.

A directional hypothesis indicates a direction in the prediction (one-tailed) e.g. ‘students with pets perform better than students without pets’.

A non-directional hypothesis does not indicate a direction in the prediction (two-tailed) e.g. ‘owning pets will affect students’ exam performances’.

Further Information

A sample is the participants you select from a target population (the group you are interested in) to make generalisations about.

Representative means the extent to which a sample mirrors a researcher's target population and reflects its characteristics.

Generalisability means the extent to which findings can be applied to the larger population from which the sample was drawn.

A volunteer sample is where participants put themselves forward, for example through newspaper adverts, noticeboards or online.

Opportunity sampling, also known as convenience sampling, uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.

Random sampling is when every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.

Systematic sampling is when a system is used to select participants, such as picking every Nth person from a list of all possible participants, where N = the number of people in the research population / the number of people needed for the sample.

Stratified sampling is when you identify the subgroups and select participants in proportion with their occurrences.

Snowball sampling is when researchers find a few participants, and then ask them to find participants themselves and so on.

In quota sampling, researchers will be told to ensure the sample fits with certain quotas, for example they might be told to find 90 participants, with 30 of them being unemployed.
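The proportional arithmetic behind several of these techniques can be sketched in a few lines of Python. This is only an illustration: the population of 100 numbered participants and the 60/40 subgroup split are made up, not taken from any real study.

```python
import random

# Hypothetical target population of 100 numbered participants.
population = [f"P{i}" for i in range(1, 101)]

# Random sampling: every member has an equal chance of selection
# (the "names out of a hat" approach).
random.seed(42)  # fixed seed so the sketch is reproducible
random_sample = random.sample(population, 10)

# Systematic sampling: pick every Nth person, where
# N = population size / required sample size = 100 / 10 = 10.
n = len(population) // 10
systematic_sample = population[::n]  # every 10th person

# Stratified sampling: sample each subgroup in proportion to its
# occurrence in the population (here, an assumed 60/40 split).
stratum_a = population[:60]  # e.g. 60% of the population
stratum_b = population[60:]  # e.g. 40% of the population
stratified_sample = random.sample(stratum_a, 6) + random.sample(stratum_b, 4)

print(len(random_sample), len(systematic_sample), len(stratified_sample))
# each method yields a sample of 10 participants
```

Each method ends up with the same sample size; what differs is how the selection is made, and therefore how representative the sample is likely to be.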

Further Information

Independent variable (IV) – the variable the experimenter manipulates, assumed to have a direct effect on the DV.

Dependent variable (DV) – the variable the experimenter measures after making changes to the IV.

We must use operationalisation to ensure that variables are in a form that can be easily tested e.g. Educational attainment → GCSE grade in maths.

Extraneous variables are all variables other than the independent variable that could affect the results of the experiment. There are two types: situational variables (controlled through standardisation) and participant variables (controlled through randomisation).

Further Information

In an independent measures design (between-groups design), a group of participants is recruited and divided into two. The first group does the experimental task with the IV set for condition 1 and the second group does it with the IV set for condition 2. The DV is measured for each group and the results are compared.

In a repeated measures design (within groups), a group of participants is recruited, and the same group does the experimental task with the IV set for condition 1 and then again for condition 2. The DV is measured for each condition and the results are compared.

In a matched pairs design, a group of participants are recruited. We find out what sorts of people we have in the group and recruit another group that matches them one for one. The experiment is then treated like an independent measures design and the results are compared.

Further Information

This type of experiment is conducted in a well-controlled environment – not necessarily a laboratory – and therefore accurate and objective measurements are possible.

The researcher decides where the experiment will take place, at what time, with which participants, in what circumstances and using a standardized procedure.

Further Information

These are conducted in the everyday (i.e. natural) environment of the participants but the situations are still artificially set up.

The experimenter still manipulates the IV, but in a real-life setting (so cannot really control extraneous variables).

Further Information

Natural experiments investigate a naturally occurring IV that isn't deliberately manipulated by the researcher; it exists anyway.

Participants are not randomly allocated and the natural event may only occur rarely.

Further Information

Case studies are in-depth investigations of a single person, group, event or community.

Case studies are widely used in psychology and amongst the best-known ones carried out were by Sigmund Freud. He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity.

Further Information

Correlation means association - more precisely it is a measure of the extent to which two variables are related.

If an increase in one variable tends to be associated with an increase in the other then this is known as a positive correlation.

If an increase in one variable tends to be associated with a decrease in the other then this is known as a negative correlation.

A zero correlation occurs when there is no relationship between variables.
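The strength and direction of a correlation are usually quantified with Pearson's correlation coefficient, which ranges from -1 (perfect negative) through 0 (no relationship) to +1 (perfect positive). A minimal sketch, using made-up data (the variable names and values are hypothetical):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient: +1 perfect positive,
    -1 perfect negative, 0 no linear relationship."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

hours_revised = [1, 2, 3, 4, 5]       # hypothetical data
exam_score    = [40, 50, 60, 70, 80]  # rises with revision time
anxiety       = [90, 75, 60, 45, 30]  # falls as revision rises

print(pearson_r(hours_revised, exam_score))  # 1.0 (perfect positive)
print(pearson_r(hours_revised, anxiety))     # -1.0 (perfect negative)
```

Real psychological data rarely give coefficients of exactly ±1; values nearer zero indicate a weaker association.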

Further Information

Unstructured (informal) interviews are like a casual conversation. There are no set questions and the participant is given the opportunity to raise whatever topics he/she feels are relevant and ask them in their own way. In this kind of interview much qualitative data is likely to be collected.

Structured (formal) interviews are like a job interview. There is a fixed, predetermined set of questions that are put to every participant in the same order and in the same way. The interviewer stays within their role and maintains social distance from the interviewee.

Further Information

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone or post.

The questions asked can be open ended, allowing flexibility in the respondent's answers, or they can be more tightly structured requiring short answers or a choice of answers from given alternatives.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent, or causing offence.

Further Information

Covert observation is where the researcher doesn’t tell the participants that they are being observed until after the study is complete. There could be ethical problems of deception and consent with this particular method of observation.

Overt observation is where a researcher tells the participants that they are being observed and what they are being observed for.

Controlled: behavior is observed under controlled laboratory conditions (e.g. Bandura's Bobo doll study).

Natural: Here spontaneous behavior is recorded in a natural setting.

Participant: Here the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.

Non-participant (aka "fly on the wall"): The researcher does not have direct contact with the people being observed.

Further Information

A pilot study is a small-scale preliminary study conducted in order to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect, because none of the participants can score at all or can complete the task – all performances are low. The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

In cross-sectional research, a researcher compares multiple segments of the population at the same point in time.

Sometimes we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

In cohort studies, the participants must share a common factor or characteristic such as age, demographic, or occupation. A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period of time.

Triangulation means using more than one research method to improve the validity of the study.

Reliability is a measure of consistency; if a particular measurement is repeated and the same result is obtained, then it is described as being reliable.

Test-retest reliability – Assessing the same person on two different occasions which shows the extent to which the test produces the same answers.

Inter-observer reliability – the extent to which there is agreement between two or more observers.
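Inter-observer reliability is often quantified as the proportion of observations on which two observers agree. A minimal sketch with hypothetical codings (the observers, categories and data below are invented for illustration):

```python
# Two hypothetical observers code the same 10 behaviour samples
# into categories ("A" = aggressive play, "N" = neutral play).
observer_1 = ["A", "A", "N", "A", "N", "N", "A", "A", "N", "A"]
observer_2 = ["A", "A", "N", "A", "A", "N", "A", "N", "N", "A"]

# Simple percent agreement: the proportion of samples that both
# observers coded identically.
agreements = sum(a == b for a, b in zip(observer_1, observer_2))
agreement_rate = agreements / len(observer_1)
print(agreement_rate)  # 0.8, i.e. 80% agreement between observers
```

Percent agreement is the simplest index; more sophisticated statistics (such as Cohen's kappa) additionally correct for the agreement expected by chance.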

Further Information

A meta-analysis is a systematic review that involves identifying an aim and then searching for research studies that have addressed similar aims/hypotheses.

This is done by looking through various databases and then decisions are made about what studies are to be included/excluded.

Strengths: Increases the validity of the conclusions drawn, as they’re based on a wider range of studies.

Weaknesses: Research designs in studies can vary so they are not truly comparable.

A researcher submits an article to a journal. The choice of journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess: the methods and designs used, originality of the findings, the validity of the original research findings and its content, structure and language.

Feedback from the reviewers determines whether the article is accepted. The article may be: accepted as it is, accepted with revisions, sent back to the author to revise and re-submit, or rejected without the possibility of re-submission.

The editor makes the final decision whether to accept or reject the research report based on the reviewers’ comments and recommendations.

Peer review is important because it prevents faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer reviews may be an ideal, whereas in practice there are lots of problems. For example, it slows publication down and may prevent unusual, new work being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that more research and academic comment is being published without official peer review than before, though systems are evolving online where everyone has a chance to offer their opinions and police the quality of research.

Quantitative data is numerical data, e.g. reaction time or number of mistakes. It represents how much, how long, or how many of something there are. A tally of behavioral categories and closed questions in a questionnaire collect quantitative data.

Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature, and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.

Primary data is first hand data collected for the purpose of the investigation.

Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Further Information

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

Concurrent validity – the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.

Face validity – whether the test measures what it’s supposed to measure ‘on the face of it’. This is assessed by ‘eyeballing’ the measure or by passing it to an expert to check.

Ecological validity – the extent to which findings from a research study can be generalised to other settings / real life.

Temporal validity – the extent to which findings from a research study can be generalised to other historical times.

Further Information


Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.

Paradigm shift – The result of scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.

Objectivity – When all sources of personal bias are minimised so as not to distort or influence the research process.

Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.

Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.

Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Further Information

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we can accept our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In psychology, we use p < 0.05 (as it strikes a balance between making a Type I and a Type II error), but p < 0.01 is used in research where the consequences of an error could be serious, such as testing a new drug.

A Type I error is when the null hypothesis is rejected when it should have been accepted (this happens when a lenient significance level is used; an error of optimism).

A Type II error is when the null hypothesis is accepted when it should have been rejected (this happens when a stringent significance level is used; an error of pessimism).
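The idea of a significant result (a difference unlikely to have arisen by chance) can be illustrated with a simple permutation test. This is only a sketch: the two groups of exam scores below are invented for illustration, not taken from any real study.

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical exam scores for two groups (e.g. condition 1 vs condition 2).
group_a = [68, 72, 75, 80, 66, 77, 74, 71]
group_b = [60, 63, 70, 58, 65, 62, 67, 61]

observed_diff = mean(group_a) - mean(group_b)

# Under the null hypothesis (no effect), group labels are arbitrary:
# shuffle the pooled scores many times and count how often a difference
# at least as large as the observed one arises purely by chance.
random.seed(0)  # fixed seed so the sketch is reproducible
pooled = group_a + group_b
n_a = len(group_a)
extreme = 0
n_perms = 10_000
for _ in range(n_perms):
    random.shuffle(pooled)
    diff = mean(pooled[:n_a]) - mean(pooled[n_a:])
    if abs(diff) >= abs(observed_diff):
        extreme += 1

p_value = extreme / n_perms  # proportion of chance differences this extreme
print(p_value < 0.05)        # True: significant at the conventional level
```

Here the p-value is the probability of observing a difference this large if only chance were at work; because it falls below 0.05, we would reject the null hypothesis and accept the alternative.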

Further Information

Informed consent is when participants are able to make an informed judgement about whether to take part. However, revealing the study’s purpose may cause them to guess its aims and change their behavior. To deal with this, we can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study and it is not guaranteed that the participants would understand.

Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study, but debriefing can’t turn the clock back.

All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable. Withdrawal can cause bias, as those who stay may be more obedient, and some may not withdraw because they have been given incentives or feel they are spoiling the study. Researchers can also offer the right to withdraw data after participation.

Participants should all have Protection from harm. The researcher should avoid risks greater than experienced in everyday life and they should stop the study if any harm is suspected. However, the harm may not be apparent at the time of the study.

Confidentiality concerns the communication of personal information. Researchers should not record any names but use numbers or false names, though full anonymity may not be possible, as it is sometimes possible to work out who the participants were.

Further Information

How to reference this article:

McLeod, S. A. (2017). Research methods. Simply Psychology. www.simplypsychology.org/research-methods.html

