Which of the following is a way to measure the quality of a research publication?

Citation indexes can provide a range of different metrics at an article and author level, including citation counts (author and article), h-index (author) and output (author).  QUT Library subscribes to:

There are other freely available options such as:

  • Google Scholar
  • Google Scholar Citations

Research evaluation tools analyse the research output of institutions, their departments and researchers. These tools can also assist with benchmarking the research performance of institutions and their researchers. These tools analyse citation data harvested from a citation index. The library subscribes to:

  • SciVal (based on Scopus data)
  • InCites (based on Web of Science data)
  • The Lens (also has a freely available option)

Dimensions is another option with a freely available version.

A number of tools are available for calculating the impact of journals. 

See Finding traditional metrics for more detail on how to use these tools.

Metrics have become a fact of life in many - if not all - fields of research and scholarship. In an age of information abundance (often termed ‘information overload’), shorthand signals for where in the ocean of published literature to focus our limited attention have become increasingly important.

Research metrics are sometimes controversial, especially when in popular usage they become proxies for multidimensional concepts such as research quality or impact. Each metric may offer a different emphasis based on its underlying data source, method of calculation, or context of use. For this reason, Elsevier promotes the responsible use of research metrics encapsulated in two “golden rules”. Those are: always use both qualitative and quantitative input for decisions (i.e. expert opinion alongside metrics), and always use more than one research metric as the quantitative input. This second rule acknowledges that performance cannot be expressed by any single metric, as well as the fact that all metrics have specific strengths and weaknesses. Therefore, using multiple complementary metrics can help to provide a more complete picture and reflect different aspects of research productivity and impact in the final assessment.

On this page we introduce some of the most popular citation-based metrics employed at the journal level. Where available, they are featured in the “Journal Insights” section on Elsevier journal homepages, which links through to an even richer set of indicators on the Journal Insights homepage.

CiteScore metrics

CiteScore metrics are a suite of indicators calculated from data in Scopus, the world’s leading abstract and citation database of peer-reviewed literature.

CiteScore is calculated as the number of citations received over four years by documents (articles, reviews, conference papers, book chapters, and data papers) published in a journal, divided by the number of those same document types indexed in Scopus and published in the same four years. For more details, see this FAQ.
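The calculation described above reduces to a simple ratio. The sketch below uses hypothetical citation and document counts (the function name and figures are illustrative, not taken from Scopus):

```python
def citescore(citations_in_window: int, documents_in_window: int) -> float:
    """CiteScore: citations received in a four-year window by documents
    published in that window, divided by the number of those documents."""
    return citations_in_window / documents_in_window

# Hypothetical journal: 12,000 citations to 3,000 documents over four years
print(round(citescore(12_000, 3_000), 1))  # 4.0
```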

CiteScore is calculated for the current year on a monthly basis until it is fixed as a permanent value in May of the following year, permitting a real-time view of how the metric builds as citations accrue. Once fixed, the other CiteScore metrics are also computed and contextualise this score with rankings and other indicators to allow comparison.


CiteScore metrics are:

  • Current: A monthly CiteScore Tracker keeps you up to date on progression towards the next annual value, making the next CiteScore more predictable.
  • Comprehensive: Based on Scopus, the leading scientific citation database.
  • Clear: Values are transparent and can be reproduced from the individual articles indexed in Scopus.

The scores and underlying data for nearly 26,000 active journals, book series and conference proceedings are freely available at www.scopus.com/sources or via a widget (available on each source page on Scopus.com) or the Scopus API.


SCImago Journal Rank (SJR)

SCImago Journal Rank (SJR) is based on the concept of a transfer of prestige between journals via their citation links. Drawing on a similar approach to the Google PageRank algorithm - which assumes that important websites are linked to from other important websites - SJR weights each incoming citation to a journal by the SJR of the citing journal, with a citation from a high-SJR source counting for more than a citation from a low-SJR source. Like CiteScore, SJR accounts for journal size by averaging across recent publications and is calculated annually. SJR is also powered by Scopus data and is freely available alongside CiteScore at www.scopus.com/sources.
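The prestige-transfer idea behind SJR can be illustrated with a toy iteration. This is only a sketch of the PageRank-style principle, not the actual SJR algorithm (which adds damping, self-citation limits, and size normalisation); the citation matrix is hypothetical:

```python
# cites[i][j] = citations from journal i to journal j (hypothetical data).
cites = [
    [0, 5, 1],
    [2, 0, 4],
    [1, 3, 0],
]
n = len(cites)
prestige = [1 / n] * n  # start with equal prestige

for _ in range(100):
    new = [0.0] * n
    for i in range(n):
        total_out = sum(cites[i])
        for j in range(n):
            # journal i passes prestige to j in proportion to its citations,
            # so a citation from a high-prestige journal counts for more
            new[j] += prestige[i] * cites[i][j] / total_out
    prestige = new

print([round(p, 3) for p in prestige])  # scores sum to 1
```

After enough iterations the scores stabilise: a journal's prestige depends not just on how often it is cited, but on who cites it.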

Source Normalized Impact per Paper (SNIP)

Source Normalized Impact per Paper (SNIP) is a sophisticated metric that intrinsically accounts for field-specific differences in citation practices. It does so by comparing each journal’s citations per publication with the citation potential of its field, defined as the set of publications citing that journal. SNIP therefore measures contextual citation impact and enables direct comparison of journals in different subject fields, since the value of a single citation is greater for journals in fields where citations are less likely, and vice versa. SNIP is calculated annually from Scopus data and is freely available alongside CiteScore and SJR at www.scopus.com/sources.
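The field normalisation described above can be shown with a simplified ratio. In the real SNIP calculation the field is derived from the journal's citing publications; the figures below are hypothetical:

```python
def snip(citations_per_paper: float, field_citation_potential: float) -> float:
    """Simplified SNIP: a journal's raw citations per paper divided by the
    typical citation density of its field."""
    return citations_per_paper / field_citation_potential

# Two hypothetical journals with the same raw citation rate:
print(round(snip(2.0, 0.5), 1))  # low-citation field (e.g. maths) -> 4.0
print(round(snip(2.0, 4.0), 1))  # high-citation field (e.g. biology) -> 0.5
```

The same raw rate yields a higher score in a field where citations are scarce, which is exactly the comparison SNIP is designed to enable.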

Journal Impact Factor (JIF)

Journal Impact Factor (JIF) is calculated by Clarivate Analytics as the number of citations received in a given year to a journal’s publications from the previous two years (linked to the journal, but not necessarily to specific publications), divided by the number of “citable” items published in those two years. Owing to the way in which citations are counted in the numerator and the subjectivity of what constitutes a “citable item” in the denominator, JIF has for many years received sustained criticism for its lack of transparency and reproducibility and its potential for manipulation. Available for only 11,785 journals (Science Citation Index Expanded plus Social Sciences Citation Index, as of December 2019), JIF is based on an extract of Clarivate’s Web of Science database and includes citations that could not be linked to specific articles in the journal, so-called unlinked citations.
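The two-year calculation can be sketched as a ratio; the counts below are hypothetical:

```python
def impact_factor(citations_to_prev_two_years: int, citable_items_prev_two_years: int) -> float:
    """JIF for year Y: citations received in Y to the journal's publications
    from years Y-1 and Y-2, divided by the 'citable' items from those years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical: 500 citations in 2019 to 2017-2018 content, 200 citable items
print(round(impact_factor(500, 200), 2))  # 2.5
```

Note that the numerator can include citations to any content in the journal (editorials, letters), while the denominator counts only "citable" items, one of the asymmetries criticised above.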

Although originally conceived as an author-level metric, the h-index (and some of its numerous variants) has come to be applied to higher-order aggregations of research publications, including journals. A composite of productivity and citation impact, the h-index is defined as the greatest number h of publications that have each received at least h lifetime citations. Bound at the upper limit only by total productivity, the h-index favours older and more productive authors and journals. Because the h-index can only ever rise, it is also insensitive to recent changes in performance. Finally, the difficulty of increasing the h-index does not scale linearly: an author with an h-index of 2 need only publish a third paper and have all three cited at least 3 times to reach an h-index of 3, whereas an author with an h-index of 44 must publish a 45th paper and have it and all the others attain 45 citations each before progressing to an h-index of 45. The h-index is therefore of limited use in distinguishing between authors, since most have single-digit h-indexes.
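The definition above translates directly into a short function. The citation counts in the example are hypothetical:

```python
def h_index(citation_counts):
    """Greatest h such that at least h publications have >= h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations
print(h_index([25, 8, 5, 3, 3]))  # 3: one very highly cited paper doesn't help
```

The second call illustrates the insensitivity noted above: a single paper with 25 citations contributes no more to the h-index than one with 3.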

In the theoretical framework of your thesis, you support the research that you want to perform by means of a literature review. Here, you are looking for earlier research about your subject. These studies are often published in the form of scientific articles in journals (scientific publications).

Why is good quality important?

The better the quality of the articles that you use in the literature review, the stronger your own research will be. When you use articles that are not well respected, you run the risk that the conclusions you draw will be unfounded. Your supervisor will always check the article sources for the conclusions you draw.

We will use an example to explain how you can judge the quality of a scientific article. We will use the following article as our example:

Example article

Perrett, D. I., Burt, D. M., Penton-Voak, I. S., Lee, K. J., Rowland, D. A., & Edwards, R. (1999). Symmetry and Human Facial Attractiveness. Evolution and Human Behavior, 20, 295-307. Retrieved from http://www.grajfoner.com/Clanki/Perrett%201999%20Symetry%20Attractiveness.pdf

This article is about the possible link between facial symmetry and the attractiveness of a human face.

Check the following points

1. Where is the article published?

The journal (academic publication) where the article is published says something about the quality of the article. Journals are ranked in the Journal Quality List (JQL). If the journal you used is ranked at the top of your professional field in the JQL, then you can assume that the quality of the article is high.

Example

The article from the example is published in the journal “Evolution and Human Behavior”. The journal is not on the Journal Quality List, but a Google search turns up multiple sources indicating that it is nevertheless among the top journals in the field of Psychology (see Journal Ranking at http://www.ehbonline.org/). The quality of the source is thus high enough to use it.

So, if a journal is not listed in the Journal Quality List then it is worthwhile to google it. You will then find out more about the quality of the journal.

2. Who is the author?

The next step is to look at who the author of the article is:

  • What do you know about the person who wrote the paper?
  • Has the author done much research in this field?
  • What do others say about the author?
  • What is the author’s background?
  • At which university does the author work? Does this university have a good reputation?
Example

The lead author of the article (Perrett) has already done much work within the research field, including prior studies of predictors of attractiveness. Penton-Voak, one of the other authors, also collaborated on these studies. In 1999, Perrett and Penton-Voak were both professors at the University of St Andrews in the United Kingdom, which is among the top 100 universities in the world. There is less information available about the other authors; they may have been students who assisted the professors.

3. What is the date of publication?

In which year was the article published? The more recent the research, the better. If the research is somewhat older, it’s smart to check whether any follow-up research has taken place. Perhaps the author continued the research and more useful results have been published.

Tip! If you’re searching for an article in Google Scholar, click on ‘Since 2014’ in the left-hand column. If you can’t find anything (more) there, select ‘Since 2013’. Working back through the years in this manner, you will find the most recent studies.

Example

The article from the example was published in 1999. This is not extremely old, but quite a bit of follow-up research has probably been done since. Via Google Scholar, for example, I quickly found a 2013 article that researched the influence of symmetry on facial attractiveness in children. The 1999 example article can serve as a good foundation for reading up on the subject, but it is advisable to find out how research into the influence of symmetry on facial attractiveness has developed since.

4. What do other researchers say about the paper?

Find out who the experts are in this field of research. Do they support the research, or are they critical of it?

Example

By searching in Google Scholar, I see that the article has been cited at least 325 times! This means the article is mentioned in at least 325 other articles. Looking at the authors of those articles, I see that they are experts in the research field, and that they cite the article as support rather than to criticize it.

5. Determine the quality

Now look back: how did the article score on the points mentioned above? Based on that, you can determine quality.

Example

The example article scored ‘reasonable’ to ‘good’ on all points, so we can consider it to be of good quality and therefore useful in, for example, a literature review. Because the article is somewhat dated, however, it is wise to also search for more recent research.
