- Open Access
Z factor: a new index for measuring academic research output
© Zhuo; licensee BioMed Central Ltd. 2008
- Received: 03 November 2008
- Accepted: 09 November 2008
- Published: 09 November 2008
With rapid progress in scientific research and growing competition for funding, it has become critical to effectively evaluate an individual researcher's annual academic performance, or his or her cumulative performance over the last 3–5 years. This is particularly important for young independent investigators, and it is also useful for funding agencies when determining the productivity and quality of grant awardees. As funding becomes increasingly limited, an unbiased method for measuring the recent performance of an individual scientist is clearly needed. Here I propose the Z factor, a new and useful way to measure recent academic performance.
Of late, there have been continuous and intensive debates regarding the role of the impact factor [1–7]. There are many reasons why the misuse of the impact factor, sometimes called the 'impact factor fever', generates concern. First, the impact factor does not provide any evaluation of the quality of the science in a research article. Although most papers published in high impact journals are novel and of the highest quality, there are a few 'bad apples'. Second, the impact factor represents the mean citation rate of papers published in one journal; it does not represent the citation rate of a specific article. A paper published in a low impact factor journal may end up well cited, while conversely, a paper published in a high impact factor journal may garner very few citations. Despite these limitations, it is still generally agreed that it is much harder to publish an article in a journal with an impact factor greater than 10 than in a journal with an impact factor of 1. It is therefore unfair to claim that the impact factor is useless or does only harm to the academic community.
Considering the increase in scientific activity in recent years, it is important to have objective criteria for evaluating academic performance when considering, for example, new faculty recruitment, annual performance and tenure evaluation, and funding decisions. Unlike commercial businesses, the broad spectrum of academic research activities renders effective evaluation by administrators or funding agencies rather difficult, especially for those involved in 'basic' research. In most cases, published papers are the only consideration for evaluation. It is important to have a consistent standard for evaluating these papers, as most will not be translated into commercial activities or be awarded a Nobel Prize for decades to come, if at all. However, it is this 'basic' research that lays the foundation and holds hope for future drug discoveries and treatments of human diseases.
What is the best way to evaluate scientific performance? I still recall one department head claiming that good science in his department was defined by whether he thought the researcher was doing good science, not by where the faculty member had published papers (including in high impact factor journals), nor by how many papers he or she had published. Although there may be some merit in holding to such a high standard, it would require the department chair, university dean or government funding agency to carefully read and evaluate the individual papers. Considering the increasing number of disciplines in scientific research, this has become almost impossible. And what about a panel of experts? Can we always trust the experts' evaluation? Yes and no. First, it depends on who is sitting on the review board. For example, while the NIH study sections have been seen as the best review system for evaluating scientific proposals, they have also been under consistent criticism for many years. Second, it is impractical for each department to invite experts to review its faculty members annually. It must also be pointed out that the use of experts cannot remove the human aspect. What happens if a faculty member is a long-term collaborator or a family friend of an expert reviewer? What happens if a faculty member proposes a model that goes against a reviewer's long-held theory? In reality, our decisions are affected by culture, religion, race, friendships and personal interests. Thus, I believe there is a need for a better measurement of a researcher's performance. Here I propose a new index to evaluate the active performance of an individual scientist, and I will explore how such an index can be used in combination with current impact factors and H factors to attain a better assessment of a researcher's productivity.
Table: list of factors used for the measurement of researchers' performance. (The H factor definition cell is missing from the source and is restored here from Hirsch [8].)

| Index | Definition | Limitations |
| --- | --- | --- |
| Number (N) | The total number of publications | Does not consider the quality of the study; co-authorship |
| Impact factor (I) | The impact factor of the journal where the paper is published | Does not consider the productivity; co-authorship |
| H factor | The number h of papers cited at least h times each [8] | Too early for young investigators |
| Z factor | The new measure integrates both the impact factor of the journal and the productivity | — |
Due to space limitations, journals such as Science and Nature mostly select papers on 'hot' topics of science. Quite often, what constitutes a hot topic changes over time. It can also be influenced by the scientific policy of the current government, as is often seen in Science. Because of this space limitation (the way to achieve such high impact factors, as suggested by some experts) [see ] and the focus on hot topics, scientists commonly complain that some papers are not published in high impact journals simply because their topic is not 'hot' enough. Additionally, many papers in high-profile journals face intense scrutiny, and indeed, a few prove irreproducible or are downright fabricated; papers in low impact journals usually do not receive such careful examination. High impact journals also typically require novelty, and it is difficult to switch gears enough to publish in them, particularly if you have focused on your own area of research for a long period of time. While there is nothing wrong with publishing in lower impact journals, a problem arises when some scientists strive to publish in high impact journals by constantly changing their research topics. Many outstanding scientists, including some who have spent decades becoming established in their field, feel concerned or even depressed because they lack high impact papers in recent years. To help overcome this 'mental' burden, I suggest a new index, called the Z factor, which considers both the number of publications and the impact factors of the journals in which they were published.
Table: performance examples of four professors from Harvard University during the last 12 months (based on data from PubMed and Web of Science). The table's values are not recoverable from the source; only the row labels for Assistant Professor C and Assistant Professor D survive.
In addition to counting the number of publications, the Z factor gives significant credit to papers published in high impact journals. And unlike the H factor, which requires a waiting period of at least 2–3 years after publication to reflect any change, the Z factor can be calculated annually. In my opinion, this method will be very useful in evaluating academic achievement on a yearly basis.
The Z factor can be used in other ways as well. For example, one can use it to evaluate a faculty member's achievement in a specific research field over a period of time (say, 3–5 years). Using two or three keywords that the researcher defines as his or her major focus topics, one can generate a list of published papers; the Z factor based on these papers gives a good measure of recent achievement in that field. This will also help to eliminate possible erroneous accounting of researchers' publications, where their names appear on papers solely for lending unique elements or methodologies to other investigators.
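To make the keyword-restricted calculation concrete, the sketch below filters a researcher's papers by self-chosen keywords and a year window before summing journal impact factors (taking the sum of impact factors as the Z score, consistent with the worked comparison later in the text; the field names and sample records are hypothetical):

```python
def focused_z(papers, keywords, start_year, end_year):
    """Sum the journal impact factors of papers that match any chosen
    keyword and fall within the given year window (e.g. the last 3-5 years)."""
    total = 0.0
    for paper in papers:
        in_window = start_year <= paper["year"] <= end_year
        matches = any(k.lower() in paper["keywords"] for k in keywords)
        if in_window and matches:
            total += paper["impact_factor"]
    return total

# Illustrative records; keywords and impact factors are made up for the sketch.
papers = [
    {"year": 2007, "keywords": {"pain", "cortex"}, "impact_factor": 4.13},
    {"year": 2006, "keywords": {"memory"}, "impact_factor": 14.0},
    {"year": 2003, "keywords": {"pain"}, "impact_factor": 4.13},  # outside window
]
print(focused_z(papers, ["pain"], 2004, 2008))  # 4.13
```

Restricting the paper list this way also implements the safeguard above: papers outside the researcher's declared focus topics simply do not contribute to the focused Z score.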
The Z factor can also aid in evaluating the research quality or originality of faculty members, notably if one faculty member consistently needs many more papers (a larger N) to reach the same Z score as another faculty member. In such cases, additional scientific review of the published papers may be required. More importantly, the Z factor does not bias against individuals who make basic and fundamental contributions to established fields. Three papers published in a journal such as Molecular Pain, with an impact factor of 4.13, accrue a Z score similar to that of one article published in a higher impact journal such as Nature Neuroscience (I = 14). Publishing three papers in Molecular Pain in three years is a reachable goal for many good pain researchers, but publishing one paper in Nature Neuroscience every three years can be an uncertain goal even for top neuroscientists. The Z factor will give individual scientists more freedom to focus on their own research interests, rather than having to change their research topics and chase so-called 'hot topics' in an attempt to publish in high impact journals. Moreover, the use of the Z factor will help to cool the impact factor fever and encourage the publication of novel scientific findings [see ].
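The text does not state a formula explicitly, but the worked comparison above (three papers at I = 4.13 versus one at I = 14) implies that a period's Z score is simply the sum of the journal impact factors of the papers published in that period. A minimal sketch under that assumption (the impact-factor values are illustrative, not authoritative):

```python
def z_factor(impact_factors):
    """Z score for a period: the sum of the journal impact factors
    of the papers published in that period."""
    return sum(impact_factors)

# Three papers in a journal with I = 4.13 (e.g. Molecular Pain) ...
z_three_low = z_factor([4.13, 4.13, 4.13])
print(z_three_low)  # 12.39
# ... accrue a Z score comparable to one paper in a journal with I = 14.
z_one_high = z_factor([14.0])
print(z_one_high)  # 14.0
```

Because the score is a plain sum, a larger N of modest-impact papers can match a single high-impact paper, which is exactly the property the argument above relies on.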
In summary, the Z score may serve better than the current methods of assessing achievement, which simply count the number of published papers or the number published in high impact factor journals, or rely solely on the assessments of review panel members.
I would like to thank Professors Jianguo Gu, Xiao-Jiang Li, Bong-Kiun Kaang and Lin Mei for helpful discussions and comments, and Emily England, Susan Kim and Giannina Descalzi for help with corrections. Professor Min Zhuo is supported by grants from the Canadian Institutes of Health Research (CIHR66975, CIHR84256), the EJLB-CIHR Michael Smith Chair in Neurosciences and Mental Health, and the Canada Research Chair to M.Z.
- Davies J: Journals: impact factors are too highly valued. Nature 2003, 421: 210. doi:10.1038/421210a
- The impact factor game. It is time to find a better way to assess the scientific literature. PLoS Med 2006, 3: e291. doi:10.1371/journal.pmed.0030291
- Notkins AL: Neutralizing the impact factor culture. Science 2008, 322: 191. doi:10.1126/science.322.5899.191a
- Cherubini P: Impact factor fever. Science 2008, 322: 191. doi:10.1126/science.322.5899.191b
- Simons K: The misused impact factor. Science 2008, 322: 165. doi:10.1126/science.1165316
- Rosenbaum JL: High-profile journals not worth the trouble. Science 2008, 321: 1039. doi:10.1126/science.321.5892.1039b
- Raff M, Johnson A, Walter P: Painful publishing. Science 2008, 321: 36. doi:10.1126/science.321.5885.36a
- Hirsch JE: An index to quantify an individual's scientific research output. Proc Natl Acad Sci USA 2005, 102: 16569–16572. doi:10.1073/pnas.0507655102
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.