
Measuring the Academic Impact of Higher Education Institutions and Research Centres


Although research impact metrics can be used to evaluate individual academics, there are other measures that could be used to rank and compare academic institutions. Several international ranking schemes for universities use citations to estimate the institutions’ impact. Nevertheless, there have been ongoing debates about whether bibliometric methods should be used for the ranking of academic institutions.

The most productive universities are increasingly posting links to their papers online. Yet, many commentators argue that hyperlinks could be unreliable indicators of journal impact (Kenekayoro, Buckley & Thelwall, 2014; Vaughan & Hysen, 2002). Notwithstanding, the web helps to promote research funding initiatives and to advertise academic-related jobs. Webometric methods can also monitor the extent of mutual awareness in particular research areas (Thelwall, Klitkou, Verbeek, Stuart & Vincent, 2010).

Moreover, there are other uses of webometric indicators in policy-relevant contexts within the European Union (Thelwall et al., 2010; Hoekman, Frenken & Tijssen, 2010). Webometrics refers to the quantitative analysis of web activity, including profile views and downloads (Davidson, Newton, Ferguson, Daly, Elliott, Homer, Duffield & Jackson, 2014). Webometric ranking therefore involves measuring the volume, visibility and impact of web pages. These metrics emphasise scientific output, including peer-reviewed papers, conference presentations, preprints, monographs, theses and reports. They also analyse other academic material, including courseware, seminar documentation, digital libraries, databases, multimedia, personal pages and blogs, among others (Thelwall, 2009; Kousha & Thelwall, 2015; Mas-Bleda, Thelwall, Kousha & Aguillo, 2014a; Mas-Bleda, Thelwall, Kousha & Aguillo, 2014b; Orduna-Malea & Ontalba-Ruipérez, 2013). Thelwall and Kousha (2013) have identified and explained the methodology of five well-known institutional ranking schemes:

  • “QS World University Rankings aims to rank universities based upon academic reputation (40%, from a global survey), employer reputation (10%, from a global survey), faculty-student ratio (20%), citations per faculty (20%, from Scopus), the proportion of international students (5%), and the proportion of international faculty (5%).
  • The World University Rankings aims to judge world-class universities across all of their core missions – teaching, research, knowledge transfer and international outlook – by using the Web of Science, an international survey of senior academics and self-reported data. The results are based on field-normalised citations for five years of publications (30%), research reputation from a survey (18%), teaching reputation (15%), various indicators of the quality of the learning environment (15%), field-normalised publications per faculty (8%), field-normalised income per faculty (8%), income from industry per faculty (2.5%); and indicators for the proportion of international staff (2.5%), students (2.5%), and internationally co-authored publications (2.5%, field-normalised).
  • The Academic Ranking of World Universities (ARWU) aims to rank the “world top 500 universities” based upon the number of alumni and staff winning Nobel Prizes and Fields Medals, number of highly cited researchers selected by Thomson Scientific, number of articles published in journals of Nature and Science, number of articles indexed in Science Citation Index – Expanded and Social Sciences Citation Index, and per capita performance with respect to the size of an institution.
  • The CWTS Leiden Ranking aims to measure “the scientific performance” of universities using bibliometric indicators based upon Web of Science data through a series of separate size- and field-normalised indicators for different aspects of performance rather than a combined overall ranking. For example, one is “the proportion of the publications of a university that, compared with other publications in the same field and in the same year, belong to the top 10% most frequently cited” and another is “the average number of citations of the publications of a university, normalised for field differences and publication year.”
  • The Webometrics Ranking of World Universities aims to show “the commitment of the institutions to [open access publishing] through carefully selected web indicators”: hyperlinks from the rest of the web (1/2), web site size according to Google (1/6), and the number of files in the website in “rich file formats” according to Google Scholar (1/6), but also the field-normalised number of articles in the most highly cited 10% of Scopus publications (1/6)” (Thelwall & Kousha, 2013).
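The composite schemes listed above all reduce to the same arithmetic: a weighted sum of indicator scores. A minimal sketch of that logic, using the QS weights quoted above and entirely hypothetical indicator scores (real rankings normalise each indicator before weighting):

```python
# Weighted-sum aggregation as used by composite rankings such as QS.
# Indicator scores are assumed to be pre-normalised to a 0-100 scale;
# the weights are the QS percentages quoted above, as fractions.
QS_WEIGHTS = {
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,
    "international_students": 0.05,
    "international_faculty": 0.05,
}

def composite_score(scores: dict, weights: dict) -> float:
    """Weighted sum of normalised (0-100) indicator scores."""
    return sum(weights[k] * scores[k] for k in weights)

# A hypothetical university with these normalised indicator scores:
scores = {
    "academic_reputation": 90.0,
    "employer_reputation": 80.0,
    "faculty_student_ratio": 70.0,
    "citations_per_faculty": 60.0,
    "international_students": 50.0,
    "international_faculty": 40.0,
}
print(composite_score(scores, QS_WEIGHTS))  # 74.5
```

Schemes such as the Leiden Ranking deliberately avoid this final aggregation step, reporting each size- and field-normalised indicator separately instead.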

Evidently, the university ranking systems use a variety of factors in their calculations, including web presence, the number of publications, citations to publications and peer judgements (Thelwall & Kousha, 2013; Aguillo, Bar-Ilan, Levene & Ortega, 2010). These metrics typically reflect a combination of different factors, as shown above. Although they may have different objectives, they tend to produce similar rankings. It may be that universities which produce good research also tend to have an extensive web presence, perform well on teaching-related indicators, and attract many citations (Matson et al., 2003).

On the other hand, webometrics may not necessarily provide robust indicators of knowledge flows or research impact. In contrast to citation analysis, the quality of webometric indicators is not high unless irrelevant content is manually filtered out. Moreover, certain webometric indicators may prove hard to interpret, as they can reflect a range of phenomena from spam to post-publication material. Webometric analyses can support science policy decisions on individual fields. However, for the time being, it is difficult to tackle the issue of web heterogeneity at lower field levels (Thelwall & Harries, 2004; Wilkinson, Harries, Thelwall & Price, 2003). Moreover, Thelwall et al. (2010) held that webometrics would not have the same relevance for every field of study. It is very likely that fast-moving or new research fields cannot be adequately covered by citation-based indicators due to publication time lags; Thelwall et al. (2010) argued that it could take up to two years to start a research project and have it published. This would therefore increase the relative value of webometrics, as research groups can publish general information about their research online well before formal publication.

This is an excerpt from: Camilleri, M.A. (2016) Utilising Content Marketing and Social Networks for Academic Visibility. In Cabrera, M. & Lloret, N. Digital Tools for Academic Branding and Self-Promotion. IGI Global (Forthcoming).




Using Content Marketing Metrics for Academic Impact

Academic contributions start from concepts and ideas. When their content is relevant and of high quality, they can be published in renowned, peer-reviewed journals. Researchers are increasingly using online full-text databases, institutional repositories and open access journals to disseminate their findings. The web has surely helped to foster fruitful collaborative relationships among academics, and the internet has brought increased engagement among peers over email and video. In addition, researchers may share their knowledge with colleagues as they present their papers at seminars and conferences. After publication, their contributions may be cited by other scholars.

The researchers’ visibility does not rely only on the number of publications. Both academic researchers and their institutions are continuously being rated and classified. Their citations may result from highly reputable journals or well-linked homepages providing scientific content. Publications are usually ranked through metrics that assess individual researchers and their organisational performance. Bibliometrics and citations may be considered part of the academic reward system: highly cited authors are usually endorsed by their peers for their significant contribution to knowledge. As a matter of fact, citations are at the core of scientometric methods, as they have been used to measure the visibility and impact of scholarly work (Moed, 2006; Borgman, 2000). This contribution explores extant literature that explains how the visibility of individual researchers’ content may be related to their academic clout. Therefore, it examines the communication structures and processes of scholarly communications (Kousha & Thelwall, 2007; Borgman & Furner, 2002). It presents relevant theoretical underpinnings of bibliometric studies and considers different methods that can analyse the impact of individual researchers or their academic publications (Wilson, 1999; Tague-Sutcliffe, 1992).


Citation Analysis
The symbolic role of citation in representing the content of a document adds an important dimension to information retrieval. Citation analysis expands the scope of information seeking by retrieving publications that have been cited in previous works. This methodology offers enormous possibilities for tracing trends and developments in different research areas. Citation analysis has become the de facto standard in the evaluation of research. In fact, previous publications can be evaluated simply on the number of citations they receive, given the relatively good availability of citation data for such purposes (Knoth and Herrmannova, 2014). However, citations are merely one of the attributes of publications. By themselves, they do not provide adequate and sufficient evidence of impact, quality and research contribution. This may be due to the wide range of characteristics they exhibit, including the semantics of the citation (Knoth and Herrmannova, 2014), the motives for citing (Nicolaisen, 2007), variations in sentiment (Athar, 2014), the context of the citation (He, Pei, Kifer, Mitra and Giles, 2010), the popularity of topics, the size of research communities (Brumback, 2009; Seglen, 1997), the time delay for citations to appear (Priem and Hemminger, 2010), the skewness of their distribution (Seglen, 1992), differences in the types of research papers (Seglen, 1997) and, finally, the ability to game or manipulate citations (Arnold and Fowler, 2011).

Impact Factors (IFs)
A journal’s impact factor is a measure of the frequency with which its “average article” has been cited over a defined time period (Glanzel and Moed, 2002). Journal Citation Reports are published every June by Thomson Reuters’ Institute for Scientific Information (ISI). These reports also feature data for ranking journals by their Immediacy Index, which measures how quickly a journal’s articles are cited after publication (Harter, 1996). Publishers of core scientific journals consider IF indicators in their evaluations of prospective contributions. Despite severe limitations in its methodology, the IF is still the most common instrument for ranking international journals in any given field of study. Yet, impact factors have often been subject to ongoing criticism by researchers for their methodological and procedural imperfections. Commentators often debate how IFs should be used. Whilst a higher impact factor may indicate journals that are considered more prestigious, it does not necessarily reflect the quality or impact of an individual article or researcher. This may be attributable to the large number of journals, the volume of research contributions, the rapidly changing nature of certain research fields and the increasing number of researchers. Hence, other metrics have been developed to provide alternatives to impact factors.
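Concretely, the standard two-year impact factor for a journal in year Y is the number of citations received in Y by items the journal published in the two preceding years, divided by the number of citable items it published in those years. A minimal sketch with hypothetical counts:

```python
def two_year_impact_factor(citations_this_year: int,
                           citable_items_prev_two_years: int) -> float:
    """Standard two-year journal impact factor: citations received this
    year to items from the previous two years, divided by the number of
    citable items published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 150 citations in 2015 to its 2013-2014 items,
# which numbered 100 citable articles and reviews.
print(two_year_impact_factor(150, 100))  # 1.5
```

The division by citable-item count is what makes the IF a per-article average, and is also the source of several of the criticisms noted above, since the citation distribution it averages is highly skewed (Seglen, 1992).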

h-index
The h-index attempts to quantify the citation impact of a researcher’s publications. It measures scholars’ productivity by taking into account their most cited papers and the number of citations those papers have received in other publications. The index can also be applied to measure the impact and productivity of a scholarly journal, as well as of a group of scientists, such as a department, university or country (Jones, Huggett and Kamalski, 2011). The (Hirsch) h-index was originally developed in 2005 to estimate the importance, significance and broad impact of a researcher’s cumulative research contributions. It was designed to overcome the limitations of other measures of researchers’ quality and productivity. It consists of a single number: the number of an author’s publications that each have at least that many citations. For instance, an h-index of 3 indicates that the author has published at least three papers that have each been cited three times or more. Therefore, the most productive researchers may obtain a high h-index, and the best papers in terms of quality tend to be the most cited. Interestingly, this issue is driving more researchers to publish in open access journals.
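The h-index described above is straightforward to compute from a list of per-paper citation counts; a minimal sketch:

```python
def h_index(citation_counts: list) -> int:
    """Largest h such that the author has h papers cited at least h times each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has >= rank citations
        else:
            break
    return h

# Matching the example in the text: at least three papers cited
# three times or more gives an h-index of 3.
print(h_index([25, 8, 5, 3, 3, 2]))  # 3
```

Note that the two papers with 25 and 8 citations contribute nothing beyond their membership in the top three, which illustrates a known property of the h-index: it is insensitive to very highly cited outliers.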


Webometrics
The science of webometrics (also cybermetrics) is still in an experimental phase. Björneborn and Ingwersen (2004) indicated that webometrics involves an assessment of different types of hyperlinks, and argued that relevant links may help to improve the impact of academic publications. Webometrics therefore refers to the quantitative analysis of activity on the world wide web, such as downloads (Davidson et al., 2014). Webometrics recognises that the internet is a repository for a massive number of documents and disseminates knowledge to wide audiences. Webometric ranking involves measuring the volume, visibility and impact of web pages. Webometrics emphasises scientific output, including peer-reviewed papers, conference presentations, preprints, monographs, theses and reports. However, these kinds of electronic metrics also analyse other academic material (including courseware, seminar documentation, digital libraries, databases, multimedia, personal pages and blogs, among others). Moreover, webometrics considers online information on the educational institution, its departments, research groups, supporting services, and the number of students attending courses.

Web 2.0 and Social Media
Internet users are increasingly creating and publishing their own content online. Never before has it been so easy for academics to engage with their peers on both current affairs and scientific findings. The influence of social media has changed the academic publishing landscape. As a matter of fact, there has recently been increased recognition that measures of scholarly impact can be drawn from Web 2.0 data (Priem and Hemminger, 2010).

The web has not only revolutionised how data are gathered, stored and shared, but has also provided a mechanism for measuring access to information. Moreover, academics are using personal websites and blogs to enhance the visibility of their publications. This medium supports their content marketing in addition to traditional bibliometrics. Social media networks provide blogging platforms that allow users to communicate with anyone who has online access. For instance, Twitter is rapidly being adopted for work-related purposes, particularly scholarly communication, as a method of sharing and disseminating information that is central to the work of an academic (Java, Song, Finin and Tseng, 2007). Recently, there has been rapid growth in the uptake of Twitter by academics to network, share ideas and common interests, and promote their scientific findings (Davidson et al., 2014).

Conclusions and Implications

There are various sources of bibliometric data, each with its own strengths and limitations. Evidently, there is no single bibliometric measure that is perfect, and multiple approaches to evaluation are highly recommended. Moreover, bibliometric approaches should not be the only measures upon which academic and scholarly performance is evaluated. Sometimes, it may appear that bibliometrics reduce a publication’s impact to a quantitative, numerical score. Many commentators have argued that, when viewed in isolation, these metrics may not be representative of a researcher’s performance or capacity. In taking this view, one would consider bibliometric measures as only one aspect of performance upon which research can be judged. Nonetheless, this chapter has indicated that bibliometrics still have high utility in academia. It is very likely that metrics will continue to be used because they represent a relatively simple and accurate data source. For the time being, bibliometrics are an essential aspect of measuring academic clout and organisational performance. A number of systematic methods of assessment have been identified in this regard, including citation analysis, impact factors, the h-index and webometrics, among others. Notwithstanding, changes in academic behaviours and academics’ use of content marketing on the internet have challenged traditional metrics. Evidently, the measurement of impact beyond citation metrics is an increasing focus among researchers, with social media networks representing the most contemporary way of establishing performance and impact. In conclusion, this contribution suggests that these bibliometrics, as well as recognition by peers, can help to boost the productivity and research quality of researchers, research groups and universities.

References
Arnold, D. N., & Fowler, K. K. (2011). Nefarious numbers. Notices of the AMS, 58(3), 434-437.

Athar, A. (2014). Sentiment analysis of scientific citations. University of Cambridge, Computer Laboratory, Technical Report, (UCAM-CL-TR-856).

Borgman, C. L. (2000). Digital libraries and the continuum of scholarly communication. Journal of Documentation, 56(4), 412-430.

Borgman, C. L., & Furner, J. (2002). Scholarly communication and bibliometrics. Annual Review of Information Science and Technology, 36.

Bornmann, L., & Daniel, H. D. (2005). Does the h-index for ranking of scientists really work?. Scientometrics, 65(3), 391-392.

Bornmann, L., & Daniel, H. D. (2007). What do we know about the h index?. Journal of the American Society for Information Science and Technology, 58(9), 1381-1385.

Björneborn, L., & Ingwersen, P. (2004). Toward a basic framework for webometrics. Journal of the American Society for Information Science and Technology, 55(14), 1216-1227.

Glänzel, W., & Moed, H. F. (2002). Journal impact measures in bibliometric research. Scientometrics, 53(2), 171-193.

Harter, S. (1996). Historical roots of contemporary issues involving self-concept.

He, Q., Pei, J., Kifer, D., Mitra, P., & Giles, L. (2010, April). Context-aware citation recommendation. In Proceedings of the 19th international conference on World wide web (pp. 421-430). ACM.

Java, A., Song, X., Finin, T., & Tseng, B. (2007, August). Why we twitter: understanding microblogging usage and communities. In Proceedings of the 9th WebKDD and 1st SNA-KDD 2007 workshop on Web mining and social network analysis (pp. 56-65). ACM.

Knoth, P., & Herrmannova, D. (2014). Towards Semantometrics: A New Semantic Similarity Based Measure for Assessing a Research Publication’s Contribution. D-Lib Magazine, 20(11), 8.

Kousha, K., & Thelwall, M. (2007). Google Scholar citations and Google Web/URL citations: A multi‐discipline exploratory analysis. Journal of the American Society for Information Science and Technology, 58(7), 1055-1065.

Moed, H. F. (2006). Citation analysis in research evaluation (Vol. 9). Springer Science & Business Media.

Nicolaisen, J. (2007). Citation analysis. Annual Review of Information Science and Technology, 41(1), 609-641.

Priem, J., & Hemminger, B. H. (2010). Scientometrics 2.0: New metrics of scholarly impact on the social Web. First Monday, 15(7).

Seglen, P. O. (1992). The skewness of science. Journal of the American Society for Information Science, 43(9), 628-638.

Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ, 314(7079), 497.

Tague-Sutcliffe, J. (1992). An introduction to informetrics. Information processing & management, 28(1), 1-3.

Wilson, C. S. (1999). Informetrics. Annual Review of Information Science and Technology (ARIST), 34, 107-247.

