Many thanks to Hardy Schwamm for today’s Research Bite introducing altmetrics, and also to those of you who came along and contributed your questions and thoughts to the discussion.
In the context of huge increases in publication output, the Altmetrics manifesto claims that we need new ways to filter for quality. The development of altmetrics is also an attempt to catch up with the changes in scholarly communication. Altmetrics are a reaction to the fact that the impact factor is often incorrectly used to assess the impact of individual articles, when in fact it just evaluates the impact of the journal. Altmetrics also acknowledge that research outputs are broader than just journal articles, and encompass datasets, software, blog posts, presentations and so on.
Metrics based on the social web
Among the most promising and influential altmetrics services are:
- Altmetric – UK based, not affiliated with a publisher, focusing on article-level metrics
- PlumX – gathers metrics on different ‘artifacts’, but also on individuals, groups and institutions
- ImpactStory – focusing on individuals
The key audiences are:
- Individuals can use altmetrics to understand their own impact or influence.
- Publishers can use them to inform readers, target marketing and identify strengths.
- Institutions can use them to gauge impact and to inform REF submissions, funding bids and, potentially, recruitment.
Not everyone is happy with incorporating the Altmetric score. Can you really compare across disciplines? Is an article with a score of 2,000 really that much ‘better’ than one scoring 25? Altmetric does include percentile information when you click through to the detail. This ability to click through and actually delve into those ‘mentions’ is a useful feature.
How do they track your output (e.g. an article)? Altmetric uses the DOI (Digital Object Identifier), PubMed ID and arXiv ID to track mentions.
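Because tracking is keyed on identifiers like the DOI, it is straightforward to look an article up programmatically. Altmetric offers a free public Details Page API that returns JSON for a given DOI; as a minimal sketch, the snippet below builds the lookup URL and pulls a few headline numbers out of a response. The sample payload and the DOI are illustrative, not real data, and the exact set of fields returned may vary by article.

```python
import json

# Base of Altmetric's free Details Page API (DOI lookup).
API_BASE = "https://api.altmetric.com/v1/doi/"


def altmetric_url(doi):
    """Return the Details Page API URL for a given DOI."""
    return API_BASE + doi


def summarise(payload):
    """Extract the Altmetric score and a couple of mention counts
    from a JSON response body."""
    data = json.loads(payload)
    return {
        "score": data.get("score"),
        "tweets": data.get("cited_by_tweeters_count", 0),
        "news": data.get("cited_by_msm_count", 0),
    }


# Illustrative response for a made-up DOI:
sample = '{"score": 25.5, "cited_by_tweeters_count": 12, "cited_by_msm_count": 1}'
print(altmetric_url("10.1000/example.doi"))
print(summarise(sample))
```

Fetching the URL with any HTTP client would return the live data; the parsing step is kept separate here so the sketch runs without a network connection.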
Does it count mentions of your name? Altmetric just looks at articles, not the identity of the researcher. ImpactStory and PlumX do focus more on the individual, but it would be in association with a particular work, as far as we understand. So an appearance on TV wouldn’t be counted (unless the broadcaster had a link on their website too).
Does it only measure positive mentions? No. There’s no discrimination, so like traditional citation counts, others could be denouncing your research, but you’d still benefit from the stats!
Is there going to be an institutional licence to the product? The Library and RSO are aware of altmetrics and inviting providers to visit and tell us more.
Hardy: Would this kind of product be useful? Is the score useful? Participants: It’s all ‘grist to the mill’. If you can provide your line manager with data like this, it could help them understand your performance, though the score itself seems pretty meaningless. It’s good to have a service which aggregates all this data in such a timely way, since citations can take a long time to accrue. It bridges the gap between informal discussions and citations in journal articles, though it can never be a complete measure. Altmetrics could inform the REF in terms of the impact of research in the community, or public engagement outside the academy.
Is it just a snapshot in time (i.e. does it only provide data from Twitter for 30 days?) It appears that it is not just a snapshot in time, but a record of mentions, though data collection would presumably have to start somewhere, so may not be accurate for older papers (by older, I mean pre-2011!).
Do altmetrics cover patents? PlumX covers citations in US patents.
What’s our definition of impact? Is it just getting your research out there and generating a buzz? Some disciplines view this kind of attention as public engagement. Impact is making a change to the world, e.g. through policy change or a technological advance. Altmetrics contribute to a multi-faceted view of impact, including non-academic engagement.
Can you track the geographic element? Yes, with Altmetric, by clicking through to the detailed data.
Is it performing as a filter? Ranking efforts and scores like in Altmetric are difficult to interpret and shouldn’t be compared across disciplines. However, if you see that an article was discussed in the (social) media it might be worth seeing why it got mentioned so often.
Won’t it be open to scholars artificially inflating scores? Yes, just like traditional citation practices! It also favours researchers who are active on the social web.
The Altmetrics manifesto disagrees:
Some have suggested altmetrics would be too easy to game; we argue the opposite. The JIF is appallingly open to manipulation; mature altmetrics systems could be more robust, leveraging the diversity of altmetrics and statistical power of big data to algorithmically detect and correct for fraudulent activity.
This post was amended on 24/09/2014 following feedback from Hardy.