Blog

Update on the Resourcing Crossref for Future Sustainability research

We’re in year two of the Resourcing Crossref for Future Sustainability (RCFS) research. This report provides an update on progress to date, specifically on research we’ve conducted to better understand the impact of our fees and possible changes.

Crossref is in a good financial position with our current fees, which haven’t increased in 20 years. This project is seeking to future-proof our fees by:

  • Making fees more equitable
  • Simplifying our complex fee schedule
  • Rebalancing revenue sources

To review all aspects of our fees, we’ve planned five projects, each looking into an area of our current fees that may need to change to achieve the goals above. This is an update on the research and discussions that have been underway with our Membership & Fees Committee and our Board, and what we’ve learned so far in each of these areas.

Meet the candidates and vote in our 2024 Board elections

On behalf of the Nominating Committee, I’m pleased to share the slate of candidates for the 2024 board election.

Each year we do an open call for board interest. This year, the Nominating Committee received 53 submissions from members worldwide to fill four open board seats.

We maintain a balanced board of 8 large member seats and 8 small member seats. Size is determined by the organization’s membership tier (small members fall in the $0-$1,650 tiers and large members in the $3,900-$50,000 tiers). We have two large member seats and two small member seats open for election in 2024.

The myth of perfect metadata matching

https://doi.org/10.13003/pied3tho

In our previous instalments of the blog series about matching (see part 1 and part 2), we explained what metadata matching is, why it is important and described its basic terminology. In this entry, we will discuss a few common beliefs about metadata matching that are often encountered when interacting with users, developers, integrators, and other stakeholders. Spoiler alert: we are calling them myths because these beliefs are not true! Read on to learn why.

Re-introducing Participation Reports to encourage best practices in open metadata

We’ve just released an update to our participation report, which provides a view for our members into how they are each working towards best practices in open metadata. Prompted by some of the signatories and organizers of the Barcelona Declaration, which Crossref supports, and with the help of our friends at CWTS Leiden, we have fast-tracked the work to include an updated set of metadata best practices in participation reports for our members. The reports now give a more complete picture of each member’s activity.

Metadata schema development plans

Patricia Feeney – 2024 July 22

It’s been a while; here’s a metadata update and a request for feedback

In Spring 2023 we sent out a survey to our community with the goal of assessing our priorities for metadata development: what projects is our community ready to support? Where is the greatest need? What are the roadblocks?

The intention was to help prioritize our metadata development work. There’s a lot we want to do and a lot our community needs from us, but we want to make sure we’re focusing on the projects that will have the most immediate impact.

Crossmark community consultation: What did we learn?

In the first half of this year we’ve been talking to our community about post-publication changes and Crossmark. When a piece of research is published it isn’t the end of the journey—it is read, reused, and sometimes modified. That’s why we run Crossmark, as a way to provide notifications of important changes to research made after publication. Readers can see if the research they are looking at has updates by clicking the Crossmark logo. They also see useful information about the editorial process, and links to things like funding and registered clinical trials. All of this contributes to what we call the integrity of the scholarly record.

Celebrating five years of Grant IDs: where are we with the Crossref Grant Linking System?

We’re happy to note that this month we are marking five years since Crossref launched its Grant Linking System. The Grant Linking System (GLS) started life as a joint community effort to create ‘grant identifiers’ and support the needs of funders in the scholarly communications infrastructure.

The system includes a funder-designed metadata schema and a unique link for each award. These links enable connections with millions of research outputs, better reporting on the research and outcomes of funding, and a contribution to open science infrastructure. To mark the occasion, we hosted a community call last week, where around 30 existing and potential funder members joined to discuss the benefits of the Grant Linking System and the steps to take to participate.

Some organisations at the forefront of adopting Crossref’s Grant Linking System presented their challenges and how they overcame them, shared the benefits they are reaping from participating, and provided some tips about their processes and workflows.

The anatomy of metadata matching

https://doi.org/10.13003/zie7reeg

In our previous blog post about metadata matching, we discussed what it is and why we need it (tl;dr: to discover more relationships within the scholarly record). Here, we will describe some basic matching-related terminology and the components of a matching process. We will also pose some typical product questions to consider when developing or integrating matching solutions.

Basic terminology

Metadata matching is a high-level concept, with many different problems falling into this category. Indeed, no matter how much we like to focus on the similarities between different forms of matching, matching affiliation strings to ROR IDs or matching preprints to journal papers are still different in several important ways. At Crossref and ROR, we call these problems matching tasks.
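As a toy illustration of one such matching task, the sketch below scores an affiliation string against a handful of candidate organization names using simple string similarity. This is not the algorithm Crossref or ROR actually use (real matching has to handle acronyms, aliases, translations, and much more); the ROR IDs shown are real identifiers, but the candidate list, scoring method, and threshold are arbitrary choices for the example.

```python
from difflib import SequenceMatcher

def best_match(affiliation, candidates, threshold=0.5):
    """Return (org_id, score) for the best-scoring candidate name,
    or None if nothing clears the threshold."""
    best_id, best_score = None, 0.0
    for org_id, name in candidates.items():
        score = SequenceMatcher(None, affiliation.lower(), name.lower()).ratio()
        if score > best_score:
            best_id, best_score = org_id, score
    return (best_id, best_score) if best_score >= threshold else None

# A hypothetical candidate list; a real registry record carries far richer metadata.
orgs = {
    "https://ror.org/05gq02987": "Brown University",
    "https://ror.org/03vek6s52": "Harvard University",
}
print(best_match("Brown Univ., Providence, RI", orgs))
```

Even this crude similarity score captures the shape of the problem: a matching task takes an input (here, a free-text affiliation), a set of candidates, and a decision rule for when a candidate is good enough.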

Drawing on the Research Nexus with policy documents: Overton’s use of the Crossref API

Update 2024-07-01: This post is based on an interview with Euan Adie, founder and director of Overton.

What is Overton?

Overton is a big database of government policy documents, also including sources like intergovernmental organizations, think tanks, big NGOs, and in general anyone who’s trying to influence a government policy maker. What we’re interested in is basically taking all the good parts of the scholarly record and applying some of that to the policy world. By this we mean finding all the documents out there, collecting metadata for them consistently, fitting it to our schema, extracting references from all the policy documents we find, adding links between them, and then also doing citation analysis.
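As a minimal sketch of the kind of lookup this enables, the snippet below fetches one work’s metadata record from the public Crossref REST API (`https://api.crossref.org/works/{doi}`) and pulls out the DOIs of any deposited references. This is illustrative only, not Overton’s actual pipeline, and the `mailto` address is a placeholder you would replace with your own.

```python
import json
from urllib.parse import quote
from urllib.request import Request, urlopen

def fetch_work(doi, mailto="you@example.org"):
    """Fetch one work's metadata record from the Crossref REST API."""
    url = "https://api.crossref.org/works/" + quote(doi, safe="")
    req = Request(url, headers={"User-Agent": f"demo/0.1 (mailto:{mailto})"})
    with urlopen(req) as resp:
        return json.load(resp)["message"]

def cited_dois(message):
    """DOIs of cited works, where the deposited references include them."""
    return [ref["DOI"] for ref in message.get("reference", []) if "DOI" in ref]
```

For example, `cited_dois(fetch_work("10.13003/zie7reeg"))` would list the DOI-bearing references of that post, which could then be linked into a citation graph.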

Rebalancing our REST API traffic

Since we first launched our REST API around 2013 as a Labs project, it has evolved well beyond a prototype into arguably Crossref’s most visible and valuable service. It reflects the work of 20,000 organisations around the world that have spent many years curating and sharing metadata about their resources, from research grants to research articles and other component inputs and outputs of research.

The REST API is relied on by a large part of the research information community and beyond, seeing around 1.8 billion requests each month. Just five years ago, that monthly average was 600 million. Our members are the heaviest users, using it to retrieve information about their own records or to pick up connections like citations and other relationships. Databases, discovery tools, libraries, and governments all use the API. Research groups use it for all sorts of things, such as analysing trends in science or recording retractions and corrections.
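One way heavy users can help us manage this load is to identify themselves by adding a `mailto` parameter to their queries, which routes requests to the REST API’s “polite” pool. Below is a minimal sketch of building such a query; the email address and filter values are placeholders, and real integrations should also honour rate-limit headers and back off on errors.

```python
from urllib.parse import urlencode

BASE = "https://api.crossref.org/works"

def polite_query(mailto, **params):
    """Build a /works query URL that identifies the caller via mailto."""
    params["mailto"] = mailto  # routes the request to the "polite" pool
    return BASE + "?" + urlencode(params)

url = polite_query("you@example.org", rows=5,
                   filter="type:journal-article,from-index-date:2024-01-01")
print(url)
```

The same `mailto` can equivalently be sent in the `User-Agent` header; either way, identified traffic is easier for us to contact and to serve reliably.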