
Collection Analysis

Why do we care about journal usage?

There has been an ocean of ink spilled in the scholarly literature of the library field discussing the problems of usage statistics and suggesting solutions. For our purposes here, let's agree that usage measures can help us understand whether a resource has been used relatively more or less than other resources. That's valuable. We can also dive more deeply into specific usage measures to understand exactly what each is intended to measure and how, and therefore how "solid" or "soft" it might be. Calculations based on usage measures can also provide insights we couldn't have gained otherwise.

For example, we use our own homegrown "Southworth ratio," named for the librarian who invented it, to compare journals within a package or subject. We calculate the Southworth ratio by dividing the total of the last 4 years of article downloads by the total of the last 8 years of article downloads. We've used this ratio to sort journals and see which are trending toward more use and which are trending lower. Such trends often reflect changes to programs or the curriculum. The Southworth ratio, and calculations like it, have been enormously helpful for adjusting our collection on a rational basis. We love usage measures. We just don't try to be too pedantic about what they actually mean.
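To make the arithmetic concrete, here is a minimal sketch of how the ratio could be computed from eight years of annual Article Download counts. The journal names, numbers, and Python approach are invented for illustration; they are not a description of our actual workflow.

# Minimal sketch, invented data: computing the Southworth ratio per journal.
# Each journal maps to its last 8 years of Article Downloads, oldest first.

def southworth_ratio(downloads_by_year):
    """Total of the last 4 years of downloads divided by the total of the last 8."""
    last_eight = downloads_by_year[-8:]
    total_eight = sum(last_eight)
    if total_eight == 0:
        return None  # no recorded use in the window
    return sum(last_eight[-4:]) / total_eight

journals = {
    "Journal of Example Studies": [120, 130, 110, 100, 90, 80, 70, 60],  # trending lower
    "Example Methods Quarterly": [40, 45, 50, 60, 75, 90, 110, 130],     # trending higher
}

for title, counts in sorted(journals.items(),
                            key=lambda item: southworth_ratio(item[1]) or 0,
                            reverse=True):
    print(f"{title}: {southworth_ratio(counts):.2f}")

A journal with flat usage lands near 0.50; values noticeably above that suggest use is trending up, and values below suggest it is trending down.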

Article Downloads and YOP: COUNTER (Vendor-supplied) Usage Statistics

COUNTER usage statistics are provided by library resource suppliers. Not all resource suppliers are COUNTER-compliant. For journals, we do not use non-COUNTER statistics for the CPBI or other reports, although we will refer to non-COUNTER statistics in some cases when appropriate to support decision-making.

What they say: "COUNTER provides the Code of Practice that enables publishers and vendors to report usage of their electronic resources in a consistent way. This enables libraries to compare data received from different publishers and vendors."

Within the Mankato library, we continue to rely primarily on the COUNTER J3 measure "Total_Item_Requests," which we call "Article Downloads" to help non-library people understand what the measure means. We rely on Total_Item_Requests because it is backward compatible with the JR1 measure from previous versions of the COUNTER Code of Practice, and trending is important to us. Starting soon, we will provide both Total and Unique Item Requests in our reports.
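For readers unfamiliar with the distinction, here is a simplified, hypothetical illustration of why Total_Item_Requests can run higher than Unique_Item_Requests for the same journal. (The real COUNTER rules also apply double-click filtering, which this sketch omits.)

# Hypothetical download records, illustrative only.
# Each tuple is (user_session_id, article_id) for one successful download.

downloads = [
    ("s1", "article-A"),
    ("s1", "article-A"),  # same article again in the same session
    ("s1", "article-B"),
    ("s2", "article-A"),
]

total_item_requests = len(downloads)        # every download counts
unique_item_requests = len(set(downloads))  # one per item per session
print(total_item_requests, unique_item_requests)  # 4 3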

We provide "YOP" measures for just one year -- the most recent complete year. "YOP" is shorthand for "Year of Publication"; the measure reports Journal Requests broken out by the year the requested articles were published. So, if the most recent usage year is 2022, YOP would show the total of "Article Downloads" (see above) for 2022 broken out by the year of publication of the articles downloaded. YOP is defined by the COUNTER J4 report (formerly the JR5 report). Unfortunately, fewer vendors provide the J4 than the J3, so the YOP numbers will not total up to match the Article Downloads for a given year as derived from the J3.
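As a purely illustrative sketch (the numbers are invented), a single usage year's Article Downloads broken out by YOP might look like this:

# Invented numbers: 2022 Article Downloads broken out by year of publication,
# in the style of the data behind the COUNTER J4 report.

usage_year = 2022
yop_breakdown = {  # year of publication -> downloads during 2022
    2022: 310,
    2021: 240,
    2020: 150,
    2015: 60,
    1998: 12,
}

print(f"{usage_year} Article Downloads by YOP: {sum(yop_breakdown.values())}")
# Because fewer vendors supply the J4 than the J3, this total is often lower
# than the Article Downloads figure derived from the J3 for the same year.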

For our reports, we have discontinued providing the J2 "Journal Access Denied," or "Turnaway," measure. In overview reports, we've found that this measure can be confusing.

Why? The library very often provides access to journals via more than one platform. We might purchase current access via a subscription platform and archival access (access to older content) via a different platform. When we review Turnaways in an overview report, we very often see Turnaways even when we provide access to the entire run of a journal. The problem is that users will try to access content unavailable through a given platform even if the library provides that access through an alternative platform. Turnaways are therefore not a good indicator that we should try to provide more access, because in many cases we already provide it; instead, the measure shows us that the way vendors divide content ownership and distribution is not good for our users.

MavScholar Clicked Requests: Link Resolver Usage Statistics

(1) After performing a search in MavScholar (Mankato's branding for the Primo discovery layer), if a user clicks the title of an electronic resource, the system generates an OpenURL request, which the system then uses to return a menu to the user. The menu will indicate whether there are no links, one link, or many links to "targets" providing access to the selected result. The OpenURL request is counted by a measure called "Number of Requests." When a user clicks on at least one link in the menu, that request is counted once by a measure called "Number of Clicked Requests," no matter how many links are clicked. The "Number of Clicked Services" is the total of all clicks on the menu. (2) If the user doesn't click the title but instead clicks the "Available Online" link, the click is counted by both the "Number of Requests" and "Number of Clicked Requests" measures. In this latter case, the system does not generate the menu of options, but it still generates the OpenURL request and delivers the user to what would be the top "target" on the menu.
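To illustrate how we understand these counters to increment, here is a hypothetical walkthrough. The counter names mirror the Ex Libris definitions, but the scenario is our own illustration; the authoritative definitions are on the Ex Libris pages linked below.

# Hypothetical walkthrough of the three link resolver counters.

requests = 0          # "Number of Requests": OpenURL requests generated
clicked_requests = 0  # "Number of Clicked Requests": requests with at least one click
clicked_services = 0  # "Number of Clicked Services": total clicks on menu links

# Scenario 1: a user clicks a result title, sees the menu, and clicks two targets.
requests += 1          # an OpenURL request is generated when the menu is built
clicked_requests += 1  # counted once, no matter how many links are clicked
clicked_services += 2  # every link clicked on the menu counts

# Scenario 2: a user clicks "Available Online" instead of the title.
requests += 1          # an OpenURL request is still generated
clicked_requests += 1  # per the description above, the click counts here too

print(requests, clicked_requests, clicked_services)  # 2 2 2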

These measures are defined by Ex Libris on their Link Resolver Usage page.

A functional description of how the Link Resolver works is provided on the Ex Libris Alma Link Resolver Workflow page.

As we understand it, a user can also generate Requests and Clicked Requests when they find an article in a subject index that does not provide the full text, and then access the article using "Click Here for other Article Options" or similar functionality.

What it means: If a Mankato library user clicks on a link to an electronic resource they found using MavScholar, the click will be counted as a "MavScholar Clicked Request" (or, depending on the report, it might say "Primo Clicked Requests," "Link Resolver Clicked Requests," or some variation thereon). The point here is that this variable is useful for getting a sense of how much MavScholar is used to find electronic resources. The variable is also a (soft) indicator of electronic resource usage itself. It will generally be much lower than the COUNTER J3 measure "Total_Item_Requests," which we call "Article Downloads" in some reports (see above), because users also find and access electronic resources using Google Scholar, subject indexes, vendor discovery platforms, and other means.

ILL Requests: Interlibrary Loan Usage Statistics

Mankato uses two systems to fulfill ILL requests, OCLC and Alma, so it can be difficult to get an overview of all submitted ILL requests. In addition, the data for Alma ILL requests can be pretty spotty, so that specific data source is not great on its own.

The Mankato workflow for submitting ILL requests lets us use OCLC as a "pretty good" proxy for overall ILL requests. When a user sends a journal article ILL request to our ILL office, we first look up the journal in OCLC. This lookup generates the OCLC count of "Total eSerials Requests Received." Most often, the next step is for the ILL office to enter the actual ILL request in Alma, to be fulfilled by our partners within Minnesota.

Because ILL request counts can be very sensitive to anomalous causes, especially faculty and graduate student research priorities, which can vary enormously from year to year, Mankato does not put much stock in a single year of data. Most often, we will combine all OCLC "Total eSerials Requests Received" counts into a single "ILL Requests Since" figure for a given starting year.
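As a small, invented example of that roll-up:

# Invented numbers: rolling several years of OCLC "Total eSerials Requests
# Received" for one journal into a single "ILL Requests Since" figure.

oclc_eserials_requests = {2019: 7, 2020: 2, 2021: 11, 2022: 4}
since_year = 2019

ill_requests_since = sum(count for year, count in oclc_eserials_requests.items()
                         if year >= since_year)
print(f"ILL Requests Since {since_year}: {ill_requests_since}")  # 24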

Periodicals Browses: Physical Journals Usage Statistics

For some reports, we will include periodicals browsing data, most often for collection development relating to the physical journals. For journals in the Periodicals collection, we do not allow students to check out the journals. However, we do count "Browses" (Alma "Loans In-house") whenever we find a journal out of place. 

Other Usage Statistics

It is possible to track usage in other ways. Most notably, we could use EzProxy logs to learn much more about which specific users or user types from which programs are accessing which specific resources. By analyzing such logs, it would be possible to track more directly the library's impact on student success. However, we have decided not to use EzProxy logs.

Why? In the first place, we have concerns about privacy. Although we could anonymize and aggregate EzProxy data, we have seen projects at other libraries that did not seem to do enough to protect users. In the absence of a specific need, there is not a good enough reason for us to analyze EzProxy data. In the second place, we have enough data to understand how students are using library resources in the aggregate. We can already analyze subject usage pretty effectively and, most of all, safely. Instead of tracking usage from the bottom up with EzProxy (from the user and their personally identifying information up to the journal), we track usage from the top down (from a journal list, including subjects, down to data that is already aggregated and anonymous). In the third place, it takes more effort and more organizational collaboration to analyze EzProxy data. Without a compelling case (a specific need), there's no reason to divert the time (at this time).

This work is licensed under a Creative Commons Attribution 4.0 International License.