Guidelines for Journal Usage (Dorothy Milne) Marcia Tuttle 02 Jul 1996 14:59 UTC
---------- Forwarded message ----------
Date: Tue, 2 Jul 1996 11:47:03 -0230
From: Dorothy Milne <dmilne@MORGAN.UCS.MUN.CA>
Subject: Guidelines for Journal Usage

Al Henderson's comment on "all the trouble caused by usage studies" (in his posting on June 25th) caught my eye and begs for a reply, all the more because he has misquoted and misinterpreted the results of a paper I published on the subject. So, here goes.

--- [ Al wrote ] -------------------------------------------------------

<Writing in the Journal of Academic Librarianship (5:66, May 1979), Melvin
<Voigt argued that research use cannot be determined by circulation
<statistics, particularly where open stacks permit patrons to stand among
<the shelves browsing, reading, checking citations, etc.

This may have been a fair comment in 1979, but a number of more recent studies have shown that when the total use of journals is measured (browsing as well as reshelving), it correlates quite well with circulation statistics. It is far less clear that reshelving statistics alone would correlate well. In the posting that elicited Al's reply, the library had both reshelving and circulation data. A good first step for them would be to see whether the two sets of data correlate well; if not, they should rely on the circulation data.

<A study by Dorothy
<Milne and Bill Tiffany (Serials Librarian 19,3/4, 1991) indicated that
<many patrons did not cooperate with their reshelving methodology.

First of all, we did not use a reshelving methodology. We used a method that asked patrons to tick a tag whenever they made any use of a journal issue - any use that would otherwise have led them to order the article by ILL. So browsing an issue counted as a use, as did reading the article, photocopying it, or borrowing it. What we found was that users failed to tick about one-third of the time.
However, since there is no reason to think that the ratio of ticking to not-ticking would vary from one journal to another (i.e. that physicists failed to tick more often than historians), this problem was resolved by adjusting the use figures upwards by a factor of 1.5: if only two-thirds of uses were ticked, the true total is the ticked count multiplied by 3/2. Since the decision on which journals to cancel depended on a listing of titles in order of the intensity of their use, the method produced usable and repeatable results.

<The infamous Pitt study used a sample that indicated low usage of journals
<while interviews indicated that faculty members systematically browsed all
<new issues.

A number of studies have shown that the _actual use_ of journals by faculty members and the use that faculty members _say_ they make of journals are quite different. Our own results showed almost no correlation between the two. In my view, the only reliable results come from measuring actual use.

<There are also arithmetical complications. Suppose your dept. heads browse
<all major journals systematically. A weekly such as NEJM will show 50 uses
<per patron while monthlies show twelve and bimonthlies show six.

In my view, the only meaningful basis for cancellation/retention of journals is the cost-per-use of a title, based on the total number of uses per year. If one measures use, estimates the annual total of uses, and then calculates an estimate of the cost-per-use (i.e. annual subscription cost / annual no. of uses), the distinction between weeklies, monthlies, and quarterlies is taken into account. This is neither complicated nor difficult to do.

<A "scientific" study of usage would probably use some other method to
<assure a given level of reliability.

Our methodology was scrutinized by some very annoyed mathematicians and physicists (annoyed because their journals were slated for cancellation), who were itching to find deficiencies in it. They came back apologetic and said they had found no methodological problem. (The method was devised by a Ph.D. scientist in the first place.)
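The arithmetic described above can be sketched in a few lines of Python. This is only an illustration: every title, tick count, survey length, and subscription cost below is invented, and the four-week survey window is an assumption; only the 1.5 tick-correction factor and the annualization/cost-per-use steps come from the text.

```python
# Hypothetical sketch of the use-study arithmetic described above:
# correct observed ticks for the ~1/3 of uses that went unrecorded,
# scale the survey period up to a full year, and rank titles by
# estimated cost-per-use.

TICK_CORRECTION = 1.5  # users failed to tick ~1/3 of uses, so multiply by 3/2

# (title, ticks observed, survey length in weeks, annual subscription cost)
# -- all figures are invented for illustration
SURVEY = [
    ("Journal A", 40, 4, 900.00),
    ("Journal B", 6, 4, 1200.00),
    ("Journal C", 15, 4, 300.00),
]

def annual_uses(ticks, survey_weeks):
    """Estimated uses per year: corrected ticks scaled to 52 weeks."""
    return ticks * TICK_CORRECTION * (52 / survey_weeks)

def cost_per_use(ticks, survey_weeks, annual_cost):
    """Annual subscription cost divided by estimated annual uses."""
    return annual_cost / annual_uses(ticks, survey_weeks)

# Most expensive-per-use titles first: these are the cancellation
# candidates, regardless of raw cost alone or raw use counts alone.
ranked = sorted(SURVEY, key=lambda t: cost_per_use(t[1], t[2], t[3]),
                reverse=True)
for title, ticks, weeks, cost in ranked:
    print(f"{title}: ${cost_per_use(ticks, weeks, cost):.2f} per use")
```

Because every title's use is projected to the same full year before dividing into its cost, a weekly and a quarterly surveyed over the same four weeks end up on the same footing - which is the answer to the weekly/monthly/bimonthly complication quoted above.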
As for reliability, we have checked our results against ILL requests over the past eight years and so far have found no major errors. A recent collection evaluation, based on a citation analysis in chemical instrument analysis, offered an interesting confirmation that our cancellations (based on our use studies) have indeed targeted the low-use titles and spared the high-use titles.

I would agree that no estimate of usage will ever be "scientifically accurate" - this sort of human behaviour is too difficult to analyse with total precision. This is why our method was based on estimates, which then rank the journals in sequence. A better question, in my opinion, is: what is a library's best approach to getting reliable information? As far as I know, no better method has yet appeared in the library literature.

The commonest errors in cancellation projects are failing to correct for different lengths of journal runs, failing to estimate use over a full year for all titles (to correct for the weekly/monthly/quarterly distinction), and failing to calculate the cost-per-use. Cost alone and number of uses alone do not yield results which make economic sense. The purpose of the cost-per-use estimate is to aid in deciding whether it would be cheaper to supply the information by ILL or from an electronic source.

<Best wishes, Al Henderson, Editor, Publishing Research Quarterly
<email@example.com

Publishers are not overjoyed to have libraries judge their journals by the actual use they receive. Publishers' focus on "quality" has included paper and printing quality, binding quality, and the quality of an editorial board. They have not, however, concentrated on the "usability" or relevance of the information they publish. I sympathize with them, since this is a difficult thing to identify and promote.
None the less, I get the impression that publishers have focussed more on _quantity_ (more articles, more issues) than on keeping costs down by publishing articles which the readership really wants to see. Thus their journals are swollen with articles that will never be read, and the costs of this system have spiralled out of sight. There is little that libraries can do in this environment other than observe user behaviour and act accordingly.

Cheers,
Dorothy Milne

````````````````````````````````````````````````````````````````````````
Dorothy Milne                      e-mail: firstname.lastname@example.org
Science Collections Librarian      voice: -737-8270
Memorial University                fax: -737-2153
St.John's, Nfld., Canada
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''