Virtual Racism Rears its Head: Uncovering Librarian Bias in E-mail Reference Services

Wendy Furlan

Abstract


A review of:


Shachaf, Pnina, and Sarah Horowitz. "Are Virtual Reference Services Color Blind?" Library & Information Science Research 28.4 (Sept. 2006): 501-20.


Objective – To examine whether librarians provide equitable virtual reference services to diverse user groups.

Design – Unobtrusive method of defined scenarios submitted via e-mail.

Setting – Twenty-three Association of Research Libraries (ARL) member libraries from across the United States. All ARL member libraries were invited to participate, with the 23 acceptances representing a 19% participation rate.

Subjects – Anonymous librarians from the 23 participating libraries’ virtual e-mail reference services. Up to 6 librarians from each library may have been involved.

Six fictitious personas were developed to represent particular ethnic or religious groups, with ethnic or religious affiliation indicated only by the name chosen for each user and the corresponding e-mail address. Names were selected from lists of names or baby names available online: Latoya Johnson (African-American), Rosa Manuz (Hispanic), Chang Su (Asian - Chinese), Mary Anderson (Caucasian/Christian), Ahmed Ibrahim (Muslim), and Moshe Cohen (Caucasian/Jewish). These personas were used to submit reference queries via e-mail to the virtual reference services taking part in the study.

Methods – Five different types of reference queries were developed for use in this study. Three were based on prior published research, as they were deemed answerable by the majority of libraries. They included a dissertation query, a sports team query, and a population query, all designed to be tailored to the target institution. The other 2 queries were developed with participating institutions’ virtual reference guidelines in mind, and were expected not to be answered by the target institutions when submitted by unaffiliated users. They consisted of a subject query on a special collection topic that asked for copies of relevant articles to be sent out, and an article query requesting that a copy of a specific article be e-mailed to the patron.

The study was conducted over a 6-week period beginning the second week of September 2005. Each week, 1 fictitious persona was used to e-mail a reference query to the virtual reference service of each of the 23 participating institutions, and 5 of each type of query were sent by each persona. During September and October 2005, a total of 138 queries were sent. Each institution received a different query in each of the first 5 weeks, and in the sixth week it received a repeat of a previous request with details of title or years altered. All other text in every request sent was kept consistent. Each institution received only 1 request from each persona during the study.

In order to eliminate any study bias caused by an informed decision regarding the order in which personas were used, they were arranged in an arbitrary order (alphabetically by surname). Furthermore, to avoid arousing suspicion among responding librarians, queries were e-mailed on different days of the week at different times. This created some limitations in interpreting response times, as some queries were submitted on weekends.

All queries were analysed using NVivo software in order to identify attributes and patterns to aid qualitative analysis. Each transaction (a single query and any related responses) was classified according to 12 attributes and 59 categories based on various associations’ digital reference guidelines. Transactions were coded, and 10% were then re-coded by a different coder. This led to the clarification and refinement of the coding scheme, reducing the number of categories used to 23. Coding was then performed in 3 iterations until 90% agreement between the 2 coders was reached. The final inter-coder reliability was 92%. The small sample size did not support cross-tabulation among user groups on most content categories.

Main results – Response times varied greatly between users. Moshe (Caucasian/Jewish) received an average turnaround of less than a day. At the other end of the spectrum, Ahmed’s (Muslim) responses took an average of 3.5 days. Both Ahmed and Latoya (African-American) sent queries that took over 18 days to receive a response. The length (number of words) of replies also indicated a differing level of service, with Mary (Caucasian/Christian) and Moshe receiving far lengthier responses than the other 4 personas. The number of replies (including automatic replies) was compared with the number of replies that actually answered the question, and again indicated that Mary and Moshe received a better level of service.

The way in which the user was addressed by the librarian (e.g., first name, full name, honorific) was examined as another measure of service. This again mirrored the low level of service received by Ahmed. The professional closings used by librarians in their replies also reinforced the high quality of service Moshe received across other categories.

Results for Rosa (Hispanic) and Chang (Asian - Chinese) were average for most categories presented.

Conclusion – In this study, a discriminatory pattern was clearly evident, with the African-American and Muslim users receiving poor levels of service from virtual reference librarians across all dimensions of quality evaluated, while the Caucasian (Christian and Jewish) users noticeably received the best level of service. It is noted, however, that the sample size is not large enough for generalisations to be drawn, and that future studies with greater statistical power are warranted. The study also raises many other questions for possible future research into racism exhibited by library staff and services.





Evidence Based Library and Information Practice (EBLIP)