Digital Resource Use and Non-Use in the Humanities and Social Sciences Academic Settings is Multifaceted

Lotta Haglund, David Herron

Abstract


A review of:


Harley, Diane. “Why Study Users? An Environmental Scan of Use and Users of Digital Resources in Humanities and Social Sciences Undergraduate Education.” First Monday 12.1 (Jan. 2007). 7 May 2007 http://www.firstmonday.org/issues/issue12_1/harley/index.html.


Objective – (1) To map the digital resources available to undergraduate educators in the humanities and the social sciences, (2) to survey faculty about their use of digital resources, and (3) to examine how understanding use and users can benefit the integration of resources into teaching.

Design – A mixed-methods approach, which included a survey, discussion groups, and in-depth interviews.

Setting – Academic institutions in the United States.

Subjects – (1) “Various stakeholders”; (2) 31 instructors from three institutions, and 4,500 full-time and part-time faculty and graduate students (at California public research universities, liberal arts colleges, and community colleges); and (3) 13 digital resource providers and two other stakeholders, and 16 site owners or user researchers.

Methods – (1) A literature review, combined with discussions with various stakeholders. (2) Four sessions of discussion groups with 31 instructors from three institutions formed the basis for developing a faculty survey instrument. The survey was distributed both on paper and online. (3) Collection of data on cost and collaborative development strategies, and in-depth interviews with 13 digital resource providers and two other stakeholders, combined with a two-day workshop with 16 experts, both focused on online educational resources.

Main results – (1) Concerning the humanities and social sciences digital resource landscape, the literature study led to two main conclusions: the field of online education studies is complicated by a lack of common vocabulary, definitions, and analyses; and differing stakeholder interests and agendas also influence the understanding of how digital resources are used. With the help of the discussion groups, the author attempted to create a typology of digital resources available to undergraduates, classifying them by type of resource, origin, and the role of the provider or site owner. From the article, it is unclear whether this attempt at classification was successful.

(2) Concerning faculty use or non-use of digital resources, the most important result was the insight that personal teaching style and philosophy influence resource use more than any other factor; this also appeared to be the most important reason for not using digital resources. Faculty use digital resources for a number of reasons: to improve student learning, to provide context, and because it is expected of them. More than 70% of faculty maintain their own collections of digital resources. However, the lack of efficient tools for collecting, managing, and using these resources in teaching is seen as a problem. There is also variation between scholarly fields: faculty in different disciplines require different types of resources and use them in different ways, and for different educational reasons.

(3) Concerning how understanding use and users can benefit the integration of resources into teaching, the interviews revealed a lack of common terms, metrics, methods, and values for defining use and users, but a shared desire to measure how and for what purpose digital resources were being used. Few of the providers had any plans to evaluate use and users in a systematic way.

Conclusion – The digital landscape is complicated. Faculty use is determined by personal teaching style and philosophy. Digital resource providers would like to know more about how and for what purpose digital resources are being used. Experts see a number of areas for further research, the results of which might help clarify the situation.

The only way to understand the value of digital resources is to measure their impact and outcomes, but further work is needed to provide common vocabulary, metrics, and methods for evaluation.



Evidence Based Library and Information Practice (EBLIP)