
Competency N

“Evaluate programs and services using measurable criteria.”

Introduction

Evaluation is a crucial tool for determining whether key outcomes have been achieved. It is most visible in school, public, and academic libraries, but it is useful for any type of information organization. Because several different evaluation methods exist, it is important for information professionals to select the one designed to answer their questions or measure the desired outcomes.


Surveys work well for evaluating user response to, or satisfaction with, a program or service. A traditional approach is for the library to survey participants after they use a service or attend a program. For example, if the library ran a class on social media for beginners, it could offer a survey upon completion, asking users to evaluate what they liked or disliked about the program, how helpful they found it, and what should be expanded or left out in the future. Taken together, these responses provide feedback that the information organization can act on to improve the program.


Circulation records and other library usage data, such as ILL requests, downloads, social media interactions, and website visits, can also be evaluated. Using the analytics built into Twitter or Facebook, an institution can see the number of follows, likes, retweets, and similar interactions it receives within a given period, and whether those interactions are increasing or decreasing. This information is useful when deciding when and how to allocate resources. If the number of likes or retweets increases significantly from one month to the next, the information organization should take note and evaluate what types of posts and interactions were happening on its social media accounts that month. Was it more active on social media? Did the quality of the shared content change significantly? Using this data as a measure can help the information organization determine how best to leverage its resources.


For academic librarians, assessments are a valuable way to evaluate student learning. Many academic librarians find themselves teaching information literacy, whether as one-shot instruction, as part of a course, or embedded in an online or hybrid course. One-shot instruction is the most common format and often the most difficult to evaluate. One way to approach it is to give a brief survey before instruction (where possible), or to ask the faculty member in charge of the course to have students complete an assessment afterwards. Assessing both before and after a session, where possible, provides the most information about how much the instruction helped and whether students achieved the intended learning goals.


Other types of evaluation, relevant to all types of information organizations, include user testing and focus groups. An emerging trend in information science that lends itself to both is the inclusion of design principles. Information professionals now take design into consideration across many aspects of the information organization, from the building or room to the website, and once a concept is complete, it is important to have users evaluate it. For example, having a group test a new website before it launches can provide valuable feedback and prevent the site from going live before it is ready. Evaluating the website against design principles helps create a better user experience for patrons.


During my time at SJSU, I have encountered many different types of evaluation firsthand. In my information literacy and instruction courses, I created assessments, both summative and formative evaluations for learning materials, and surveys and questionnaires. Other forms of evaluation came from my experience as Editor-in-Chief of the School of Information Student Research Journal (SRJ). In that position, I was responsible for steering the direction of the journal and the quality of its content. Graduate students would submit papers, which I reviewed before deciding whether to send them out for peer review. Peer review is another method of evaluation with measurable criteria: it exists to ensure a high standard of quality, and most serious publications implement it. The point of peer review is to evaluate a submitting author's research or writing and to provide feedback that helps them present and publish the best possible version of their work. At SRJ, editors used a rubric to give authors the most detailed and useful feedback possible.


Evidence

The first piece of evidence that I submit to demonstrate mastery of this competency comes from my INFO 254: Information Literacy and Learning course. For this project, I had to develop a pre-assessment for students who would be attending a one-shot instructional session. The point of the questionnaire was to find out what the students already knew and what they did not, in order to develop a focus for the instruction. In addition to developing the questionnaire, I had to find participants willing to take it, and then write a reflection discussing its creation and results. This project demonstrates mastery by showing my ability to create an assessment and to use the results to create an effective learning experience for my students.


The second piece of evidence that I submit to demonstrate mastery of this competency is my final project for INFO 250: Design and Implementation of Instructional Strategies for Information Professionals. For this project, I created a unit of instruction on copyright and fair use. Part of the assignment was to develop appropriate learning outcomes and a method to evaluate whether those outcomes were achieved. I chose to use both formative and summative evaluation. Formative evaluation was informal: an instructor walked around the room observing the learners' behavior, which allowed students to receive individual attention when necessary without holding up the entire group. Summative evaluation took the form of a Google Sites survey asking learners about their experiences, so that the learning materials or outcomes could be adjusted if and where necessary.


The third and final piece of evidence that I submit to demonstrate my mastery of this competency is a presentation that I created on the peer review process. I gave this presentation as one of the monthly training sessions for the SRJ during my term as Editor-in-Chief. Its purpose was to help the editors write their reviews and to understand what we were trying to achieve through peer review. The presentation demonstrates my knowledge of the peer review process, how it works, and how it is experienced from an author's point of view. There is a difference between providing evaluative feedback and simply offering criticism, so my focus was on helping the editors understand that their job was to evaluate and improve the quality of a paper, not to judge or criticize its flaws.


Conclusion


Evaluation comes in many different forms. The key component of any evaluation is making sure that the feedback received is usable and appropriate. Appropriate feedback can help the information organization make difficult decisions about programming and services: good evaluation can show whether a program needs to be cut or whether a few changes would suffice. Additionally, evaluation when launching new programs or services can include focus groups to assess design, usability, and overall effectiveness.
