You need to collect data on your indicator (or indicators) in order to answer your question. Data collection can be resource-intensive, so choose your data sources and collection strategy wisely. Ask the following questions (adapted from page 53 of the CAISE Principal Investigator’s Guide) to guide you through the process:
In the simplest terms, quantitative data means numbers. Traditional library assessments tend to be quantitative: the number of items borrowed or the number of program attendees.
Qualitative data is anything other than numbers — usually written or spoken words, but images, sounds, or behaviors can also be collected. Qualitative data, collected by listening to or observing people, can be more useful for understanding individual experiences, thoughts, and attitudes.
An assessment can collect both qualitative and quantitative data. For instance, a survey can have quantitative and qualitative questions, and an observation may involve both counting the number of times something happens (quantitative) and listening to someone’s comments (qualitative). You can also quantitatively analyze qualitative data (see the next section on analyzing data).
Charities [often throw] out a bunch of numbers, wrongly assuming that hard data is all funders need to see as proof of impact. However, without a story – without that compelling dialogue that charities can share – the numbers lose value as impact indicators… Nonprofits that are able to present their impact with a strong story will often be more successful at attaining grants than those who only present their numbers.
— Measuring Impact: How Small and Medium-Sized Nonprofits Can Benefit From Effectively Measuring Their Impact
The data collection process usually involves tools (also called instruments) — for example, surveys, interview guides, or observation records — that directly capture the indicators that will tell you if you’ve achieved your desired outcomes. If possible, it can be helpful to reuse a tool that someone else has already created (you can find examples in the Module Resources). However, connected learning programs are often unique, with one-of-a-kind elements that may need one-of-a-kind assessments. You can modify someone else’s tool to fit your own assessment needs, or even create one that is entirely original.
Before diving headlong into data collection, test the entire process. For a small and simple assessment, this could simply be asking a co-worker to review your plan. A larger effort should be tested more thoroughly, ideally with a sample that resembles who or what you will be assessing. A large survey, for example, would benefit from being tried out with a handful of teens first. As you conduct your pilot tests, consider the following questions:
Be sure to respect your participants’ privacy and autonomy as you are collecting data (and any time outside stakeholders are involved). This is particularly important when you are interacting with minors. Project Outcome is a good starting place for learning about issues of privacy and consent in library assessment (requires free registration).
You can gain a great deal of insight from the items learners create through your initiatives, whether they are musical performances, poetry, or 3-D models. Knowledge, skill development, attitudes, behaviors—these can all be observed in learners’ creations. You can also ask learners to write about the learning experience or their creative process, and analyze their responses.
Good for:
Potential drawbacks:
Collecting data through active feedback means that you, the researcher, take action to get data from participants — such as making observations, holding focus groups, or asking for feedback after a program. Passive feedback means that the data collection happens in the background, without direct interaction from the researcher — you can think of it as “always-on” data collection.1 Along with on-demand surveys and questionnaires, you can collect passive feedback through methods such as comment cards or “talkback boards”.
Good for:
Potential drawbacks:
Using an observation guide or protocol, you can collect data by simply observing what is happening during a program or in a space and taking detailed “field notes.” This can be done unobtrusively in the background, or in a more engaged manner. If observing an individual directly (perhaps as they use a resource or do an activity), you can ask them to use a “think-aloud” strategy to help you understand why they do what they do.
Good for:
Potential drawbacks:
Formal tests of knowledge or skills can feel too much like school for a connected learning library setting, but you can find small, creative ways to “test” a teen’s progress without making it feel like an exam, such as integrating a competitive quiz towards the end of a program or presenting challenges for participants to work on.
Good for:
Potential drawbacks:
Talkback boards can be a quick and easy way to gather data about the people who visit your library. Look through this Talkback Board Repository from the Connected Learning Lab, and the Talkback Board Instructions from Impact Libraries. Now, let’s design a talkback board that you can put up in your library or teen space in the next few days.