
CET Talks: Accreditation, Learning and Leadership

Episode 09

OCTOBER 3, 2023 · 30 MINUTES

[Episode graphic: Kevin Perry, "Assessment of Learning Outcomes"]

Assessment of Learning Outcomes

Randy Bowman, Interim President and CEO of IACET, and co-host Mike Veny, CEO of Mike Veny, Inc., an IACET Accredited Provider, interview Kevin Perry, an expert on learning assessments and Chair of the IACET Standards Development Committee. Kevin shares insights on why assessments are crucial for measuring learning outcomes, provides examples of different assessment methods, and offers tips for writing valid and reliable test questions. Discover the importance of aligning assessments to learning goals, as well as techniques providers can use to make their assessments more effective. Kevin also discusses key findings from his review of IACET Accredited Provider applications and their use of assessments.


Transcription

Host: Welcome to CET Talks, the International Accreditors for Continuing Education and Training podcast, where we convene thought leaders in the Continuing Education and Training ecosystem to share ideas, research, best practices, and experiences that promote the creation of a world that learns better. Enjoy the episode.

Randy Bowman: Hello, and welcome to CET Talks. My name is Randy Bowman, and I am here with my co-host, Mike Veny, a certified Corporate Wellness Specialist and CEO of an IACET Accredited Provider.

Mike Veny: I am the CET Talks co-host, Mike Veny. Randy, today we are going to have a very important conversation. We’re going to be talking about the assessment of learning. What’s great about this conversation is that the ANSI/IACET Standard for Continuing Education and Training recognizes the importance of learning assessment by requiring Accredited Providers to have a process that ensures learners have achieved the learning outcomes through a learning assessment. So, we have to be able to show that. Earning CEUs, therefore, requires not only learning event participation and attendance, but also evidence that learning outcomes have been achieved through one or more assessment methods. I’m really excited for today’s guest, someone I have not yet had the chance to meet, Kevin Perry. Kevin Perry is a Continuing Education and Training consultant specializing in the non-profit environment, serving technical industries and higher education. He has a Bachelor of Science in Education from Penn State, a Master of Education in Counselor Education from Penn State, a doctorate in Administrative and Policy Studies from the University of Pittsburgh, and an MBA Essentials certificate from the University of Pittsburgh’s Katz School of Business. He retired from SAE International in 2018, where he had served as Director of Professional Development since 2001. Before joining SAE, Kevin worked in the continuing education field for nearly 12 years at both Penn State University and Duquesne University, where he held positions as Program Developer, Administrator, and Marketing Director. In 2019, he became an IACET commissioner, and in 2022 he was appointed Chair of the Standards Development Committee to update the ANSI/IACET Standard for Continuing Education and Training. Welcome to our show, Kevin.

Kevin Perry: Thank you, Mike. Thank you, Randy. It’s good to be here.

Mike Veny: First, I want to ask, for those that are listening out there, what is an IACET commissioner, and what do they do?

Kevin Perry: A commissioner is IACET’s term for reviewer or auditor. We’re a group of three dozen learning professionals who work with IACET to review applications, either initial applications for IACET accreditation or re-accreditation applications, because IACET requires reaccreditation every five years. So every application has two commissioners assigned, and we do essentially a peer review to determine whether an applicant is meeting requirements for either becoming initially accredited or re-accredited.

Randy Bowman: Awesome. Thank you, Kevin, for giving that brief explanation of what commissioners are. We have quite a number of commissioners who come on the podcast, so it’s good to give our listeners some context about what we mean when we use that term. We’re really excited, though, to have you here today to talk to us about learning assessments. Can you tell me, what is a learning assessment, and what got you interested in that aspect of curriculum development?

Kevin Perry: Very simply, a learning assessment is a method of measuring the extent to which a learner has achieved the learning outcomes that the course stated learners should be able to achieve. A very common example would be the multiple choice test at the end of the course. What got me interested? Well, shortly after becoming involved in the field of Continuing Ed, I got curious about whether the training programs we were putting on for these adult learners were really producing the benefit that we had hoped for, or that the course intended. Employers spend a lot of money and invest a lot of worker time in sending them through training. Is all that really accomplishing what either the employer or the individual learner hopes for? Then I ended up doing my doctoral dissertation on training evaluation based on the famous Kirkpatrick four-level model.

Just a quick reminder about what that is: this guy, Donald Kirkpatrick, way back in the fifties, came out with a model of training evaluation that was progressive. Level one is what he called the ‘reaction level’. That’s essentially the form that we all fill out at the end of a training course, sometimes referred to as the “happy sheet”. Level two is ‘learning level’ evaluation. A typical good example would be a paper and pencil, multiple choice test. Level three is what is called ‘behavior change’. That attempts to measure the extent to which the learner goes back to the job and exhibits a new and improved behavior as a result of the training. Level four is ‘impact on the organization’. To what extent has training over time positively impacted the organization? And then, over time, there was actually a fifth level that got bolted on called ROI.

What’s interesting about Kirkpatrick, as we fast forward to today, is that his model really mixes assessment with evaluation. He called it evaluation, but this is a good time to clarify the difference between assessment and evaluation, which IACET takes very seriously. Assessment, again, is measuring the extent to which individual learners are achieving learning outcomes. That’s totally focused on the learner, whereas evaluation is more about the course: how effective has the course been? We learn that through the reaction level, the happy sheet, in most cases, but there are many other ways to measure that, too. When you think about Kirkpatrick’s model, he’s really blending evaluation and assessment together. Levels one and four, you could argue, are evaluation, whereas two and three, the learning level and behavior change on the job, are assessment.

When IACET updated its standard in 2013, and at that point for the first time required that Providers have a method of assessing learning, I oversaw the effort at my company to implement multiple choice learning assessments throughout all of our courses, including feedback to learners. That was a whole exercise in coaching up our instructors on how to write tests, provide feedback for the wrong answers, and so forth. I became involved as an IACET commissioner and continued to be interested in, and fascinated by, the types of assessments I was seeing in applications. Just recently, earlier this year, I received permission from IACET to conduct a little study of more than a hundred applications, both initial accreditation and reaccreditation applications, to “suss out” the various methods of learning assessment being used by our accredited providers. Long answer, but that’s what got me interested.

Mike Veny: Oh, this is good. I love this difference between evaluation and assessment, and we have lots of training providers listening, some are accredited, some are pursuing accreditation. I want to get deeper here. Can you share your insight on why assessments are crucial for measuring whether intended learning outcomes have been achieved, and also for facilitators in professional development programs, how do assessments ensure that their training methods are effective and aligned with their training goals?

Kevin Perry: So, why are assessments crucial? Very simply, they determine which learners in a course have achieved the learning outcomes, but they can also inform the instructor, the training provider, and in some cases even the employer on whether the training program has been effective; in other words, whether it’s lived up to its promises. Say a learning outcome is that, by the end of the course, you will be able to properly don a safety harness. If a learner demonstrates that process and he or she gets it wrong, you could surmise that there’s something wrong with the training. It may not be a fault of the training design or the instructor; it may be the learner, but it certainly gives you data that would compel you to look into whether there is some flaw in the course design or delivery. The assessment helps reveal all that.

On the question of how assessments ensure that training methods are effective and aligned with learning goals: learning assessments might also inform instructors and course designers that a particular learning outcome is unrealistic. If most learners are not achieving a particular learning outcome, it could be the fault of the assessment, or it could be that the learning outcome was not realistic to begin with. This gets us to the whole business of crafting effective learning outcomes, which rely upon the approach called SMART. The IACET Standard requires that learning outcomes adhere to the SMART methodology; that’s an acronym for specific, measurable, achievable, realistic, and time-based. What I mean by an unrealistic learning outcome is this: if the training design is promising that, by the end of the course, you’ll be able to write an effective business plan on paper, that sounds like a good learning outcome. But can you really measure whether someone will now be able to write a business plan? Are you going to require that they write a business plan, submit it by the end of the course, and have the instructor or somebody evaluate it? Maybe. But in many cases, that’s not what’s happening. So we have to be very mindful of the learning outcome. SMART is the guidepost for making sure your learning outcomes are properly drafted in the first place.

Randy Bowman: Can you provide some examples of some different types of assessments that can be used to measure various types of learning outcomes?

Kevin Perry: Yes. First, let me talk a little bit about formative versus summative. Formative learning assessments are things that are done to check the progress of the learner along the way, whereas a summative assessment typically occurs at the end of the learning event to test the learner’s mastery of all the learning outcomes covered throughout the course. An example of formative would be in an online course: you get to the end of a module or a section, and you’re asked to answer a couple of multiple choice questions or maybe engage in a little matching exercise. Summative is at the end of the course, when you’re asked to take a 15 or 20 question test that measures everything covered in the entire course.

The traditional multiple choice exam is very common, and those work well for learning outcomes that are knowledge or comprehension level outcomes. Those tend to be the most common. In a classroom learning environment, for a formative assessment of a knowledge or comprehension level outcome, the instructor might engage in a question and answer session. That gives the instructor feedback on the extent to which the learning group at large is more or less getting it. But it also informs individual learners, whether they actually raise their hand and answer the question or just sit back and listen, about whether their thinking is correct so far.

Now, if you have a higher level learning outcome, like an application level outcome, a common assessment here would be to demonstrate something. Going back to the example of the learning outcome that, by the end of the course, you’ll be able to properly don a safety harness, the learning assessment method would be to demonstrate that you can don the safety harness, and the instructor would ideally have some kind of rubric or checklist to confirm that you’ve progressed through each step correctly. Other examples of application-level assessments might be to produce a project, teach back a concept to the class, make a presentation, engage in a role play, or engage in any variety of other activities.

Now, if the outcome is at the top of the pyramid, so to speak, at the analysis level, where the learning outcome expects you to be able to distinguish or analyze or differentiate or criticize, then assessment methods here could be things like case studies, case problems, sophisticated exercises, or games. Gaming has become a very popular activity in training, but also a very effective method of assessment. And there are many more. If you think back to some of your college courses, or even training you’ve gone through that had a significant hands-on component, there are labs that you might engage in. Labs can be used not only as a learning activity but as an assessment method. There are also written assignments and self-assessments; I’ve seen those show up in IACET applications, where learners are invited to assess themselves. That’s kind of interesting. Peer assessments have one or more of your peers assess you on the extent to which you’re mastering the learning outcomes. I just talked with a fellow commissioner recently who reviewed an application where the assessment method was storytelling: either along the way or at the end of the course, every learner had to tell a story. I would say that’s another form of a presentation, but it’s a very creative and fascinating way to assess learning.

Randy Bowman: Wow. So many different ways to assess learning outcomes and, at the end of the day, an assessment is just a tool, and we know tools can be misused. What are some ways we shouldn’t use assessments? What can assessments not tell us?

Kevin Perry: That’s a great question. I would say that assessments very likely cannot measure unrealistic or grandiose learning outcomes. We’ve seen course descriptions where the learning outcomes are, “By the end of the course you’ll be able to transform something, or you’ll be able to revolutionize your industry.” That’s copy that may be appropriate for the overview paragraph of the course description, but when you get to learning outcomes, again, we need to focus on SMART. They need to be specific, measurable, achievable, realistic, and time-based. The other thing is that assessments won’t necessarily guarantee that the learner will be able to do something beyond what the learning outcome promised. Employers, for example, may expect that they’re going to send Randy and Mike off to this training course, and they’re going to come back and be able to do something the employer thought the course promised to deliver. But, if you really go back and read the course description and the learning outcomes, the learning outcomes may have been drafted at the comprehension level: by the end of the course, Randy and Mike will be able to identify or recognize something, or describe something, or list something. The employer may have expected that you’re going to come back and totally transform your department or engage in some higher level behavior that the course didn’t promise. That’s where we need to be careful about assessments, as well as the stated learning outcomes and the promise that the course puts out there.

Mike Veny: Yes, there are a lot of different promises that a lot of courses put out there in the world, so appreciate you clearing that up. As the CEO of an Accredited Provider, I remember the experience of completing the IACET application, and I was just curious. Can you tell us more about the study you recently conducted reviewing IACET Accredited Provider applications, and what were some of your key observations and findings?

Kevin Perry: Right. I ended up taking a look at 111 applications that were submitted to IACET during the last three years, and I finished this review in the April/May timeframe of 2023. The applications were roughly an even mix of initial accreditation applications and re-accreditation. Incidentally, 111 applications turns out to be around 17% of the total IACET Accredited Provider population, so many sampling experts or psychometricians would likely agree that that’s probably a representative sample. I looked at not only the types of assessment methods, but I also made a record of industry type, the region of the world the provider came from, and the delivery methods they used. Delivery methods would be classroom; virtual, meaning over a platform like Zoom; asynchronous, also known as online or anytime, anywhere learning; and blended, a blend of two or more of the previous.

In terms of the assessment methods, what I discovered was that across this sample of 111 applicants, 17 different methods of learning assessment were identified, and I’ll run down through that list. What was interesting is that exams or tests were the number one method: 89% of the applications I looked at reported administering an exam or a test as a method of assessment. That was followed by instructor question and answer sessions, which I thought was interesting, followed closely by a variety of different activities, then written assignments. Then there was instructor observation, so instructors observing learners doing something as a result of having just gone through a training module; learners putting on presentations; self-assessments; skill demonstrations, so learners demonstrating they can do something; and then projects. Those methods rose to the top after exams. The percentage of applicants using them fell off pretty dramatically, but they were still in the 10 to 30% range.

There was a great drop off for the remaining types of assessment, where just a few providers reported using things like case studies, games, labs, and video feedback. That’s interesting: take a video of a learner and give them feedback. I actually experienced that in a course I took where we were trained to give good presentations. So that makes sense, right? Videotape the learner giving the presentation, and then give them feedback. Then there were supervisor assessments or supervisor observations. That’s interesting, too; that gets us back to Kirkpatrick level three, behavior change on the job. The learner goes back to the job, and now the supervisor is offering an assessment or observation on how they’re doing on the job. Peer assessments, which I mentioned earlier. Then finally, blog posts; blog posting as an assessment method was interesting.

The other finding was that not only did we have these 17 unique methods of assessment, but the average number of assessment methods used was between two and three. The mean actually turned out to be 2.3. Generally speaking, you could say our Accredited Providers, on average, are using between two and three separate methods of assessment in their courses when they assess their learners. Getting back to delivery methods, it was also interesting to discover that our providers deliver training through an average of two delivery methods. For example, not only are they providing classroom learning, but they’re also doing it virtually.
The pandemic forced many providers to do that. In addition, there were plenty of examples where they’ll do some classroom and some asynchronous; the e-learning world is being catered to in that regard. Just some other quick notes to sum it up: there were really no significant differences in methods of assessment between for-profit training providers and not-for-profit. There was also not really any significant difference between US and non-US providers. Admittedly, we did not have many non-US providers, but the only thing I did discover from the few non-US providers in this study is that they were more likely to be delivering training through the traditional classroom versus some other method of delivery. I also think it’s noteworthy to point out that supervisor assessments or observations were very rare. Going back to our Kirkpatrick model, that would suggest that level three, behavior change on the job, is still not being measured all that much.

Randy Bowman: Wow. Thank you so much, Kevin. I mean that. I think that’s very insightful, and we can all learn a lot from those results. Not only about how we’re doing, but how we can, as accredited providers, improve. We thank you so much for doing this research and sharing your findings with us. As we head out today, can you just give us a brief vision about what does a world that learns better look like to you?

Kevin Perry: Well, aside from all of our training providers being IACET-accredited, I would say it’s a world where training providers, in general, put serious effort into clearly defining their learning outcomes, and then equally serious effort into creating learning assessment methods that effectively measure them. This keeps the focus on the true purpose of training, which I believe is ultimately to improve human behavior. It’s easy to keep your focus on the business of training rather than the purpose of training. Whether you’re a training provider that caters to an external audience or an internal training provider in a company, it’s easy to get caught up in metrics like revenue, growth, the number of learners you’re serving, the number of CEUs you’re awarding, and so on. But what we really want to keep our eye on is the quality of the learning. Take a look at the extent to which your learning assessments are being passed, what kinds of scores learners are getting, and which test questions are not being answered properly. This gets to validity and reliability and so forth. So yeah, stay focused on the purpose of training and resist getting caught up in the business of training.
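[Editor’s note: To make Kevin’s last point concrete, here is a minimal sketch of classic item analysis for a multiple-choice exam, the kind of check that reveals which test questions are not being answered properly. This example is not from the episode; the response data is hypothetical, and the script simply computes each item’s difficulty (proportion correct) and point-biserial discrimination (how well the item tracks performance on the rest of the test).]

# A minimal sketch of classic item analysis for a multiple-choice test:
# item difficulty (proportion correct) and point-biserial discrimination
# (correlation between an item and the rest of the test). All data here
# is hypothetical, not from the study discussed in the episode.
from statistics import mean, pstdev

# responses[learner][item] = 1 if answered correctly, else 0 (hypothetical)
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
]

num_items = len(responses[0])
for item in range(num_items):
    item_scores = [row[item] for row in responses]
    # Difficulty: the share of learners who answered this item correctly.
    difficulty = mean(item_scores)
    # "Rest" score: each learner's total excluding this item, so the item
    # is not correlated with itself.
    rest_scores = [sum(row) - row[item] for row in responses]
    sd_item, sd_rest = pstdev(item_scores), pstdev(rest_scores)
    if sd_item == 0 or sd_rest == 0:
        # No variance (everyone right, or everyone wrong): discrimination
        # is undefined, which itself flags the item for review.
        print(f"Item {item + 1}: difficulty={difficulty:.2f}, no variance")
        continue
    mean_rest = mean(rest_scores)
    # Point-biserial: Pearson correlation between the 0/1 item scores and
    # the rest scores, via population covariance and standard deviations.
    cov = mean((i - difficulty) * (r - mean_rest)
               for i, r in zip(item_scores, rest_scores))
    print(f"Item {item + 1}: difficulty={difficulty:.2f}, "
          f"discrimination={cov / (sd_item * sd_rest):.2f}")

[As a rough rule of thumb in the testing literature, items that nearly everyone gets right or wrong, or whose discrimination is near zero or negative, are candidates for review or rewriting; this is one practical way to act on the validity and reliability concerns Kevin raises.]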

Randy Bowman: Thank you. Thank you. Thank you. So, today, you know what I’m going to take away from your presentation is that difference between evaluation and assessment; with evaluation being focused on the organizational goals, and assessment being focused more on the individual learner. So, Mike, what was your key takeaway?

Mike Veny: I’m going to be a copycat and have the exact same one as you. It just was so clear, and to the point, and something I know I often confuse in my head. As we head out today, I want to ask you, our listeners, what’s one creative way you believe assessments can be more effective? Share your innovative ideas using the hashtag #assessinnovation. We’d love to have you find us on LinkedIn or on Twitter at @IACETorg and share your vision. Don’t forget, you can submit topic ideas, suggestions for guests, and other feedback on our CET Talks podcast page of the IACET.org website. We certainly hope you’ll subscribe to this podcast on your favorite podcast platform, so you don’t miss any episodes. Thank you so much for joining us today.

Host: You’ve been listening to CET Talks, the official podcast of IACET. Don’t forget to subscribe to the podcast on Spotify, Apple Podcasts, or wherever you listen to podcasts. To learn more about IACET, visit IACET.org; that’s I-A-C-E-T.org. Thanks for listening, and we’ll be back soon with a new episode.

