Episode 19

CET Talks: Accreditation, Learning and Leadership

July 15, 2024 · 22 minutes

Episode graphic: CET Talks Episode 19, “Chatting with the Future,” featuring Josh Cavalier of JoshCavalier.ai.

Chatting with the Future: Enhancing AI Output Through Prompt Engineering

In the ever-evolving landscape of Generative AI, mastering the art of prompt engineering is crucial for anyone looking to leverage this powerful technology effectively. Join Josh Cavalier, a seasoned expert in Learning and Development, as he shares key strategies and insights on crafting prompts that yield precise and valuable outputs. This episode will explore common challenges, best practices, and future directions in prompt engineering, providing listeners with the tools they need to enhance their interactions with AI platforms like ChatGPT. Tune in to discover how precise prompting can transform your AI experience and drive better results across various applications.

Transcription

Host: Welcome to CET Talks, the International Accreditors for Continuing Education and Training’s podcast, where we convene thought leaders in the continuing education and training ecosystem to share ideas, research, best practices, and experiences that promote the creation of a world that learns better. Enjoy the episode.

Randy Bowman: Hello, and welcome to CET Talks. My name is Randy Bowman, and I’m here with my co-host, Mike Veny, a certified corporate wellness specialist and CEO of an IACET-accredited provider. Thank you for being here with us today. Mike, how’re you doing?

Mike Veny: I am wonderful, Randy, and I have a question for you. What role can AI play in training? Am I allowed to use it to help create content? I don’t know, and it’s very confusing to me.

Randy Bowman: That is a great question and one that I think everyone’s struggling with. Can I use AI to create my content? Are my learners using AI to cheat? What’s going on with all this AI stuff? You’re in luck, though. We have an expert here today to help us address these issues and to talk about this. With us today is Josh Cavalier. He is the founder of JoshCavalier.ai, and he has been creating learning solutions for corporations, government agencies, and colleges for over 30 years. He is an expert in learning and development and has applied his industry experience to ChatGPT and other generative AI frameworks for human performance. Josh is so passionate about sharing his knowledge that he has a popular YouTube channel and a weekly live show called Brainpower, where he teaches you step-by-step how to use generative AI. Josh, welcome to the podcast. We’re happy you’re here, and we are waiting to learn all the answers about AI and its impact on L&D.

Josh Cavalier: Randy, I’m excited to be here. I don’t know if I have all the answers, but maybe we can get to some by the end of the show.

Randy Bowman: All right, we’ll deal with some. I’d say there’s a lot of confusion in the marketplace right now, especially with learning and development professionals. For those who may be unfamiliar or confused about the terminology, can you explain what a large language model is, what prompt engineering is, and why these are essential skills in the realm of generative AI?

Josh Cavalier: Large language models have been around for a while. Think about how you interact with your phone. You text and there’s autocorrect, right? So, natural language processing, we’ve been using it for a while. Now it’s just showing up very differently. When OpenAI released ChatGPT in November of 2022, that was a major inflection point because, as consumers, we finally got an interface to interact with this large language model, which essentially is nothing more than a corpus of text, or actually numbers that represent text. Through some amazing engineering, when we go and give it information or an instruction, we get pretty substantial responses back. Without the excellent engineering happening in the models, called the transformer architecture, we wouldn’t be having this conversation today. So, through access and through an amazing amount of data and computing power, we now have the ability to leverage large language models, which fall under generative AI. Oh, and in regard to prompting and prompt engineering: that is how we communicate. That is how we interact with the model to give it a task or produce some type of response. Not just with text, either; there are many models out there where we can prompt and get an image, audio, video, and even code. So prompting is the way that we instruct the model to give us a response. Now, it’s funny. That term “prompt engineering” sometimes puts people off a little. I like to use the term “prompt writing”. Engineering really comes into play when we start talking about automations and AI agents. That’s truly where engineering skills come into play. I like to make prompting accessible to everybody, and sometimes calling it “engineering” excludes a group who think, “Oh, it’s too complex or too much for me.” But it’s not. You can start out with just a simple request and begin that process of building a relationship with AI.

Mike Veny: As an instructional designer, when it comes to prompt engineering or writing prompts to assist in creating content for learners, where does one begin, and what are some of the challenges people face when crafting prompts for generative AI? How can these impact the outcomes of the content?

Josh Cavalier: Yeah, it’s a great question. I think one of the most difficult things is that we know AI can help in all kinds of different facets along the learning journey, but it really comes down to understanding your craft. What are you really good at, and what do you do day in and day out? Start there. I don’t care if you use AI to help you create an email, to create a multiple-choice question, or to perform a learning needs analysis, whatever the case may be. Again, you know what you’re really good at, and you understand what a certain format or a certain output needs to be in regard to a quality level. By taking that level of skill or knowledge and applying it to AI, you are off and running. Now you can have a conversation with a large language model, begin to coax it, and begin to work with it, to build in productivity or to expedite the workflows that you are already familiar with. Don’t go outside those boundaries; stay right in your lane. Take baby steps, and then you’ll begin to accelerate.

Mike Veny: Can I follow up with a question? Can you give a more specific example of how I might use it? Let’s just say, pretend I’m a steak training company, and I train people on how to grill steaks. How would I start creating a prompt for AI?

Josh Cavalier: The formula that I like to use for individuals who are just getting started is to identify three components. The first component is the role. Who or what do you want the AI to mimic? For instance, act as an instructional designer, or act as a learning architect. Then you describe the task: create for me three or four learning objectives, let’s say. But we all know there’s a certain way to write those learning objectives. Is it measurable? Are we using the SMART format? Whatever you want to use, right? Whatever you’re used to using to write a learning objective, it’s those nuances that are then going to get you great responses back. Within a large language model like ChatGPT, we know, because it’s trained on the whole internet, Wikipedia, every book, even the copyrighted ones, that the domain of grilling a steak is in there. We’re going to coax out of it a really well-formed learning objective about grilling that steak. However, if you don’t know your craft and you don’t know what a good learning objective looks like in the first place, that’s where you’re going to struggle.
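
To make that role-plus-task-plus-nuances formula concrete, here is a minimal sketch of the same pattern expressed in code rather than typed into a chat window. It assumes the OpenAI Python SDK; the model name, prompt wording, and constraints are illustrative, not something prescribed in the episode.

```python
# Minimal sketch of the role / task / nuance prompt pattern described above.
# Assumes the OpenAI Python SDK (openai >= 1.0) and an OPENAI_API_KEY in the
# environment; model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Act as an instructional designer. "                                 # role
    "Write three learning objectives for a course on grilling steak. "   # task
    "Each objective must be measurable, follow the SMART format, and "   # nuances
    "begin with 'By the end of this course, participants will be able to...'."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The same text works just as well pasted directly into ChatGPT; the point is that the role, the task, and the quality nuances all travel together in one request.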

Randy Bowman: So as AI technology continues to evolve, what should prompt writers be aware of regarding their strategies to keep up with the changing algorithms and capabilities?

Josh Cavalier: This is difficult because there’s a lot of noise out there right now. There’s a lot of FOMO, and I get it, totally. Each and every week I get introduced to professionals who are just getting started. They’re concerned about their job; they’re concerned about what they need to do day in and day out. Now they have this AI thing dropped on top of them, like, surprise, here’s AI. Now they have something else to learn. I think that, being mindful of this ever-advancing technology, you’re going to see and hear all kinds of craziness happen out there as far as advancements, but the reality is that in most organizations, things are going to be methodical, and things are going to move slowly. And how does that show up? Well, it’s either going to show up through a bottom-up activity, where, as long as you have an AI policy in place that lets you use the technology, you and your coworkers go ahead and start using it for your day-to-day work, securely, without much formal guidance. Or there could be a top-down initiative where, from a strategic standpoint, leadership has a pretty exact way that AI is going to show up. Maybe you get a portal with all kinds of different models. Maybe you have a prompt library across your team. There are many more nuances that are going to show up. But I see it one of two ways. Again, it’s a bottom-up activity, sometimes even on the down low. I just talked with someone last week who is prompting on their phone, getting a response, and emailing it to themselves, right? So, there’s some of that going on, or it’s very focused, you have access to models, and your team begins to accelerate forward.

Mike Veny: How crucial are data and analytics in refining the process of prompt engineering? And depending on your answer, what specific metrics or feedback loops should practitioners consider to enhance their prompting strategies?

Josh Cavalier: In regard to data and analytics, it really depends upon where along the learning journey you’re going to be applying it. For instance, let’s say I’m doing a learning needs analysis and there are specific metrics or specific pieces of information that I want to feed into that process. I can take those reports or documents and upload them into a model like ChatGPT. Securely, you know; I just don’t want to put corporate IP up there over an unsecured connection to a model. But then I can have a conversation about what my learning needs analysis looks like. I can go ahead and take that data and begin to use it. As far as analytics, I have one customer who is full steam ahead with AI, and they are building productivity dashboards to show leaders how much time they’re saving using AI. That’s one analytic. Now, if we’re talking about learning analytics, this is really interesting, because I believe that over time (it’s not going to happen this year, it may show up next year, but definitely three or four years down the line) L&D professionals must become proficient with data and analytics. Think about the content creation that we perform today. Again, I’m 30 years in on this, so I’ve produced a lot of e-learning content over the years, and I’m passionate about using tools like Storyline, Captivate, RA, Beyond, Premiere, and Camtasia. But the reality is that we’re going to get into a phase where a lot of that content is going to be automated, where certain inputs will drive personalization, content will be generated automatically, and we’re going to be doing a lot of orchestrating. We need to make sure that we are on the front end as a performance consultant, or, as I like to term it, a performance business analyst, where you’re working with the business, you’re understanding where skill gaps are occurring, what’s happening to our frontline workers, whatever the case may be. Then you begin to orchestrate training interventions or experiences that will bridge that skill gap. But then on the back end, you still have to look at the analytics. Did we really move the needle? And if not, what do I need to adjust in my AI orchestration to put out different types of experiences and get to where we need to be? Or it’s a success and we can chalk it up: hey, that orchestration was awesome. Let’s replicate it again for a different role.

Randy Bowman: Can you give us some success stories where prompt writing has significantly improved AI performance?

Josh Cavalier: I’ll give you a couple of examples. One is a personal example. The other is an example that just happened last week at a workshop. The first example is from prior to starting my current company, when I was working for a $5 billion supply chain company as an individual contributor. One of the tasks that was scheduled to be performed was to create 80 multiple-choice questions on ESG, environmental, social, and governance. I pretty much had ChatGPT in my hands. I knew that we had a public ESG report, so I uploaded the report, generated 80 multiple-choice questions, and reviewed them. Let me tell you, creating 80 multiple-choice questions, including creating really good distractors, is extremely time consuming. What would take me a few days essentially took me 15 or 20 minutes, and once I gave it to the SME, only four of those questions, about 5%, came back. I was like, wow, okay, we’re off and running here. It’s saving me 80% of my time on this one effort, and that was because I understood how to prompt. I can’t just ask the model to create a multiple-choice question. I need to say, “I need a multiple-choice question with two distractors, one correct answer, indicate the correct answer, give me an explanation, and make sure that all the answers are similar in length.” Again, if you’re in the game, you understand these little nuances that make a great question. By understanding that and by modifying my prompt, I had great outputs. The other example came from a workshop. I had a group working on a project, and they wanted to work in this domain and create PowerPoint slides right out of ChatGPT. You totally can do it, but they were really confused. They’re like, “I don’t quite understand. I’m asking it for PowerPoint slides, but it’s not giving me the format.” I’m like, “Just tell it: here’s the format.” They’re like, “What do you mean?” I’m like, “Go ahead and put it in quotes. Here’s the format of a PowerPoint slide. Here’s the heading, here are the bullets, this is what the format is.” And sure enough, they ran the prompt again and it started knocking out slides. I mean, not physical slides, but formatted headings and bullet points. They could then import that as a PowerPoint outline or put it in Word and export it to PowerPoint. Just by tweaking the prompt, they began to accelerate their productivity.
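
Josh’s multiple-choice example translates almost word for word into a prompt. The sketch below shows one way it might be wired up, again assuming the OpenAI Python SDK; the source file, model name, and question count are hypothetical placeholders, while the constraint wording follows his description above.

```python
# Minimal sketch of the multiple-choice question prompt described above.
# Assumes the OpenAI Python SDK; the source file, model name, and question
# count are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

# Hypothetical excerpt of a public report used as source material.
with open("esg_report_excerpt.txt") as f:
    source_text = f.read()

prompt = f"""Act as an instructional designer.
Using only the source text below, write 5 multiple-choice questions.
For each question: include two distractors and one correct answer,
indicate which answer is correct, give a short explanation,
and make sure all answer options are similar in length.

Source text:
{source_text}"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The workshop fix for PowerPoint slides is the same move: spell out the target format (heading, bullets) inside the prompt rather than assuming the model will infer it.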

Mike Veny: Josh, thank you so much for this interview. I’ve gotten quite a few things out of it, and I wanted to share with you, Randy, some of my takeaways. One that’s really important for all of us using AI right now is to know your craft and be specific when you are crafting a prompt. Be as specific as possible. Recently, I had to write one where I asked it to be warm and welcoming but very visual in the description of something. That really, really helped, rather than just saying, do this, in one sentence. The other thing is, I felt like I’ve been in this conversation before, 20 or 25 years ago, when the World Wide Web started to become really popular. No one knew what to do with it, no one knew where it was going, and we were predicting all these things. Nowadays we all use Google, right? We Google things all the time. We use it in our research, we quote things, we’ve slowly developed ethics around it and use it a certain way, and the internet is still evolving. I just want to encourage the listeners out there to think of this whole AI conversation like that. It’s an evolving conversation that we don’t know all the answers to, but there are certain things that we can take from the past, take from success stories, and use to better our learning experiences.

Randy Bowman: You are correct. So, my background is not learning and development. When AI first came out, everyone was scared of it, and many still are. I was like, this is the third major revolution I’ve seen in technology in my lifetime. We can’t put computers in classrooms, and now every classroom has computers. We can’t let students use the internet, and now we don’t even give kids textbooks anymore; we give them Chromebooks and have them use the internet. That is going to be the evolution of AI, where it becomes so integrated into our fabric that we use it everywhere. What do you think about that?

Josh Cavalier: Yeah, I’ve been there. When I first started my career, I was doing mainframe training, and then we got to the PC, where I would teach individuals how to use a mouse. I remember doing mouse skills courses. Having gone through the internet, mobile, social, and now AI, it is just another technology. At the end of the day, we’re still human. Humans haven’t changed, but now we have the opportunity to have a partner that makes us show up better. I’m an optimist. I realize that this technology could be used for nefarious reasons, but my hope is that guardrails get put in place, and that on the other side of this, when we think about our profession and what we do day in, day out, it’s about human performance. Those mundane things that we do day in and day out are going to be done by AI, which is going to allow even more time for humans to connect: coaching, one-on-one training, 360 reviews, whatever. I do believe that it’s going to be a positive outcome with AI when it comes to L&D and human performance.

Randy Bowman: Thank you so much, Josh. I think you’ve hit the nail on the head. It is a tool, and it’s a tool that’s going to allow us to be more human, to be more creative, to do what AI can’t do. And I now have four learning objectives for how to cook a steak. It’s right here in my chat: by the end of the training, participants will be able to analyze and compare the effects of three grilling methods (charcoal, gas, and electric) on steak flavor and texture, articulating their findings in a structured group discussion. So, I’m eager to get to that class!

Josh Cavalier: Yeah, just don’t cook it over 140. You’ll be all right.

Randy Bowman: Right!

Mike Veny: Okay. Well, as we wrap up today’s discussion on enhancing AI output through prompt engineering, we’d love to hear from you. What prompts are you using in the creation of your CET programs? Feel free to share your discussions or even your prompts on IACET’s social media channels. Your stories can provide invaluable lessons and inspiration for others navigating similar paths. And don’t forget, you can submit topic ideas, suggestions for guests, and other feedback on the CET Talks podcast page of the IACET.org website. We certainly hope you’ll subscribe to this podcast on your favorite podcast platform so you don’t miss any episodes, and tell all your friends and family members about it. Thank you so much for joining us today.

Host: You’ve been listening to CET Talks, the official podcast of IACET. Don’t forget to subscribe to the podcast on Spotify, Apple Podcasts, or wherever you listen to podcasts. To learn more about IACET, visit IACET.org. That’s I-A-C-E-T.org. Thanks for listening, and we’ll be back soon with a new episode.

