
Navigating the AI Landscape: A Conversation with Dr. Kecia Ray and Adam Garry


Note: This is part of a 7-video series with Dr. Kecia Ray in conversation with industry experts on using artificial intelligence in the classroom.

Welcome back to AI Insights: Conversations with Dr. Kecia Ray, where we connect with experts in the educational technology field to stay on top of trends in artificial intelligence (AI).

In this episode Dr. Ray speaks with Adam Garry, an education consultant at StrategicEDU Consulting, with over two decades of experience guiding educational institutions in digital and student-centered learning transformations, including a 14-year tenure leading education innovation and strategy at Dell Technologies.

AI integration in teaching practices with Dr. Kecia Ray and Adam Garry

A full transcript of the episode appears below; it has been edited for clarity.

Discover best practices for integrating AI in the classroom.

Kecia: Hello, everybody. I’m Kecia Ray and I am here to discuss our very favorite topic, AI, with the very famous and very knowledgeable Adam Garry. Adam, you all may know. You probably have been in the room with him at some point or another as he led education at Dell. He did so many great things. Our paths crossed when I was in Nashville and needed some assistance in reforming our district. I reached out to Adam.

He came in and we did some super cool stuff. I think you still work with Nashville, don’t you? 

Adam: I do. I’m doing quite a bit of work with our friends in Nashville. They’re great. 

Kecia: That’s how cool Adam is. They just keep him around forever and ever. Adam, tell everybody what you’re up to today, because you’re not with Dell anymore.

You’ve branched off and you’re doing new cool stuff. 

Adam: Yeah, sure. Thanks for the opportunity, Kecia. It’s always great to see you. Today I have my own company. It’s called Strategic EDU, and we are very focused on helping school systems in the U.S. and Canada roll out generative AI in safe and secure ways, but also allow for innovation, creativity, and curiosity.

We have some executive consulting work that we do. I’ve also been doing a bunch of keynotes and just 1-on-1 coaching and consulting. We’re working very much at the superintendent cabinet level and district level, so that we get all that secured to be able to roll out to the teachers and the school leaders.

But I’ve met some amazing folks along the way, very excited. But the main thrust of it is ensuring that folks understand this is an emerging technology. There’s risks and possibilities involved, and we have to enter these spaces discussing both of those things. 

Kecia: Yeah, one of the things that I love to do when I’m just out and about doing either keynotes or working with districts or even companies is lay the kind of groundwork of AI isn’t a new thing.

It’s been around since the 60s, late 50s/early 60s. It’s not new, but what we’ve done or what it’s done with the generative component is new and it’s taken on a life of its own, so to speak. I think that is what the buzz is today. But when we’re developing policies and things—and you’re probably seeing this in your work in districts—we name everything under the umbrella of AI, not recognizing that AI has really been around, and we’ve probably been using it in a lot of our divisions, for a very long time. As you’re working with districts, how do you lay the groundwork for just the AI conversation in general? 

Adam: Yeah, and a lot to unpack there. Yes, certainly, what we do on day one is have this conversation that AI is not magic.

We bring them through the stages, like you just said: the 1950s brought this conversation around, I wonder if computers could think like a human? What would that actually look like? That’s the Turing test type of thing. We’re just opening their eyes to stuff, but it’s very activity based.

We have some conversations. They do activities, but I walk them through a timeline from machine learning to deep learning to natural language processing, which, quite honestly, I really help them unpack, because to me that is the transferable skill that we’ll work on for the next three or four days and for the rest of their lives. It really opens their eyes when I say in the future—when I say future, probably six months or a year from now—you won’t go edit this. You’ll just say to the computer, I want to do this, and it just does it, because that’s natural language processing.

That, I think, is the key to things. Then I get into computer vision, and lastly into generative AI. I show them how it all fits together: to be able to utilize generative AI, you needed to go through that timeline. And then from there we do some activities so that they understand why machine learning is so important to a generative approach.

We’re just unpacking and having a lot of conversations, but they do know, by the end of it, that yes, this is a giant prediction engine. I’m not saying that this thing is thinking—there is technology behind it, and here’s how it works. They understand what a GPT is, what the transformer technology does, and how it works.

It’s about an hour and a half worth of stuff that we do, and then we take it and we directly apply it. Okay, what does this mean for education? These things hallucinate. What does that mean? It’s not Google. How do we make sure we’re curious with these things, right? And I think that’s been the approach we’ve used and it’s been pretty successful for the past year-and-a-half/two years.

Kecia: When we think about what’s going on in the realm of AI and how districts are integrating it into their work from a process side, like what they’re doing around standards development, or human resources, or even transportation and food services, there’s also this realm of what we do with kids and how we help kids understand what AI is and how AI is entering our world in such a way that they may not recognize it. How do you work with schools and districts on just that awareness for the students and the learners?

Adam: Do you mind if I hit the systemic part of what you’re saying first and then get into the student piece of it? All of this is grounded in the idea that schools are places for learners, right? Whether it’s the adult learner or the student as the learner.

But we do believe that, from a generative standpoint, these resources as they roll out will help systemically. And so we don’t want to limit it to the learning side. On that opening day with the cabinet, I have the CFO in the room. We’ve got HR in the room. We’ve got operations, because we want everyone to be on a path to AI literacy, and that comes back into the process as it unfolds.

But as we go on from day one to day two, that second day really is policies, procedures, regulations, frameworks, guidelines. We have a whole process that we’ve built to help districts navigate that and figure out where they need to focus their time, so that when we roll this out systemically, we have all the things in place.

And it becomes evident in that process that policy outside of the state of Tennessee, to be honest with you, is something that we really don’t need to be creating right now unless someone asks for it, because it’s an emerging technology. What we do is we build guiding principles in that second day.

Then we make sure that those guiding principles really are what we could build into a policy document if needed. If you look at Nashville’s public schools, that’s literally the process we took there: we started with our guiding principles and turned them into a policy document. The reason we do that is because it helps us get to a broader understanding of what we need to focus on, and not things that will time out because the technology changes so quickly.

We build all these things and I’ve localized a lot of this because I don’t want to walk in and say, look, here’s a guiding document. All you have to do is download these things and operationalize it, because what we learned is that the process is just as important as the product. When you have people having to rethink their academic integrity policy and what plagiarism means in a world with AI and human collaboration, you have to have those conversations.

As we move on through that work, there are a bunch of deliverables: they build a story, and they build their website. The next piece of this is really focused on AI literacy across the system. Early on, it was really just saying a year of learning for everyone. We split it into the different groups: your students, your teachers.

I think what’s emerged as of the last month, quite honestly, is the ability to now have enterprise-wide systems for students 13 and up with Google releasing their tools. So, it creates that sense of urgency. On the student side, we’re backwards designing essentially to say, when would you want to make these tools available to your students?

And what do we need to do from an AI literacy standpoint for your students, your teachers, your leaders, and your community to ensure that we’ll have a safe and secure rollout? What does the phasing of that look like? But also, we have a group that’s been working on what the standards for AI literacy look like for kids, right?

It’s not just saying, okay, Digital Promise said this; just copy and paste. It’s really saying there are five or six different places that have thought through this. What would be best for our community, and how do we want to roll this out? We’re being very specific. We have a whole work group right now that’s working on all the steps and things you would have to do to ensure that when you roll these tools out, students are on a path to AI literacy and have what they need to continue on that path.

Kecia: I’m sure there’s some convergence with information literacy and just ethics in general, because kids have to be old enough to understand ethics. When you’re talking about some of this generative AI, you could take somebody’s art or music, and they have to understand that’s not cool. You really shouldn’t do that, in a world where they can rip stuff off the internet all day, any day, and they just don’t think about it. It’s something we literally have to teach them now, a principle that we may not have had to concentrate so much on teaching in the past, but will from now into the future, I think.

Information literacy . . . we used to relegate that to the librarians, and I love librarians. Everybody knows that. But with its convergence with AI literacy, I think this is something that every teacher and parent needs to understand. What would you say to . . . you’re speaking of AI literacies, but how do you put that into practice?

Adam: Yeah, I know. And there’s some pushback online I’ve seen recently too on some of this. We’ll land on something that’s important. I think the main thing that I’m trying to help folks with is, first of all, the ethics and bias stuff that you talked about. I’ll come back to that in a second, but that’s true for everyone to your point.

It’s not just going to be for students. What we do in our day ones and day twos is an activity where I show them images and they have to say whether each is AI or human generated. I’ve done this with probably hundreds of groups—I even do it during the keynote. It’s just a fun thing for people.

They get about 40 percent of these right. On average, that’s what groups are doing. And so then they’re throwing their arms up, like, what do we do? What they have to recognize is that as the technology moves forward, we’re going to need technology to actually help us with this stuff.

You can’t count six fingers and seven arms anymore. That’s how much it’s evolved. And then you get into video, right? Then you get into audio. If you look across what we’ve done in school systems and universities forever, it’s always been about whether you can Google something and read the information you find on the web.

We’ve never really paid attention to images or video or audio. It’s a redesign of all of that for the adults and the kids. You can call it AI literacy. You can call it digital literacy with AI components to it. It doesn’t matter to us. The umbrella could always be under digital. But the idea of focusing on bias and ethics is so important, and it’s built into what we do in our day 1 and day 2.

It’s built into every goal that we build for AI literacy across the board. It’s helping school districts understand and re-examine things. To your point from the start, they’ve been using AI and these systems for a long time, but have never asked the question: okay, [name company], how is your algorithm biased?

Tell me what you’ve done to ensure that there is no bias in an algorithm. And what you’re generally met with is that’s our IP, so we can’t tell you that. That’s not transparent. Those are terms we’re teaching people that they hadn’t really thought of before: interpretability, explainability, transparency.

What does that look like in these models? How do you evaluate these things? How do you build simple tools so teachers understand, say, that they probably shouldn’t use a tool that’s asking for personally identifiable information? All of that comes into the ethics and the bias stuff that we’re trying to teach them.

But I think the only way you can do this is by having them engage in the activities and the conversations to build the things. By just doing a PowerPoint conversation with them, or giving them a document that they can roll out to everyone that says this is ethical behavior without the conversation . . .

Kecia: Yeah. It has to be immersive. I think it has to be something. . .

AI’s presence is everywhere in our society, and we don’t notice it. It’s so common that we don’t notice it. You have to do an activity, an immersive activity, in my opinion, which I know is what you’re probably doing, to help them see it. It’s like that exercise where you’re asked, what do you see in the picture?

And they don’t see what the picture really is because it’s like an illusion. And that to me is how AI is. It’s so evident everywhere. We don’t notice it. 

Technology in general is becoming that way. You can’t go to McDonald’s without ordering from a computer.

Everything is so digitized in our society that we’re not able to differentiate human from technology anymore. I’m on the board of the human intelligence movement for that purpose. Let’s not forget, humans created this technology.

And it’s up to humans to not let the technology get away from us. 

Adam: Those are all good points. I’ve read some stuff on the alignment problem. I don’t know if you read that book. But the argument is that it can’t really take over society because we’ll make sure it’s trained on our values.

You turn to the three people around you and say, what are your values? I don’t know that we have alignment as a society on what the values should be. And then you look around the globe, and you have very different alignments on values. But I think the consensus has been: look, if this thing goes terribly bad, it’s because of something the humans did.

The humans are the ones that will mess this up. We have a very deep conversation about humans in the loop, and it literally shows up in every step of what we’re doing with people and their guiding principles. We want human-centric, or human in the loop. Part of that, as I explain to them, is that at some point someone’s going to come in to sell you something, and it’s going to seem amazing.

And you’re going to ask the question: Is there a point where the human can either turn this off or intervene? And if they say no, then you pretty much know that this is not following your guiding principles. Or we’ve talked a lot about this idea of uploading a rubric with writing and having it score.

But then I bring them back to: didn’t I just teach you that you’re going to get a different answer every time? That’s the way the technology works. What do you think is going to happen when these things aren’t calibrated and you’re just using an LLM? And I show them something Leon Furze did out of Australia, where he ran the same essay through five times: same rubric, different names, five different scores.

Getting people to recognize, yes, we want to augment some stuff, but we have to keep humans in the loop for certain things. Giving feedback would be amazing. Scoring at this point? Probably not the best way to do it just yet until we can calibrate to the human. Which then opens up the whole other question, which is the humans don’t always score the same way.

How do we remove that noise from the process with the AI in the calibration? That’s where you can use the tools to your advantage. 

Kecia: How do we think about assessment? That’s my early research. My dissertation in the first few years that I was in academia was all around assessment. Super boring topic to most people, but very interesting to me because assessments define a path.

They defined my path. I didn’t perform well on assessments. I was told in high school, you’re going to go do this, and it wasn’t what I wanted to do at all. I had to fight the system to get into college even because of an assessment. I see that possibility with . . . maybe not taking the humans out of the loop.

I agree with you a thousand percent, even though that’s not a percentage, that you’ve got to put humans and keep them in the loop. Recognizing the value of individualization within our humankind and that everybody’s essay might not look the same. It might not have the same components.

That’s probably okay. Things that are uniquely human, we might not need to standardize on. Just changing the conversation, if you will, around that. I think writing is an intimate, expressive act that we all do as humans. It’s very insulting to me when my editor comes back and says, this isn’t right.

And I’m like, what is wrong about it? They can’t really tell me. It just doesn’t feel right because it doesn’t align with what they would do or say. It’s very subjective. AI doesn’t take any of that subjectivity out of the equation at all. 

Adam: Daniel Kahneman’s work on noise, and what they did with judges and all that. There is a fair amount of at least the human bias that you can remove from some of this with AI, but in all of his research, what he was presenting is: let the AI give the initial, and then let the human be the final. How do we augment with some of that stuff to make sure kids get the feedback they want? But what’s been coming up in the assessment conversations with school districts is very much a focus more on process than product.

They’re starting to come to these conclusions that maybe we’ve been so focused on the end product that we haven’t focused on process. And these resources will actually help us be very much more focused on, long term, how we’re helping kids and how they’re having choice in that process of learning and not having to say, it’s just easier to do it this way.

Let’s give them a multiple choice because that’s easy to grade. I think that’s going to be a huge kind of benefit to the learner moving forward, if we can really shift that conversation more to process. The uniquely human stuff is also a really cool topic, because what I’ve been doing is . . . did you read Charles Bedell’s book that he published last year?

He did this whole thing on curriculum with generative AI. One of the things he did is he created this matrix. The humans did this, not the AI, and it said if you looked across . . .  Think of a portrait of a graduate, Kecia. You looked across those things and then you broke them down in discrete skills.

So, you took critical thinking and you broke it down into four or five discrete skills. He had this matrix and essentially it said is AI already better than the human? Can AI and the human kind of work together today? And in five years, will AI surpass the human? Again, as humans, they filled this thing out.

I did a similar thing with Claude, and I took North Carolina’s profile of a graduate and I put it in there with the matrix. And the thing when I show this to groups that I think is interesting is on critical thinking, which we consider a very uniquely human skill, the AI was like, maybe I’m already better than you in three of the four areas.

But empathy . . . the AI recognized, I can sound empathetic, but I don’t have the capacity for empathy. So it was all “no.” Then, when you looked at that middle category, can AI and the human work together? Literally every single skill was “yes.” I think we’re using that in the conversation to say: let’s think about assessment.

How does that change? How do we redefine original work when we’re saying to kids in the process, human and AI could work together? But maybe as the teacher, I’m going to say, AI is good for brainstorming here because I want to be able to assess the uniquely human part of this time. So don’t use it for this.

I know you can if you want to, but that’s not what I need to do as the teacher. I need to be able to assess that and build that trust with the kids in that process. 

Kecia: Help them see the benefit of using AI and then where they should say, I’m going to use it to get me to here, but not here.

I have a 16- and 20-year-old niece and nephew, respectively. They’re in high school and college. Both of them confess that they use AI as a prompt to get them going. If they have a paper that they have to write, they put the topic in and the requirements of the paper and they see what AI generates for them, just to get them going on their thinking.

Then, after they produce something, they run it back through to see, do I have all my grammar right? Where do I need to make corrections? Where does it not flow, et cetera, et cetera. I asked them, do you feel like you’re cheating? They said, no, we feel like we’re more productive because we have a tool to work with.

I think back when I was teaching writing and the writing process back in the day. We always paired kids with a writing partner. I don’t know if that’s how they do it today, because that’s been a long time. But I still think that is a good way. Even when I write, I go to writing groups. I’m part of a writing group. There’s six other authors there. We meet. We look at each other’s stuff. It’s a very productive way to write. I think pairing yourself with a digital tool isn’t a bad thing for writing. I think that’s a perfect place for AI to be of all the places. What are your thoughts on that, my friend? 

Adam: Let’s break that down into two different things.

First of all, cheating. I did a keynote for high school teachers in a district and I started out with that slide that says 35 percent of the teachers in that room think that this will do more harm than good. And they were smiling and they thought it was funny. I was like, look, I know the room I’m in, right?

Let’s talk about it. We need to change the narrative here that this is about academic integrity. We know what the Stanford study said, and it didn’t change after ChatGPT: 60 to 70 percent of kids were cheating. Let’s get to the root cause of that. By the way, I get it. This could make it a lot easier.

But kids have been using Photomath for years. It’s the number three most downloaded app in the App Store. The kids have been using AI. They’ve been cheating. Look at what happened to Chegg: its revenue is down 99 percent since ChatGPT was released. Let’s flip the script.

Let’s say kids want to learn, and those that are cheating, they will be outliers. We’ll ensure that we have processes and policies in place to help them understand why it’s not beneficial to them. If they continue to break those rules, that’s why we have the policies we have. But we can’t start with zeros right off the bat.

That’s not going to teach anyone anything. Building out those pieces is super important at a system level. On the writing stuff, I’ve not had any pushback yet, but I’ll say, okay, I get it. You want to talk about productive struggle. You want to talk about writing. How many of your classrooms do you walk into, or how many kids leave your system, as proficient or amazing writers? No, they write in formulaic ways that no one uses in the real world. In fact, we laugh at these AI things because they’re so formulaic. That’s what you’re teaching the humans. Why do you think they’re so formulaic? Because most of the writing on the web looks like that crappy stuff, right?

My whole point is, how do we use this for those kids that stare at the blank screen, that can’t get started? How do we label this stuff the right way? OpenAI actually released something last week for universities on all the ways to use AI in the writing process. So I took it and re-swizzled it for K–12.

I sent it out and said, hey, folks, here’s a good way to help kids use it during the writing process, labeling it and showing them the proper way to use it. That’s the only way we’re going to get around this. To ignore the fact that they can use these tools to just write the whole thing, and then run it through things like undetectable-AI tools to get around it, is really just sticking our heads in the sand.

Kecia: And some of those tools don’t work. I was clipped for . . . one of my editors got me. Said I was using AI, which with this particular publication is not allowed. They said, you’re using AI. I said, what are you talking about? They said this language is the same as something that is found on the internet.

I said show me. It was my piece. It was my own piece on the internet. I basically was in trouble for being me. It was crazy. I was livid. I published it out there, and I’m talking like myself.

Adam: That’s a great story. One other thing, just given your background. The thing I also want to be clear about, in the work that we’re doing and that you’re doing: we want people to recognize there is the ability to use these tools in ways that will completely help with innovation in a school system. However, even if you have the most innovative superintendent on the planet, who wants to do things and blow up the system, there still needs to be movement at the state level on some of these assessments.

There has to be movement in the way people can use money and fund things, so these models can actually augment in ways that teachers will feel comfortable with, that will let them be innovative, and that school leaders will feel comfortable with. There has to be some movement in that system, or we are going to be in a place where we have pockets of excellence and we don’t have true transformation of the system.

Kecia: I think we’ve overcorrected on assessment anyway. Assessment, in its original design, when we incorporated it into our process of teaching and learning, was a checkpoint for teachers to know if what they were doing for instruction was hitting the mark.

It wasn’t designed initially to be an evaluation of a student. It was: is the teacher doing what they need to be doing so that the students understand this concept or skill? It’s part of the instructional process, not the learning process, but we use it to measure. That’s #KeciasPetPeeve.

Adam: This is what I share with districts. Imagine, not a year from now but tomorrow, that you just walked into a classroom and said to the kids, look, as you’re learning, when you think that you’re ready to do some sort of formative assessment, here’s how you prompt engineer this thing and just ask it, right?

We’ll have systems really soon that all that information can flow back to the teacher. We don’t want it to just be in the ether. But the reality is right now, formative happens when the teacher thinks the average kid needs to be assessed. It’s not based on when kids on their own journey need to be assessed.

When we start empowering kids in that formative journey, that’s going to open up the floodgates. Because then you can get to performance-based things and competency-based models much easier because less of a lift is on the teacher and more of the lift is on the learner, which is where it should be in the process anyway.

Kecia: In their own way, in their own time. When I’m confident to be assessed or measured, then I’m going to do it. Ella, my sweet little daughter, she’s in kindergarten. I’ve been working with her on reading. And I’m thinking my poor child cannot read a thing.

I don’t know what the deal is. I’m doing all the phonics. Her teacher’s working with her, but she won’t read. And then the other day we’re driving down the road. She reads a whole sentence on a billboard. I’m like, what the heck? And here’s what I found out. I literally just found this out yesterday.

She won’t read aloud because she doesn’t know every word. 

Adam: Oh, so it’s being vulnerable in that process of . . . interesting. 

Kecia: Yeah. So, she’ll say, I can’t. 

Adam: This whole internal thing going on. Oh, wow. 

Kecia: Yes. 

Adam: I just saw a new tool that’s about to be released where it’s an AI tool in the reading process for prekindergarten and kindergarten, where they’re just talking to the AI.

I wonder if something like that actually for someone like her, because it’s not human to human, might be an interesting way to bring some of that out. Wow. 

Kecia: I know. 

Adam: Super interesting. 

Kecia: And then yesterday she was reading words. I said, what is this word? She said today, remote. Huge words.

She will not read the book. 

Adam: That’s crazy. 

Kecia: Unless she knows every word in a sentence. She’s in a virtual school, by the way. I couldn’t imagine if she were in a traditional school, somebody picking that up. No shade on our amazing teachers, but you’ve got so much. You’ve got 20 kids in a classroom. Ella’s well behaved.

You’re not going to pick that up from her. You’re going to have a conference with me about how she’s a non-reader. 

Adam: Yeah. What you’re describing though, is maybe one of the ways moving forward, to lessen the load on the teacher, but find and diagnose these things a little bit earlier so she’s not labeled that way. And now the teacher is focused on strategies to get her to read aloud. 

Kecia: Yeah, I think AI is a phenomenal tool for education. I agree with you. We need to be careful because humans create it and sometimes at some of our cores, there’s greed. And things are put out into the universe that aren’t good AI. Those things we need to not adopt. But having you out there and so many of the other people that I’ve interviewed out there, educating people on the good, the bad, and the ugly around the use of AI, I think will help us adopt it in such a way that’s going to really be transformative for the future of education.

And if systems can just be patient enough to go through the struggle that you are walking them through in your work, I think they’ll be better for it. 

Adam: Can I just say two things to wrap some of this up? One is on the tools: I agree with you 100 percent. I’m not going to name tools.

I am going to say that what I’m trying to get people focused on is the transferable skills: questioning, prompting the right way. Now, you and I both know the way we prompt engineer today might not be what we need to do two years from now. However, just bringing questioning back into the learning process is huge.

Helping people be curious and questioning, not buying tools today that might not be around in six months. All of those things, and getting very focused on a few things that they’re probably already paying for. Getting people to learn those skills of natural language processing, I think, is more important than having the . . . Remember Web 2.0?

Just crazy. Fifty-seven different tools. You don’t need that. My talk track with school districts is: please don’t spend any money for the next six months unless you find something you 100 percent have to have, because it solves a problem for you and you can’t get it through anything you’ve already purchased.

The other thing I’ll say is the thing that we’re tackling, which I think most folks are not right now, is rural education. I learned a ton working with these small districts. What we do is bring them together in collaboratives, so eight or nine districts work together. Sometimes we’ll bring them in with large or medium-sized districts as well.

The reality is they have less resources, but they have the same amount of work that has to be done. Combining their people across districts to build things and then helping them localize it is super important. They are just as passionate about their kids. They have really intelligent people that want to do the right thing.

They just don’t have a lot of resources. So, my kind of push on this is to remember there are models we can build that will help folks in these smaller districts do this in safe and secure ways and be innovative and creative, and they want to be. Let’s not leave them out of the conversation as we move forward.

Kecia: Yeah, I love that. Everyone needs an opportunity to succeed no matter where they live or where they come from. Everyone. Education can be the great equalizer if we allow it to be, and if we empower our youth through educational processes. But, we’re in a new day in a new time, new era, so to speak.

It’ll be interesting to see where we are over the next few years. I am so thankful for your work, though, and the pivot that you made from industry. Not that you didn’t do a fabulous job in the industry lane, but you are a natural born teacher and leader. And I think that your work here, in the space you’re in now as a consultant, is exponentially valuable to the people you’re working with, and I appreciate it. And I appreciate you being on my little podcast here so that everybody can hear what you’re doing, and we can expose them to some of your thinking and your work around AI, because it’s really super beneficial for districts.

Adam: Thank you for your kind words, Kecia. I appreciate it. 

Kecia: I appreciate you. Thank you so much. And everybody, check out our other videos and make sure that you’re plugged into the learning cycle on AI. I’ll talk to you later.

***

Prepare students for lifelong literacy with Writable, a program designed to help students in Grades 3–12 become proficient writers through AI-powered writing feedback and daily practice.
