
Policy Perspectives on AI: A Conversation with Dr. Kecia Ray and Susan Gentz


Note: This is part of a 7-video series with Dr. Kecia Ray in conversation with industry experts on using artificial intelligence in the classroom.

Welcome back to AI Insights: Conversations with Dr. Kecia Ray, where we connect with experts in the educational technology field to stay on top of trends in artificial intelligence (AI).

On today's episode, Dr. Kecia Ray is joined by Susan Gentz, an education policy expert and founder of BSG Strategies, with experience as a legislative aide in the Iowa House of Representatives, a staffer in the U.S. Senate, and as Deputy Executive Director for the Center for Digital Education.

A conversation on policy perspectives with Dr. Kecia Ray and Susan Gentz

A full transcript of the episode appears below; it has been edited for clarity.

Discover best practices for integrating AI in the classroom.

Kecia: Welcome! We are here with the incredible Susan Gentz, who has a different slant on AI because her background is policy. So she is going to focus on the policy lens of AI and what she’s seeing across the country with regard to AI policy.

Welcome, Susan. We’re so glad you could be here with us today. 

Susan: Thank you so much for having me. I love to talk about policy with anyone who will listen.  

Kecia: I know you do. Susan happens to also be my beloved partner with K20Connect. Tell them a little bit about your background, because you have a really rich background in policy.

Susan: I didn’t come into education through the traditional route of going through the classroom or through moving up through central office or any of that fun stuff. But when I was in Washington, D.C., I worked in the United States Senate where the Every Student Succeeds Act was going through. And I got so excited because I was a naive young staffer and thought, this law is going to change everything!

We are doing so much good for all of the educators and students in our country. I quickly learned that it takes a little more than policy to truly have systemic change. It’s an important piece and lever, of course. You need to have the right policies in place to innovate and to use the emerging technology in great ways, but it certainly did not transform the entire system the way my 22-year-old mind thought that it was going to.

Kecia: I can only imagine. I know during that time, I also was like, “Oh, this is going to be so great, because we’re moving away from a rigid structure, if you will, of support and accountability to a more flexible one.” But that didn’t happen. 

Susan: It did at least give some people a pep in their step. We’re like, okay, we can keep doing this. We’re not going to get in trouble. 

Kecia: Speaking of doing things out of the norm and trying to have a pep in your step, our conversation has been around AI with our amazing community of learners.

One thing we’re trying to wrap our arms around is AI in general and the impact it’s had in education in totality.

So with students, parents, teachers, and building and district staff: do you see a response to AI from stakeholders, and can you differentiate between the stakeholders: who loves it, who doesn’t, who’s worried, who’s not? 

Susan: I guess we can start with maybe the public sector people, going from district and state level and even federal policy, right? 

Those are the people who I think for the most part can see that this is a great tool to use, but are worried about bad headlines. It can be very scary to use AI tools without knowing what some of the unintended consequences are, or if you even have the right policies in place.

So I would say, from that public sector side, there is a general feeling of: we have to be preparing our students to use this, and we see where the workforce is going. You see where those trends are, but at the same time they’re still a little bit cautious because there are a lot of unknowns out there. The stakes are high when it comes to public outcomes, and that can be a scary tension.

Kecia: I know. Some of the people that we’re interviewing are superintendents that are taking this work on at the district level and they are starting with establishing. . . I’ll call them protocols, processes that they’re using to govern what stakeholders are going to be required to do in order to bring AI into the district.

And that would include a teacher, a student, whomever. When they’re talking about it, they’re talking about how they’ve engaged their stakeholders in the process. But I’m wondering what preparation stakeholders might need if they’re going to be part of that conversation. I know some people come with a deep understanding of AI and some come with a shallow one. I know a lot of people I talked to don’t realize AI has been around since the sixties.

This isn’t a new thing, and plenty of processes and legislation have been put in place along the way related to AI. We just didn’t call it that at the time. 

Susan: Yeah, for sure. Where we’re seeing states start with this is task forces, because they don’t want to come out guns blazing—this is how we’re going to do it and this is what we’re going to do. So they’re trying to engage their own stakeholders through these task forces, and we’re even starting to see a couple of pilots testing how policies work when they’re actually implemented and put into place. But really, if you track legislation across the states for the last couple of years on AI, it’s been a lot of preliminary, let’s lay the groundwork, get the lay of the land before we even try to regulate anything.

Just a lot of questions and a lot of different perspectives on it. 

Kecia: One thing that I talk about, as you well know, is it’s hard to regulate technology because it changes all the time. You can pass a law this year that will make no sense next year if you focus purely on the technology. What recommendations might you have for districts or even state leaders that are grappling with trying to regulate this new technology of AI and what they might need to consider before doing that?

Susan: My biggest recommendation is to definitely avoid those buzzwords. Because something that wasn’t political can become very political with a word. And if you have that in your legislation, all of a sudden what was once a nonpartisan bill or even a bipartisan bill now has lost support from one side.

And so if you’re trying to regulate this, you need to make sure you’re using words that are very well defined, and also ones that can’t just change to become a one-sided issue. Because certainly we need good governance for everyone with this emerging technology. The other thing I will say is don’t name specific brands. I would really hate to see a law say ChatGPT or something else that is very common, because those systems can change things overnight.

You want to make sure you’re doing something that can be consistent no matter what the provider is. It’s hard because I just said you want terms that are very well defined, but to some extent you also want some broad language too. That is a hard balance to find, so hopefully your task forces are deciding which things need to be crystal clear and which ones can have a little bit more room on either side.

Kecia: What would a superintendent or state leader need to do to prepare that task force? What kind of homework might you recommend for them? 

Susan: First of all, talk with the companies in your state. Ask them how they’re using AI. What should students be using it for in the classroom?

That would be helpful to prepare them for a future career path. I would also say, when you’re building your task force, make sure the voices aren’t the same voices that have been at the table for a long time. Get some fresh voices in there: people who are dealing with AI in a more modern, generative sense, who are using it in a different way than people who have been on previous technology task forces, and who see how it’s used every day. I think getting that broad view, with many different perspectives that are new and fresh, would be good. And as you require legislators to be on it, find legislators who maybe aren’t so excited about it and then find ones that are, because if you can find something that both agree on, it’s probably a good policy.

So, just making sure that the voices you have on the task force are a little bit different than you had before. 

Kecia: Can you think of any states that might be ahead of the curve with their adoption or acceptance of AI and maybe other states could look to as a best practice, or at least an effective practice?

Susan: Yeah. So far today, as of right now, so whenever you’re watching this may no longer be true, but as of today, there are 22 states that have some sort of state policy on AI, and they are very different. Some of them, they don’t actually say AI. Like Florida has a sextortion bill, which says you can’t use deepfake people, which is what their law is going after.

If you look it up, that’s not technically listed as an AI bill, but it really stems from the use of AI. I would say, depending on what you’re trying to regulate, you would look at different states, because of how they’re doing that. What’s really interesting to me is that out of the 22 states that have a state policy, a lot of the ones that we look to, like the bigger states—Texas, Florida, New York, Pennsylvania—Florida has that sextortion one, but that’s it.

The rest of them, it’s still an open slate, which again, if people look, there’s task forces and there’s different things happening there. Overall, there is still a lot of room to grow with states and having anything at the state level on the books. Of course, if you look down at the district level, school boards are trying to address this challenge in different ways but the technology is changing so quickly that they may have something on the books that doesn’t actually cover the extent of what it needs to at this point.

Kecia: I know you mentioned school boards. The Missouri School Board Association and the Tennessee School Board Association are trying to develop handbooks and guidance for district leaders around that. You might want to check out their websites to see if it’s posted publicly.

If not, you can message me on LinkedIn and I’ll try to make that resource available to you. Let me ask you one final question, and then you can have any kind of closing thoughts that you might want to share with everybody. If a district is wanting to develop a policy specific to AI, where would you suggest they begin?

Susan: I would suggest looking at some of those policies that have been developed years ago. There is no reason to start from scratch on this. Even though we do talk about it as a new technology and emerging technology, there is still something that you can build on. And that is something that has been already set through legal.

It’s already been voted on. I would definitely start by looking at some policies from the past. There’s no need to recreate the wheel. The other thing I would do is look at other states and see, through their state school board associations, whether there’s anything public that you could take that would fit your district.

There are also a few people now offering AI ethics courses. I think that’s really important for board members who are maybe not as familiar with some of these things. It’s like digital citizenship: making sure that everybody understands the impacts it can have. I think those kinds of policies are really student-first policies, because you need to think about the AI that you’re using. Mental health is already a huge conversation for our students. We need to make sure that if they’re using even more AI tools, that’s not making it a bigger issue than it already is. As you think about those policies, make sure it’s about students first and how it’s impacting them personally and academically.

I think those are some of the ways that you should really get started, but I guess my number one thing is just never recreate the wheel. Always try to find something that you can build off of. 

Kecia: We’ve been around long enough that we start from something. Is there anything you haven’t had the chance to share with our incredible audience that you would like to share with them?

Susan: Something that we didn’t really talk about much is funding. AI does have the ability to lessen some costs and make things more efficient for the district. And I think that’s something where we could really use this as a great tool.

But it’s going to take a lot of care to make sure you don’t get that bad headline. I think there’s a lot more in the area of finance and funding that we’re going to see we can do with AI and budgeting. Even in terms of asking: is our district actually using this product that we’ve paid for? And finding different ways to open up different funding streams. I think that’s a use we’re going to see a lot more, especially as ESSER funds just expired approximately 10 days ago, so we’re gonna . . . again, I don’t know when you’re watching this, so keep that in mind, but ESSER ending and falling off is huge for finances. I think AI is going to come in and really help districts pivot and plan how they’re going to use the resources they have.

Kecia: Yeah, I agree. And I think the other piece is the students’ use of it, and helping students understand appropriate/inappropriate, like you said just like digital citizenship, but making sure they’re aware that when they copy something from AI, they need to note that the source began as an AI source and they massaged it. But we have just begun to see how we’re even going to be able to cite AI.

Those kinds of things, because it’s not going away, but if you pretend like you didn’t use it, that could border on plagiarism, maybe, or at least close to that? 

Susan: If it’s taking from data sets, then that data came from someone. So I agree. Is APA working on that?

Kecia: I know on a grant I’m working on, we cite it. We cite it like we would cite a website. We make sure to acknowledge when we’ve used AI to generate notes from a meeting, because a person didn’t create those notes.

They were created from machine learning. 

Susan: That’s a real academic integrity question: how and when do you . . . what’s the standard for when you . . . is it if you copy so many characters? Coming up with those policies, it will be interesting to see how that plays out and who gets to determine it.

Kecia: I know. 

Susan: Ask AI how it would like to be cited? Could it tell us? 

Kecia: I think we should. Thank you so much for taking the time out of your day. I know you’re super busy and I’m really appreciative of you sharing your amazing insight on policy, especially with our audience. To our audience, if you want to reach out to Susan, you can find her on the K20Connect website at k20connect.com.

Susan: Absolutely. Thank you for inviting me. 

***

Prepare students for lifelong literacy with Writable, a program designed to help students in Grades 3–12 become proficient writers through AI-powered writing feedback and daily practice.
