On Monday, students will fill my high school classroom for the first time this school year. Each period the room will be brimming with the usual mix of emotions and experiences that 30+ juniors and seniors bring in the door.
I’ll be bringing in my own too. I’m feeling my usual emotional mixture: excitement about a fresh start and worry about making sure I do right by the young people in my care. (And my usual naive hope that somehow I’ll sleep enough this school year — though maybe it’ll happen in 25-26!)
Students and I will also be bringing something else into the room: our thoughts about AI. I know I’m considering questions like these, and I suspect students are too:
When should I use AI in my work? When should I not? When I do, how should I talk about that?
How much is AI going to change the future lives and careers of high schoolers?
What will my role be in my career field in the AI era we’re entering?
I have some answers to give students — or better, some ways to help them arrive at their own answers. (I also have an AI policy to share with them.)
But before we discuss those questions, we’re first going to focus on this one:
What is your why?
In other words, students are going to reflect on their purpose, in general and in my classroom in particular.
Why purpose first?
Purpose is the horse we need to put before the AI cart.
If we start the year with AI policies or guidelines — or even thoughtful AI conversations — we’re putting the cart before the horse.
And we won’t make the progress with AI we want to, and need to, this school year.
Or, to use a metaphor that works better here, purpose is the compass that can help our students navigate their coursework — and their lives — and therefore help them direct their AI use towards the ends that make most sense for them.
Purpose for high schoolers looks different in different grades and classes, not to mention different parts of their days and lives.
But one thing remains the same across those contexts:
As educators, we need to make sure students have the best possible purposes in mind for whatever they are doing with us. That’s all the more true in the age of AI.
If students have meaningful purposes in mind, they will make meaningful use of their time with us and the tools at their disposal, including AI tools.
If they have superficial purposes, it’s no wonder they sometimes make superficial use of both time and tools.
Here’s how I think about helping students develop purpose at the start of the year.
An example from my English class
I taught English for 18 years before shifting full-time to the purpose- and career-exploration program I’ve been running for 6 years (hence the kinds of posters now hanging in my classroom, shown above).
For my final few years of English I designed and taught a new 12th grade course. I’m using that class as the context for this example, in the hopes that makes it easier to translate these ideas into your own class and context.
First semester of that course we read James Baldwin, bell hooks, Robert Hayden, Gloria Naylor, Audre Lorde, and Lucille Clifton, and we watched the film Pariah. Second semester we read Junot Díaz, Elizabeth Acevedo, Gloria Anzaldúa, Juan Rulfo, and Denice Frohman, and we watched the film Before Night Falls. (The class was a lot of fun to teach. Just imagine the conversations 17-year-olds had about those authors.)
If I were still teaching that course, here’s how I would couple a conversation about purpose with a conversation about AI:
Welcome to our 12th grade English class! I’m glad you’re all here.
Before we talk about our syllabus or our first novel — and before we talk about how to think about AI in our class — I want to talk about you.
Ideally, your experience in this class is about you. By you, I mean your life and your goals. And since we all have lives and goals in this room, by “you” I also mean “us.” So I also want to talk about our experience in this class.
My first goal for you and us is that we leave this room at the end of the year having asked some important questions and considered a variety of answers. We will do that in conversation with each other, and in conversation with the authors we’ll be reading in our course. That’s why I put their pictures on the walls of our classroom — they’re here with us too, in spirit and in their words.
And my second goal is that we leave this room at the end of the year, after all those conversations, not just better readers of books but also better readers of our world, not just more ready to talk about life but also more ready to live it.
To accomplish both those goals, we need to bring our full humanity into the room. That’s what these authors around us are asking us to do; they’re showing us how by bringing their full humanity into their books. We need to be as real as we are ready to be, with each other and with ourselves.
I’m confident you can do that. I commit to doing it with you.
Here we go!
That’s the purpose part. I’d follow that with related questions for students to discuss:
What are your personal hopes and goals for the future?
What do you hope to get out of this class and these books?
What is the difference between talking about life and living it?
Then, whenever the time was right to roll out the class AI policy, it’d be easier to do:
Think back to our conversation the first day of class. What were the two goals we talked about? What commitments did we make?
I’m about to share our school/classroom AI policy with you. Before I do, please turn and talk about the same questions we talked about before. Let’s see if your answers have changed (and it’s fine if they haven’t):
What are your personal hopes and goals for the future?
What do you hope to get out of this class and these books?
What is the difference between talking about life and living it?
Now, as we read through our AI policy, ask yourself:
How does this framework for using AI help me accomplish my goals, get the most out of these books, and learn to live my best life?
Now that you’ve seen the AI policy, please share your thoughts on the question above with your partner. Then we’ll discuss as a whole class.
Reflections for your class (or school)
I’m not saying our students will all magically agree that our AI policies are ideal and will help them learn and live as well as possible. Some will disagree, and may well raise legitimate concerns. (And in some cases, we may want to take their ideas into account and revise our AI policy.)
What I am saying is that it becomes much easier to explain and discuss your AI policy when students have considered the purpose of your class and their purpose in it.
Then we’re all, educators and students, pointing in a similar direction and talking about the best way to use — or not use — AI to get there.
Some of our disorientation around AI in our classrooms stems from our deeper disorientation around the purpose of our classrooms in the AI era.
The good news is we already have stars to navigate by in these uncharted waters. They’re the same ones that, at our best as educators, we’ve always had in mind.

When you want to find those stars, just ask yourself:
What are your personal hopes and goals for the future? How does your work as an educator connect to those hopes and goals?
What are your hopes and goals for your students? How will your class and your content help them achieve those?
Once you have those answers in mind, it’s much easier to decide and explain how AI should and shouldn’t be used in your course.
Because when we have purpose in mind, it becomes easier to see AI for what it is.
AI is neither the end of education nor an educational be-all and end-all.
It’s just a new means to our evergreen ends.
Love the suggestion of grounding classroom AI policy first in students’ learning purpose. I’ve been playing with a similar concept at the school/district level. I’m calling it a “values mirror.” All districts and schools have stated values for how they approach education, e.g., preparing students for the future, integrity, student empowerment, etc. The values mirror compares the AI policy to those values. If equity is a value, does the policy promote equal opportunity or a more fragmented approach with only islands of AI innovation? If student empowerment is a value, how does that align with a policy rooted in distrust? If preparing graduates for future success is a pillar, how can integrating AI into learning not be an urgent priority?