Two AI adoption principles for principals (and all educators)
The elephant in the classroom & the new transparencies we need
Yesterday I walked into my classroom for the first time since June, slid my desk into its usual spot after our school’s summer cleaning, reorganized my books, sat down — and immediately got that I-need-to-decorate-my-classroom feeling.
If leaves coming down means fall for trees, laminated posters going up (and then going up again when you realize they weren’t correctly spaced the first time) means fall for teachers.
This fall also presents a now-familiar question to educators, one that is more acute than last year:
How should we approach AI in our schools this time?
Here are two principles guiding my work with AI this year.
1. Address the elephant in the classroom
Many of us used AI tools in our personal and academic lives last school year. Many of our students did too.
All of us, educators and students, need to start this year by addressing that elephant in the room.
Addressing it doesn’t require us to stay up on the latest AI research (though if you want to, try Stanford’s Gen AI for Education Hub) or listen to a daily AI podcast (though if you want to, try The AI Daily Brief).
For example:
Last week I led an hourlong class on professional email writing for a group of 50 or so rising high school seniors who are starting an internship program. I opened with a few framing remarks. One of them was this:
We all know AI can write work emails for you. Some of us have probably already experimented with doing that. Which is fine! You should be exploring what AI tools can and can't do for you heading into this school year.
But this is a case where even if AI can do it for you, you shouldn't let it. You want to develop your own professional voice. Every email you send is helping to create your professional brand. AI-generated emails will make your brand feel AI-generated.
It’s important to keep in mind that AI tools, LLMs in particular, are designed to give the most probable answers to our questions. That means they will make your work email as generic as possible.
I know work emails all look pretty generic to teenagers. But believe it or not, there is actually a lot of nuance to them. My work emails sound a little different than other people’s. They sound like me, Mike, just like someone else’s emails sound like them. And that means my emails give other people a sense of me as a professional; they are one important set of pixels in the larger image people have of me at work.
Here’s another way to look at it. Think about how you write in your group text chat with your friends, how you write when you’re DMing your cousin on Instagram, and how you write when you’re asking your mom for a favor (or telling her you’re running late). Those are all different messages, but they’re all you.
Which means you already know how to do this! You know how to adjust your tone and style to fit the environment you’re communicating in and stay true to who you are. Now you just need to learn how to do this for the professional environment you’re about to enter. You just need to find your professional voice.
AI can't do that for you. It can help you catch grammatical errors, or flag unprofessional language or formatting for you to change. But if you let AI write your emails, not just edit them, your professional brand will quickly become generic — when your strength is that you're unique.
School leaders and classroom teachers should be preparing their own version of that framing and then sharing their thoughts with staff and students as the year gets underway. To me the key elements of that message are:
AI is out there and can be tempting to use in some situations
Here’s when and why not to use AI
Here’s how and why to use AI correctly
Your voice/insights/creativity are more vital and important than ever in the AI era
If we don’t convey that a few times to start the year, we’re not modeling the kind of reflection and intentionality around this topic that our students need to see.
2. The new transparencies we need
Speaking of modeling:
We should be telling staff and students when and how we’re using AI tools, the same way we allude to the books that float in the background of our PDs or syllabi.
(I used Midjourney to make that video, and Gemini to make the elephant video above.)
We of course shouldn’t do this in every PD or class. But at key moments, especially as the year starts, we need to model AI transparency. That helps set a norm around transparency and allows us to demonstrate what effective and ethical AI use looks like.
Here’s what I’m planning to say to the 11th and 12th graders in my class during the first week of school:
This is the 10-day “learning loop” we’ll be using this year. Each loop will take us two weeks, or 10 school days, and in that sequence we’ll have time for all the major activities in our class.
I had this idea for a 10-day learning loop last spring, when we were wrapping up the year and I was reflecting on what I wanted to improve for the fall.
To build our loop, I first brainstormed all the parts of class that had worked best last year. I then added to that list all the new ideas I’ve been meaning to try, from my own thoughts to ideas I’ve gotten from books like the ones on my desk over there. I ended up with one big list of ideas.
I then put that list in Claude, Gemini, and GPT, and prompted each of their “deep research” tools to do the following:
"Consider both the ideas on this list and other pedagogical ideas they imply. Then look for examples of similar ideas in practice in successful educational models that have goals similar to my class's. Consider both US and international models, and models from earlier historical periods. Then suggest ideas I should emphasize, deemphasize, or add to my original list."
I then put all three deep research reports in Claude Opus 4 (this was before 4.1 was out) and asked it to synthesize them into one list of ideas.
I then compared the AI-generated list of ideas to my original list and came up with a list that combined the best of both.
I then put that new list back into all three LLMs and asked them to generate ideas for a 10-day learning loop based on the length of our class periods every day. When they were done, I told them each to “make it better.”
I then considered all the options the LLMs generated, picked the one that made the most sense for us, and revised it myself with our goals and our school in mind.
I’ll then close with this:
We’ll be using AI tools a fair amount in our class this year, and that’s how I hope you use them. Take advantage of what AI tools can do, but always remember you’re the one guiding them and you’re the one making the final decision based on what you learn from them.
At the end of the day, you’re responsible for the quality of your work, just like I’m responsible for the quality of our class.
AI is just one arrow in our quivers as we all aim at excellence this year.
I’ll say something similar to staff when I lead PDs on AI, explaining which PD elements were supported by AI tool use — and how I made use of those AI tools.
Hidden or secret AI use won’t just create subpar AI culture, it will also create subpar AI users. Transparency around AI use will do the opposite, helping everyone in the building be on the same page (floating or otherwise).
When I started teaching, we still had projectors and transparencies. Keeping your transparency pile in order and cleaning transparencies between back-to-back classes should have been Olympic sports. Not to mention pivoting in the middle of a lesson when the projector bulb burnt out…
The point of transparencies was to help everyone see how a certain concept or procedure was supposed to look. We often used them to show how an idea or process unfolded over time, as students and teachers collaborated to decide what came next.
This school year, and the AI era more broadly, will require both new kinds of transparencies and many of those same old goals.