- READ TIME: 26 min, 23 sec
April 1, 2026 Posted by: Tharani Asokan

AI Strategy In Education: Why Readiness Comes Before Rollout

A student walks out of a graphic design class where the professor just spent an hour teaching them how to critique AI-generated visuals. They're energized. They open their laptop in the hallway, pull up ChatGPT, and start experimenting.

Then they walk into their creative writing class. The syllabus says: No AI tools. Period.

Two hours later, they show up at their internship. The manager says, "Why are you holding back on AI? Push the limits."

Three settings. Three completely different rules. One confused student.

This is the quiet chaos inside American higher education right now. Institutions are still figuring out what AI means for them, trying to manage it one classroom at a time.

In AI Talks #1, we sit down with Dan Arnold, Provost Fellow for Artificial Intelligence and Director of Support & Innovation for Online Learning at Oakland University. His mandate: figure out how to strategically introduce AI across the entire university—for faculty, staff, researchers, and students—without breaking what already works.

Dan Arnold

Dan Arnold is Director of Support & Innovation for Online Learning and Provost Fellow for AI at Oakland University, where he leads institution-wide AI adoption, staff and faculty enablement, and governance, drawing on nearly 20 years across financial aid, recruitment, and EdTech.

Sathish Kumar Mariappan

Sathish Kumar Mariappan is Co-Founder of Drupal Partners, an Atlanta-based certified Drupal agency. He helps education, government, healthcare, and enterprise organizations with Drupal development, migration, AI consulting, and digital strategy, building secure, scalable digital experiences across complex ecosystems.

Episode Highlights

  • Universities should not begin AI adoption with tools. They should begin with readiness, governance, and real institutional pain points.
  • A campus-wide AI strategy cannot be one-size-fits-all because students, faculty, researchers, and administrators all have different needs.
  • Students are not only worried about cheating and policy. They are worried about jobs, career relevance, and whether they are preparing for the right future.
  • Some of the earliest AI wins in higher education may come from operations, support workflows, and retention efforts rather than classrooms.
  • Agentic tools and AI-enabled browsers are expanding cybersecurity and data risks faster than most institutions can govern them.
  • The future AI-enabled campus should be more adaptive and more supportive, but it should still keep humans at the center.

The Starting Point: Institutional Readiness Before AI Ambition

Most universities hear about AI wins at other institutions and want to replicate them immediately. Dan's argument is that ambition without infrastructure creates more problems than it solves.

Sathish: How prepared is your organization for this AI change? Is this something you guys are very anxious about or excited about?

Dan: "I'm a big believer in institutional readiness. Before we implement a change, we have to drive people toward the direction of that change. And the bigger the organization, the more complexity. Where I think higher education has been different than most industries is that we also have our teaching and learning, our critical thinking responsibilities. And how do we address those in an era of AI when agentic browsers can do a heck of a lot of damage in an online course?"

The message is clear: readiness must come before rollout. Oakland is moving toward pilot programs this summer to test AI in controlled settings, but Dan is firm on the sequencing.

Dan: "I liken it to a couple of examples I like to use. It's like building a car without wheels or building a house and starting with the attic. It makes no sense if we don't have the foundation first."

The foundation he describes goes beyond the technical. It's cultural, ethical, and procedural. It includes understanding what your tech stack can actually support, what data protections need to be in place, and whether the institution even has a shared vocabulary for what "using AI" means across departments.

Most universities are planning AI pilots without the foundation to support them.

Dan Arnold breaks down why governance, intake processes, and institutional readiness have to come first, and what happens when they don't.

The Foundational Pillars: Governance, Intake, and AI Literacy

Sathish: When you're designing a campus-wide AI strategy, what do you think are the foundational pillars?

Dan: "Certainly governance. We need to understand the broader implications for this from an IT perspective alone. What kind of tech stack do we need to bring in to ensure that we've got a secure environment for our faculty, our faculty researchers, and our student researchers as well to experiment, to play, to have those opportunities in a more controlled environment that protects our institutional data, our personally identifiable information."

Governance alone is just the starting point. Dan outlines three pillars working in tandem:

Governance and Data Security

Understanding how to govern AI, IT infrastructure, and institutional data. Ensuring compliance with FERPA, HIPAA, and internal data standards before any tool is deployed.

An Intake Process for Use Cases

A structured business case process where any department can articulate a problem statement, define the pain point, and propose how AI might help, rather than having solutions imposed from above.

Dan: "We want to make sure that we're identifying the problem and then matching the tool to the problem. Rather than having this tool, be it a large language model, and now let's go looking for problems. And to me, that's a backwards approach."

AI Literacy Across Every Stakeholder Group

Tailored literacy that recognizes different needs across different audiences.

Dan: "On February 13th, the Department of Labor released an AI literacy framework that I think is very interesting that could serve as a model for almost any organization looking to start dipping their toes more seriously into artificial intelligence."

Problem-First, Then Tool: Matching AI to Real Pain Points

This is the throughline of the entire conversation. Dan returns to it again and again: start with the problem.

Sathish: Can you talk about how you're helping departments identify where AI fits?

Dan: "Every unit, every organization, every department throughout our enterprise has got some kind of pain point. But maybe they haven't had an opportunity to develop that business case, to understand what that efficiency gain would be."

The intake process Oakland is building asks departments to clearly define their problem statement before any AI tool is evaluated. This prevents the common trap of acquiring technology for its own sake.

Dan: "In addition to that governance, we also have to have an intake process to help us prioritize projects, to understand limitations, to start to get an understanding of what that short-term and potential long-term return on investment could be as well."

Two things make this approach effective:

  • Ownership stays with the department. Each unit unpacks its own challenges and identifies where value exists, rather than having someone from the outside tell them what AI could do for them.
  • It builds institutional readiness from the ground up. When people define their own problems and see how AI could address those pain points, they become advocates for adoption rather than passive recipients of change.

The Student Dilemma: Conflicting Signals, Real Anxiety

This is one of the most revealing segments of the conversation. When Dan started talking to students about AI, he expected them to be worried about cheating accusations. They had something entirely different on their minds.

Dan: "I originally thought they were just worried about cheating, being accused of cheating. And the first time I went and talked to the group of students, they didn't mention cheating once. They're talking about jobs."

Students are asking: Am I in the right degree program? What are my job prospects going to be?

Dan: "And those are hard things to predict. Listening to deans from schools of business and economists at our own university and others, nobody really knows. They suspect there'll be some job loss, but they also suspect that there'll be some job creation as a result. And I don't have the answer for that. And I'd be hesitant about anybody who says they do."

This uncertainty alone is enough to unsettle students. But the classroom experience makes it worse with a patchwork of contradictions:

Dan: "They go to, for example, a graphic design course and we have a professor here teaching graphic design who's finding cool ways to introduce AI and critique of AI graphic design. That's going to be a professor who has more wiggle room for experimentation and will push students using this technology. Now they may go to their creative writing class after that and it may be somebody who says you can't use this whatsoever."

Three forces are pulling students in completely different directions at the same time:

  • In the classroom (pro-AI): Faculty in disciplines like graphic design are finding creative ways to use AI as a teaching tool, encouraging experimentation and critical evaluation of AI-generated work.
  • In the classroom (anti-AI): Faculty in other disciplines restrict AI entirely, sometimes without clearly communicating the reasoning behind the restriction.
  • At the internship (full speed ahead): Employers expect students to lean into AI, push boundaries, and use every tool available.

The result is students who are eager to learn, full of ideas on how to use the technology, yet stuck navigating a system that still needs to align itself.

Faculty Adoption: Early Champions, Cautious Explorers, and Those Who've Dug In

Sathish: Are the faculty excited, reluctant, or resisting these changes?

Dan: "All of those things. It's a very personal choice. The way I like to frame it is that the rate of change of AI and technology in general, we have no control over that. We're really subject to the whims of what the tech industry will do. What I, as an individual, do have control over is my rate of adoption."

Dan describes the adoption spectrum every institution is seeing right now:

Dan: "In any kind of change, you have people who get it early. They understand it early. They're your early champions. And then you have those people who see, all right, maybe there's some juice here. I want to start dipping my toes into the shallow end of the pool. And then you have people who flat out refuse. They don't want to change."

Every institution has all three groups. The spectrum looks like this:

  • Early champions who understand the technology quickly and become advocates. These are the faculty you lean into for pilot programs and peer mentoring.
  • Cautious explorers who see the potential and are willing to experiment, but want to start slowly. They need safe spaces to learn and fail without judgment.
  • Firmly planted resistors who have dug in their heels for personal, professional, or emotional reasons. Fear, anxiety, and uncertainty all play a role here, and those responses are entirely human.

Dan's approach respects every position on this spectrum:

Dan: "I won't make anybody or require or mandate anybody do anything. I believe faculty should have control over what they do in their classrooms. However, your students are going to be demanding more. They want more information. They want you to help them learn these skills too. So it's a delicate dance."

The natural pressure, he argues, will come from students demanding AI literacy, from peers modeling success, and from the widening gap between early adopters and holdouts becoming impossible to ignore.

Cybersecurity: The Agentic Browser Threat Most Campuses Are Still Unprepared For

This is where the conversation takes a sharp turn into the risks most institutions have yet to fully process.

Sathish: How do you think AI will become a cybersecurity threat, like fake logins, spam, or compromised accounts?

Dan: "Really, I think the more obvious cybersecurity issue that we're facing is agentic browsers who have been given login credentials or agentic systems that have had their credentials given to them. And just how easy it is to hand that information off because it'll handle that schoolwork for us."

The risk is already real. Students are giving AI agents access to their university accounts to complete coursework. Those agents now have access to secured systems, potentially touching FERPA-protected or HIPAA-protected data.

Dan: "I feel for our CISOs and our security teams out there. There's a lot more coming at them and it's really a big period of learning for them too, because everything changed for them. And the nefarious actors, they're finding more creative ways, different ways to infiltrate using this technology as well."

And the regulatory landscape has yet to catch up:

Dan: "The large language model providers don't have to listen to anybody. They're highly unregulated. And I think that I hope to see industry partnering with education to help push back, to help establish guardrails, like keep your agentic browsers, keep your systems out of our secured, protected data."

Two layers of risk are compounding at the same time:

  • Student behavior: Credentials are being handed off to AI agents willingly, often without any understanding of how that data is being stored, shared, or accessed.
  • External bad actors: Nefarious actors are finding more creative entry points using the same AI technology, making the attack surface wider and harder to defend.

Dan's position on guardrails is nuanced. He does not believe in outright bans:

Dan: "I don't necessarily believe in outright bans. I don't think that that really promotes critical thinking. It would just be from an education perspective helpful if those tools couldn't so easily infiltrate our learning management system."

The responsibility, he argues, should be shared. Institutions need smarter defenses, but LLM providers also need to be more mindful of how their tools interact with secured educational systems.

Where AI Wins First: Administrative and Operational Use Cases

Sathish: Apart from classrooms, where do you think AI can play a significant role in university operations?

Dan: "I think some of the earliest ones will be on the administrative and operational side because we do have a bit more control over some of those things."

Dan shares a concrete example from a peer institution:

Dan: "A friend at another institution, their ticket intake for any tech troubleshooting, they built a RAG model using markdown files that basically just gave a brief explanation of the types of problems this business unit sees, some of the common questions. And then they built an agent that would help triage that and get that to the appropriate unit. It cut down the amount of response time. It cut down on tickets, frivolous tickets for a lot of people."

This is the kind of practical, operational win that builds confidence and proves value. The model works by:

  • Documenting common problems and questions in simple markdown files
  • Using RAG (retrieval-augmented generation) to match incoming tickets to the right knowledge base
  • Routing tickets to the appropriate team automatically, reducing response time and eliminating low-value tickets from the queue
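
The triage pattern above can be sketched in a few lines. This is a hedged illustration, not the peer institution's actual implementation: the unit names and knowledge-base text are invented, and a simple bag-of-words similarity stands in for the embedding-based retrieval a real RAG setup would use.

```python
from collections import Counter
import math

# Hypothetical knowledge base: one entry per business unit, summarizing the
# problems that unit handles (in practice these would be markdown files).
KNOWLEDGE_BASE = {
    "identity": "password reset locked account login credentials mfa two-factor",
    "lms": "course page missing assignment submission gradebook quiz",
    "network": "wifi vpn connection dropped eduroam slow internet",
}

def _vector(text):
    """Bag-of-words term counts; a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def _cosine(a, b):
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def triage(ticket_text, threshold=0.1):
    """Route a ticket to the unit whose docs it most resembles.
    Below the threshold, fall back to a human queue."""
    query = _vector(ticket_text)
    unit, score = max(
        ((name, _cosine(query, _vector(doc))) for name, doc in KNOWLEDGE_BASE.items()),
        key=lambda pair: pair[1],
    )
    return unit if score >= threshold else "human-review"

print(triage("I forgot my password and my account is locked"))   # identity
print(triage("my quiz submission is missing from the gradebook"))  # lms
```

The low-confidence fallback matters: an auto-router that guesses wrong silently is worse than one that escalates unclear tickets to a person.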

At Oakland, they're exploring identity management and password resets as early candidates:

Dan: "When you've got 20,000 some users, you're probably getting a lot of those requests. There is an opportunity to take a number of tickets off of somebody's plate to free them up for higher-level tech issues."

These wins are less about flash and more about operational efficiency. They free people to focus on more complex, human-critical work, which is exactly the kind of value AI delivers when matched to real problems.

Budget for AI: Strategic Investment or Premature Spending?

Sathish: Some universities are allocating a dedicated budget for AI. Do you think that's the right approach?

Dan: "It depends on the institution. It depends on what they're trying to do and the direction they're trying to go."

Dan draws a clear distinction between well-resourced flagships and institutions working with tighter constraints:

Dan: "Setting aside budget for those schools who can and have an idea on what they want to try and accomplish, fantastic. But for many other schools who are working with pinched budgets and lean workforce, you have to be more strategic on the problems you're trying to solve and the potential gains from it."

He ties budget directly to measurable outcomes like retention:

Dan: "You might see a spend of X amount of dollars, but if you retain a hundred more students, 200 more students because of that spend, that's how we're going to have to weigh some of those things over a short-term and long-term period."

The business case approach matters here too. Without a defined problem and measurable outcome, AI budget allocation risks becoming a line item with zero return. The smarter path:

  • Start with a problem statement that has measurable impact (e.g., retention, ticket resolution time, advising capacity)
  • Estimate the short-term and long-term return on investment before committing spend
  • Scale budget as pilots prove value, rather than allocating large sums upfront
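
Dan's retention framing reduces to simple break-even arithmetic. A minimal sketch, with entirely hypothetical dollar figures; plug in your institution's own numbers:

```python
import math

def breakeven_students(annual_ai_spend, net_revenue_per_retained_student):
    """How many additional retained students cover the annual spend?"""
    return math.ceil(annual_ai_spend / net_revenue_per_retained_student)

# e.g. a $250,000 annual spend against $12,000 net revenue per retained student
print(breakeven_students(250_000, 12_000))  # 21 students to break even
```

If a pilot plausibly retains more students than the break-even count, the spend argues for itself over the short and long term, which is exactly the weighing Dan describes.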

Evaluating AI Tools: Match the Model to the Use Case

Sathish: When you're trying to evaluate or buy an AI tool, what's your evaluation process?

Dan: "It depends on the problem we're trying to solve. For enterprise-grade rollout, there are companies that are bringing along essentially what they refer to as an operating system that hangs over top of your organization that can pluck data from all the different tools and databases that you have."

Enterprise rollout raises its own challenges, including environmental consumption:

Dan: "If we give everybody full access to 140 or so large language models that are out there with the ability to query six to ten of them or more at a time and generate images and all that, you can see how that could stack up quickly."

The smarter approach, Dan argues, is strategic rollout based on actual need:

Dan: "Our school of computer science and engineering, I can see why some of their students in artificial intelligence programs would need access to higher reasoning models or thinking models. Where, for most people, administrators included, those fast models, those mini models might make more sense."

The principle is the same one that runs through the entire conversation: match the tool to the problem, match the model to the use case. Give computer science students access to advanced reasoning models. Give administrative staff access to fast, lightweight models. Align the resource to the need.
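
In practice, that tiering often ends up as a simple policy map from user group to model class. A rough sketch; the role names and model-tier labels below are invented placeholders, not any vendor's actual product names:

```python
# Hypothetical policy map: align the model tier to the user's actual need.
MODEL_POLICY = {
    "cs_ai_student": "reasoning-large",   # higher reasoning / thinking models
    "faculty_researcher": "general-standard",
    "administrator": "fast-mini",         # lightweight, low-cost models
}

def model_for(role):
    # Default to the lightweight tier when a role isn't explicitly listed,
    # so unplanned access never silently grants the most expensive models.
    return MODEL_POLICY.get(role, "fast-mini")

print(model_for("cs_ai_student"))  # reasoning-large
print(model_for("registrar"))      # fast-mini
```

The defensive default is the point: cost and environmental footprint scale with the most capable tier, so unmatched roles should fall to the cheapest one.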

Online vs. In-Person: Room for Both, But Flexibility Is Winning

Sathish: Do students prefer online education or a traditional classroom?

Dan: "I think there's room for both. Online is getting hit hard right now because of what AI can accomplish on a student's behalf, but I think there's room for both and we have to rediscover what that might look like."

Oakland already offers hybrid, fully online, and HyFlex programs. Students are gravitating toward flexibility. But certain disciplines demand hands-on components:

Dan: "There are still some students and there are things like labs or nursing labs, engineering labs where that hands-on component and aspect can really help elevate the learning. Connecting with your peers, connecting with your instructor, being on campus, those are all great things."

The challenge is designing learning experiences that work in both modalities while ensuring AI enhances the learning rather than replaces it. The modalities Oakland is already working with:

  • Hybrid: A day in class and a day online
  • Fully online: Complete programs delivered remotely
  • HyFlex: Students can attend in person, live online, or watch recordings later

Each modality has its place. The key is intentional design that preserves the learning outcomes regardless of delivery format.

Collaboration Over Competition: A Different Kind of University Strategy

Sathish: If all organizations move towards AI-enabled campuses, could any university compete with elite universities in three to five years?

Dan reframes the question entirely:

Dan: "I don't want to compete with the University of Michigan. I want to collaborate with the University of Michigan, Michigan State, Western Michigan, and Central Michigan. If we're too worried about how green the grass is in somebody else's yard, we're not going to be tending to our own."

He points to practical collaboration already underway:

Dan: "We've been establishing a Michigan higher ed AI network or cohort where it's many of the tech people getting together to discuss those kinds of implications."

The productive direction is defining who you are as an institution and building partnerships that make everyone stronger. Three principles emerge from Dan's perspective:

  • Focus on identity over imitation. Ask "who do we want to be as a university?" rather than "how do we catch up to the flagship?"
  • Collaborate on shared challenges. AI governance, cybersecurity, and workforce readiness are problems every institution faces. Solving them together is faster and more efficient.
  • Build regional networks. The Michigan higher ed AI cohort is a model. Tech leaders from multiple institutions sharing learnings, comparing notes, and solving problems collectively.

Advice for Other Institutions Starting Their AI Journey

Sathish: What advice would you give other institutions thinking about these strategies?

Dan: "Keep talking. Talk to your students, talk to your peers, look outside your institution, talk with your industry partners. I have learned so much about what's happening out in industry just by asking and being present and talking with them."

And listen to your students, truly listen:

Dan: "We often say we're here to serve our students, but our decisions may serve our own ends or our own goals. Listen to the students, hear what they're saying. Because for me, I originally thought they were just worried about cheating, being accused of cheating. And the first time I went and talked to the group of students, they didn't mention cheating once. They're talking about jobs."

His closing thought brings it home:

Dan: "Nobody asked for AI. They did, but it happened. Give yourself credit for at least trying to explore. It went from, we're going to go play, to we've got to go to work. As long as we have an understanding of the types of information we should not be putting into AI and protect our personal data, go play and learn, experiment, fail, fail often. Failure is learning."

Dan: "Just know that we are all in this together. We are all still figuring this out at every level of the organization. And that's okay right now, but we're working on it and we need everybody's help to really help push the ship along."

Four takeaways for any institution beginning this journey:

  • Keep talking. With students, peers, industry partners, and anyone outside your institution who is further along the path.
  • Listen to students first. Their concerns will surprise you. They will shape your strategy in ways you would never anticipate from an administrator's desk.
  • Give yourself permission to experiment and fail. Failure is learning. The institutions that will lead are the ones willing to try, learn, and iterate.
  • Remember that everyone is still figuring this out. At every level. At every institution. That shared uncertainty is actually a strength if you lean into it together.

What Higher Ed Leaders Should Do Now

  • Set governance before tools. Review your tech stack, data protections, and compliance posture first. Dan compared skipping this to building a house starting with the attic.
  • Start with problems, not tools. Let departments define their own pain points. Match the solution to the problem, never the other way around.
  • Tailor AI literacy by audience. Students, faculty, researchers, and administrators all need different support. One program applied uniformly will miss most of them.
  • Listen to students first. Dan assumed they'd raise cheating. They raised jobs. Let their real concerns shape your strategy.
  • Take agentic browser risks seriously now. Students are already handing credentials to AI systems. Security teams need to act before the exposure widens.
  • Start with operational wins. IT triage, password resets, and retention tracking are low-risk places to prove value and build institutional confidence.
  • Collaborate, don't compete. Dan's Michigan higher ed AI cohort is the model. Shared learning moves everyone faster.
  • Give permission to fail. Nobody asked for AI, but it arrived. The institutions that lead will be the ones willing to experiment openly.

AI Talks

Students aren't worried about cheating. They're worried about jobs.

Dan shares what he learned the first time he actually talked to students about AI, and what concerns should be driving your strategy.

Listen to the full episode on Spotify →

Ready to Build an AI-Ready Digital Campus?

At Drupal Partners, we help universities and institutions navigate AI-driven digital transformation with secure, scalable solutions:

  • Building AI-enhanced Drupal platforms that support institutional readiness and governance frameworks
  • Developing secure, high-performing digital experiences for education, from student portals to faculty systems
  • Integrating AI-powered tools into existing Drupal ecosystems for operational efficiency and personalized learning

Book a Call →

Keep the Conversation Going

This is AI Talks #1 by Drupal Partners—inside stories and strategies from leaders navigating the shift to AI, automation, and the digital infrastructure they require.