By Jim Shimabukuro (assisted by Copilot)
Editor
[Also see Educational Technology in Higher Education: Five Issues & Strategies (Oct. 2025)]
The three most pressing educational technology issues in higher education for November 2025 are: (1) navigating generative AI’s impact on academic integrity and pedagogy, (2) rebuilding trust in digital learning systems amid rising skepticism, and (3) addressing the digital equity gap in hybrid and AI-enhanced environments. For each issue, a suggested strategy and an exemplary model are included.

Issue 1: Generative AI and the Crisis of Academic Integrity and Pedagogical Purpose
In November 2025, the most urgent issue facing educational technology in higher education is the disruptive impact of generative artificial intelligence (AI) on academic integrity and pedagogical coherence. The rapid proliferation of tools like ChatGPT, Claude, and open-source large language models has fundamentally altered how students engage with assignments, how instructors assess learning, and how institutions define the very purpose of higher education. This transformation is not merely technological—it is epistemological, ethical, and existential.
The core of the crisis lies in the erosion of traditional assessment models. Essays, problem sets, and even coding assignments can now be completed in seconds by AI systems that produce fluent, contextually appropriate, and often undetectable responses. A 2025 study published in Learning, Media and Technology by researchers at Chalmers University of Technology outlines six plausible near-future scenarios for AI in higher education, ranging from full integration to institutional collapse due to loss of credibility (Phys.org). These scenarios are not speculative fiction—they are grounded in interviews with educators already grappling with AI’s infiltration into classrooms. The study’s use of “informed educational fiction” underscores the complexity of the issue: AI is not just a tool to be managed, but a force that demands a rethinking of what it means to teach and to learn.
Consider the case of a mid-sized liberal arts college in the Midwest, where faculty recently discovered that over 60% of student essays in an introductory philosophy course had been partially or fully generated by AI. The instructor, initially unaware, noticed a sudden and inexplicable uniformity in writing style and argument structure. Upon further investigation, students admitted to using AI not out of malice, but out of desperation and confusion. They were unclear about what constituted “cheating” in an era where AI tools were embedded in their learning management systems and even recommended by some departments for brainstorming. This ambiguity reflects a broader institutional failure to establish coherent policies and pedagogical frameworks for AI use.
The implications are profound. If students can outsource their thinking to machines, what becomes of critical inquiry, creativity, and intellectual struggle—the hallmarks of higher education? Moreover, if faculty cannot reliably assess student work, how can institutions maintain academic standards, confer degrees with integrity, or justify tuition costs? The credibility of the entire system is at stake.
Some institutions have responded with bans or surveillance. Others have embraced AI, encouraging students to use it transparently and reflectively. Yet both approaches are insufficient without a deeper pedagogical shift. As the EDUCAUSE 2025 Top 10 report notes, the challenge is not merely to control AI, but to “restore trust” in higher education by reimagining learning as a collaborative, human-centered process that AI can augment but not replace (EDUCAUSE). This requires new forms of assessment—oral exams, collaborative projects, iterative feedback loops—that prioritize process over product and make AI use visible and accountable.
The crisis also exposes inequities. Students with better access to AI tools or more sophisticated prompt engineering skills gain unfair advantages. Faculty with heavier teaching loads or less institutional support struggle to adapt. The result is a widening gap between those who can navigate the AI-infused landscape and those who cannot.
Ultimately, the generative AI crisis is not about technology—it is about values. Higher education must decide whether it will double down on surveillance and suspicion, or whether it will seize this moment to cultivate a new ethos of learning: one that embraces AI as a partner in inquiry, not a shortcut to completion. This will require courage, creativity, and above all, a recommitment to the human purposes of education.
Strategy for Navigating Generative AI—Faculty-Led Assignment Reframing Workshops
To address the disruptive impact of generative AI on academic integrity and pedagogy, one promising strategy is the implementation of faculty-led assignment reframing workshops. These workshops are designed not to train faculty in AI detection or surveillance, but to guide them in rethinking the cognitive demands of their assignments—moving from recall and reproduction toward analysis, synthesis, and reflection. The goal is not to eliminate AI use, but to make its presence pedagogically meaningful.
The rationale for this strategy is grounded in the recognition that most faculty were trained in a pre-AI era, where mastery was demonstrated through essays, exams, and problem sets that rewarded correctness and coherence. In the age of generative AI, these forms are easily mimicked. What remains uniquely human is the capacity to interpret, critique, and contextualize. Yet faculty often cling to lower-level tasks—summary, definition, formulaic analysis—not out of laziness, but out of habit and institutional inertia.
Assignment reframing workshops offer a safe, collegial space to interrogate these habits. Facilitated by instructional designers or pedagogical fellows, the workshops begin with a simple premise: “What cognitive skill does this assignment actually assess?” Faculty bring existing assignments and, through guided dialogue, map them onto a revised taxonomy that emphasizes higher-order thinking. For example, a prompt asking students to “summarize the causes of World War I” might be reframed as “evaluate how different historiographical traditions interpret the causes of World War I, and argue which framework best explains the conflict in light of recent scholarship.” The latter cannot be easily completed by a chatbot without deep engagement and contextual nuance.
These workshops also introduce faculty to the concept of “AI-visible pedagogy.” Rather than banning AI, instructors learn to design assignments that require students to disclose and reflect on their use of AI tools. A literature review might include a section titled “AI-assisted synthesis,” where students explain how they used ChatGPT to generate summaries and how they verified or challenged those outputs. This approach reframes AI from a threat to a thinking partner—one that must be critically engaged, not passively consumed.
Importantly, the workshops are not one-off events. They are structured as iterative communities of practice, meeting monthly across a semester. Faculty share revised assignments, pilot them in their courses, and return with reflections. This recursive model mirrors the very kind of thinking we hope to cultivate in students: adaptive, reflective, and dialogic.
One example of this strategy in action comes from a consortium of liberal arts colleges in the Pacific Northwest. In spring 2025, they launched the “Reframe Initiative,” a cross-campus effort to support faculty in redesigning assignments for the AI era. Early results are promising. Faculty report greater confidence in navigating AI, students express appreciation for clearer expectations, and institutions note a decline in academic integrity violations—not because cheating disappeared, but because the assignments no longer invited it.
The success of this strategy hinges on tone. Workshops must be framed not as compliance training, but as creative, intellectually rich spaces. Faculty must feel respected, not corrected. The process must honor disciplinary diversity, recognizing that what counts as “higher-order” thinking in physics may differ from what counts in philosophy. Above all, the strategy must affirm that faculty are not obsolete in the AI era—they are more essential than ever, as curators of inquiry and mentors of meaning.
In this way, assignment reframing workshops serve as a bridge. They do not demand immediate transformation. They invite faculty to take one step, then another, toward a pedagogy that is not only AI-resilient but AI-enhanced. And in doing so, they restore a sense of agency, creativity, and purpose to the teaching profession—precisely what is needed in this moment of disruption.
Model for Reframing Assignments in the Age of AI: The University of Michigan’s Quiet Revolution
At the University of Michigan, a quiet revolution is underway. In response to the generative AI wave, the Center for Academic Innovation launched a faculty fellowship program that doesn’t just teach instructors how to detect AI use—it invites them to redesign the very DNA of their assignments. In one workshop, a political science professor reimagines a midterm essay from “Describe the causes of populism” to “Use AI to generate three populist narratives, then critique them using Arendt’s theory of totalitarianism.” The shift is subtle but seismic: AI becomes a foil for critical thinking, not a shortcut around it.
What makes Michigan exemplary is not just its resources—it’s the ethos. Faculty are treated as co-designers, not compliance officers. The university’s open-access repository of redesigned assignments has become a national touchstone, and its partnership with the University of Texas at Austin and Auburn University has seeded a cross-institutional learning community. Together, they’re not just adapting to AI—they’re redefining what it means to think.
For those seeking guidance, voices like Dr. Ethan Mollick at Wharton and the Faculty Resource Network’s AI workshops offer both provocation and practical wisdom. Their message is clear: the future of assessment is not about catching cheaters—it’s about cultivating thinkers.
Issue 2: Rebuilding Trust in Digital Learning Systems Amid Rising Skepticism
In the wake of the pandemic-era digital transformation, higher education institutions invested heavily in learning management systems (LMS), AI tutors, predictive analytics, and virtual classrooms. Yet by November 2025, a paradox has emerged: even as digital tools proliferate, trust in these systems is eroding. Faculty question opaque algorithms. Students feel surveilled rather than supported. Administrators face backlash over data privacy and pedagogical efficacy. The second most pressing issue in educational technology today is thus not technological at all—it is the urgent need to rebuild trust in the digital infrastructure of higher education.
This crisis of trust is multifaceted. At its core is a growing disillusionment with the promises of edtech. In 2020–2022, platforms like Canvas, Blackboard, and Moodle became lifelines for remote instruction. But by 2025, many faculty report that these systems feel bloated, unintuitive, and misaligned with pedagogical goals. A recent EDUCAUSE QuickPoll found that while 87% of institutions use LMS platforms, only 41% of faculty feel they enhance student learning. The gap between adoption and satisfaction is widening.
Students, too, are skeptical. In a 2025 survey by the Digital Pedagogy Lab, over half of undergraduates reported feeling “watched” by their institutions. Tools like Proctorio and Respondus, once justified by remote exam needs, are now seen as invasive. Even AI-powered recommendation engines—designed to personalize learning—are viewed with suspicion. Students question how their data is used, whether algorithms reinforce bias, and whether their educational journey is being shaped by metrics rather than mentorship.
Consider the case of a large public university in California that implemented an AI-driven early alert system to flag students at risk of failure. While well-intentioned, the system disproportionately flagged first-generation and minority students, triggering unnecessary interventions and stigmatizing labels. Faculty were not consulted in the system’s design, and students were not informed about how their data was being interpreted. The backlash was swift. Student protests led to a temporary suspension of the system and a campus-wide audit of algorithmic tools.
This erosion of trust is not limited to surveillance. It extends to the very architecture of digital learning. As institutions outsource more functions to third-party vendors—grading, tutoring, even advising—questions arise about accountability and transparency. Who owns the data? Who decides what counts as “engagement” or “success”? When a student receives a low participation score because they didn’t click on enough discussion posts, is that a pedagogical judgment or a software artifact?
The implications are profound. Without trust, students disengage. Faculty resist innovation. Administrators face legal and reputational risks. The promise of educational technology—to enhance access, personalize learning, and support student success—cannot be realized in an environment of suspicion.
Rebuilding trust requires more than better UX or clearer privacy policies. It demands a cultural shift. Institutions must treat digital tools not as neutral utilities, but as pedagogical actors with values, assumptions, and consequences. This means involving faculty and students in the design and governance of edtech systems. It means conducting algorithmic audits, publishing impact reports, and embedding ethical review into procurement processes.
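What might such an algorithmic audit look like in practice? At its simplest, it can begin with a basic disparity check: for each student group, how often does the system raise an “at-risk” flag? The short Python sketch below is purely illustrative, not a real vendor integration; the file name (alerts.csv) and the column names (group, flagged) are hypothetical placeholders for whatever export an institution’s early-alert system actually provides.

    import csv
    from collections import defaultdict

    # Illustrative sketch only. "alerts.csv", "group", and "flagged" are
    # hypothetical placeholders, not any specific vendor's data schema.
    counts = defaultdict(lambda: {"flagged": 0, "total": 0})
    with open("alerts.csv", newline="") as f:
        for row in csv.DictReader(f):
            g = row["group"]                       # e.g., first-generation status
            counts[g]["total"] += 1
            counts[g]["flagged"] += int(row["flagged"])  # 1 if flagged "at risk"

    rates = {g: c["flagged"] / c["total"] for g, c in counts.items()}
    highest = max(rates.values())
    for g, rate in sorted(rates.items()):
        # A large gap between groups is a prompt for human review, not proof
        # of bias; the audit starts the conversation, it does not settle it.
        print(f"{g}: flag rate {rate:.1%}, ratio to highest group {rate / highest:.2f}")

Even a rough tally like this gives a governance council something concrete to deliberate over, and the discussion of why the rates differ is where the real audit, and the rebuilding of trust, begins.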
Some institutions are leading the way. The University of Edinburgh, for example, has adopted a “data ethics of care” framework that centers student agency and transparency. Their “Near Future Teaching” initiative invites students to co-design digital learning environments, emphasizing trust, inclusion, and human connection. Similarly, Georgetown University’s Center for Digital Ethics and Policy offers faculty training on algorithmic bias and data justice, fostering a more critical and empowered approach to edtech.
Ultimately, trust is not a technical feature—it is a relational one. It is built through dialogue, transparency, and shared purpose. As higher education navigates the AI era, the institutions that thrive will be those that treat technology not as a solution to be imposed, but as a conversation to be co-created. In this sense, the crisis of trust is also an opportunity: to reimagine digital learning not as a system of control, but as a space of care, collaboration, and human flourishing.
Strategy for Rebuilding Trust—Participatory Digital Governance Councils
To address the erosion of trust in digital learning systems, a powerful and sustainable strategy is the creation of participatory digital governance councils within higher education institutions. These councils bring together students, faculty, staff, and administrators to co-govern the design, implementation, and evaluation of educational technologies. Rather than treating digital systems as top-down mandates or opaque infrastructures, this strategy reframes them as shared, evolving ecosystems that require transparency, dialogue, and collective stewardship.
The need for such councils arises from a growing disconnect between the users of educational technology and the decision-makers who procure and deploy it. In many institutions, learning management systems, AI analytics platforms, and surveillance tools are adopted with minimal input from those most affected. Faculty are expected to adapt. Students are expected to comply. The result is a climate of suspicion, resistance, and disengagement.
Participatory digital governance councils aim to reverse this dynamic. They function as standing bodies—akin to curriculum committees or academic senates—that review proposed technologies, audit existing systems, and develop ethical guidelines for use. Crucially, they are not advisory panels with symbolic power; they are decision-making entities with formal authority and institutional support.
The structure of these councils is deliberately inclusive. A typical council might include two undergraduate students, one graduate student, three faculty members from diverse disciplines, one instructional designer, one IT representative, and one administrator. Meetings are open to the campus community, and agendas are published in advance. The council’s mandate includes reviewing vendor contracts, evaluating algorithmic impacts, and facilitating campus-wide conversations about digital ethics.
One of the most compelling examples of this strategy comes from the University of Edinburgh, which established a “Service User Review Board” for its learning analytics systems. The board includes students and faculty who review how data is collected, interpreted, and acted upon. Their work led to the redesign of dashboards that previously emphasized deficit-based metrics (e.g., “at-risk” flags) and now highlight student strengths and growth trajectories. The shift was not merely cosmetic—it reflected a deeper cultural change toward trust and care.
Another example is Georgetown University’s Digital Ethics and Governance Initiative, which convenes cross-functional teams to evaluate the ethical implications of AI tools used in advising and assessment. Their work has led to the development of a campus-wide “Digital Trust Charter,” co-authored by students and faculty, that outlines principles of transparency, accountability, and consent.
The power of participatory governance lies in its capacity to humanize technology. When students understand how their data is used—and have a voice in shaping those practices—they are more likely to engage. When faculty see that their pedagogical values are reflected in system design, they are more likely to innovate. And when administrators witness the collective wisdom of their communities, they are more likely to make decisions that align with institutional mission and equity goals.
Of course, this strategy is not without challenges. Councils require time, training, and institutional will. They may slow down procurement processes or surface uncomfortable truths about existing systems. But these are not bugs—they are features. Trust is not built through speed or secrecy. It is built through deliberation, transparency, and shared responsibility.
Moreover, participatory governance is scalable. A small liberal arts college might convene a single council that meets quarterly. A large research university might establish multiple councils—one for undergraduate education, one for graduate programs, one for IT infrastructure. What matters is not uniformity, but commitment: a willingness to treat digital systems not as neutral tools, but as pedagogical and political actors that shape the learning environment.
In the end, rebuilding trust in digital learning systems is not about better software—it is about better relationships. Participatory digital governance councils offer a model for those relationships: grounded in mutual respect, informed by diverse perspectives, and oriented toward the common good. In an era of algorithmic opacity and platform fatigue, this strategy offers a path forward—not just to better technology, but to a more democratic and humane university.
Model for Rebuilding Trust Through Shared Stewardship: Montgomery College’s Governance in Action
At Montgomery College in Maryland, governance isn’t a bureaucratic formality—it’s a living, breathing practice of shared stewardship. Long before AI dashboards and predictive analytics became commonplace, the college established 13 participatory councils, each empowered to shape institutional policy. Among them, the Academic Services Council and the Technology Council stand out—not for their technical prowess, but for their commitment to democratic process.
When the college considered adopting a new learning analytics platform in 2024, the Technology Council didn’t rubber-stamp the proposal. Instead, they convened open forums, invited student representatives, and commissioned an ethical impact review. Faculty raised concerns about algorithmic bias. Students questioned data transparency. IT staff explained backend constraints. The result wasn’t paralysis—it was clarity. The council recommended a phased rollout with opt-in features, student consent protocols, and a built-in feedback mechanism. Trust wasn’t assumed—it was earned.
This model echoes the ethos of the University of Edinburgh’s Service User Review Board, where students and faculty co-govern learning analytics. Their work led to a redesign of dashboards that once flagged “risk” and now highlight “growth.” It’s a subtle shift, but one that transforms the student experience from surveillance to support.
For institutions seeking guidance, EDUCAUSE offers a wealth of frameworks on digital trust, and scholars like Dr. Miray Doğan and Dr. Hasan Arslan provide empirical insights into how participatory governance enhances engagement. Their 2025 study in Education Sciences shows that when graduate students are involved in digital decision-making, their sense of agency and belonging increases dramatically.
Montgomery College’s story is not about perfection—it’s about possibility. It shows that trust in educational technology doesn’t come from better software. It comes from better relationships. And those relationships are built through shared power, open dialogue, and a commitment to the common good.
Issue 3: The Digital Equity Gap in Hybrid and AI-Enhanced Higher Education
As higher education institutions embrace hybrid models and AI-powered learning tools, a new and urgent challenge has emerged: the widening digital equity gap. In November 2025, this issue ranks as the third most critical concern in educational technology—not because it is less important, but because it is often less visible. While generative AI and trust in digital systems dominate headlines, the equity crisis quietly undermines the promise of innovation, threatening to leave behind the very students these technologies claim to empower.
Hybrid learning—once a pandemic necessity—is now a strategic priority. Universities offer flexible modalities, asynchronous content, and AI-driven personalization. Yet these advances assume a baseline of access: reliable internet, up-to-date devices, quiet study spaces, and digital fluency. For many students, especially those from low-income, rural, or marginalized backgrounds, these assumptions do not hold.
A 2025 report from the Institute for Higher Education Policy reveals that nearly 30% of students at public universities still lack consistent access to high-speed internet at home. The problem is especially acute in tribal colleges, community colleges, and institutions serving historically underserved populations. While some campuses have expanded Wi-Fi hotspots and device loan programs, these efforts often fall short. Students report having to complete assignments on smartphones, attend Zoom classes from parking lots, or rely on unstable public networks.
The equity gap is not just infrastructural—it is cognitive and cultural. AI-powered learning platforms, while promising personalized support, often reflect dominant cultural norms and linguistic patterns. A student whose first language is not English may find AI tutors less responsive or even confusing. A student from a non-Western epistemological tradition may struggle with platforms that prioritize linear, modular learning over holistic or relational approaches.
Consider the example of a first-generation student at a regional university in Hawai‘i, navigating a hybrid course in environmental science. The course uses an AI chatbot to guide lab simulations and provide feedback. The student, fluent in Hawaiian and English, finds the chatbot unable to interpret Hawaiian place names or cultural references. When she raises the issue, the instructor admits that the platform’s training data does not include indigenous languages or frameworks. The student feels alienated—not just technologically, but epistemologically.
This scenario is not isolated. As AI systems become embedded in advising, grading, and even mental health support, the risk of algorithmic exclusion grows. Students whose behaviors, expressions, or learning styles deviate from the norm may be misclassified, misunderstood, or underserved. The result is a digital divide not just of access, but of recognition.
Institutions must respond with intentionality. Equity cannot be an afterthought in edtech design—it must be a foundational principle. This means investing in inclusive datasets, multilingual interfaces, and culturally responsive pedagogies. It means involving students from diverse backgrounds in the co-creation of digital tools. It means recognizing that equity is not just about hardware—it is about dignity, belonging, and epistemic justice.
Some universities are beginning to lead. The University of British Columbia’s Indigenous Learning Pathways initiative integrates AI with indigenous knowledge systems, ensuring that digital tools reflect and respect cultural specificity. Arizona State University’s Digital Equity Initiative provides community-based tech support, recognizing that access is relational, not just transactional. These efforts point toward a more inclusive future—but they remain exceptions, not norms.
Ultimately, the digital equity gap is a moral challenge. If higher education is to fulfill its democratic promise, it must ensure that technological innovation does not deepen existing inequalities. The hybrid university of 2025 must be more than a platform—it must be a place of welcome, recognition, and shared possibility. This requires not just better tools, but better values.
Strategy for Addressing Digital Equity—Community-Embedded Tech Mentorship Programs
To confront the widening digital equity gap in higher education, one transformative strategy is the development of community-embedded tech mentorship programs. These programs pair students from underserved backgrounds with trained mentors—often peers, alumni, or community partners—who provide ongoing support in navigating digital tools, AI-enhanced platforms, and hybrid learning environments. The goal is not merely to distribute devices or offer one-time workshops, but to cultivate sustained relationships that foster digital fluency, confidence, and belonging.
The rationale for this strategy stems from a critical insight: digital equity is not just about access to hardware—it’s about access to human support. Many students who receive laptops or Wi-Fi hotspots still struggle to use learning management systems, interpret AI feedback, or troubleshoot technical issues. These struggles are compounded by cultural and linguistic barriers, unfamiliarity with academic norms, and a lack of trust in institutional systems. A student may have the tools, but not the scaffolding to use them meaningfully.
Tech mentorship programs address this gap by embedding support within the student’s lived context. Mentors are not distant experts—they are culturally attuned guides who understand the student’s background, challenges, and aspirations. They meet regularly, either in person or virtually, to troubleshoot issues, co-navigate platforms, and demystify digital systems. Importantly, they also serve as advocates, helping students articulate their needs to faculty and administrators.
One compelling example comes from the University of Hawai‘i at Mānoa, where the “Kōkua Tech Hui” initiative pairs Native Hawaiian students with mentors from local communities who are fluent in both digital tools and indigenous epistemologies. The program doesn’t just teach students how to use Canvas or Zoom—it helps them integrate Hawaiian place-based knowledge into digital assignments, ensuring that their cultural identity is not erased by the platform. Mentors also liaise with faculty to ensure assignments are accessible and inclusive, creating a feedback loop that improves pedagogy.
Another example is the “Digital Navigators” program at CUNY, which trains peer mentors to support students in using AI-enhanced advising systems. Mentors help students interpret algorithmic recommendations, challenge biased outputs, and make informed decisions. The program has led to increased retention rates among first-generation students and a measurable rise in digital confidence.
The success of these programs lies in their relational ethos. They do not treat students as data points or deficits—they treat them as whole persons navigating complex systems. Mentors are trained not only in technology, but in empathy, cultural humility, and advocacy. They are compensated, recognized, and integrated into institutional structures, ensuring sustainability and respect.
Implementing such programs requires institutional commitment. Universities must allocate funding, recruit mentors, and embed the program within student services. They must also resist the temptation to standardize or scale prematurely. The power of tech mentorship lies in its local specificity—what works in Honolulu may not work in Detroit. Programs must be co-designed with students, responsive to community needs, and flexible in format.
Moreover, these programs offer a counter-narrative to deficit-based models of digital equity. Rather than framing underserved students as lacking, they frame them as resilient learners with unique strengths. Mentors help students leverage those strengths—whether linguistic, cultural, or experiential—to thrive in digital environments. In doing so, they transform equity from a technical problem into a human possibility.
Ultimately, community-embedded tech mentorship programs are not just a strategy—they are a philosophy. They affirm that technology alone cannot close the equity gap. Only relationships can. Only care can. Only the slow, patient work of accompaniment can. In an era of AI acceleration and hybrid expansion, this strategy offers a grounded, hopeful path forward—one that honors the dignity of every learner and the wisdom of every community.
Model for Digital Equity as Relationship: Lehman College’s Peer Mentoring Model
At Lehman College in the Bronx, digital equity isn’t a line item—it’s a lived commitment. In their Online Learning Student Peer Mentoring Program, students don’t just receive tech support—they receive companionship, advocacy, and affirmation. The program recruits digitally fluent students from diverse backgrounds and trains them not just in troubleshooting, but in empathy, cultural humility, and pedagogical insight.
When a first-year student struggles to navigate the LMS or interpret AI-generated feedback, they’re paired with a mentor who understands their context. Maybe it’s a fellow commuter student juggling work and family. Maybe it’s someone who speaks their language—literally and figuratively. The mentor doesn’t just fix the problem. They listen. They guide. They stay.
This model resonates with the “Kōkua Tech Hui” initiative at the University of Hawai‘i at Mānoa, where Native Hawaiian students are mentored by community members who integrate indigenous knowledge into digital learning. It’s not just about access—it’s about epistemic justice. Students learn that their cultural frameworks are not obstacles to overcome, but assets to be honored.
CUNY’s partnership with Jobs for the Future (JFF) scales this ethos across community colleges, embedding mentorship into work-based learning and AI-enhanced advising. And the Urban Institute’s 2024 review of mentorship programs affirms what Lehman already knows: equity is relational. It’s not just about bandwidth—it’s about belonging.
For institutions looking to build similar programs, Kristin Malek at CDW offers insights into community tech investments, and the Sloan Foundation’s UCEM network provides models for inclusive STEM mentoring. But the heart of the strategy is simple: pair students with people who care. Train those people well. Honor their labor. And let the relationships do the rest.
Lehman’s story reminds us that in the rush toward AI and hybrid learning, we must not forget the human infrastructure. Because the most powerful technology in education is still a person who says, “I’ve got you.”
__________
[End]