Only 8–18% of College Faculty and Administrators Using Chatbots Effectively

By Jim Shimabukuro (assisted by ChatGPT)
Editor

[Also see the reports from Dec 2025, Oct 2025, and July 2025]

Prompt: I’m curious. What percentage of college professors and administrators personally use chatbots in optimum ways to facilitate their own professional development, research, writing, and job responsibilities? I think this is an important question because they, as a group, are responsible for crafting AI program decisions in their institutions. -js

Image created by Copilot.

ChatGPT’s Response

There’s no direct measurement of “optimal” chatbot use. Based on multiple large surveys of faculty and administrators, perhaps 30–45% have used chatbots at least occasionally for professional tasks, and only a minority of those are likely using them in disciplined, “optimal” ways. A reasoned estimate is that about 8–18% of professors and administrators today are using chatbots in ways that approximate best practices for research, writing, and professional development, but with good training and policy changes that share could rise quickly (EDUCAUSE Review, Inside Higher Ed, Campbell Sites).

For a survey to be reliable, “optimum use” would have to be defined and measured. Examples of measurable criteria might include whether the user:

  • prompts in ways that produce reliable, reproducible results;
  • verifies and cites AI outputs;
  • uses AI for legitimate productivity (lit reviews, drafts, code scaffolding) while avoiding misuse;
  • integrates AI into workflows that improve quality/productivity without harming academic integrity.

No major survey to date asks such a detailed set of checklist items and then scores users against them. Instead, surveys mostly record whether people use AI, for what tasks, and how frequently. So we must infer the “optimum” fraction from patterns in usage, self-reported skill and confidence, and institutional supports.

The optimal-use estimate of 8–18% is low for these reasons:

  • Training and incentives are uneven. Many institutions haven’t offered systematic professional development on GenAI, and faculty who haven’t had hands-on training tend to use the tools superficially (Inside Higher Ed).
  • Policy friction and risk aversion. Ambiguous academic-integrity rules make some faculty avoid deep adoption (Chronicle of Higher Education).
  • Disciplinary differences. STEM faculty who use code-oriented tools may adopt them differently from humanities scholars working on prose (MarketWatch).
  • Administrators often feel institutions aren’t ready. That reduces coordinated rollout of training and best practices (EDUCAUSE Review).

Here’s a one-page rubric designed for faculty/administrator self-assessment or peer observation. The rubric defines what “optimal” chatbot use looks like across four key dimensions: Prompts, Verification, Documentation, and Integration into Workflow.

Rubric for Optimal Use of Chatbots in Academic Professional Practice

Purpose: To evaluate whether faculty and administrators are using AI/chatbots in ways that are effective, ethical, and sustainable for professional development, research, writing, and administrative tasks.

1. Prompting Practices

  • Emerging (1): Uses vague, one-line prompts; accepts outputs at face value.
  • Developing (2): Uses moderately detailed prompts; may ask follow-ups, but results are inconsistent.
  • Proficient (3): Crafts clear, structured prompts with context, role assignment, and constraints; iterates systematically.
  • Optimal (4): Uses advanced prompt strategies (multi-step, chain of thought, style transfer, persona-based); maintains a reusable prompt library tailored to tasks.

2. Verification & Accuracy

  • Emerging (1): Rarely checks AI outputs; assumes correctness.
  • Developing (2): Spot-checks outputs occasionally; relies on intuition.
  • Proficient (3): Consistently verifies facts, citations, and logic against primary sources; detects and corrects hallucinations.
  • Optimal (4): Embeds verification as a formal step (cross-checks with databases, literature, style guides); maintains reproducibility by rerunning/iterating prompts for consistency.

3. Documentation & Transparency

  • Emerging (1): Does not record AI use; treats chatbot contributions as unacknowledged.
  • Developing (2): Mentions AI use informally; no clear process for disclosure.
  • Proficient (3): Documents AI role in research/writing (e.g., “draft generated with assistance from X”); tracks prompts/outputs in notes.
  • Optimal (4): Maintains a transparent, auditable record (prompt + output archive); discloses AI use in grant proposals, syllabi, and publications according to institutional/disciplinary norms.

4. Integration into Workflow

  • Emerging (1): Uses AI only experimentally; not connected to job responsibilities.
  • Developing (2): Uses AI sporadically for isolated tasks (e.g., summarizing an article, drafting an email).
  • Proficient (3): Regularly integrates AI into professional workstreams (literature reviews, syllabus design, administrative memos, writing/editing).
  • Optimal (4): Embeds AI as a systematic productivity partner: personalized tutoring for PD, structured lit reviews, data analysis drafts, grant-writing scaffolds, course design templates; regularly evaluates efficiency gains.

Scoring & Interpretation

  • 4–7 points (Emerging): Minimal, exploratory use.
  • 8–11 points (Developing): Basic but inconsistent use; needs training.
  • 12–15 points (Proficient): Solid practice; meets professional expectations.
  • 16 points (Optimal): Model user; could mentor others in AI literacy.
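
To make the scoring arithmetic concrete, here is a minimal Python sketch of how a department might tally self-assessment results against this rubric. The dimension keys, band cutoffs, and function name are illustrative assumptions for this sketch, not part of any existing survey instrument.

  # Minimal sketch: sum four dimension ratings (1-4 each) and map the
  # total to the interpretation bands above. Names are illustrative only.

  DIMENSIONS = ("prompting", "verification", "documentation", "integration")

  BANDS = [  # (minimum total, label), checked from highest to lowest
      (16, "Optimal"),
      (12, "Proficient"),
      (8, "Developing"),
      (4, "Emerging"),
  ]

  def score_rubric(ratings):
      """Return (total, band) for a dict of four ratings, each 1-4."""
      missing = [d for d in DIMENSIONS if d not in ratings]
      if missing:
          raise ValueError("Missing ratings for: " + ", ".join(missing))
      if any(not 1 <= ratings[d] <= 4 for d in DIMENSIONS):
          raise ValueError("Each dimension must be rated 1-4.")
      total = sum(ratings[d] for d in DIMENSIONS)
      band = next(label for cutoff, label in BANDS if total >= cutoff)
      return total, band

  # Example: a self-assessment that lands in the Proficient band (12-15).
  print(score_rubric({"prompting": 3, "verification": 4,
                      "documentation": 3, "integration": 3}))  # (13, 'Proficient')

Used across a department, a tally like this would make it easy to see what share of respondents fall in the Proficient or Optimal bands and where training effort should go.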

[End]
