Tegrity has announced its Remote Proctoring System. The purpose of the system is to allow online programs to assess students with the same level of security as would be found in the physical classroom. Before giving my response, I would like to look at the issue of academic cheating in general.
On the first day of school one year, I happened to be in the high school hallway when our calculus teacher walked by, a grim look on his face as he led one of the school’s best-known students, a highly regarded member of the honor society, to the office. Something had aroused his suspicions, and he had tested her, learning that her mathematical abilities were at the early algebra stage, roughly 8th grade. Throughout high school she had copied every math homework assignment and every test from friends. She had never been caught until then.
That girl was by no means unusual. The Educational Testing Service’s campaign to stop cheating cites statistics indicating that academic cheating has risen dramatically over the last few decades at both the high school and college level. Recent studies indicate that 75-98% of college students admit to having cheated. Another study said that 95% of students who admitted cheating said they had never been caught. It used to be that cheaters were the people just trying to get by, but today’s cheaters are just as likely to be the top performers in the school.
Those statistics all come from traditional, physical classrooms. If the goal of the new Tegrity Remote Proctoring System is to provide the same level of security found in those classrooms, then it has set a pretty low bar for its standard. A better approach lies in changing the nature of assessment itself, thus making the concept of proctoring unnecessary.
Donald Campbell formulated Campbell’s Law in 1976, which holds that using a single metric (such as a high-stakes test) to measure student achievement invites corruption: “When test scores become the goal of the teaching process, they both lose their value as indicators of educational status and distort the educational process in undesirable ways.” More importantly, though, the kind of test on which students can cheat this way is not the best way to measure student achievement, and we would be far better off using different assessment methods.
The tests on which students cheat almost invariably are designed to test fact recall. Students access the information they need to select the correct answer (or use some similar shortcut) and copy the answers without understanding them. Such tests are at the very bottom level of Bloom’s Taxonomy, and I have to question the value of a class that makes such learning a critical component of assessment. In my own instructional design, tests like that are usually short quizzes, low stakes assessments early in the instructional process to be used as formative assessments of student understanding before we move on to the real learning. The real learning is when the student uses that information in a meaningful way in an assignment nearer the top of Bloom’s Taxonomy.
From Mary Forehand’s “Bloom’s Taxonomy.”
For those low stakes checks for understanding, which can even be totally ungraded, most learning management systems include safeguards that can be used to make cheating less likely, although not impossible. Campbell’s Law suggests that if a course uses multiple measures of student learning, and if such tests are not high stakes, the likelihood of cheating is minimized. It is reduced even more when the course uses policies that emphasize the formative nature of the assessment, such as the ability to retest when the first attempt shows a lack of understanding.
The various ways to assess the student’s ability to use information effectively, such as project-based learning, usually result in a product created by the student or a collaborative group of students. This product also has the potential to involve cheating, cheating of the kind that would not be detected by any proctoring service. That has always been true. I graduated from college 40 years ago, and every fraternity and sorority on campus had a file system for papers and projects that could be retyped and turned in by students who had never even read them. It is actually much harder to do something like that today since we have a variety of services, including Turnitin, that allow electronically submitted projects to be checked for even the slightest indication of plagiarism. Even that should be unnecessary, though, if good instructional practices are followed.
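Turnitin does not publish its matching algorithms, but the basic idea behind this kind of plagiarism checking can be sketched with a toy similarity measure. The sketch below (my own illustration, not Turnitin's actual method) compares two submissions by the overlap of their word n-grams: a student who retyped a filed fraternity paper would score near 1.0 against the original, while independent work scores near 0.

```python
# Toy sketch of document-similarity checking in the spirit of services
# like Turnitin. Real systems match against enormous databases with
# proprietary algorithms; this version just measures word-shingle
# (n-gram) overlap between two texts using Jaccard similarity.

def shingles(text, n=3):
    """Return the set of n-word shingles (consecutive word runs) in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity of the two texts' shingle sets, from 0.0 to 1.0."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "the quick brown fox jumps over the lazy dog near the river"
copied   = "the quick brown fox jumps over the lazy dog near the barn"
fresh    = "students learn best when assessment is woven into instruction"

print(similarity(original, copied))  # high overlap: likely copied
print(similarity(original, fresh))   # zero overlap: independent work
```

Even a crude measure like this catches near-verbatim copying; the hard part of real plagiarism detection is scale and paraphrase, which is why commercial services exist at all.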
The teacher whose first view of a student project is its final draft has assumed the role of an evaluator and forgotten that there should be an instructional role as well. The purpose of such an assessment is to teach the all-important process of working in whatever field of study is being assessed. Since assessment and instruction should be aligned, the purpose of instruction should be to teach that process. A well-designed project includes a number of milestones along the way, opportunities for the teacher to evaluate the progress being made and intervene instructionally as necessary. For many teachers, the final draft of a project is the minor last step in a well-monitored process, a step that only gets a small percentage of the overall grade. Under such a system, cheating is close to impossible.
The final step in eliminating cheating would be to adopt a complete standards-based grading system, but that’s another issue for another time.
The Tegrity Remote Proctoring System may well do what it is designed to do in replicating the security system of a classroom in high stakes assessments on fact retention, but students who have learned to cheat in the regular classroom—which is apparently by far most of them—will learn to cheat under that system as well. The final step of the Tegrity process requires the teacher to watch each student take the test, one at a time (at an accelerated speed, of course), for subtle signs of cheating. I can’t help but think that all that extra time could be better spent by providing the instructional guidance students need as they work through a high quality project on which they cannot cheat.
I’ll skip over the issues surrounding the validity of Bloom’s taxonomy because it does not affect the thrust of this article.
Most emphatically, I’d like to second John Adsit’s suggestion that we abandon high-stakes testing. I can’t think of one positive for this approach. Not one.
To me, the worst part is the us-versus-them atmosphere that this sort of testing creates. While teachers are not supposed to be pals of the students, they should be on friendly terms. After all, the teacher should be helping students to gain skills that will allow them to succeed in life. High-stakes tests turn schools into a different form of prison with teachers as guards and students as inmates.
As John suggests, cheating is one of the worst symptoms of this wrong-headed approach to learning.
I don’t need anyone’s taxonomy to know that memorizing is a low-level skill and that real thinking is at a high level. However, just telling our educators to change doesn’t work for a variety of reasons. These reasons all come down to the difficulty of creating courses that emphasize real thought and understanding of the course topics. It’s hard work, and you don’t do it just once as with many course syllabi.
I’d like to have a neat solution. I don’t. You cannot just fire all of the teachers and administrators and start over hiring only those who will do their jobs properly. You can’t change resistant teachers with professional development. Too many regard PD as time off with pay.
It’s time for everyone to work to make educating our children function well. If we don’t, we all lose. I hope that lots of people reading this will share their thoughts, whether they agree or disagree with what I’ve written.
I really like this observation: “However, just telling our educators to change doesn’t work for a variety of reasons. These reasons all come down to the difficulty in creating courses that emphasize real thought and understanding of the course topics. It’s hard work, and you don’t do it just once as with many course syllabi.”
In my years as a curriculum director, I assure you it is more than hard work. It requires the work of someone who knows what it means and how to do it. Those people are hard to find. Most people have never known anything other than courses that focus on fact recall, and they cannot create a new version of something they have never seen in the first place.
I’ve two questions here:
How does this fit with attempts to reduce the cost of education?
and a related one:
If summative tests are well designed and genuinely test the learning outcomes of a course (competency based testing), are they useful, even if they are high stakes tests?
John,
I understand the other part. I was just assuming the best of our teachers — that they’re teachable — rather than the worst. Some will resist, and you may not be able to reach them. “After all,” they’ll say, “I’ve been doing this for xx years, and it works just fine.”
To Brian,
I’d like to see John’s response. I have to ask what the “learning outcome” is. If it’s just knowing lots of factoids, then send your students to Jeopardy! Keep them away from the real world.
Some teachers go further and teach a pattern-response approach. It might as well be memorization. See this pattern; use this memorized and practiced response. I’ve seen this happen in lots of math courses. It looks as though the student understands the material, but if asked for another way to solve the problem or if presented with a problem matching no pattern, the student is lost.
There are two parts to your question. The first I’ve answered: it depends on the stated learning outcomes. I believe that both John and I have answered the second in the negative, but I won’t know about John until he answers for himself. High-stakes testing is unfair, self-defeating, and a huge waste of time and money. System inertia keeps us doing it, along with the other problem we’ve been discussing: test designers (including teachers) find it easier and less mentally challenging.
I think that most of us know that learning to mastery with the help of a truly capable mentor is the best way to go. We’re also seeing new technology able to support that approach. It’s just a matter of time now. In this country (USA), we’re moving much too slowly.
Harry,
my question had an IF in it. Maybe you think it is a big IF and therefore a theoretical question, but I still think you should answer it as such. I asked: if high-stakes tests could do the trick, would they be acceptable? Your answer seemed to suggest that they could not do the trick. That’s not really answering the question. There are many who disagree with you on this. Even in my final year of Civil Engineering in 1978 we did a 4-hour open book structural design examination – probably the most probing exam I’ve ever done. And I believe some people have become even better at designing these since then.
Of course they are rather expensive and that’s why I ask the question about costs. Sometimes we cannot afford perfection. We have to make do.
Brian
John: “In my years as a curriculum director, I assure you it is more than hard work. It requires the work of someone who knows what it means and how to do it. Those people are hard to find. Most people have never known anything other than courses that focus on fact recall, and they cannot create a new version of something they have never seen in the first place.”
John, I’m not sure where you directed curriculum or where you studied curriculum and instruction, but “tests and measurement” as well as other ed courses that touch on testing have been a part of teacher training programs for decades. Furthermore, Bloom’s taxonomy is only one of many models of intellectual and psychological development, and all are basic in teacher ed. On the contrary, the vast majority of trained teachers are well aware of the spectrum of testing options for formative and summative evaluations, and few if any would rely solely on rote memorization.
Harry, “high stakes” is a relative notion. To some extent, all tests have direct as well as indirect consequences for the learner and the learning process.
Tests on recall also cover a wide spectrum of possibilities, and rote memorization of irrelevant facts is just one example — and perhaps the least used by trained teachers. The ability to read and remember facts is a critical part of learning. Even in open book and unproctored “tests” of ability to apply knowledge and skills, students must be able to accurately recall the salient points of what they’ve learned as well as the full intent of the question they’re addressing.
Brian’s example of the 4-hour open book exam in engineering is a good example. The books and notes are there for reference, and a well prepared student, taking an exam written by a prof who is skilled in developing valid tests of competency, will probably never need to refer to them.
The point is that tests and testing aren’t necessarily bad. Yes, there are bad tests, but there are also many excellent ones, and the deciding factors are design and purpose: does it inform the learning and teaching process, and does it accurately measure the desired learning outcomes?
Proctoring, too, isn’t necessarily bad. It serves a function. In some situations, it’s a means to verify the identity of the test taker. It’s also a means to limit the testing time frame for practical purposes. In some cases, it’s a way to ensure that the student is able to recall basic facts that are essential to a full understanding of key concepts.
For me, “high stakes” means much is riding on a single test — getting into a good college or having your school taken over by the state or just getting a good grade in an important class.
The initial point of this discussion, as captured in John Adsit’s article’s title, revolves around proctoring. Performing a proctoring service assumes that students will cheat. On the big state assessments, even teachers have been known to cheat.
The bigger the stakes, the more likely cheating will happen. Improving proctoring will simply result in more creative means of cheating. Perhaps, because I went to a college without any proctoring at all, I’m biased. Nevertheless, it seems axiomatic to me that the best way to break the escalation of proctoring and cheating is to change the rules of the system. Remove high-stakes testing, and the cheating problem drops significantly.
I have to add in the context of Tegrity that, in my own limited opinion, remote proctoring is impossible to enforce. Therefore, DE should find another way, not the old-fashioned “final exam” style of running courses.
Will cheating vanish? Probably not, because some people just choose it, even if it takes more effort, rather than learning the material. To a few, it’s a game. They don’t seem to care that no matter how many points they score, they lose. We can, however, reduce cheating to “noise level.”
Most importantly, new technologies have made things such as grade self-determination in learning to mastery possible on a large scale. You can give anecdotes about your wonderful experiences with high-stakes testing all you’d like, but don’t expect everyone to share your joy.
Hi, Harry. Please consider writing an ETCJ article on this idea:
“I have to add in the context of Tegrity that, in my own limited opinion, remote proctoring is impossible to enforce.”
I’ll also take some time this morning to write one.
I’d also like to invite others to submit articles to ETCJ on this topic. Email them to me at jamess@hawaii.edu.
Harry, your observation that cheating is widespread seems reasonable. However, most of my colleagues seem to feel that there is less opportunity to cheat in summative examinations than there is in formative assessments such as essays and projects. So even though they believe that the formative assessments are better for learning, they like to have the summative final exams to corroborate the continuous assessment. In other words, they assume that students will attempt to cheat but come to the opposite conclusion from you, i.e., that final exams are necessary to deal with this tendency.