Center for Teaching and Learning

Focused Inquiry Groups (FIGs) - Title III: Critical Thinking

Critical Thinking - Members: Dennis Chowenhill

Critical Thinking Assessment Pilot Project

Four Trials for Applying Critical Thinking in an English 102 Class

Fall 2008



Forming an impression of what a text is going to be about, after having read only the first part of the text, requires a set of skills that are widely regarded as “critical thinking” skills.  Students need not only to understand the text they are reading at a literal level, but to apply their previous experiences—as readers and as observers of their environment—in order to construct the appropriate context for the text; to apply their understanding of how texts are shaped; and to imagine, given these factors, what is most probably going to follow. 

The point of this critical thinking exercise was to train students in a critical reading skill and assess their performances.  The skill addresses a common problem among readers of all levels: generating expectations of a text early in their reading that subsequently shape their understanding of it, so that they do not notice when those expectations are inaccurate.  I have observed that once students create an impression of what a text is going to be about, they notice the parts of the text that support their initial impression but casually dismiss the parts that do not, even when those parts make up the core of the text.  I suspect that this problem arises from students' misapplication of reading techniques that they have learned previously.  A common exercise in texts that encourage "active reading" is to have students read the introduction of a text and predict what will follow, then to have that prediction guide them in their reading (an activity that prevents them from reading "passively").  The tactic works well with standard textbooks, which begin their chapters with overviews structured to encourage this approach to reading.  But it fails with other texts (probably most of what students read at the university level), whose introductions serve a wide range of purposes other than giving an overview of what follows.  (A classic, and common, example is the argumentative essay that begins with the opposition argument and does not introduce the writer's thesis until later in the writing.)


In the classroom I used the term prediction to refer to students’ initial guesses of what a text will be about.  I referred to the process of validating their predictions as “checking to see how accurate your prediction was.”


For the unit of instruction represented in this activity, I did not conduct a pre-instruction evaluation of how well students predicted from the introductory paragraphs of texts.  Consequently, I had no baseline for comparisons.  I began by discussing with students the common reading error of being misled by inaccurate predictions, then conducted the first three trials, all during the same classroom session.  The students were enrolled in English 102, a course that prepares students for English 1A (the first college-level English composition course).  An advantage of conducting this exercise in an English 102 class is that the class meets in two-hour sessions, so we had ample time without being rushed.


Four trials were completed. 


The first three trials were conducted the same day.  For these, students read passages from the last chapter of the book they had been reading that term (Out of Order, Thomas Patterson).  This text is an expository/argumentative examination of the ways that news media contribute to and influence presidential campaigns.  The chapters of this text are divided into sections, unlabeled, and marked only by extra spaces between them; thus, one has no indication of what a section is about until one begins reading it.  Patterson has a variety of ways of introducing these sections—sometimes with introductory anecdotes, sometimes with statistical data, sometimes with accounts of previous presidential elections, and sometimes with classic opening thesis statements that inform the reader of what the section is about.  This variety lends itself well to this particular critical thinking exercise.  The students had been reading and discussing this text for ten weeks, and had become familiar with the style of the writer, especially his strategies for presenting and organizing information.


In the first trial, to demonstrate the concept of the exercise, I read aloud for the students the first two paragraphs of a three-page section of the chapter, then had the students, in a full-group discussion, decide on a prediction.  After we had reached a consensus about the prediction, the students wrote it down in their notes.  The students then read the rest of the passage silently and wrote, individually, what they discovered.  The goal was for students to recognize whether the prediction was correct, and if not, what was wrong with it.


In the second trial, the students read aloud (as I called on two individuals) the first two paragraphs of a section of the text, then each wrote his or her own prediction, with no discussion.  After this, they proceeded as with the first trial, reading the rest of the passage silently and writing, individually, what they discovered about the accuracy of their predictions.


The third trial was like the second, except that the students did not read the first paragraphs aloud.  I gave them a few minutes to read the first two paragraphs of another section of the text and write their predictions.  After all had completed this, they were instructed to resume reading the rest of the section and write what they discovered about their predictions.


After each of these trials, I conducted a discussion of the results, having students read aloud their discoveries about the accuracy of their predictions, and observing how accurate the predictions were.  In Trial #1, which we had all done together, the prediction on which the class had reached consensus was inaccurate.  We noted how the error was made:  the introductory paragraphs of this section in the text did not serve to represent the main point of the rest of the passage.  I regarded this as a good beginning, since it gave the students the opportunity to see that the point was not to make perfect predictions—which cannot be done realistically since beginnings of texts do not always serve to give an overview of their main arguments—but to develop the skill and habit of checking one’s expectations as one reads.  This skill and habit are the goal of the lesson.


For the fourth trial, which was conducted the week after the previous trials, I distributed copies of a four-page article by Arundhati Roy, "The Algebra of Infinite Justice."  The students had never before read any of Roy's work, nor had they ever discussed the topic of this essay in class, so they had less of a context for understanding it.  I followed the procedure of Trial #3, giving the students about five minutes to read the first four paragraphs (out of twenty-four) on their own and write their predictions.  After all had completed this, they read the remainder of the article silently and wrote what they had discovered about their predictions.  Unlike the previous three trials, there was no discussion of the text at any point.  When the students had completed their work, they handed their notes to me.


Summary of Observations

The students' written notes, which consisted of their stated predictions and what they found in the texts that either validated or conflicted with their predictions, were examined to see whether or not the students understood the principle behind what they were doing.  Whether or not their initial predictions were accurate was regarded as irrelevant.  The students' notes were interpreted using the second Think Indicator, "Applying formulas, procedures, principles, or themes," of the Rubric for the Analytical Assessment of Critical Thinking across the Curriculum.  See the Four Tables for the texts of this analysis.  Students' written responses were evaluated as follows:


Level 1:  Student did not understand the concept of prediction or how to check a prediction for accuracy.

Level 2:  Student understood the concept of prediction but did not check it for accuracy.  (This error occurred when students wrote paragraph summaries of the texts instead of checking for the accuracy of their predictions.)

Level 3:  Student understood the concept of prediction and checked for accuracy, but with errors.

Level 4:  Student understood the concept of prediction and successfully checked for accuracy.


Note:  Level Three identifies the problem that this exercise addresses.  Although for this exercise Level Three indicates good performance, because at this level a student understands the concepts and the goal of the exercise, it also reflects the original problem:  interpreting a text without noticing that one's initial expectations of it are inaccurate.



Trial #1 (n=15):  one response at level 2;  40% at level 3;  53% at level 4

Trial #2 (n=17):  two responses at level 2;  29% at level 3;  59% at level 4

Trial #3 (n=16):  two responses at level 2;  63% at level 3;  25% at level 4

Trial #4 (n=22):  100% at level 4




The Rubric for the Analytical Assessment of Critical Thinking across the Curriculum yields a useful graphic layout of the students' performances.  This particular study did not reveal dramatic differences, but it was nonetheless useful for confirming the impression that I had formed during these trials: that the students understood the principles and were able to apply them.  The results also confirmed my belief that students do not always see the errors of their predictions as they read.  In Trial #3, 63% of the students were at level three, and in Trial #1, 40% were.


I do not regard the differences among the first three trials as significant.  The texts differ considerably: in this last chapter of Patterson's book, he approaches his ideas from several different angles, and each of the three sections that the students read was unique in its structure.  I would have to examine the passages carefully to interpret the results of these trials.  It is possible that the passage read for Trial #2, which produced such different results, was a very different sort of text, perhaps with introductory remarks more useful to a reader predicting what is to follow.  It is also possible that the other three passages contained more complex material, perhaps with more quantitative or historical references.


The results of the fourth trial were a surprise.  It makes sense that the students would do well after the intensive training and the time they had spent working with these principles at the previous class session, but the text they were given is significantly more difficult than the one used in the first three trials, and it addresses a topic that we had not discussed that semester.  Roy's essay is, in fact, a treatment of a topic—the Bush administration's handling of terrorism in Afghanistan and Iraq—about which students have preconceptions, opinions, and emotions that could lead them to misread the text, looking for messages they could readily anticipate.  Several of the students in the class are recent immigrants from the Middle East, and at least one has served in the U.S. military.  Despite these complexities of the text used for the fourth trial, the students all performed the exercise extremely well, making predictions and checking them for accuracy.  Many of the students prefaced their analyses with comments like, "My prediction was right," or "I was wrong," which clearly indicates that they knew what they were supposed to be doing.  This language did not appear in the first three trials, some of which were difficult to analyze because it was not clear whether students were misunderstanding the concepts or simply not expressing their understanding in their writing.  For instance, when students examined their predictions by writing outlines of the passages they had read, it was not clear whether they regarded their outlines as evidence of the accuracy or inaccuracy of their predictions.  This was clear only in cases where students included with the outlines comments like, "this is a different idea from what I predicted."  I suspect that the excellent results in the fourth trial may have been due to my own wording when I gave the instructions.
I was not following a pre-scripted text for the instructions, but spoke spontaneously.  It is possible that I said something like, "What you will be doing as you check for the accuracy of your predictions is checking to see whether you were right or wrong."  I know that I did not use that language on the day of the first three trials.  If I conduct this analysis with students in the future, I will use a pre-scripted text for the instructions, to avoid this problem.

