Defining Classroom Research: An Introduction to Applied Research in Education (Part 0 of III)
This past semester, I tried something unusual: I let my students in a face-to-face course develop their own attendance policy. I sincerely hoped that doing so would help accomplish two goals for the course:
1) Students would feel more ownership over the class, and
2) Students would be more interested and more willing to participate in class activities.
There were a few limitations, of course. According to university mandate, I explained, we had to meet face-to-face for at least 51% of the scheduled classes. The remaining 49% could take place online or at some other location. The students discussed it, and they decided that they didn’t want to miss out on any learning opportunities by skipping face-to-face meetings, so they decided to meet every scheduled day of the semester. Tuesday meetings would be formal work days, and Thursday meetings would be for going deeper into discussion or for individualized, one-on-one attention. The students reported feeling very satisfied about their decision, and about their role in determining what was, to them, a significant part of the course.
Now, how do you suppose this attendance policy strategy went over? In order to answer this question, you would have to decide what qualifies as success or failure, and there is no definite right or wrong answer. Your answer will probably be different from mine, and our answers will be a little different from the answers given by each of our schools. The answer given by students might be different still. This means that the success or failure of my new attendance policy design strategy depends on who I am and what my goals are. It also depends on my university’s goals and the goals of my students.
Figure 1.1. Factors to Consider for Evaluating the Success of a New Attendance Policy Design Strategy.
In order to assess the success of my attendance policy design strategy, I would have to take all three items into consideration. This means that I would need to have some sort of outcome data to look at for each goal. Here is what that breakdown looked like for me:
1. My goal: When we met in person, student engagement would be high.
2. Student goals: (I asked, and they told me that) class time would be spent asking questions of interest to them, and working through problems they didn’t understand. (In other words, they would be in control of how class time was spent.)
3. University goals: According to university policy, 45 hours of course work must be conducted for the course, and at least 23 of these hours must occur inside the classroom.
I don’t need to take you through how I assessed each of these goals, at least not yet. All I am trying to show you at this point is that you and I, as teachers, conduct interventions like this all the time. We learn about a new teaching or listening strategy on a podcast or in dialogue with a friend, and we wonder whether it would work in our classrooms. After trying it out, we decide whether to keep it for future classes.
This process, which probably happens three dozen times each year in your own classroom, represents the most basic form of classroom research, which is sometimes called “action research.” In it, you try something new, and, in the end, you decide whether to keep doing it or discard it for something else.
Of course, classroom research can get very structured and highly rigorous. (“Rigorous” here means strict and exacting: you carefully follow a method with a high degree of control.) At the far end of rigor and control would be a randomized controlled trial in which a team of teachers works together to assess a new teaching strategy. But don’t worry about that unless you plan on defending a dissertation at one of the top universities in the world. What I am asking of you throughout the course of this manual is to work at one simple goal: to improve your experience teaching and your students’ experiences learning.
Applied Research Defined
In this book, I define “applied research in education” as any systematic and a priori effort to use classroom data[1] to inform classroom practice. As such, I do not distinguish between applied research and action research, since both target real problems and solutions in classroom contexts. There are differences between applied and action research, but, in my experience, these differences are inconsequential for the majority of teachers and administrators. I will describe this in more detail in the section labeled “Differences Between Applied and Action Research.”
I’m sorry about the technical words, especially the one that has been preserved in its original Latin form. Throughout this manual I have shied away from technical language, but I am afraid it is unavoidable with the definition I have given. Let me break these technical terms down for you:
Systematic
This means that you are following clear steps for your teaching intervention. It also means that you follow clear steps for collecting and assessing data.
An example of the opposite of systematic might be as follows: “I am going to try something new this semester to help with attendance. I’m not sure what it will be or what it will look like, but I’ll figure it out as I go along.”
As you can imagine, the un-systematic approach lacks structure, direction, and focus. Even if the class ends up being successful, the teacher will have no idea what they did or what they changed that contributed to the success.
In order to be systematic, you will want to have a clear understanding of the following:
1. Your goal or goals.
2. The intervention strategy you will be using.
3. What you expect the intervention strategy will do.
4. How you will know if the intervention was a success or failure.
Taken together, these four items mean that being “systematic” is another way of saying you are thoroughly prepared. A filled-in example of such a plan follows below.
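To make the checklist concrete, here is a minimal sketch of a research plan written out as a small Python data structure. The field names are my own hypothetical labels, and the entries come from the attendance-policy example above; a notebook page or an index card works just as well.

```python
# A hypothetical, filled-in version of the four-item checklist above,
# using the attendance-policy example. Field names are invented labels.
plan = {
    "goals": [
        "High student engagement when we meet in person",
        "Class time controlled by students' questions and problems",
        "At least 23 of 45 course hours held inside the classroom",
    ],
    "intervention": "Students design their own attendance policy",
    "expected_effect": "More ownership and more willingness to participate",
    "success_criteria": [
        "Observed engagement during in-person meetings",
        "Student reports on how class time was spent",
        "Attendance records submitted to the university database",
    ],
}

# Print the plan so nothing stays implicit
for field, value in plan.items():
    print(f"{field}: {value}")
```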
A Priori
This funny-looking Latin phrase is pronounced “ah pry-OHR-ee.” It means that, before beginning your intervention strategy, you have some reason to be confident it will work. In traditional research, this typically means that evidence has already been collected by somebody else (perhaps a scientist in a laboratory) whose findings suggest your strategy is likely to succeed.
Consider my example strategy above, the one about letting my students determine their own attendance policy: what sort of evidence did I have that it would work? Well, I knew that students take more ownership of their learning and find more fulfillment when they are allowed to participate in course design. This comes from the huge area of applied psychology called self-determination theory (Deci & Ryan, 2017). That, all by itself, provides a priori support for my intervention. But I have been thinking about this for many years, and, as a good researcher, I am always looking for reasons why I might be wrong. In the process, I have discovered a number of teachers who have done something similar, with good results (Knowles, 2012; Rogers, 1961, 1969, 1984; Spence, 2022).
You don’t need to write a twenty-page review essay demonstrating why your intervention will work. But you will want some defensible reason(s) to believe that it will, something more than pure belief. That being said, I tend to err on the side of supporting teachers’ intuitions about their students and the optimal conditions for learning. I would rather fan a teacher’s flame of inspiration than throw a wet blanket on it. So, if you have a deep conviction that a certain strategy will work with your students, but you are unable to find evidence that it does, then I say go for it! Implement your strategy without a priori support. When you do so, cite me as your reference.
Classroom Data
This refers to any information that derives from your classroom and any classroom-related environments. More specifically, the data you will be most interested in will be the information that helps you decide whether your intervention strategy is helping or hurting your goals and your students’ goals.
There is really no limit to the kind or variety of data that you can collect. You might count student absences, the number of passing grades on a vocabulary quiz, how many classroom disruptions there were, or the number of hands that go up when you have asked a question. Or maybe you are interested in something more personal, such as how your students feel, or how beneficial a given class period seems to them. You could listen for the amount of noise or quiet in a classroom; you could watch student activity levels, such as fidgeting. I can’t imagine how you might taste learning, but I’m sure there’s data there to be harvested somewhere and somehow.
There are two main categories of data that you might wish to use: quantitative and qualitative.
Quantitative data includes anything that has been transformed into symbols, typically numbers. Numbers are easy to work with because they pack a lot of information into very little space. A teacher might report, for example, counting “87 interruptions” during a week of class before beginning a classroom management intervention. Missing from this simple figure are the many forms and varieties of interruptions that occurred, the contexts in which they occurred, and how they were handled. Recording all of those details would take pages and pages of notes, whereas “87 interruptions during Week 1” takes up less than one line. Then, after introducing a new style of student participation, the same teacher counts “71 interruptions” during a week of class. The pair of numbers provides a simple, objective indication that a change has taken place. A principal with little familiarity with what happens in that classroom can look at the numbers and see the change, and see that it was probably for the better.
Another advantage of quantitative data collection is that it opens the door to a whole warehouse of statistical analytic procedures. At the very least, you can calculate averages and ranges; beyond that, standard deviations, correlation coefficients, and tests of statistical significance. I have used many of these tests in my applied research, and I always feel a bit safer when sharing results from statistical analyses I have conducted. They are like little methodological shields that offer protection from criticism.
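If you record your counts day by day, a few lines of code can handle the basic arithmetic. Below is a minimal Python sketch using the standard library’s statistics module; the daily counts are invented for illustration, chosen only so that each week totals the 87 and 71 from the example above.

```python
from statistics import mean, stdev

# Hypothetical daily interruption counts (invented so the weekly
# totals match the 87 and 71 used in the example above)
week_before = [20, 15, 18, 16, 18]  # totals 87, before the intervention
week_after = [16, 13, 15, 12, 15]   # totals 71, after the intervention

for label, counts in [("Before", week_before), ("After", week_after)]:
    print(f"{label}: total={sum(counts)}, mean={mean(counts):.1f}, "
          f"range={max(counts) - min(counts)}, stdev={stdev(counts):.1f}")

# A simple descriptive measure of change (not a significance test)
change = (sum(week_after) - sum(week_before)) / sum(week_before)
print(f"Change in total interruptions: {change:.0%}")
```

A formal significance test would require more data and more care; for most classroom purposes, descriptive numbers like these are plenty.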
But numbers aren’t perfect, because they require transforming human behavior into symbolic representation, and important information is often lost in the transformation. For example, being interrupted by a student who is picking a fight with a classmate is very different from being interrupted by a student who is asking a relevant question, yet both would likely be coded and counted as a single interruption. As a teacher, the type of interruption probably matters to you. So, unless you plan on going through each interruption and sorting them into categories such as “good interruption,” “bad interruption,” “funny interruption,” and so on, you will be ignoring important information.
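A lightweight compromise is to tag each interruption with a category as you log it, so the totals keep at least some of that context. Here is a small sketch; the category labels and the log itself are hypothetical.

```python
from collections import Counter

# A hypothetical log: one category label per interruption,
# jotted down as the class unfolds
interruption_log = [
    "relevant question", "off-task chatter", "relevant question",
    "picking a fight", "relevant question", "off-task chatter", "humor",
]

# Tally the log by category, most frequent first
for category, count in Counter(interruption_log).most_common():
    print(f"{category}: {count}")
```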
Qualitative data is rich with human feeling, meaning, perspective, and context. With qualitative data, there is no translation of human experience or behavior into numbers, as there is with quantitative data.
Even without any sort of training on how to evaluate the quality of a day of teaching, I bet you have a pretty good idea of what qualifies as good days or bad days. When I ask my wife, a physician assistant, about her day, she will always begin by telling me whether it was good or bad. “It was good!” she might say, and then she’ll go on to tell me the six or seven things that made it good. Good days, I have learned, tend to have a patient who has expressed some gratitude about their treatment at the clinic. Bad days, by comparison, usually have more than a few patient complaints or the delivery of a tragic diagnosis.
This doesn’t mean that “Tragic Diagnosis = 1 Bad Day.” Her experience isn’t easily translatable into numbers. In order to understand it, I have to understand the goals my wife has for her work and for her patients. I have to understand what she values, and what qualifies for her as an achievement.
You, too, will have your own set of standards for what qualifies as a good or bad day, and what qualifies as good and bad teaching. This often isn’t easy to describe right away. It’s complicated. You might teach for a week and have more interruptions than ever, yet, because of the relationships you have developed with your students, and because of the quality, attitude, and form of the interruptions, it might have been the best week of class in your life. In order to understand this, you would have to spend some time talking about the different types of interruptions your students exhibit, teacher-student relationships and how they change, student-student relationships, emotional intelligence, and so on. All of this can get sorted out in a qualitative analysis.
The downside of a qualitative analysis is that the principal might look puzzled when you share the results of your assessment. “How do you know it worked?” they will ask, and you will wonder whether they could possibly trust your qualitative analysis. In my experience, however, my colleagues and administrators (who all hold PhDs in their fields) have taken my qualitative data seriously when I have gathered and analyzed it carefully.
Classroom Practice
Classroom practice is the key to it all. Unless your research includes or directly informs classroom practice, it isn’t applied research. This means that your applied research must begin and end in your classroom.
The classroom, of course, extends to any related environment. If you are preparing your lesson plan at home, but the lesson plan impacts what happens day to day in your classroom, then the at-home-planning stage could be the subject of an applied research study.
In the end, applied research should only be conducted with the purpose of improving your experience and the quality of your teaching, or your students’ experiences and the quality of their learning. There is no end to the list of factors that might be important. But, if you are having trouble thinking of one, I have provided a list in Table 1.1.
Chapter 1 differentiates applied research from basic research in more detail.
Table 1.1. Possible Factors to Examine During Applied Educational Research
1. Classroom management
2. Listening to student problems
3. Mediating arguments between students
4. Student unpreparedness (intellectual)
5. Student unpreparedness (emotional)
6. Student unpreparedness (material)
7. Lesson plan development
8. Motivating students to achieve
9. Talking with parents of problem children
10. Talking with parents of excellent students
11. Improving test scores
12. Improving reading/writing literacy
13. Getting control of the classroom
14. Transitioning between activities
15. Boosting student creativity
16. Boosting student emotional intelligence
17. Improving digital literacy
18. Increasing student psychological well-being
19. Supporting student autonomy
Conclusion: How My New Attendance Policy Went Over
In the end, I was satisfied with my new attendance policy. Because my students had chosen when and how to participate, they were more likely to take charge in identifying their learning goals and working toward them. At the end of the semester, students reported that, owing to the freedom to design their attendance policy, among other features of the course, they felt taken seriously as capable and competent adults, and that this carried over into their other classes, where they felt more confident.
The university was happy, too, because the students’ policy required that I take regular attendance, which I reported in the online database (a practice I had typically avoided).
The only problem, in retrospect, was my own reflection on how class time was spent. Selfishly, I had hoped that class time would be spent doing what I personally thought was most valuable: practicing the academic skills I had already developed to a high level, namely reading, writing, and methodological analysis. But my students weren’t ready for that; their experiences in college hadn’t prepared them for it. I was reminded that my students had seldom been given the opportunity to grow their skills in writing, reading, speaking, and so on, and had seldom been given the freedom to develop those skills to their own level of satisfaction. Instead of learning, for example, how to express themselves through writing, they had learned how to write 20-page essays and summarize peer-reviewed journal articles in which they had little interest. That practice leads not to creative development or the achievement of potential, but to anxiety and fear. I felt their pain acutely, and I grieved with them. It meant that I often left class feeling sad and bummed out.
Would I do it again? Absolutely. But, next time, I will begin with a better understanding of where my students are coming from: that they bring some level of trauma with respect to academic skills. Many students want to write better, for example, but the practice of writing has become so buried in teacher expectations that they shiver in fear and become unable to say anything at all.
This, of course, is not your average example of applied education research. The way I conduct it is highly personal—to me and to my students. I have found this to be the best way to conduct it for my well-being and satisfaction, and that of my students. In this book, I hope you will develop your own style of applied education research. It might resemble mine, but don’t make that your goal. Perhaps you will prefer spreadsheets and statistical analyses. My only recommendation is that you choose the format that makes the most sense to you.
[1] I say “classroom,” but I have in mind any context that represents your role as teacher. This might be the cafeteria, playground, athletic field, distance learning platform, and so on.