Monday, November 13, 2017

Why Earn a Doctoral Degree When NC Legislature Does Away with Doctoral Pay?

I've asked myself why I would earn a doctoral degree as an educator when our NC Legislature voted last summer to end pay for doctoral degrees. Who in their right mind would earn a degree that will perhaps have no immediate financial return? My answer? Me.

First of all, my reasoning is rather rebellious. Our North Carolina Legislature has made it clear through their work that they do not value education, from the K-12 level through college. They've been playing games and giving the appearance of funding education without really funding it properly. My earned doctoral degree is my way of shoving sheepskin in their face and a means of shouting, "You're wrong!" My education is mine. It is the one thing of great value that, in spite of their legislative ignorance, they can't take away. Education is important! It's life changing, it matters, and ultimately it is more powerful than they give it credit for. Why else would politicians make a concerted effort to keep as many people in ignorance as our current state political leaders have?

Secondly, I firmly believe there is still value in learning for learning's sake. Not everything we learn has to have an immediate monetary return. Sure, I like earning a salary. I like having the ability to purchase things I want, but learning has value for its own sake, and it has value in ways we can't foresee. Learning brings wisdom. It brings experience. It makes us better people. Ultimately, it creates people who can see what politicians are really doing. The bottom line for learning is that there is no bottom line.


Why did I earn a doctoral degree? Ultimately, I can hardly see how I could not have, when as principal I am communicating to the students in my school that learning is an excellent thing and we can never get enough of it. While our North Carolina Legislature places little value on learning, as an educator and lifelong learner, learning for me is like breathing; I'm going to do it until my time is no more.

Sunday, November 12, 2017

Building a Better Teacher Through VAMs? Not So Fast According to Mark Paige's Book

As a part of my research explorations, I stumbled across a relatively new book, published in 2016, about the problems with using value-added measures in teacher evaluations. The book, entitled Building a Better Teacher: Understanding Value-Added Models in the Law of Teacher Evaluation, is a short, concise read for any administrator who currently encounters value-added data in teacher evaluations.

Paige's argument is rather straightforward: value-added models have statistical flaws, are highly problematic, and should not be used to make high-stakes decisions about educators. Scholars across the board have made clear that there are problems with VAMs, enough that they should only be used in research and to draw cautious conclusions about teaching. Later, Paige also provides advice to opponents of using value-added models in teacher evaluations. Attempting to challenge the use of value-added models in teacher evaluations through the federal courts may be fruitless. According to Paige:
"At least at the federal level, courts will tolerate an unfair law, so long as it may be constitutional." (p. 24)
In other words, our courts will allow the use of VAMs in teacher evaluations, even if used unfairly. Instead, Paige encourages action on the legislative side. Educator opponents of VAMs should inform legislators of the many issues with the statistical measures and push for laws that restrict their use. In states with teacher unions, he encourages teachers to use the collective bargaining process to ensure that VAMs are not used unwisely.

Throughout Paige's short read, there are reviews of legal cases that have developed around the use of VAMs to determine teacher effectiveness and lots of information about the negative consequences of this practice.

Here are some key points from chapter 1 of Mark Paige's book Building a Better Teacher: Understanding Value-Added Models in the Law of Teacher Evaluation.

  • VAMs are statistical models that attempt to estimate a teacher's contribution to student achievement.
  • There are at least six different VAMs, each with relative strengths and weaknesses.
  • VAMs rely heavily on standardized tests to assess student achievement.
  • VAMs have been criticized on a number of grounds as offending various statistical principles that ensure accuracy. Scholars have noted that VAMs are biased and unstable, for example.
  • VAMs originated in the field of economics as a means to improve efficiency and productivity.
  • The American Statistical Association has cautioned against using VAMs in making causal conclusions between a teacher's instruction and a student's achievement as measured on standardized tests.
  • VAMS raise numerous nontechnical issues that are potentially problematic to the health of a school or learning climate. These include the narrowing of curriculum offerings and a negative impact on workforce morale.
Throughout his book, Paige offers numerous key points that should allow one to pause and interrogate the practice of using VAMs to determine teacher effectiveness.
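To make the instability complaint concrete, here is a minimal sketch in Python of the simplest kind of value-added estimate, a gain-score model. The simulated numbers, the class sizes, and the +2-point "true" teacher effect are my own illustrative assumptions, not anything taken from Paige's book. Even though the teacher's true effect never changes, the yearly estimates bounce around, which is exactly the unreliability scholars describe.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_classroom(true_effect, n_students=25):
    """One year of data: growth = true teacher effect + everything the model can't see."""
    prior = rng.normal(50, 10, n_students)   # last year's test scores
    noise = rng.normal(0, 8, n_students)     # unmeasured factors (home life, test day, etc.)
    current = prior + true_effect + noise    # this year's scores
    return prior, current

def value_added(prior, current):
    """Simplest gain-score VAM: average growth beyond prior achievement."""
    return float(np.mean(current - prior))

# The same teacher (true effect fixed at +2 points) measured across five years:
estimates = [value_added(*simulate_classroom(true_effect=2.0)) for _ in range(5)]
print([round(e, 1) for e in estimates])
```

Despite a constant true effect, the five estimates differ from one another, so a teacher rated highly one year can look mediocre the next with no change in practice.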


Using VAMs to Determine Teacher Effectiveness: Turning Schools into Test Result Production Factories

"But VAMs have fatal shortcomings. The chief complaint: they are statistically flawed. VAMs are unreliable, producing a wide range of ratings for the same teacher. VAMs do not provide any information about what instructional practices lead to particular results. This complicates efforts to improve teacher quality; many teachers and administrators are left wondering how and why their performance shifted so drastically, yet their teaching methods remained the same." Mark Paige, Building a Better Teacher: Understanding Value-Added Models in the Law of Teacher Evaluation
Mark Paige's book is a quick, simple view of the problems with using value-added models as a part of teacher evaluations. As he points out, the statistical flaws are a fatal shortcoming when these models are used to definitively settle questions about whether a teacher is effective. In his book, he points to two examples of teachers whose ratings fluctuated widely. When a teacher goes from "most effective" to "not effective" within a single year, especially when that teacher used the same methods with similar students, there should be a pause for questioning and interrogation.

Now, the VAM proponents would immediately diagnose the situation thus: "It is rather obvious that the teacher did not meet the needs of students where they are." What is wrong with the logic of this argument? On the surface, arguing that the teacher failed to "differentiate" makes sense. But if there exist "universal teaching methods and strategies" that foster student learning no matter the context, then what would explain the difference? The real danger is that using VAMs in the manner suggested by the logic of "differentiation" invalidates the idea that there are universal, research-based practices to which teachers can turn to improve student outcomes. What's worse, teaching becomes a game of pursuit every single year, where the teacher seeks out, not necessarily the best methods for producing learning of value, but becomes, in effect, a chaser of test results. Ultimately, the school becomes a place where teachers are simply production workers whose job is to produce acceptable test results, in this case, acceptable VAM results.

The American Statistical Association has made it clear: VAMs do not establish "causation." They measure correlation. To conclude that "what the teacher did" is the sole cause of test results is to ignore a whole world of other possibilities and factors that have a hand in causing those test results. Administrators should be open to the possibility that VAMs do not definitively determine a teacher's effectiveness.
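The correlation-versus-causation point can be shown with a toy simulation. This is a hedged sketch in Python; the two hypothetical teachers, the "family resources" variable, and every number in it are my own illustrative assumptions, not data from the ASA or from any real evaluation. Both simulated teachers have zero effect on learning, yet non-random student assignment manufactures a "value-added" gap between them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two teachers with IDENTICAL (zero) effect on learning.
# Students are not randomly assigned: Teacher A's school draws from
# better-resourced families, which independently boosts test scores.
n = 200
resources = {"A": rng.normal(1.0, 0.5, n),   # hypothetical out-of-school advantage
             "B": rng.normal(0.0, 0.5, n)}

# Score = baseline + resource effect + noise; no teacher term at all.
scores = {t: 50 + 5 * r + rng.normal(0, 5, n) for t, r in resources.items()}

gap = float(scores["A"].mean() - scores["B"].mean())
print(f"Teacher A's apparent 'value-added' over Teacher B: {gap:.1f} points")
```

The gap the model reports is a real correlation, but it was caused entirely by resources outside the classroom, not by teaching, which is precisely the confound the ASA warns about.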

If we continue down the path of using test score results to determine the validity and effectiveness of every practice, every policy, and everything we do in our buildings, we will turn our schools into factories whose sole purpose is to produce test scores. I certainly hope we are prepared to accept, along with that, the lifetime consequences of such decisions.


NOTE: This post is part of a continuing series of posts about the practice of using value-added measures to determine teacher effectiveness, based on my recently completed dissertation research. I make no effort to hide the fact that I think using VAMs to determine the effectiveness of schools, teachers, and educators is poor, misinformed practice. There is enough research out there to indicate that VAMs are flawed, and that their application in evaluation systems has serious consequences.

Friday, November 10, 2017

What Happens When Schools and School Districts Use VAMs to Make Decisions about Teachers?

Many school administrators are using value-added measures in making decisions about teachers as if these statistical measures represent the latest settled and unquestionable science. Those who do this are making a grave error. Despite companies such as SAS peddling their EVAAS data systems as the salvation of public education, the science behind VAMs is not settled, and there is enough doubt about them that the American Statistical Association issued a strong statement in 2014 against their use in decision-making about teachers. In that statement, the ASA reminds educators that:
VAMs typically measure correlation, not causation: Effects – positive or negative – attributed to a teacher may actually be caused by other factors that are not captured in the model. (ASA Statement on VAMs)
Yet administrators still use VAMs to infer that the teacher caused those scores. SAS, which owns the EVAAS model that North Carolina pays millions of dollars for each year, arrogantly claims that it accounts for all the factors that cause student performance on tests, even when psychometric experts caution that this isn't possible.

In addition, administrators who use VAMs to make decisions about teachers should know better than to confuse correlation with causation, yet any time they base decisions about a teacher's status on VAMs, they are automatically assuming that teachers cause test results. If teachers operated in a lab where they controlled all the conditions of learning and their subjects, then one could perhaps better make this inference.

But there are other concerns about VAMs too. In a recent study, Shen, Simon, and Kelcey (2016) found that "using value-added teacher evaluations to inform high-stakes decision-making may not make for a good teacher." Using VAMs to decide the status of a teacher may not have the long-term impact administrators desire. These researchers also recommend that VAMs not be used "to inform disincentive high stakes decisions," which are any decisions regarding the professional status of teachers.

Ultimately, though, I can't help but wonder if those who are sold on using VAMs in administrative decision-making aren't caught up in chasing short-term gains in a measurement that lacks any meaningfulness in the long-term. VAMs aren't settled science. Yet, administrators use that data as if it were. Any decisions made using this data should be balanced with other data.

Shen, Z., Simon, C., & Kelcey, B. (2016). The potential consequence of using value-added models to evaluate teachers. eJournal of Education Policy, Fall 2016.


NOTE: My just-completed dissertation was on the practice of using value-added measures to determine teacher effectiveness. My plan is to share, over the next several weeks and months, my own insights and personal thoughts on this practice. This is the first of many posts I plan to share on this topic.

Friday, October 27, 2017

In Education, What's Wrong with the "It's-the-Best-We've-Got" Rationale?

Over the years, as waves of new reform efforts, federal policy initiatives, and the latest educational fads have ebbed and flowed, all of them have been met by critics who questioned their efficacy and their logic. I've been one of those critics myself. What has always fascinated me is the defense of these reform measures. Take value-added measures, for example.

When the statistical wizardry of value-added measures emerged, I distinctly remember their being justified as "the best measurement we've got" when their efficacy was questioned. Does anyone else see the error in that justification? Being the "best we've got" doesn't necessarily make it the most effective or best means to measure learning and teaching. Rubbing two sticks together to make fire was the "best we had" until someone figured out that flint rocks work better. The "best we've got" rationale doesn't necessarily equate with being effective or even right.

The next time someone uses the "best-we've-got" rationale to justify an educational practice of any kind, we should immediately call them out.

Friday, October 13, 2017

Where Have I Been and Why I've Not Been Blogging

I realize my blog posts have dropped off precipitously lately, but there is a simple explanation: I've been working on my doctoral dissertation. The simple truth is that all the writing energy I could muster has been directed toward the creation of that document. Now I am drawing closer to finishing it. My hope is to defend in November and then graduate in December. It has been one of the most difficult things I have ever done.

I won't bore you with the details of my dissertation; after all, I am not entirely sure anyone would wish to read it anyway. With the end drawing near, I certainly hope to take up this blog again once the process is done. If anything, I might just have more to contribute to the conversation about public education now than I ever have.

Sunday, August 27, 2017

Having Trouble Organizing Your Google Drive? Take Charge with These Tips

Managing documents in your Google Drive account can be problematic, especially if your district and your staff use Google Docs, Google Slides, or Google Sheets a great deal. What do you do so that you can access the documents you need most? How can you organize the documents that are continuously shared with you? I think I have found a system that works, at least for me.

Since the premise of Google Drive is more collaborative than individual, you have to work with the product with this in mind and carefully organize it to meet your individual needs. Speaking with others, the most common approach is to create a series of folders to organize your documents, but if you use too many folders, the problem of remembering which folder you put that inventory document or that course syllabus in becomes enormous. Keeping folders to a minimum is a must. So I designed my system with five folders that capture every single document I need ready access to. Here are those five folders and what I place in them.

Working Docs: Here I place any ongoing Google Doc that relates to a current project I am undertaking. It might be a presentation I am developing for staff development in the near future, a letter to parents I am writing, or a new schedule I am developing for my school in the coming weeks. The general rule here? These are documents under construction that I will be getting back to in the short term.

To-Do Items: This folder is for documents that require action in the future. It might be a request for information or a form that needs to be completed. If someone shares a Google Doc with me requiring future action, I make a copy of it and place it here. These all correspond to "To-Do" items on my Google Keep to-do list. (I'll do a blog post shortly that shows how I use this Google app.) If I start working on a doc in this folder and don't finish, I move it to my Working Docs folder.

Templates: This is one of my favorite folders. I have begun to make templates for the docs I find myself recreating often, such as my Staff Memo or my Parent Newsletter. I simply make a copy of the template in my Working Docs folder when I am working on these items. Over time, I will develop a complete library of personalized templates for use on any occasion.

Current School Documents: In this folder I place documents that are mostly complete but that I find myself referring to quite often. For example, I keep my school master schedule, daily class schedule, bell schedule, and many other documents that I will most likely refer to at least once a day or week. When the central office asks for a copy of these documents, I don't have to search for them.

Archive: This is the folder for everything else. All documents end up here when I am no longer working on them or they are no longer needed. Because this folder is fully searchable, as is all of Google Drive, I can locate items here through the search function. The key here: make sure your documents have unique names.

In addition to these folders, I have also set up a Team Drive for my school. There, my staff and I can place our most-used documents and those documents under construction that are fully collaborative.

So far, I've found few problems with this system. If someone shares a Google Doc with me, I immediately make a copy of it and file it in one of these folders.

Google Drive Folders and Team Drives