Thank you for visiting the ISU Ed. Leadershop. Our intent over the past few years has been to field-test community-engaged writings for PK-20 practitioner conversation: quick, five-minute reads that help put into perspective the challenges and opportunities in our profession. Some of the writings have remained here solely; others have been developed further for other outlets. Our space has been a delightful "sketch board" for some very creative minds in leadership, indeed.

We believe that by kicking around an idea or two and not getting too worked up over it, the thinking and writing involved have even greater potential to make a difference on behalf of those we serve. As such, please give us a read and share with others. We encourage your thoughts, opinions, feelings, and reactions to our work, and we thank you for your time. You keep us relevant.


Wednesday, September 12, 2012

Sharper Measurements for Smarter Schools


By Dr. Ryan Donlan
Department of Educational Leadership
Bayh College of Education
Indiana State University

Some of our most capable educational innovators are worried about their jobs. Others are seeking different careers in which to exercise their ingenuity. A few are quite content, of course, yet I wonder for how long.

As K-12 folks go, I’m relatively supportive of recent changes coming by way of legislatures and state departments across the country (“on balance” … not necessarily “carte blanche”).

What concerns me, however, is that many of our most creative have found their efforts devalued by narrow and restrictive methods of educational assessment. This is particularly true in the overuse of student high-stakes test scores in judging the programmatic quality of our schools. 

Consider what seems in vogue to almost everyone but our folks in the trenches:

Good Test Scores = Good Programs.
Bad Test Scores = Bad Programs. 
Bad Programs = Incompetent Adults Who Need to Be Fired.

At least, this is the way things seem to be trending.

This paradigm of program evaluation creates a systemic disincentive for educators to serve our neediest students, as some of my graduate students shared with a state official recently. It certainly inhibits enthusiasm for risk-taking.

As a profession, have we ceded program evaluation (and thus our futures) to those who are limiting not only the ends by which quality is determined (i.e., test scores) but also the means (i.e., prescribing how principals spend their time)?

I believe that we are remiss in not reframing what proper measurement should be.  Because of this, the public and their elected officials are not seeing the true quality that is present in our schools.

Instead, we’re spending too much energy defending the status quo within the context of a limited, test-score discussion, and in doing so, we’re relying on too many terms that common folks do not understand (“formative” this, “summative” that … and my goodness … the ACRONYMS). If Andy Rooney were still with us, he probably would have something to say.

We need to have a clear and frank discussion on how we are leveraging learning, given the needs of current students and families.  We need to be more authentic in how we conduct business.

Let’s start by discussing better targets for measuring schools.  Any principal starting a new program might want to take note, as what you do at the outset will follow you through to evaluation and quality determination.

Two critical components of any quality program evaluation model are actually quite simple: (1) STARTING SHARP and (2) MEASURING SMARTER. School leaders may find that, in paying attention to these, student achievement takes better care of itself and a clearer message can be communicated to those watching closely.

STARTING SHARP

Before schools decide what is quality and what is not, they must sharpen their pencils and draft an overall program evaluation plan.   

Yes, a plan.  It must be written and shared.

A program evaluation plan ensures that a school’s evaluation strategies align with its stages of program implementation. Reforms in their early stages need certain approaches to measurement; reforms later on need others. A variety of options exist, but careful selection among them is the only way to avoid measuring something inaccurately (Chen, 2005). One wouldn’t use a ruler to weigh a textbook; the same principle applies: get the right tool.

How often do we see new programs evaluated with student test scores, knowing full and darned well that increased achievement will take time? Other indicators need measurement before that degree of academic learning kicks in. Yet test scores are often the only commodity shared at board meetings or with the media.

The main issue not currently discussed in school innovation is the unavoidable fact that when reforms occur, we must, for a time, cast away the old, wrong thing done well and replace it with a new, right thing done not-so-well (Black & Gregersen, 2003). This point is often missed. It is unflattering and inconvenient. We’d rather measure test scores from our captive audience, even if it is no more useful than weighing a book with a ruler.

MEASURING SMARTER

Smarter means simpler!  Only the most direct influences on school quality should be measured in order to make decisions.

Academic achievement is one. It cannot be ignored. Yet in targeting achievement, schools must ensure that they are measuring more than test scores. The whole notion of Academic Growth is a nice start, if it means what it is represented to mean: holding rich and poor schools, as well as high-achieving and low-achieving schools, equitably accountable.

Other critical, yet often-overlooked influences include:

1. School Culture: School culture gives permission for certain things to be valued and others not. It is a pattern of shared basic assumptions about how we perceive, think, feel, and act [in our classrooms, lounges, and hallways] (Schein, 2004). Cultures range from the toxic to the collaborative. Looking honestly at school culture is like holding up a mirror. It is a powerful indicator of an organization’s potential, as it will remain steadfast even if leadership changes (Gruenert, 2012), calling into question the current “oust-the-captain” mindset in school turnaround circles. We need to measure it and be evaluated upon it.

2. Self-Efficacy: Self-efficacy in schools is our ability to be resourceful, persistent, and open in teaching and learning. It can be measured with precision, even in such specific contexts as mathematics or athletics. Self-efficacy determines the degree to which people believe that they can make a positive difference in their circumstances through hard work and effort. Self-efficacy relates to academic success. We need to measure it and be evaluated upon it.

3. Process Indicators: Process indicators, or what I call “look-fors,” include details of what adults and children actually do to facilitate or enhance the process of teaching, learning, and school improvement. Observable communication, pedagogical proficiency, faculty collaboration, metacognitive strategies, curricular alignment, and social capital acquisition are among them. Process indicators speak loudly; they give leaders insight into the skill sets, habits, and needs of their staffs and students. This diagnostic ability, and the resultant leadership interventions, are in need of measurement and evaluation.

4. Local Relevance: Innovation creates something special about each program … something defined and measured that makes a school a unique place, reflective of and relevant to its community. Whatever this is, we should respect it, value it, measure it, and evaluate it.

Designing better measures for program quality in education is not overly complicated.  It simply involves starting sharp and measuring smarter.

It would be nice to get back on track.


References

Black, J., & Gregersen, H. (2003). Leading strategic change: Breaking through the brain barrier. Upper Saddle River, NJ: Pearson Education.

Chen, H. (2005). Practical program evaluation: Assessing and improving planning, implementation, and effectiveness. Thousand Oaks, CA: Sage Publications, Inc.

Gruenert, S. (2012, September). Organizational climate and culture: They are not the same thing. Presentation for Indiana State University Principal Interns in the Bayh College of Education, Terre Haute, Indiana.

Schein, E. (2004). Organizational culture and leadership. San Francisco, CA: Jossey-Bass.

_________________________________________________________________________________

Dr. Ryan Donlan would like you to add to this conversation or share new and innovative ways that you are empowering yourselves to determine quality program evaluation for your school or program by commenting on this blog, or by contacting him at (812) 237-8624 or ryan.donlan@indstate.edu.

1 comment:

  1. how about starting smarter and measuring sharper?


    and, do we want to hold an organization accountable for the efficacy levels of the faculty?
