ISTE 2010 Reflection: Google Culture

My Sunday session was an all-day session with folks from CUE who are all Google Certified Teachers (a dream I have, if only I could work up the courage to make that video!).

You can read my session notes here if you are interested in what I took away in terms of the tools, but the bigger takeaway for me happened in the first 10 minutes of the day when Mark Wagner (@markwagner) talked to us about the Google Culture.  As a former English teacher, he made some great analogies about teaching and connected them to the Google philosophy.  It not only helped set the tone for the day, it set the tone for the whole conference for me.

The first thing that interested me was this: "Google believes no one should be more than 150 feet away from food."  I laughed at this, as in my office I think we have modified the rule to substitute chocolate for food.  Even our workshop participants are upset when we don't put chocolate out until the afternoon!

On a serious note, the 20% "rule" was compelling.  Googlers devote approximately 20% of their work time to a project that might fall outside their scope of duties but is a passion they have or something they want to fix in the current Google tools.  Many of these make it to Google Labs (which is something everyone should check out) and many more become the Google tools we know and love today.  (Read here for an example that I found doing a quick - you guessed it - Google search!)

I wondered how education might be different if we allowed our students to pursue the 20% within our classrooms.  What if, in my social studies class, I set aside one day for the students to investigate and create something related to the history we were studying?  Their choice of topic and of how they would share their 20%.  I know, I know - grading.  But I think that might be easily addressed with a well thought out and developed rubric - one thought out and developed with the kids so that they had real ownership over the learning and the process.

What if, in providing professional development to teachers, 20% of their day/year was spent following their own course?  If there are 8 hours in a work day, one hour (less than the 20%) wouldn't be about supervision or planning or the other things we do, but about pursuing what we wanted to learn.  I think the reason that collegial circles have been so successful in the district I worked in is that teachers chose what they wanted to work on and were given the time to do it.

What if, in designing professional development, my team were able to devote 20% of their time to pursuing what they wanted to learn?  I certainly think they have this time and we have the resources available for them to do it - but I think they might disagree.  And the reason that my team would disagree, and that students and teachers aren't doing what I suggest, is that it isn't part of our culture.

Digging deeper into the Google website, I found their "Ten Things We Know to Be True" list that exemplifies their philosophy.  We all have mission/vision statements in our districts, but what if we created similar lists?

Digging further, I found another list related to design (also called "ten principles that contribute to a Googley user experience") - what if we created our own list that would be relevant for designing lessons and professional development?

Thinking about these things, I took a different view of what was being offered at ISTE.  How many sessions demonstrated a different philosophy of education - one that encouraged design and creativity and play? (LOTS!)  How many teachers were willing to pursue their passions and learn more and learn differently? (LOTS!)  So I am wondering about the changes the rumored 16,000 folks in attendance will make.  If each person makes one small change as a result of their learning here - that's a whole lotta change!  And if each person were able to inspire change in just one more person who didn't attend the conference, that's a whole lotta change agents!

ISTE 10 Reflections: Leadership Bootcamp

Saturday was Day 1 of the ISTE conference for me, and I attended the Leadership Bootcamp session.  I have to admit that due to the altitude, I was a bit foggy for the whole thing.  As a result, I am not sure that I was able to engage the way that I would have liked to.  That being said, the sign of a "good" learning experience for me is when I can walk away with some questions.

1. How do we make technology use sustainable?
Before everyone jumps all over me for this - I am NOT talking about the tools and tool use for the tools' sake.  I am talking about teachers becoming true learners and using technology as it evolves.  All too often, I see teachers put on the brakes when it comes to using technology, even when it would make their lives easier and their kids more engaged.  Why?  How do we combat the resistance?  What kind of leadership does that take?

I am also interested in how we can use technology to sustain our learning.  In this case, there is a Ning available to continue the conversations, and the organizers plan some follow-up sessions much like the pre-pre-conference session.  Chris Lehmann advocated for holding EduCon-like sessions locally during the same week and connecting.  Having tried to get an unconference going in our region, I know that this is easier said than done.  So - what can I do to continue/sustain my learning?


2. When are we really going to start teaching kids to be critical consumers of media?
Time and time again this came up as we discussed kids using content.  And when asked who is supposed to teach this, lots of different answers came out.  Shouldn't we all be teaching this?  Not as a stand-alone subject or unit, but integrated into every lesson we teach and every medium we use.  Why isn't that happening?

3. How do we move past connecting and start creating Communities of Practice?
During his really great talk, Scott Elias talked about the "strength of weak ties" and the power that connecting with others via Twitter and other social networks brings to our profession.  But so far, these connections are still only in the conversation stage.  We haven't really gotten to showing a change or improvement in our practice as a result of these conversations.  Sure - I could do this individually, but I don't often share that with my network.  We don't house those ideas someplace and truly build off of them.  As teachers, we still very much have walls around our classrooms.  How do we start to break those walls down?  Once we start, how do we keep breaking them down?

4. Why are our policies getting in the way of our progress?
Dr. Scott McLeod shared this question at the start of his session and gave us some really great things to think about.  The biggest for me was thinking about Acceptable Use Policies (AUPs) and what message they send to our students, parents, and community.  It made me really think about the policies that our district has in place, and how people just feel comfortable stating "it is policy" in response to something.  How can we have thoughtful and meaningful conversations about policy that are productive and have learners in mind?  How can we start with an assumption of trust?


These are rough, and I have just started thinking about them, but I needed to get them out of my head and onto the blog before the other learning this week fills my brain.  If you would like to see the backchannel from the conference and you use Twitter, search #lbc10.  If you would like to see my unedited notes, look here.  If you would like to help me wrestle with these questions, please comment below!

In Defense of Rubrics

Learning doesn't happen just in schools.
Just because you're in school, doesn't mean you're learning.
A keystone of learning is self-reflection.
Learning is a fantastically messy business.

I think everyone who is engaged in discussions about teaching and learning would agree with the previous statements.  At the end of the day, if you're engaged in the discussion, you've already committed to changing the structure in whatever way works for you.  As rubrics come up again, I am thrilled to engage with other educators who are committed to change.

As I've shared on my blog, I'm all about addressing my ignorance.  I have no problem being corrected when I'm wrong, because the alternative just doesn't make any sense.  However, when it comes to rubrics, I'm standing fast.  Plus, I've got some pretty strong support on my side:
* Rubrics define the features of work that describe quality (Arter & Chappuis, 2006).
* Rubrics are used to evaluate the quality of students' work (Popham, 2006).
* A rubric identifies all the needed attributes of quality or development in a process (Martin-Kniep, 2001).

This is not a rubric.


|         | 1 pt.                  | 2 pt.                   | 3 pt.                   | 4 pt.                           |
|---------|------------------------|-------------------------|-------------------------|---------------------------------|
| Sources | Student uses 1 source. | Student uses 2 sources. | Student uses 3 sources. | Student uses 4 or more sources. |

This is not a rubric.


|          | 1 pt.                         | 2 pt.                       | 3 pt.                      | 4 pt.                    |
|----------|-------------------------------|-----------------------------|----------------------------|--------------------------|
| Mistakes | Writing has lots of mistakes. | Writing has a few mistakes. | Writing has some mistakes. | Writing has no mistakes. |

How do we know? Rubrics are about quality... and you can't count quality.

A movie doesn't win the Academy Award because it had 4 crying scenes and the others only had 3.
Your favorite book didn't make you feel something 17 times while your least favorite book only made you feel something 2 times.  Your partner didn't suddenly become perfect for you the 9th time they did something for you.

In life, we don't count quality.  I write that as if it's a truth, and it can be argued.  However, generally speaking, when we talk about quality, we talk about attributes and definitions.  We don't stop and count to determine what is good.  "Sorry, Barb, can't recommend this as a good restaurant.  The waiter took 17 minutes to refill my water.  I recommend the one next door.  They only took 12 minutes."  We know, though, the traits of a good restaurant, and if a friend of ours opened a new restaurant and asked for feedback, we wouldn't sit there and count.

In school, it is not physically possible to provide all students with feedback all of the time.  There will come a moment when a child has to self-assess their work.  Without an understanding of what quality looks like, they are likely to end up guessing.  Yes, we tell them.  Yes, we give them verbal feedback.  But we know students don't remember everything we tell them.  A well-developed, thoughtful rubric can help students understand the attributes of quality.  Consider this rubric that @russgoerend and I co-constructed.  The rubric describes the quality of the presentation - note the top level is all about "breaking the rules" and the bottom of the document contains a checklist.

I think "rubric-esque tables" are developed way too frequently because we think rubrics are better than checklists.But, if you want specific things in order for a student's work to be considered acceptable, tell them what you want in a checklist. Checklists aren't bad or less then. They each serve a purpose in communicating expectations. Personally, I'd like to take a red pen to a certain website and rename 90% of their "rubrics" as "scoring guides". Many oft-shared critiques of rubrics are actually discussing "scoring guides" - and identify major problems with that tool. It seems were in a cycle of critiquing a tool while we're still negotiating a common understanding. My hunch is if we polled 100 lawyers and asked them to define "contract", there would be near unanimous agreement. I wonder what results we'd get if we polled 100 educators about "rubrics"?

If you'd like to read more about what makes a quality rubric, I invite you to check out the rubric wiki.  I am always open to discussions of the challenges and flaws in rubrics and how we can create better rubrics with and for students, and I would like to extend an invitation to readers who have identified those flaws to ask themselves whether their concerns are with rubrics or with impostors.

In Defense of SED

My instinct when I see an article or blog posting that slams SED is to come to their (its?) defense. Granted, if an argument is sound, I'll agree but when it's not I feel compelled to say something. I think it stems from the fact that I used to work for a school improvement team that fell under the SED umbrella. As a result, when I hear "SED", I think of Dawn, Elizabeth, and other educators I've worked with or heard talk and present about the assessments and standards. I truly believe that the vast majority of educators who make up SED have the best interests of schools and children in mind when they go to work. The challenges they face are numerous - one of which is how the public responds to their actions.

Yesterday, complaints about scoring on the math assessment bubbled up in my RSS reader.  Why now?  Why this year?  We've been using a holistic rubric for years, but this year The Post published an article with the frustrating headline "NY passes students who get wrong answers on tests."  Let's dismantle that, shall we?  (Not hyperlinked by choice.)

1. You do not pass or fail the 3-8 Math NYS Assessments*.  It sounds counter-intuitive, but the assessments are designed as large-scale programmatic assessments, not culminating exams.  In other words, the assessments are a tool for schools and the state to determine if students are meeting certain performance indicators from our state standards.  Consequences for performance belong to the school and district, not the student.  Compare this to the Regents.  If a student scores a Level 2 on a 3-8 assessment, they're referred to intervention services and may or may not receive them based on a review of their classroom performance, etc.  If a student gets a Level 3, a magic passing fairy doesn't suddenly appear and grant them admission to the next grade level.  A student scores a 54 on a Regents?  They fail the exam.  Two different types of assessments.  (Analogy alert: it's like taking a cholesterol test to find out how good your blood pressure is.  Different questions and purposes, different assessment measures.)

2. The NYS 3-8 Math Assessments contain multiple-choice as well as short- and long-response questions.  If one of the multiple-choice questions is 2+2 (don't worry, they're not that simple) and you pick 5, you'll get it wrong.  So no, NYS is not giving credit for wrong answers.  If, on a long response, you demonstrate you can correctly do the math - i.e., set up the problem, determine which variables to use, show you understand the concept - but make a computational error, wouldn't you want a child to get partial credit?  A student is getting credit for what they do know and losing points for their mistakes.  If the argument is that the "Real World" doesn't work like that, no one is claiming the state assessments are the "Real World."  Nor are they claiming the tests measure creativity, sense of humor, or even whether a child is a "good student."  They are asking: on this day, at this time, can the students of this school demonstrate mastery of these performance indicators?

3. "This is rocket science." David Abrams, Director of Assessment for NYS. There is a field of study called psychometrics - the design and study of measures in the social sciences, including education. It's a fascinating field and sets the rules and standards for test design. Perhaps I am skewed by my personal connection to SED but I have to ask: Does it make sense that NYS would allow our students to take an assessment that didn't meet the requirements of quality assessments? Does it make sense that NYS would give "partial credit" if there wasn't a sound reason for it? I recommend reading the technical reports about the assessments to learn more.

Yes, there are issues with high stakes testing in the US. I support and follow FairTest. I think we've lost sight of what "data" means and are focusing on numbers to the detriment of multiple measures. I think we as a field have a lot of unanswered - and unasked - questions about standardized testing. All of that being said, this article frustrates me and does not add to the collective conversation around education. The headline was probably designed to get hits. And did it ever.

Here's the hard part for me: self-reflection.  Am I off-base?  This argument isn't about how the scores are used, or about the pressure teachers feel around the tests; rather, it's just this narrow band about the irresponsibility of publishing an article that doesn't even begin to address the complexity of a large-scale assessment system.  Grumble, grumble.


* Yes. In NYC, a student can fail the assessments. That is a local decision and not how the tests were designed to be used.

What is PD for anyway?

For the past seven years, I have had the distinct pleasure of working part-time in a district as their K-12 Curriculum Coordinator.  While the road has not always been smooth, what I have learned from the "bumps" has made me a better educator and a better leader.  With budget cuts, my formal work there will end this year, but I leave confident that when it comes to professional learning they are headed in the right direction.

I know this because last week was the last Superintendent's Conference Day of the year, and it was one in which the teachers shared their learning from participating in Collegial Circles all year long.  It took seven long years and many mistakes in PD for this to happen - but the teachers wanted it, cried for it, and we were able to make it happen.

For each conference day this year, thirteen groups of teachers met under the direction of a team-appointed facilitator, with a team-developed agenda, to work toward team goals.  The goals ranged from infusing technology in a variety of settings, to developing curriculum in areas like counseling and PE, to examining the middle-level transition.  These weren't administrative directives; they were teacher-selected and teacher-led.

Along the way, the teacher leaders on the committee that helped lay out the PD plan met, reviewed what was happening, tweaked the process, and sought additional input.  And they were honest with each other about what they thought was effective teaching, good pedagogy, and many other things.  Brutally honest at times.  And incredibly professional.

I know that it seems too good to be true - a local superintendent wanting to replicate the process had her doubts too.  But she visited us - so don't believe me, believe her.

It makes me sad that I haven't been able to help replicate the culture anywhere else - at least not yet.  But I know now that it is possible, and not just me tilting at windmills again.  And when I read posts like this (and of course the comments that follow) I see complaints and excuses and lamenting galore - but I don't see a lot of "so here is what we are going to do" happening.  I'll keep working toward changing the meaning of professional development - what will you do?

Teacher Evaluations and Value Added

As many may have heard by now, NYS has passed legislation designed to enhance its Race To The Top (RTTT) application as well as "support the Regents reform agenda to improve teaching and learning, increasing the opportunity of all students to graduate from high school ready for higher education and employment."  We had a fantastic discussion about it at a regional data meeting, and it raised many more questions than it answered.

Having watched the May 11th webcast announcing the legislative reforms around the APPR process, it is clear that incorporating student performance into teacher evaluations makes some sense (at least to me!).  By looking at growth on both the NYS assessments and locally determined assessments, we can begin to understand the impact teachers have by seeing how their students grow.  The key word seems to be growth (rather than achievement): a student who improves substantially over the year shows growth even if their final achievement level is still low.

Without a doubt - this raises some issues and questions and of course, some concerns.  While I wrestle with the impact this might have (positive and negative), the lawyer in me just can't stop thinking about the legal implications of these changes.  Or at least the challenges that could be presented.

Fortunately, I am not alone, and there are some well thought out arguments that have my gray matter really working.  Bruce Baker over at the Schoolfinance101 blog teases out his thoughts in "Pondering Legal Implications of Value-Added Teacher Evaluation."  Interesting here is whether the proposals to link evaluations to student performance will raise due process challenges, given the idea that "tenure" is in fact a property interest.  Many other possible legal arguments are addressed, but I find the tenure rights one very interesting (and very complicated!).

At Edjurist, Justin Bathon posted a response to the post above, touching again on the due process piece but adding another tidbit that has really been bugging me about the use of quantitative data, as well as the quantitative way in which New York is proposing to weight the components of the evaluation:

Generally, all this is what happens when you start forcing statistics in the legal system - which is not built for that at all. The legal system is a very qualitatively oriented system, making decisions mostly based on evidence obtained through interviews and the like. The jury, even, is a qualitative system that collectively makes a decision based on all the evidence presented. Statistics throw a wrench in all that because people react differently to numbers. They think numbers don't lie (although, of course, we know that they can and do).

Finally, Scott Bauries responds from the perspective of an employment lawyer, and I will warn you - there is plenty of legal speak here.  Bottom line on this post: yes, there will be lawsuits ("Simply put, if you fire people, and they think their firings were unfair, then you are going to be sued."), but it is going to be very difficult for many plaintiffs (i.e., the teachers) to be successful.

I share these to help continue the discourse around this topic.  Many are wondering: with data in the mix, will people still want to become teachers?  Will we lose teachers because we have set this new bar?  Or will we in fact begin to have others view teaching as a profession with the same accountability measures that other professions have?  Weigh in - I would love to hear your thoughts!