Friday, December 16, 2011

Dec 16 Record Keeping, Standards and New Ideas!!

December 16, 2011
Record Keeping
I have been displeased with my folder method of tracking grades for some time but was having difficulty creating an alternative.  I had originally decided to create a divided version of the open grid they have now: formative homework and practice assignments recorded in one column, artifacts (aka pop quizzes or assessment-like scenarios) in another, and summatives (aka big tests) in a third.  After DeSoto visited, I created a second version, modeled on theirs.  This one has the standard written out at the top instead of the abbreviation used on the divided version.  It also features a graph where students plot their scores on the grid for a more visual representation of a student’s trend of learning.  Only artifacts and summatives would be recorded on this grid.

I like this latter version because of the more visual representation of the trend of learning, the ease of seeing the standard above the grades, and the use of only artifacts and summatives.  I feel like having the students record all of their formative assignments is data overload.  Now, I love data as much as the next analytical chemist, but this is just too much for the kiddos, to record EVERYTHING they do.  The artifacts are a good representation of how a student would do on a test, whereas a formative assignment shows what they know with their notes and friends in a comfortable situation.

But what I want doesn’t always coincide with what the students think or will do, so I pitched it to my students, both my honors chemmies and the students who stayed after school with me yesterday.  Most of them liked the graphing one.  They liked the idea of it being more visual, and they also liked the idea of only recording artifacts and summatives.  I also told them about the power law that Pinnacle uses that I was trying to recreate, which I will discuss in the next section, and they really liked that as well.  I have not decided yet if I will try to implement all these improvements in chemistry.
Overall they are a really awesome group that is really flexible and tolerant of all my changes and crazy ideas, but I hate to just throw everything we’ve done this semester out the window... even though it is not serving our purposes... and the definition of insanity is doing the same thing while expecting different results... So I guess I should introduce some of these changes :)

Grades
So... since I made this new tracking sheet with the graph, I got to thinking about the power law again.  DeSoto talked about it and I felt like a total n00b because I had no idea what it was.  The way they described it was looking at the trend of assessment scores instead of averaging things together.  This concept works so much better and provides a much truer reflection of student achievement than traditional averaging.  It has long been a complaint about grading that student scores are always lower at the beginning because the student has not had time to master the material; the brain has not had time to integrate the new information.  Thus the student has a poor score, let’s say a 2, on an early assessment.  Later on the student masters the material and really steps it up to achieve a 4 on a much later assessment.  By traditional grading, those two scores are averaged, resulting in a 3.  Even though the student mastered the material, she is still haunted by that first attempt.  I had attempted to rectify that through reassessment opportunities, but the power law allows for even greater efficacy in correcting that error.  Prior to now, I had no idea how to do the “power law” in Excel, but I think I’ve figured it out.  There is a TREND function in Excel that, according to my reading, does a rough approximation of this power law.  I didn’t think it would be this hard to find a mathematical explanation of the bloody thing.  This means that I can in fact count artifacts as part of the term grade without averaging them together and reestablishing the old paradigm of punishing first attempts.  Supposedly, this power law weights later assessments more heavily than earlier ones, meaning that the earlier pop quizzes will have less impact than the bigger summative tests.
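To make the 2-then-4 example concrete, here is a minimal sketch (in Python, since I don’t have Pinnacle’s actual formula) of averaging versus a trend-based grade.  The least-squares line stands in for Excel’s TREND function; Pinnacle’s real power law is reportedly a power-function fit, so treat this as only a rough approximation of the idea, not the actual algorithm.

```python
# A rough sketch: traditional averaging vs. a trend-based grade.
# The linear fit below mimics what Excel's TREND function does;
# the real "power law" fit in Pinnacle is assumed to behave similarly
# in spirit (later scores dominate the reported grade).

def average_grade(scores):
    """Traditional grading: the mean of all attempts."""
    return sum(scores) / len(scores)

def trend_grade(scores):
    """Trend grading: fit a least-squares line to the scores
    (like Excel's TREND) and report the fitted value at the
    most recent assessment instead of the average."""
    n = len(scores)
    xs = range(1, n + 1)  # assessment order: 1, 2, 3, ...
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * n  # fitted score at the last attempt

scores = [2, 4]                   # early struggle, later mastery
print(average_grade(scores))      # 3.0 -- haunted by the first attempt
print(trend_grade(scores))        # 4.0 -- reflects where she ended up
```

The average drags the grade back toward the first attempt, while the trend value reports where the student actually finished, which is the whole point of trading averaging for trending.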

My difficulties arise from the fact that, the way I have my standards set up currently, there is a 2.a, a 2.b, a 2.c, and so on.  I only have 11 “standards” but each one has 2-8 pieces.  I will have to find a different way to word and organize my standards so I don’t have 15 trends to follow for each student.  And, since we are still using a points system, I will still have to find a way to combine all the standards into one grade.  All this means that I have to do better about having multiple artifacts and multiple assessments for every standard.  This is most certainly a process.

Standards
At lunch, we were talking about different ways to organize standards and track them.  Right now, I have my standards set up to where I have the larger titles (atoms, compounds) as the “power standards” and then the nit-picky pieces (history, configs, wave equations) as the little a, b, c below that larger one.  I’ve been tracking each little piece separately so far because they are such different concepts with different difficulty levels.  If I’m going to do this “power law” trending to come up with the grade for a standard, I would have to look for the trends of these little pieces and then somehow put them all together to get the grade for that standard.  I could average them... but the evils of averaging are something I am trying to avoid... so I would almost have to look for a trend among the pieces.  That wouldn’t be any better than the averaging because the pieces are all so different.  I think I’m going to have to start looking at assessments as a whole, without separating them into these smaller pieces.  The smaller sections could be present in the verbal standard but, as for grading and recording, it would all be recorded as simply “standard 2”.

Thursday, December 8, 2011

December, SIS

December 8, 2011
December is always a difficult month.  In addition to the regular world stress of holiday shopping, family visits and holiday decorations, no one wants to be at school.  Students are squirrely.  Everyone’s forgotten everything they need to know for the final and somehow it all has to come together before December 19th.  All in all, it makes for a very stressful month.  *Steps off soapbox*

So I finally caved.  SIS has caused too many problems and too much confusion between administration, the SSD teacher, and others who need regular access to student grades but have not talked with me at length about my grading system.  Prior to now, I had been putting the 5-1 grades in SIS and telling students to ignore the SIS percentage and average their grades together themselves.  The aforementioned other people were unaware of this, or forgot, or didn’t understand, or whatever the case may be, and multiple students were being reprimanded for a grade that wasn’t correct.  So I translated the mastery levels into percentages using the same chart I use at term to assign term grades.  It’s really a lose-lose with this bloody thing, because the mastery levels gave an invalid percentage, but the percentages, while closer to their actual grade, are still not correct and put the emphasis back on points.  However, the saving grace with the percentages is that now I can confidently say their grade on SIS is a reflection of 100% knowledge.  There is nothing to inflate or deflate a student’s grade other than their performance on a summative assessment event.
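The translation itself is just a lookup.  The post doesn’t give the actual conversion chart, so the percentage values in this sketch are made-up placeholders; only the shape of the idea, that each 5-1 mastery level maps to a fixed percentage and the per-standard results get combined into one SIS grade, comes from the text.

```python
# Hypothetical sketch of converting 5-1 mastery levels into SIS percentages.
# The real term-grade conversion chart isn't given in the post; these
# percentage values are invented placeholders to show the mechanics.
MASTERY_TO_PERCENT = {5: 100, 4: 92, 3: 84, 2: 70, 1: 55}  # assumed values

def sis_percentage(mastery_levels):
    """Convert one mastery level per standard into a single SIS percentage.

    Averaging the converted values is a stand-in for however the term
    chart actually combines standards into one grade."""
    converted = [MASTERY_TO_PERCENT[level] for level in mastery_levels]
    return sum(converted) / len(converted)

print(sis_percentage([5, 5, 5]))  # all 5s -> 100.0
print(sis_percentage([4, 2]))     # mixed mastery -> 81.0 with these placeholders
```

With made-up chart values like these, the SIS number at least tracks only assessment performance, which is the one consolation noted above.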