10.23.2010

SBG: How To Grade

I think my issue with sbg is how to grade.

I know my main problem with sbg is getting students to come in and reassess, but hopefully the conversations I had with 16 parents this week at Parent Teacher Conference will start to set that in motion.

So for me personally, it's the issue of grading. I started out doing two questions per skill per assessment. I created my own rubric with a mixture of C's, P's, and I's, with the second question weighted more heavily than the first. But sometimes the rubric didn't serve my students well, and I couldn't, in good conscience, always stick to it. Which probably implies I need a new rubric.

But as I began to work with my instructional coach and discover second-year-teaching wisdom, I realized you all were right and I was assessing way too many skills. I started to broaden my skills so that one skill contained the baby ones. I suppose you understand what I mean. We also started to look at the ACT and the Work Keys and pull questions from there so that I could backwards plan my lessons to lead up to hard problems I normally would have avoided asking my students. We've already established that I should plan backwards; I've just started it, so moving right along...

So my past couple of assessments have each assessed only one skill, but I've asked about 8 questions. How do I grade that with a rubric?

Give each question a score from 0 to 4 and then average them together? I thought averaging was the devil...

Grade as usual, giving a certain number of points for each problem, counting off, and then computing the percentage of points earned out of points possible? My Twitter peeps said this puts me back into points instead of levels of understanding. But what if I assigned a range of percents to a rubric (a quick sketch of the conversion follows the table), say:

100% = 4
90-99% = 3.5
80-89% = 3
70-79% = 2.5
60-69% = 2
50-59% = 1.5
40-49% = 1
30-39% = 0.5
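
Just to see it written out, here's a rough sketch of that percent-to-rubric conversion. The cutoffs are straight from the table; the function name and the below-30% fallback are just placeholders of mine.

```python
def percent_to_rubric(percent):
    """Convert a percent-correct score into a 0-4 rubric score."""
    cutoffs = [
        (100, 4), (90, 3.5), (80, 3), (70, 2.5),
        (60, 2), (50, 1.5), (40, 1), (30, 0.5),
    ]
    for lowest, rubric in cutoffs:
        if percent >= lowest:
            return rubric
    return 0  # below 30% isn't in the table, so this is a guess

print(percent_to_rubric(73))   # 2.5
print(percent_to_rubric(100))  # 4
```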


But I guess that still isn't providing accurate information to the student because a 73% doesn't tell them what they messed up on.


I previously tried @druinok's idea of asking 3 questions per skill at different levels, but that rubric was confusing to me too.


Am I asking too many questions per skill? How often do you assess and how long are the assessments?


We've been working on developing assessments that come naturally at the end of a small unit. My coach has talked to me about balanced assessments: including some more basic, straightforward questions as well as application, word-problem, and synthesis types of problems. And I like that. I like the assessments we've been creating, but I don't know how to give an overall score when I'm asking so many questions.


What happens with multiple choice? If they get it right, is it a 4? If they get it wrong, is it a 1, 2, or 3?


@dcox gave this advice: say you have one basic, one "proficient," and one application/synthesis problem. Students who can do all three = 5, 2/3 = 4, 1/3 = 3. But what if they do 1.5 out of 3, or 2.5, or 3.5? What then? What if they make small mechanical errors that throw off the whole problem? What if they start off well and then nose dive?
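
Written out literally, the rule only covers whole numbers of correct problems, which is exactly where I get stuck. A tiny sketch (the names are mine, not @dcox's):

```python
# @dcox's rule taken at face value: 3 problems correct = 5, 2 = 4, 1 = 3.
DCOX_SCORES = {3: 5, 2: 4, 1: 3}

def dcox_score(problems_correct):
    # Anything in between (1.5, 2.5, ...) isn't defined by the rule,
    # so this returns None for those cases -- which is my question above.
    return DCOX_SCORES.get(problems_correct)

print(dcox_score(3))    # 5
print(dcox_score(1.5))  # None
```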

It's like no matter what rubric I find or create, when I'm grading, I always find a loophole that leaves me staring blankly at a paper trying to estimate how much they know based on the test and what I see in class.

What am I missing?

6 comments:

  1. "What am I missing?"

    There is no perfect assessment. If you have an easy problem, a medium problem and a hard problem, I'd give a point for the easy one, 2 for the medium one, and 3 for the hard one. Give partial points for partial understanding.

    There is nothing wrong with using points in an assessment. The problem comes with students (and teachers) treating points as an end rather than a means.

    Trust your judgement here. Do your best to word your learning targets so that they include what you're looking for in an answer, then make a judgement. Then, if at all possible, give the kid a short comment on what they're missing. Hopefully they are getting multiple attempts to demonstrate mastery, so if you give them a 2 instead of a 3 on one quiz and they know why, the impetus is on them to learn and demonstrate knowledge better the next time around. Always remember, you are the professional in the room, and that counts for a lot.

  3. You're getting stuck in the world of averages, which is against the heartbeat of sbg. Here's what we do...
    All of our tests are multiple choice to emulate the Big State Test. I write the test and I make sure that there are at least 3 questions for each assessed skill. Of those questions, I make sure that there are multiple levels of rigor present. Our department then goes through the problems and asks ourselves which problems are indicative of Advanced (4) skills, which indicate Proficient (3.5), which problems a Basic (3) student should get, and which represent Below Basic (2) skills.

    This will ultimately mean something to the effect of: an Advanced (4) student can miss from 0-2, Proficient (3.5) from 3-4, Basic (3) from 5-8, Below Basic (2) from 9-10, and Far Below Basic (1) means they missed them all, sometimes all but one. We figure out the percentages later and don't let them affect our decisions. Our cut scores are based purely on the rigor and level of the problems, not on averages. We don't do anything with the typical averages or percentages; we make decisions based on the problems on the current assessment.

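One possible reading of the cut-score scheme above, sketched out; the test length of 11 questions and the handling of the bottom band are assumptions, not anything teamalzen spelled out.

```python
# A guess at the cut-score scheme: the level depends only on how many
# questions a student misses on the whole test, never on a percentage.
# The bands come from the comment; the test length is assumed, and the
# comment's "sometimes all but one" overlaps the 9-10 band, so it isn't
# handled here.

def cut_score(missed, total=11):  # total of 11 questions is an assumption
    if missed >= total:
        return 1    # Far Below Basic: missed them all
    if missed >= 9:
        return 2    # Below Basic: 9-10 missed
    if missed >= 5:
        return 3    # Basic: 5-8 missed
    if missed >= 3:
        return 3.5  # Proficient: 3-4 missed
    return 4        # Advanced: 0-2 missed

print(cut_score(2))   # 4
print(cut_score(7))   # 3
print(cut_score(11))  # 1
```
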
  4. teamalzen,

    I really did not understand your comment. How do you figure out the percentages later? Do you use points or percentages or what? How are your cut scores based purely on rigor and level of the problems? Also, I have no department, which leaves all decisions on my shoulders, which is what makes the grading hard.

  5. I really think you're getting bogged down in details and that's stressing you far too much. SBG is supposed to be FUN for a teacher, because they get to quickly see students improving! (or stagnating, of course)

    In other words, allow yourself to let the scores be a little subjective - otherwise you go crazy and that doesn't help the students either!

    5 - Two perfect quizzes. This kid HAS it. I can relax and not think about him/her anymore w/r/t this topic.

    4 - One perfect quiz. This kid has it, but I want one more quiz to be SURE.

    3.5 - One arithmetic mistake. This kid has the idea, but needs to be careful.

    3 - Several minor mistakes. This kid has the basic idea, but hopefully will pick up the rest as class goes on.

    2 - Major mistake(s). This kid is missing some large conceptual ideas and needs tutoring to catch up.

    1 - Little to nothing on this paper makes sense. This kid needs tutoring to catch up.

    0 - Blank quizzes.

    ~ Surani

  6. I have to agree with Surani here: you're getting tied up in the details.

    I look at the standards-based assessment in two categories: meeting or not. From there, you can decide if it's exceptional, middling, or squeaking by.

    I had a big panic when I started to turn all of the standards back into a letter grade, until I realized that anything based off standards is more real than points. No matter how I pull it all into a letter grade, I've got students working on specific skills rather than just chasing points.
