I think my issue with SBG (standards-based grading) is how to grade.
I know my main problem with SBG is getting students to come in and reassess, but hopefully the conversations I had with 16 parents this week at Parent Teacher Conferences will start to set that in motion.
So for me personally, it's the issue of grading. I started out doing two questions per skill per assessment. I created my own rubric with a mixture of C's, P's, and I's, with the second question weighted more heavily than the first. But sometimes the rubric didn't serve my students well and I couldn't, in good conscience, always stick to it, which probably means I need a new rubric.
But as I began to work with my instructional coach and pick up some second-year-teaching wisdom, I realized you all were right: I was assessing way too many skills. I started to broaden my skills so that one skill contained several smaller ones; I suppose you know what I mean. We also started looking at the ACT and the Work Keys and pulling questions from there so that I could backwards-plan my lessons to lead up to hard problems I normally would have avoided asking my students. We've already established that I should plan backwards; I've just started it, so moving right along...
So my past couple of assessments have only assessed one skill, but I've asked about 8 questions. How do I grade that with a rubric?
Give each question a score 0-4 and then average them together? I thought averaging was the devil...
Grade as usual, giving a certain number of points for each problem, counting off, and then giving a percentage of points correct out of points possible? My Twitter peeps said this puts me back into points instead of levels of understanding. But what if I assigned a range of percents to a rubric, say:
90-99% = 3.5
80-89% = 3
...
30-39% = .5
But I guess that still isn't providing accurate information to the student because a 73% doesn't tell them what they messed up on.
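Just to see the pattern, that percent-to-rubric mapping can be sketched in a few lines of Python. The middle bands and the cutoffs are my assumptions (I'm guessing the scale drops half a point per 10-point band, with 100% = 4 and below 30% = 0), not anything official:

```python
# Sketch of the percent-to-rubric band idea.
# Assumptions: 90-99 -> 3.5, 80-89 -> 3, ..., 30-39 -> 0.5,
# i.e. half a rubric point per 10-point band; 100% caps at 4,
# and anything under 30% earns a 0.

def rubric_score(percent):
    """Map a raw percentage (0-100) to a 0-4 rubric score."""
    if percent >= 100:
        return 4.0
    if percent < 30:
        return 0.0
    # each 10-point band above the 20s is worth half a rubric point
    return (percent // 10 - 2) * 0.5

print(rubric_score(95))  # 3.5
print(rubric_score(73))  # 2.5 -- but it still doesn't say *what* they missed
```

Which is exactly the catch: the formula converts cleanly, but a 2.5 carries no more diagnostic information than the 73% did.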
I previously tried @druinok's idea of asking 3 questions per skill on different levels but that rubric was still confusing to me too.
Am I asking too many questions per skill? How often do you assess and how long are the assessments?
We've been working on developing assessments that come naturally at the end of a small unit. My coach has talked to me about balanced assessments: including some basic, straightforward questions as well as application, word-problem, and synthesis-type problems. And I like that. I like the assessments we've been creating, but I don't know how to give an overall score when I'm asking so many questions.
What happens with multiple choice? If they get it right, is it a 4? If they get it wrong, is it a 1, 2, or 3?
@dcox gave this advice: say you have one basic question, one "proficient" question, and one application/synthesis problem. Students who can do all three = 5, two of three = 4, one of three = 3. But what if they do 1.5 out of 3, or 2.5, or 3.5? What then? What if they make small mechanical errors that throw off the whole problem? What if they start off well and then nosedive?
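For what it's worth, @dcox's mapping is linear (3 correct = 5, 2 = 4, 1 = 3 is just score = correct + 2), so one way to handle the fractional cases would be to allow partial credit per problem and keep the same line. That extension is my assumption, not his:

```python
# Sketch of @dcox's three-question scheme: correct count + 2.
# Assumption (mine, not his): allow partial credit per problem,
# e.g. 1.5 correct -> 3.5, and treat zero evidence as a 0 rather than a 2.

def dcox_score(correct):
    """Map number of correct problems (0-3, fractions allowed) to a 0-5 score."""
    if correct == 0:
        return 0
    return correct + 2

print(dcox_score(3))    # 5
print(dcox_score(1.5))  # 3.5
```

Of course, this only pushes the judgment call down a level: deciding whether a mechanical error makes a problem worth 0.5 or 0.75 is the same blank-stare moment, just with smaller numbers.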
It's like no matter what rubric I find or create, when I'm grading I always find a loophole that leaves me staring blankly at a paper, trying to estimate how much they know based on the test and what I see in class.
What am I missing?