
8.03.2015

Formative Assessment Tracking


I cannot take credit for this idea; it came from a friend of mine. I don't know what she calls it, but this is what I typed when I had to create a name to save it.

It essentially looks like a seating chart, and it can be used as one.



You write or type the name of the student four times in each box. Or five, or whatever fits. At the bottom, I made a code so that I can write a letter next to that person's name. My friend uses x's, check marks, or crosses out names. *Just do your thang honey!

I listed the most common problems I deal with: sleeping, phone out (they are not allowed to be out or on), or just not working. Next, I used I and C (incorrect and correct) as codes for when I'm doing formative assessment.

Basically, this is just a way to give me real-time data in the classroom. I will carry multiple copies of these for each period on a clipboard. As I scan the room, I can mark the misbehavior I see. I'm not actually focusing on the negatives here; I'm focusing on what takes the least amount of time to write, which, FINALLY, is the negative behaviors.

If we're doing any kind of practice at all (practice problems, worksheets, dry erase boards, etc.), I can ask students to show me their answers, call on specific students, or walk around and scan their answers, and simply write I or C next to their names.

I can use this as much or as little as I want. I like the idea that I could have four different data points every day, for every period, for every student. If I want.
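(Total aside for the spreadsheet-inclined: if you ever wanted a digital version of this chart, the bookkeeping is simple. Here's a rough Python sketch; the codes and names are placeholders of my own, not part of my friend's system.)

```python
from collections import defaultdict

# Placeholder codes matching the chart: behaviors, plus I/C for
# incorrect/correct answers during a formative check.
CODES = {"S": "sleeping", "P": "phone out", "N": "not working",
         "I": "incorrect", "C": "correct"}

# marks[period][student] holds the codes recorded for that student,
# e.g. up to four per day, one per scan of the room.
marks = defaultdict(lambda: defaultdict(list))

def record(period, student, code):
    if code not in CODES:
        raise ValueError(f"unknown code: {code}")
    marks[period][student].append(code)

# A few made-up entries:
record(1, "Suzy Q", "S")
record(1, "J-lo", "I")
record(1, "J-lo", "C")

# Summary: how often each code shows up per student in period 1.
for student, codes in marks[1].items():
    print(student, {c: codes.count(c) for c in set(codes)})
```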

I can use this for attendance, to call on people, to track behavior, to give feedback, to form groups, for a seating chart, for a sub, etc etc!

Here are some ways I'm thinking I'll use this:

"Suzy Q, I've noticed you've been sleeping in class every Monday. Is there anything going on that I can help with?"

"John Boy, you've had your phone out in class twice this week. If I see it a third time, I will have to write you a referral."

"J-lo, you've missed four problems in a row today. Let's work some problems together and see what's going wrong."

"Jimmy Crack-Corn, you've got every question right today. Do you care to help J-lo find out where she's messing up?"

And so on...

I've also already talked to my admin about how this could be used as student data for my teacher evaluation. This is not my year to be evaluated so I will "pilot" the idea this year and see how it goes.

On the bottom left, I have a place to list the type of formative assessments I used, more to hold myself accountable for variety than anything else.

On the bottom right, I have a place for my own reflections on things to change in the lesson or errors to fix. It's also a great way to show growth in your teacher evaluation.

You could even use different colors of pens to get even more in-depth!

What other uses do you see for this? Are there any pitfalls?


1.20.2015

Should I Eliminate Tests?


Here is how assessment works for me currently...

District-Wide

  • End of Course exam (teacher-created), given the first few days of school, the last week before Christmas break, and at the end of April. In August it's recorded but not graded; in December it's worth half (so a raw 50% counts as 100%); in April it's graded normally, and students must pass it to pass the course. If they fail, they can take it again; if they fail the second time, they go to summer school. Students must pass both the EOC and the class.
  • Discovery Testing, done by Discovery Education (we must have an outside-source test), in September, November, and February. Multiple choice, on the computer, recorded but not graded.

My Class
  • Quizzes, which students can retake as many times as they want, averaging 4 per unit.
  • Unit tests, which occur every 4 concepts. Students can use their INB but can't retake them, and I make sure the points for tests outweigh the points for quizzes.
  • Binder check worth 10 points once per quarter.
  • Semester reflection paper worth 40 points once each semester.

New This Year
  • PARCC for students in Algebra II only (Illinois is poor), in place of the ACT for juniors only
  • Optional ACT for juniors who want to take it 

Cons of Unit Tests
  • I hate giving tests!! The students complain that I wait too long between tests. I do.
  • They complain that the test is over more than one unit. It's not. It's four concepts. Five at most.
  • I let them use their notebooks which is an internal struggle every time.
  • I hate grading tests!! It takes forever.
  • What does the grade on a unit test really mean?

Pros of Unit Tests
  • Mixes concepts together so that students have to apply different strategies
  • Students have to be able to tell the difference between concepts

So what I'm thinking is: what if I get rid of unit tests? If I write a quality EOC exam, it should be a cumulative review, right? I could even give it at the end of each quarter and count that as an extra test grade.

Do the EOC and Discovery tests do a good enough job of assessing mixed concepts? Does letting students use their notebooks seem more justifiable on a quarterly test? The EOC is 35 multiple choice questions and 5 open-ended questions, so it would also be easier to grade and simultaneously less to grade.

If I give a quiz after each concept then I am assessing often. Then again, if I'm assessing right after I teach a concept, how do I show retention? Is the quarterly EOC enough to show retention?

But the question is....how do I know if they are really learning?

I feel like helping them learn how to study for a quarterly exam is more aligned with how college finals work. Ultimately, I'd like to create a syllabus similar to college courses that tells you the entire course is worth y points, say 1000, and you need x points to get an A, B, etc., and have it laid out from the beginning.
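The points arithmetic itself is trivial; here's a quick sketch with made-up cutoffs (900 for an A, 800 for a B, and so on out of 1000) just to show the idea:

```python
# Hypothetical cutoffs for a 1000-point course; the real numbers
# would be spelled out in the syllabus on day one.
CUTOFFS = [(900, "A"), (800, "B"), (700, "C"), (600, "D")]

def letter_grade(points_earned):
    """Return the letter grade for a running point total."""
    for cutoff, letter in CUTOFFS:
        if points_earned >= cutoff:
            return letter
    return "F"

print(letter_grade(875))  # B -- and students can see exactly how far to an A
```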

Is that feasible?


11.01.2014

Quiz Retakes Formerly Known as SBG


If you've read my blog at all, I'm sure you've already read something I've written (read: complained) about SBG before.

This is my best idea yet.

For our first test in each of my classes, I allowed them to use their INB. But of course I didn't tell them this until the test started.

Some of my older students were used to me offering test retakes in previous years...even though most of them never actually did them. So when they asked about it this year, I kinda felt like a butthole saying no.

Here's what I decided. They can retake quizzes as many times as they want, but no notebook. No retakes on tests, but they can use their notebook.

This has been like magic for me. Quizzes are much easier to make multiple versions of and much easier to grade. Which eliminates a lot of hassle for me. It's almost like every quiz is a formative. It shows them what they know and then they make an informed decision about their satisfaction with their knowledge.

I do it right during class which eliminates the need for students to come in on their own time and makes me feel less guilty about not having time to work with them.

It motivates studying because once they get their original quiz back, they look back in their notebook to see what they missed and why. I could never get them to do that before.

What I've seen is students asking for retakes who NEVER have in the past. Students asking why they got something wrong and arguing their thinking. Students who want to retake more than once. Students who failed a quiz improving their score to passing. I call that perseverance.

For tests, I think letting students use their INB is more of a comfort to them than they realize. I know exactly what is in the notebook so I can choose to make my tests in whatever way I want for them to show me their knowledge. It also increases ownership of the notebooks.

I feel like tests are more a show of retention and application, while quizzes are more skill-based. By repeatedly retaking the skill-based quizzes, I think students are more prepared for the application on a test.

The best of both worlds.

I do have to say that I'm not grading with a rubric which I feel guilty about but...baby steps.

9.20.2014

Should Grades Be Tests Only?


Currently, my grade book consists of only quizzes and tests. I believe grades should reflect the content knowledge students can show, not behaviors such as turning things in on time or being prepared for class.

I pretend to do SBG, but really, my students do not reassess, so I just have an SBG-style grade book.

So far this school year, I have only given quizzes. I'm teaching so slowly and I don't really know how to fix that.

This week I gave a quiz over reflections in Geometry. Out of 27 students, about 8 did well. I was unhappy with this. So I gave students a practice sheet to work through in class. Engagement was high since so many students wanted to improve their grades. Even students who only missed two wanted to retake it. After practicing, I gave a new version of the quiz. 100% of the students improved and quite a few made perfect scores.

At first, I was happy with this. I just did a whole class demonstration of SBG! Hooray! Improved scores! Learning! Yay!

But the more I thought about it...the unhappier I became. We practiced problems exactly like what was on the quiz....which I handed to them immediately afterward. That wasn't learning, it was memorization.

I started to think about quizzes. What am I measuring? Is it fair to work on a concept for a day or two and then score them on it? Do sports teams practice one or two days and then have a game?  Are quizzes fair?

So then I started to think about not grading quizzes. I've been wanting to try some different methods anyway. 1) Student feedback quizzes. 2) Feedback only quizzes. 3) Stoplight stickers.

But what would happen to my grade book? I've been teaching for over a month and still have not given a test. If I did not give quizzes, my students would have no scores in the grade book at all. How could I compensate?

Maybe it is just my teaching to blame and quizzes are completely fine. Maybe I should let students use their INB for all assessments. Maybe I need to find a way to work with students one on one. Maybe I need to do a better job of formative assessment.

Maybe I need to do a better job with assessment, period.

7.02.2012

Made 4 Math #1- Popsicle Stick Proofs


This is more for my final project in my grad class than a cool Pinterest idea but it is still a creative idea for my classroom.

In working with my English teacher friend, I've discovered that writing geometric proofs is really the skill of making inferences. We start with the given information, use facts that we know, and infer a conclusion.

I also 'discovered' that writing a proof is similar to writing the outline of a story or paper. I'm thinking that next year I will introduce proofs by first giving a story (hopefully something funny) and having the students help me write an outline: introduction, supporting evidence, and conclusion. Then we will compare the parts of an outline to the parts of a proof: the introduction is the given, the supporting evidence is the facts we know based on definitions and properties, and the conclusion is our prove statement and postulate/theorem.

I created a type of formative assessment using popsicle sticks. I wrote each statement and reason individually on a popsicle stick. Every stick has a unique symbol on the right side. I can use these symbols to have students identify parts of the proof and manipulate the sticks into the correct order without giving anything away.


I used colored popsicle sticks ($2.99 for 150 at Hobby Lobby) and they came in red, blue, yellow, green, orange, and purple. I used the yellow because I thought marker would show up best on that. The other colors are pretty dark so maybe it would be better to use regular ones and colored markers? Even better, we could use both sides of the sticks by writing in all one color on one side and writing in a different color on the other.

So I made a formative assessment using the sticks; again, this is more for my grad class than anything, but maybe someone will like it and use it or make it better. I also scanned the popsicle sticks into the assessment so you could pass it out to students, have them cut the sticks out, and then fill out the worksheet.

Either way, what I envision is students working in partners and manipulating the pieces of the proof, identifying important characteristics of a proof, and hopefully making better connections to English skills and writing proofs that make sense.

6.28.2012

Data Wise Ch 5

Notes from text
Data Wise
Boudett, City, Murnane

Chapter 5: Examining Instruction

Reframe the learning problem as a "problem of practice". It should:

-include learning and teaching
-be specific and fine-grained
-be a problem within the school's control
-be a problem that, if solved, will mean progress toward some larger goal

There are four main tasks to help you investigate instruction and articulate a problem of practice:

1. Link learning and teaching: With this particular learning problem, how does instruction impact what students learn?
2. Develop the skill of examining practice: How do we look at instructional data?
3. Develop a shared understanding of effective practice: What does effective instruction for our learning problem look like and what makes it effective?
4. Analyze current practice: What is actually happening in the classroom in terms of the learning problem, and how does it relate to our understanding of effective practice?

If teachers don't fundamentally believe that their teaching can make a difference for student learning, then it's going to be difficult to convince them to change their teaching.

When planning opportunities for teachers to link learning and teaching, consider these points:

-How will you move the conversation from "students/parents/poverty" to "teachers"?
-How will you frame the work as an opportunity to improve instruction, rather than as a failure (proactive vs. reactive)?
-How will you help teachers have a questioning rather than a defensive stance?
-How will you surface and get people to acknowledge the fundamental assumption that teaching matters for learning?

Components of examining practice:
1. Evidence, data about teaching
2. Precise, shared vocabulary
3. Collaborative conversation with explicit norms

Hearing others' responses to the same lesson helps challenge individual assumptions, helps us notice different things and see the same things in a new way, and leads to a better understanding of the practice observed.

We need a vision for what [this] effective teaching looks like so we can assess whether what we're doing now fits or doesn't fit that vision.

When looking internally to develop ideas of effective practice, the key is to ground the discussion in evidence.

Connecting best practices to data serves multiple purposes: it increases the likelihood that the practice is effective rather than simply congenial; it reinforces the discipline of grounding all conversations about teaching and learning in evidence rather than generalities or assumptions; it's more persuasive (teachers are more likely to try something for which there's evidence that it works); and it reinforces the link between learning and teaching.

Inquiry is essential in developing a shared understanding of effective practice because you want everyone to understand not only what effective practice for the learning problem looks like but why it is effective.

Three questions to consider when making decisions about how to examine instruction are:

1. What data will answer your questions about teaching practice in your school?
2. What are teachers ready for and willing to do?
3. What are your resources, including time?

Data Wise Ch 4

Notes from text
Data Wise
Boudett, City, Murnane

Chapter 4: Digging Into Data

Without an investigation of the data, schools risk misdiagnosing the problem.

There are two main steps when using data to identify the learner-centered problem in your school: looking carefully at a single data source and digging into other data sources.

The first thing to consider is, What questions do you have about the student learning problem, and what data will help answer those questions?

The next consideration is context: What data will be most compelling for the faculty?

Understanding how students arrived at a wrong answer or a poor result is important in knowing how to help them learn to get to the right answer or a good result.

Challenging assumptions is critical for three reasons:
1. Assumptions obscure clear understanding by taking the place of evidence
2. Teachers have to believe that students are capable of more than what the data shows
3. Solutions will require change

Starting with data and grounding the conversation in evidence from the data keeps the discussion focused on what we see rather than what we believe.

By triangulating your findings from multiple data sources- that is, by analyzing other data to illuminate, confirm, or dispute what you learned through your initial analysis- you will be able to identify your problem with more accuracy and specificity.

Students are an important and underused source of insight into their own thinking, and having focus groups with students to talk about their thinking can have a positive impact on your efforts to identify a problem underlying low student performance.

While you refine your definition of the learner-centered problem, you also build a common understanding among teachers of the knowledge and skills students need to have- in other words, what you expect students to know and be able to do, and how well they are meeting your expectations.

Guiding questions to identify a learner-centered problem:

Do you have more than a superficial understanding of the reasons behind students' areas of low performance?

Is there logic- based on the data you have examined- in how and why you've arrived at the specific problem identified?

Is your understanding of the problem supported by multiple sources of data?

Did you learn anything new in examining the data?

Do you all define the problem in the same way?

Is the problem specifically focused on knowledge and skills you want students to have?

If you solve this problem, will it help you meet your larger goals for students?

Data Wise Ch 3

Notes from text
Data Wise
Boudett, City, Murnane

Chapter 3: Creating a Data Overview

Preparing for a faculty meeting:
1. Decide on the educational questions
2. Reorganize your assessment data (simple is better)
3. Draw attention to critical comparisons
4. Display performance trends

The underlying educational questions should also drive every aspect of the presentation of the assessment data and provide a rationale for why it is important to present the data one way rather than another.

For example, the questions you are trying to answer should help you make the following decisions about your data presentation: Do you want to emphasize time trends? Are teachers and administrators interested in cohort comparisons? Is it important to analyze student performance by group? Do you want to focus the discussion on the students who fall into the lowest proficiencies or those who occupy the highest? Do you want to focus the audience's attention on the performance of your school's students relative to the average performance of students in the district or the state?

Understanding how students outside your school perform on the same assessment can provide benchmarks against which to compare the performance of your school's students.

In labeling and explaining graphs showing student performance, it is very important to be clear about whether the display illustrates trends in achievement for the same group over time, or whether it illustrates cohort-to-cohort differences over a number of years in the performance of students at the same grade level.


Components of Good Displays

1. Make an explicit and informative title for every figure in which you indicate critical elements of the chart, such as who was assessed, the number of students whose performance is summarized in the figure, what subject specialty, and when.

2. Make clear labels for each axis in a plot, or each row and column in a table.

3. Make sensible use of the space available on the page, with the dimensions, axes, and themes that are most important for the educational discussion being the most dominant in the display.

4. Keep plots uncluttered and free of unnecessary detail, extraneous features, and gratuitous cross-hatching and patterns.

Actively involve teachers with the data by giving them an opportunity to make sense of the data for themselves, encouraging them to ask questions, and offering them a chance to experience and discuss the actual questions on the test.

In reality, student assessment data is neither weak nor powerful. The real value in looking at this kind of data is not that it provides answers, but that it inspires questions.

Assessment FOR Learning Ch 6-9

Notes from text
An Introduction to Student-Involved Assessment FOR Learning
Stiggins and Chappuis

These are the presentations from the other groups in my class in case you were interested.

Chapter 6 Written Response (Essay Assessment)
Chapter 7 Performance Assessment
Chapter 8 Personal Communication as Assessment
Chapter 9 Assessing Dispositions

6.20.2012

Data Wise Ch 2

Notes from text
Data Wise
Boudett, City, Murnane

Chapter 2: Building Assessment Literacy

Assessments should be of middling difficulty; extremely easy or extremely hard tests give you little information about what students know.

Sampling principle of testing- making inferences about students' knowledge of an entire domain from a smaller sample.

Discrimination- discriminating items are used to reveal differences in proficiency of students that already exist.

Measurement error- inconsistencies in scores; for example, differences between forms of a test that sample content differently, fluctuations in people's behavior, and variation between individual scores.

Reliability- degree of consistency of measurement; a reliable measure is one that gives you nearly the same answer time after time

Score inflation- an increase in scores that does not reflect a true increase in students' proficiency

Sampling error refers to inconsistency that arises from choosing the particular people from whom to take measurements.

The margin of error is simply a way to quantify how much the results would vary from one sample to another.
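(Not from the book, just the standard statistics definition: for a proportion $p$ estimated from a sample of $n$ students, the margin of error at a given confidence level is

$$\text{ME} = z^{*}\sqrt{\frac{p(1-p)}{n}},$$

where $z^{*}$ is the critical value for that confidence level, about 1.96 for 95%.)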

While a well-designed test can provide valuable information, there are many questions it cannot answer. How well does a person persevere in solving problems that take a long time and involve many false starts? To what extent has a student developed the dispositions we want (for example, a willingness to try applying what she has learned in math class to problems outside of school)? How well does the student write long and complex papers requiring repeated revision? People demonstrate growth and proficiency in many ways that would not show up on any single test.

Significant decisions about a student should not be made on the basis of a single score.

Raw scores- percentage of possible credit. They are difficult to interpret and compare because they depend on the difficulty of the test, which is likely to vary.

Norm-referenced tests- designed to describe performance in terms of a distribution of performance. Individual scores are reported in comparison to others (a norm group).

Percentile rank- percentage of students in the norm group performing below a particular student's score. PR tells you where a student stands, but only relative to a specific comparison group taking a specific test.
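(Again not from the book, but the usual computation is

$$\text{PR}(x) = 100 \times \frac{\text{number of norm-group scores below } x}{\text{total number of norm-group scores}},$$

so a student whose score beats 45 of 60 norm-group scores has a percentile rank of 75.)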

Criterion-referenced tests- determine whether a student has mastered a defined set of skills or knowledge; measure whether a student has reached a preestablished passing level (cut score). They do not rank students and serve only to differentiate those who passed from those who failed.

Standards-referenced tests- developed by specifying content standards and performance standards; scored with various performance levels

Developmental (vertical) scales- trace a student's development as he or she progresses through school

Grade equivalents- developmental scores that report the performance of a student by comparing the student to the median at a specific stage; easy to interpret and explain but have become unpopular and are rarely used. Ex: 3.7 would be a third grader in their seventh month of school

Developmental scale (standard) score- reports performance on an arbitrary numerical scale; students who score the same are believed to have the same proficiency even if they are in different grades.

When interpreting the results of a single test, it is often useful to obtain performance data from more than one scale.

For purposes of diagnosis and instructional improvement, most educators want more detail rather than less. Although finer-grained levels of detail are instructionally more useful, the results will also be less reliable because fewer items are used in reporting performance.

Cohort-to-cohort change model- when schools test a given grade every year and gauge improvement by comparing each year's scores for students in that grade to the scores of the previous year's students in that grade (mandated by NCLB).

Longitudinal (value-added) assessment- measures the gains shown by a given cohort of students as it progresses through school.

It is risky and misleading to rely on a single item to draw conclusions about a single student because of measurement error and not being able to tell which skill caused the student to miss the question.

Three complementary strategies for interpreting scores on a particular assessment, all of which involve using additional information:
1. Look beyond one year's assessment results by applying either the cohort-to-cohort change or value-added assessment approach
2. Compare your students' results with those of relevant students in the district or the state.
3. Compare your students' results on the most recent assessment with their performance on other assessments.

Three reasons why small differences should not be given credence:
1. Sampling error
2. Measurement error
3. Any given set of content standards could lead to a variety of different blueprints for a test.

Differences that are sizable or that persist for some time should be taken seriously.

To understand whether improved student scores are meaningful, educators need to determine whether teaching has been focused on increasing mastery rather than on changing scores.

If students are gaining mastery, then the improvement will show up in many different places- on other tests they take or in the quality of their later academic work- not just in their scores on their own state's test.

This book focuses on how to use assessment results to change practice in ways that make a long-term, meaningful difference for students.

Data Wise Ch 1

Notes from text
Data Wise
Boudett, City, Murnane

Chapter 1: Organizing for Collaborative Work

Three activities that can support a "data culture" within schools: creating and guiding a data team, enabling collaborative work among faculty, and planning productive meetings.

Data team
Having a few people responsible for organizing and preparing the data means that you can dedicate the full faculty's time to discussing the data.

3 Tasks
1. Create a data inventory (external and internal assessments and student-level information)
2. Take stock of data organization
3. Develop an inventory of the instructional initiatives currently in place

Are you satisfied with the way you capture the information generated from each of your assessments?

It is better to share responsibility for interpreting data among all teachers.

When planning conversations around data, the challenge is to find an effective way to give all faculty members a chance to make meaning of what they see.

Four Helpful Strategies for Planning Productive Meetings
1. Establish group norms (e.g., no blame, no wrong answers)
2. Use protocols to structure conversations
3. Adopt an improvement process
4. "Lesson plan" for meetings (repackage data results so they can be easily understood)

6.18.2012

Assessment FOR Learning Ch 5

Notes from text
An Introduction to Student-Involved Assessment FOR Learning
Stiggins and Chappius

Chapter 5: Selected Response Format

We had to work in groups to summarize chapters 5-9 and my group chose chapter 5. We also created a selected response assessment based on informational text at the eighth grade level, one for English and one for math. In addition, we created a handout of checklists from the book.

PowerPoint Summary
Assessments
Handout

6.13.2012

Assessment FOR Learning Ch 2, 4

Notes from text
An Introduction to Student-Involved Assessment FOR Learning
Stiggins and Chappuis

Chapter 2: Understanding Why We Assess

One must always start the assessment process with a clear answer to the question, Why am I assessing?

Assessments at each of these levels can serve either of two purposes: they can support student learning (formative applications) or verify that learning has been attained (summative applications).

The evidence generated must reveal how each student is doing in mastering each standard. Assessments that cross many standards and blend results into a single overall score will not help, due to their lack of sufficient detail.

Teachers ask, Did the student make progress toward mastery of the standard? School leaders ask, Did enough students achieve mastery of the standard?

Formative assessments have no place in the determination of report card grades. They are the continuous assessments that we conduct while learning is happening to help students see and feel in control of their ongoing growth.

Use classroom assessment to keep students believing they are capable learners.

Chapter 4: Designing Quality Classroom Assessments

Four Categories of Assessment Methods
1. Selected response
2. Essay
3. Performance
4. Direct personal interaction

Our goal in assessment design is to use the most powerful assessment option we can: maximum information for minimum cost.

Selected response items can assess recall, classification, analytical and comparative reasoning, and even drawing conclusions, but not evaluative reasoning, because evaluative reasoning requires students to express and defend a position.

We always need to know why a student failed. Choosing the wrong assessment method can obscure the 'why'.

6.12.2012

Assessment FOR Learning Ch 1,3

Notes from text
An Introduction to Student-Involved Assessment FOR Learning
Stiggins and Chappius

Chapter 1: Classroom Assessment for Success

[Students] assessed their own achievement repeatedly over time, so they could watch their own improvement.

[Students] continually see the distance closing between their present position and their goal...ongoing student-involved assessment, not for entries in the grade book, but as a confidence-building motivator and teaching tool.

Our assessments have to help us accurately diagnose student needs, track and enhance student growth toward standards, motivate students to strive for academic excellence, and verify student mastery of required standards.

Whatever else we do, we must help them believe that success in learning is within reach.

Keys to Assessment Quality:
1. Clear Purposes
2. Clear Targets
3. Sound Design
4. Effective Communication

You must ask yourself, "Do I know what it means to do well? Precisely what does it mean to succeed academically?"

Chapter 3: Clear Achievement Expectations: The Foundation of Sound Assessment

Students can hit any target that they can see and that holds still for them.

Please realize that the path to academic success doesn't change as a function of how fast students travel it.

Types of Achievement Targets:
Knowledge (prerequisite to all other forms of achievement)
Reasoning (analytical, synthesizing, comparative, classifying, evaluative, inductive/deductive)
Performance Skills (to integrate knowledge and reasoning proficiencies and to be skillful)
Products (developing the capacity to create products that meet certain standards of quality)
Dispositions (attitudes, interests, motivation)

In the case of analytical reasoning, our instructional challenges are to be sure that students have the opportunity to master whatever knowledge and understanding they need to be able to analyze the things we want them to understand, and that they receive guided practice in exercising their analytical thought processes.

What do students need to come to know and understand in order to be ready to demonstrate that they can meet this standard when the time comes to do so?

What patterns of reasoning, if any, must they gain mastery of on their journey to this standard?

What performance skills, if any, are called for as building blocks beneath this standard?

What products must students become proficient at creating, if any, according to this standard?

6.01.2012

End of Course Exams

Hi everybody.

I spent all last week working on my end of course exams and pacing guides for Algebra I, Geometry, and Algebra II. We have never used end of course exams and I wanted to get some feedback from you on how we are planning to use them, possible setbacks, and also how you use them.

The plan is that we will no longer have semester exams. Before, each semester ended with a comprehensive exam over that semester, but not the entire year.

Now, at the end of the year, students will have to have a passing grade as well as pass the end of course exam in order to pass the entire course. This prevents two things. One, a student can't slack off all year, then do awesome on the EOC and pass the class. Two, a student can't do awesome all year, then slack off on the EOC and pass the class. I like both of those.

Also, the test is pass/fail and will not go in the grade book to avoid it being a double whammy on a student's grade. The test will be given at the end of April. For students who don't pass, there will be remediation during class time while the students who did pass are doing some sort of project. The test will be given again in May. If students don't pass the second time, then they will come to summer school, re-take the course, and try their third attempt at passing the EOC. If they still don't pass, then they will repeat the course in the following school year.

The intention of the EOC is to help stop the cycle of passing students who aren't ready to move on. We spend too much time reviewing the previous course because we feel students are unprepared and then we get behind in teaching the course itself. I'm not sure how well this will combat that problem because the student has three chances, and if they do pass, will they be prepared enough to start the next course?

I suppose that is where our rigor on the test comes in. The test is a two-day test and the math is set up so that session I is all multiple choice and session II is all open-ended involving at least one writing piece.

To kill two birds with one stone, we will also be using the EOC to show student growth over the course of the year (a key part of our teacher evaluations). We will give it at the end of each quarter so that by the 'official' time they take it, they will have seen it three previous times.  

I have two options here. One, I can count it in the grade book as a regular test and grade it according to the quarter: 25% right in the first quarter would be an A, 50% in the second quarter would be an A, and 75% in the third quarter would be an A. Two, I could not put it into the grade book at all but just keep a separate record of their scores each time to show growth over the year. I'm leaning toward the second option just because it would be an easy document to turn in for my summative evaluation. Also, if I did it in Excel then I could create fancy impressive graphs. Yay me!!
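If I go with option one, the scaling is easy to pin down. Here's a rough sketch, treating my numbers above (25/50/75) as the raw percentage that counts as 100% for that quarter; this is just my own back-of-the-envelope version, not district policy:

```python
# Raw EOC percentage that counts as 100% in each quarter
# (quarter 4 is the real, unscaled administration).
TARGETS = {1: 25, 2: 50, 3: 75, 4: 100}

def scaled_score(raw_percent, quarter):
    """Scale a raw EOC percentage against the quarter's target, capped at 100."""
    return min(100, round(raw_percent / TARGETS[quarter] * 100))

print(scaled_score(25, 1))   # 100 -- hitting the Q1 target is an A
print(scaled_score(40, 2))   # 80
print(scaled_score(60, 3))   # 80
```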

I am worried because my students can't even pass my semester exams which are easier than the EOC and I've had to curve grades every semester. But the students will be more familiar with the EOC after taking it three times. In addition, I built my pacing guide so that I can see which EOC question matches up to each concept. My hope is that when I create my lessons and assessments, I will remember to include questions similar to the ones on the EOC so that hopefully, nothing will be a surprise!

This is the closest I've been to actually using backward design/UbD so it will be interesting to see how it all plays out.

What drawbacks do you see to this process? How does your school use EOCs differently? How do you show student growth?

5.21.2012

Math Portfolio

I have achieved one of my summer goals. I finally, finally created the template for my math portfolio. Only because it was my final project for my writing across the curriculum class, but still, it's done.

The portfolio is built around my Algebra II class but I don't have the concept list or my priorities nailed down yet so the concepts the students will be writing about will change. I think that's what I need the most help on. I want students to write about big concepts, essential questions, the overall themes throughout the course.

I'm feeling pretty optimistic about it but I'm ready to see what you guys think. Is this doable? Will this project increase student achievement in any way? Did I use writing in a way that will increase learning? Did I leave out any important ideas? What should be changed or removed? How could I do this better?

I wish I could show a picture of every page (I could, but won't) because I am ridiculously proud of it, but it's 25 pages long.

Without further ado, my math portfolio!

4.09.2012

Dichotomous Rubric for Assessing Math Portfolios

My final project for our Writing Across the Curriculum class is to develop the template for my math portfolio. It's due May 5th (I think?), which means I haven't really started on it yet. I have ideas in my head and a few resources. I plan on it being a mix of data (graphs from Lee Jenkins), reflections on that data, the math part, and reflections about the math part. A lot of writing, but not an overwhelming amount. It has to be doable. And my goal is not just for them to write but to use writing as a tool for learning. I may or may not have that part figured out yet.

Anyway, Lee Jenkins came a couple weeks ago to meet with us again and even though I haven't started any of his ideas yet, he did introduce me to the dichotomous rubric. The examples he showed me were about writing but so is my math portfolio so it wasn't too far of a stretch.

So I've made my own and it's the first real contribution to my portfolio but hey, it's a start.


It's pretty and it's colorful and I quite like it.

Feedback?