6.28.2012
Data Wise Ch 5
Notes from text
Data Wise
Boudett, City, Murnane
Chapter 5: Examining Instruction
Reframe the learning problem as a "problem of practice". It should:
-include learning and teaching
-be specific and fine-grained
-be a problem within the school's control
-be a problem that, if solved, will mean progress toward some larger goal
There are four main tasks to help you investigate instruction and articulate a problem of practice:
1. Link learning and teaching: With this particular learning problem, how does instruction impact what students learn?
2. Develop the skill of examining practice: How do we look at instructional data?
3. Develop a shared understanding of effective practice: What does effective instruction for our learning problem look like and what makes it effective?
4. Analyze current practice: What is actually happening in the classroom in terms of the learning problem, and how does it relate to our understanding of effective practice?
If teachers don't fundamentally believe that their teaching can make a difference for student learning, then it's going to be difficult to convince them to change their teaching.
When planning opportunities for teachers to link learning and teaching, consider these points:
-How will you move the conversation from "students/parents/poverty" to "teachers"?
-How will you frame the work as an opportunity to improve instruction, rather than as a failure (proactive vs. reactive)?
-How will you help teachers have a questioning rather than a defensive stance?
-How will you surface and get people to acknowledge the fundamental assumption that teaching matters for learning?
Components of examining practice:
1. Evidence, data about teaching
2. Precise, shared vocabulary
3. Collaborative conversation with explicit norms
Hearing others' responses to the same lesson helps challenge individual assumptions, helps us notice different things and see the same things in a new way, and leads to a better understanding of the practice observed.
We need a vision for what [this] effective teaching looks like so we can assess whether what we're doing now fits or doesn't fit that vision.
When looking internally to develop ideas of effective practice, the key is to ground the discussion in evidence.
Connecting best practices to data serves multiple purposes: it increases the likelihood that the practice is effective rather than simply congenial; it reinforces the discipline of grounding all conversations about teaching and learning in evidence rather than generalities or assumptions; it's more persuasive (teachers are more likely to try something for which there's evidence that it works); and it reinforces the link between learning and teaching.
Inquiry is essential in developing a shared understanding of effective practice because you want everyone to understand not only what effective practice for the learning problem looks like but why it is effective.
Three questions to consider when making decisions about how to examine instruction are:
1. What data will answer your questions about teaching practice in your school?
2. What are teachers ready for and willing to do?
3. What are your resources, including time?
Tags:
Assessment,
Book Excerpts
Data Wise Ch 4
Notes from text
Data Wise
Boudett, City, Murnane
Chapter 4: Digging Into Data
Without an investigation of the data, schools risk misdiagnosing the problem.
There are two main steps when using data to identify the learner-centered problem in your school: looking carefully at a single data source and digging into other data sources.
The first thing to consider is, What questions do you have about the student learning problem, and what data will help answer those questions?
The next consideration is context: What data will be most compelling for the faculty?
Understanding how students arrived at a wrong answer or a poor result is important in knowing how to help them learn to get to the right answer or a good result.
Challenging assumptions is critical for three reasons:
1. Assumptions obscure clear understanding by taking the place of evidence
2. Teachers have to believe that students are capable of more than what the data shows
3. Solutions will require change
Starting with data and grounding the conversation in evidence from the data keeps the discussion focused on what we see rather than what we believe.
By triangulating your findings from multiple data sources- that is, by analyzing other data to illuminate, confirm, or dispute what you learned through your initial analysis- you will be able to identify your problem with more accuracy and specificity.
Students are an important and underused source of insight into their own thinking, and having focus groups with students to talk about their thinking can have a positive impact on your efforts to identify a problem underlying low student performance.
While you refine your definition of the learner-centered problem, you also build a common understanding among teachers of the knowledge and skills students need to have- in other words, what you expect students to know and be able to do, and how well they are meeting your expectations.
Guiding questions to identify a learner-centered problem:
Do you have more than a superficial understanding of the reasons behind students' areas of low performance?
Is there logic- based on the data you have examined- in how and why you've arrived at the specific problem identified?
Is your understanding of the problem supported by multiple sources of data?
Did you learn anything new in examining the data?
Do you all define the problem in the same way?
Is the problem specifically focused on knowledge and skills you want students to have?
If you solve this problem, will it help you meet your larger goals for students?
Tags:
Assessment,
Book Excerpts
Data Wise Ch 3
Notes from text
Data Wise
Boudett, City, Murnane
Chapter 3: Creating a Data Overview
Preparing for a faculty meeting:
1. Decide on the educational questions
2. Reorganize your assessment data (simple is better)
3. Draw attention to critical comparisons
4. Display performance trends
The underlying educational questions should also drive every aspect of the presentation of the assessment data and provide a rationale for why it is important to present the data one way rather than another.
For example, the questions you are trying to answer should help you make the following decisions about your data presentation: Do you want to emphasize time trends? Are teachers and administrators interested in cohort comparisons? Is it important to analyze student performance by group? Do you want to focus the discussion on the students who fall into the lowest proficiencies or those who occupy the highest? Do you want to focus the audience's attention on the performance of your school's students relative to the average performance of students in the district or the state?
Understanding how students outside your school perform on the same assessment can provide benchmarks against which to compare the performance of your school's students.
In labeling and explaining graphs showing student performance, it is very important to be clear about whether the display illustrates trends on achievement for the same group over time, or whether it illustrates cohort-to-cohort differences over a number of years in the performance of students at the same grade level.
Components of Good Displays
1. Make an explicit and informative title for every figure in which you indicate critical elements of the chart, such as who was assessed, the number of students whose performance is summarized in the figure, what subject specialty, and when.
2. Make clear labels for each axis in a plot, or each row and column in a table.
3. Make sensible use of the space available on the page, with the dimensions, axes, and themes that are most important for the educational discussion being the most dominant in the display.
4. Keep plots uncluttered and free of unnecessary detail, extraneous features, and gratuitous cross-hatching and patterns.
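Here is a minimal sketch of what those four components might look like in practice. This is my own illustration, not an example from the book; it assumes matplotlib and uses invented data.

```python
import matplotlib.pyplot as plt

# Invented cohort-to-cohort results, for illustration only.
years = [2009, 2010, 2011, 2012]
pct_proficient = [48, 53, 57, 61]

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(years, pct_proficient, marker="o")

# 1. Explicit, informative title: who, roughly how many, what subject, when.
ax.set_title("Grade 8 Math, % Proficient by Year (approx. 95 students per year)")
# 2. Clear labels on each axis (and a note that these are different cohorts).
ax.set_xlabel("School year (different cohorts of 8th graders)")
ax.set_ylabel("Percent proficient")
ax.set_xticks(years)
# 3 and 4. Let the trend dominate the space; no gridlines, patterns, or clutter.
plt.tight_layout()
plt.savefig("grade8_math_overview.png")
```

Labeling the x-axis as "different cohorts" also heads off the same-group-over-time vs. cohort-to-cohort confusion described below.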
Actively involve teachers with the data by giving them an opportunity to make sense of the data for themselves, encouraging them to ask questions, and offering them a chance to experience and discuss the actual questions on the test.
In reality, student assessment data is neither weak nor powerful. The real value in looking at this kind of data is not that it provides answers, but that it inspires questions.
Tags:
Assessment,
Book Excerpts
Assessment FOR Learning Ch 6-9
Notes from text
An Introduction to Student-Involved Assessment FOR Learning
Stiggins and Chappius
These are the presentations from the other groups in my class in case you were interested.
Chapter 6 Written Response (Essay Assessment)
Chapter 7 Performance Assessment
Chapter 8 Personal Communication as Assessment
Chapter 9 Assessing Dispositions
Tags:
Assessment,
Book Excerpts
6.25.2012
Group Research Project Survey
I'm taking an Intro to Research Methods class this summer and we are doing a group research project.
We are in the early stages of this and so right now, I need you people to be my data!
Please take this very short 5 question survey.
Thank you and feel free to pass this along.
Stay tuned for results. =)
6.20.2012
Data Wise Ch 2
Notes from text
Data Wise
Boudett, City, Murnane
Chapter 2: Building Assessment Literacy
Assessments should be of middling difficulty; extremely easy or extremely hard tests give you little information about what students know.
Sampling principle of testing- making inferences about students' knowledge of an entire domain from a smaller sample.
Discrimination- discriminating items are used to reveal differences in proficiency that already exist among students.
Measurement error- inconsistencies in scores arising, for example, from different forms of a test sampling different content, from fluctuations in people's behavior, and from differences between individual scorers.
Reliability- degree of consistency of measurement; a reliable measure is one that gives you nearly the same answer time after time
Score inflation- an increase in scores that does not reflect a true increase in students' proficiency
Sampling error refers to inconsistency that arises from choosing the particular people from whom to take measurements.
The margin of error is simply a way to quantify how much the results would vary from one sample to another.
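To make that concrete (my own illustration, not from the book), here is a rough sketch of the usual margin-of-error formula for a proportion such as "percent proficient," assuming a simple random sample and a 95% confidence level:

```python
# Rough illustration only, not from Data Wise.
# Margin of error for a proportion p measured on n students,
# using the normal approximation: ME = z * sqrt(p * (1 - p) / n).
import math

def margin_of_error(p, n, z=1.96):
    """z = 1.96 corresponds to a 95% confidence level."""
    return z * math.sqrt(p * (1 - p) / n)

# Example: 60% of 50 tested students scored proficient.
# Results could plausibly swing by about 14 percentage points
# from one sample of students to another.
print(round(margin_of_error(0.60, 50), 2))   # 0.14
```

With a few hundred students the swing shrinks, which is part of why small year-to-year differences in school results shouldn't be over-interpreted.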
While a well-designed test can provide valuable information, there are many questions it cannot answer. How well does a person persevere in solving problems that take a long time and involve many false starts? To what extent has a student developed the dispositions we want, for example, a willingness to try applying what she has learned in math class to problems outside of school? How well does the student write long and complex papers requiring repeated revision? People demonstrate growth and proficiency in many ways that would not show up on any single test.
Significant decisions about a student should not be made on the basis of a single score.
Raw scores- percentage of possible credit. They are difficult to interpret and compare because they depend on the difficulty of the test, which is likely to vary.
Norm-referenced tests- designed to describe performance in terms of a distribution of performance. Individual scores are reported in comparison to others (a norm group).
Percentile rank- percentage of students in the norm group performing below a particular student's score. PR tells you where a student stands, but only relative to a specific comparison group taking a specific test (see the sketch after these definitions).
Criterion referenced tests- determine whether a student has mastered a defined set of skills or knowledge; measure whether a student has reached a preestablished passing level (cut score). They do not rank students and serve only to differentiate those who passed from those who failed.
Standards-referenced tests- developed by specifying content standards and performance standards; scored with various performance levels
Developmental (vertical) scales- trace a student's development as he or she progresses through school
Grade equivalents- developmental scores that report the performance of a student by comparing the student to the median at a specific stage; easy to interpret and explain but have become unpopular and are rarely used. For example, 3.7 would be a third grader in the seventh month of the school year
Developmental scale (standard) score- reports performance on an arbitrary numerical scale; students who score the same are believed to have the same proficiency even if they are in different grades.
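A quick sketch of the percentile-rank definition above, with made-up scores (my own example, not the book's):

```python
# Made-up norm group scores, for illustration only.
def percentile_rank(score, norm_group):
    """Percent of the norm group scoring below the given score.
    (Definitions vary; some also count half of the tied scores.)"""
    below = sum(1 for s in norm_group if s < score)
    return 100 * below / len(norm_group)

norm_group = [42, 55, 61, 63, 70, 74, 78, 80, 85, 92]
print(percentile_rank(74, norm_group))   # 50.0 -> "50th percentile"
```

The same raw score of 74 would land at a different percentile with a different norm group or a different test, which is the point of the caution in the definition.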
When interpreting the results of a single test, it is often useful to obtain performance data from more than one scale.
For purposes of diagnosis and instructional improvement, most educators want more detail rather than less. Although finer-grained detail is instructionally more useful, results reported on fewer items are also less reliable.
Cohort-to-cohort change model- when schools test a given grade every year and gauge improvement by comparing each year's scores for students in that grade to the scores of the previous year's students in that grade (mandated by NCLB).
Longitudinal (value-added) assessment- measures the gains shown by a given cohort of students as it progresses through school.
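A toy comparison of the two models, with invented average scale scores (my own illustration, not from the text):

```python
# Invented average scale scores, for illustration only.
grade4_2011 = 228   # last year's 4th graders (cohort A)
grade4_2012 = 224   # this year's 4th graders (cohort B)
grade3_2011 = 214   # cohort B as 3rd graders last year

# Cohort-to-cohort change: compares different students in the same grade.
cohort_change = grade4_2012 - grade4_2011      # -4: looks like a decline

# Longitudinal (value-added): follows the same cohort across grades.
longitudinal_gain = grade4_2012 - grade3_2011  # +10: the same students grew
```

The same school can look like it is slipping under one model and growing under the other.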
It is risky and misleading to rely on a single item to draw conclusions about a single student because of measurement error and because you cannot tell which skill caused the student to miss the question.
Three complementary strategies for interpreting scores on a particular assessment, all of which involve using additional information:
1. Look beyond one year's assessment results by applying either the cohort-to-cohort change or value-added assessment approach
2. Compare your students' results with those of relevant students in the district or the state.
3. Compare your students' results on the most recent assessment with their performance on other assessments.
Three reasons why small differences should not be given credence:
1. Sampling error
2. Measurement error
3. Any given set of content standards could lead to a variety of different blueprints for a test.
Differences that are sizable or that persist for some time should be taken seriously.
To understand whether improved student scores are meaningful, educators need to determine whether teaching has been focused on increasing mastery rather than on changing scores.
If students are gaining mastery, then the improvement will show up in many different places- on other tests they take or in the quality of their later academic work- not just in their scores on their own state's test.
This book focuses on how to use assessment results to change practice in ways that make a long-term, meaningful difference for students.
Tags:
Assessment,
Book Excerpts
Data Wise Ch 1
Notes from text
Data Wise
Boudett, City, Murnane
Chapter 1: Organizing for Collaborative Work
Three activities that can support a "data culture" within schools: creating and guiding a data team, enabling collaborative work among faculty, and planning productive meetings.
Data team
Having a few people responsible for organizing and preparing the data means that you can dedicate the full faculty's time to discussing the data.
3 Tasks
1. Create a data inventory (external and internal assessments and student-level information)
2. Take stock of data organization
3. Develop an inventory of the instructional initiatives currently in place
Are you satisfied with the way you capture the information generated from each of your assessments?
It is better to share responsibility for interpreting data among all teachers.
When planning conversations around data, the challenge is to find an effective way to give all faculty members a chance to make meaning of what they see.
Four Helpful Strategies for Planning Productive Meetings
1. Establish group norms (e.g., no blame, no wrong answers)
2. Use protocols to structure conversations
3. Adopt an improvement process
4. "Lesson plan" for meetings (repackage data results so they can be easily understood)
Tags:
Assessment,
Book Excerpts
6.18.2012
Warm Ups and Exit Slips
I've been thinking a lot about how I want to start and end class next year. I think my hang up is that I am a very routine person so I want to pick one thing and use it every day. Then I can make a nice little form to pass out and be done with it. No more thinking.
Just like there is no one best teaching method, there is no one best way to start and end class. I'd like to brainstorm some ideas to keep in my 'toolbox' of ideas. (It is a very sexy toolbox by the way.)
In the past I have mostly done review problems from the previous lesson or review problems of skills they should have and will need for the current lesson as my warm up.
I tried exit slips for the first time last year and it failed. Students would do the warm up on one side and then immediately flip it over and attempt the exit slip, even though I hadn't taught them the new skill yet. Or I would run class too long and forget about the exit slip completely.
I also spent entirely too much time creating, printing, and cutting them. It was not worth it. Neither the ends nor the means were justified.
These are my priorities and purposes behind a warm up and exit slips:
1. Warm ups suck students into learning as soon as the bell rings.
2. I want to make the most of each one of my instructional minutes.
3. I want a seamless transition from one class to another.
4. Exit slips require me to give students time to reflect.
5. I want students to make meaning and create connections from what they've learned.
6. I want some kind of instructional feedback so I know what to do next.
Here are the things I am brainstorming about and please help me add new ideas.
Warm Up Exercises
Review problem from previous lesson
Review problem of skill they should know and will need for current lesson
Review problem of skill from previous unit
Review problem of skill from previous grade
Vocabulary review
Write a main idea from the previous lesson
Compare/contrast something
Look back at a practice problem you didn't understand and write one question about it
Exit Slip Exercises
Write a one sentence summary of the math we did today.
Create analogies/metaphors.
Choose one example problem from the notes and ask a question about it.
Write a main idea or important fact from today's lesson.
Rewrite a process from the notes in your own words.
Rewrite the definition(s) of important vocabulary word(s) in your own words.
Write down any important formulas and label what the variables mean.
Logistics
Do I build this into their daily guided notes sheet?
Do I give them blank index cards and project the task on the SMART board?
Are index cards the right size for what I want? (Do I even know what I want?)
Do I keep these for accountability or do students?
Could I manage it so that the exit slip can be reviewed/shared/discussed the next morning as the warm up exercise, killing two birds with one stone?
If I want some type of instructional feedback, I have to be in possession of the exit slips. When I read through them, what do I do with them? Sort? Keep? Toss?
Could a throwaway exit slip be transferred to my unit summary sheet as the warm up exercise for the day and then be discarded?
I think my bottom line question (which applies to most things I want to do) is:
What is the simplest way to do this with the most impact?
One more thing...I have always wanted to make a giant BINGO (or MATHO I suppose) board with different activities and then roll an awesome BINGO wheel thingy to select the activity. Could that be a possibility? Could I create activities generic enough that could apply to either a warm up or exit slip? I could have students take turns spinning the wheel. I think they would love it and it would be random. Could I let go of my control freak nature enough to let it happen without, gasp, me planning it?
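If the giant wheel never materializes, a low-tech digital stand-in could do the same random picking. Just a sketch of the idea, with placeholder activities (not a fixed list):

```python
import random

# Placeholder activities -- generic enough to work as either a warm up
# or an exit slip.
activities = [
    "Write a one-sentence summary of the last lesson",
    "Rewrite a vocabulary definition in your own words",
    "Do one review problem from the previous unit",
    "Ask one question about a problem you didn't understand",
]

print("Today's spin:", random.choice(activities))
```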
Inquiring minds want to know.
Tags:
Planning
Assessment FOR Learning Ch 5
Notes from text
An Introduction to Student-Involved Assessment FOR Learning
Stiggins and Chappius
Chapter 5: Selected Response Format
We had to work in groups to summarize chapters 5-9 and my group chose chapter 5. We also created a selected response assessment based on informational text at the eighth grade level, one for English and one for math. In addition, we created a handout of checklists from the book.
PowerPoint Summary
Assessments
Handout
Tags:
Assessment,
Book Excerpts
6.13.2012
Assessment FOR Learning Ch 2, 4
Notes from text
An Introduction to Student-Involved Assessment FOR Learning
Stiggins and Chappius
Chapter 2: Understanding Why We Assess
One must always start the assessment process with a clear answer to the question, Why am I assessing?
Assessments at each of these levels can serve either of two purposes: they can support student learning (formative applications) or verify that learning has been attained (summative applications).
The evidence generated must reveal how each student is doing in mastering each standard. Assessments that cross many standards and blend results into a single overall score will not help, due to their lack of sufficient detail.
Teachers ask, Did the student make progress toward mastery of the standard? School leaders ask, Did enough students achieve mastery of the standard?
Formative assessments have no place in the determination of report card grades. They are the continuous assessments that we conduct while learning is happening to help students see and feel in control of their ongoing growth.
Use classroom assessment to keep students believing they are capable learners.
Chapter 4: Designing Quality Classroom Assessments
Four Categories of Assessment Methods
1. Selected response
2. Essay
3. Performance
4. Direct personal interaction
Our goal in assessment design is to use the most powerful assessment option we can: maximum information for minimum cost.
Selected response items can assess recall, classification, and analytical and comparative reasoning, and can even ask students to draw conclusions, but not evaluative reasoning, because evaluative reasoning requires students to express and defend a position.
We always need to know why a student failed. Choosing the wrong assessment method can obscure the 'why'.
Tags:
Assessment,
Book Excerpts
6.12.2012
Assessment FOR Learning Ch 1,3
Notes from text
An Introduction to Student-Involved Assessment FOR Learning
Stiggins and Chappius
Chapter 1: Classroom Assessment for Success
[Students] assessed their own achievement repeatedly over time, so they could watch their own improvement.
[Students] continually see the distance closing between their present position and their goal...ongoing student-involved assessment, not for entries in the grade book, but as a confidence-building motivator and teaching tool.
Our assessments have to help us accurately diagnose student needs, track and enhance student growth toward standards, motivate students to strive for academic excellence, and verify student mastery of required standards.
Whatever else we do, we must help them believe that success in learning is within reach.
Keys to Assessment Quality:
1. Clear Purposes
2. Clear Targets
3. Sound Design
4. Effective Communication
You must ask yourself, "Do I know what it means to do well? Precisely what does it mean to succeed academically?"
Chapter 3: Clear Achievement Expectations: The Foundation of Sound Assessment
Students can hit any target that they can see and that holds still for them.
Please realize that the path to academic success doesn't change as a function of how fast students travel it.
Types of Achievement Targets:
Knowledge (prerequisite to all other forms of achievement)
Reasoning (analytical, synthesizing, comparative, classifying, evaluative, inductive/deductive)
Performance Skills (to integrate knowledge and reasoning proficiencies and to be skillful)
Products (developing the capacity to create products that meet certain standards of quality)
Dispositions (attitudes, interests, motivation)
In the case of analytical reasoning, our instructional challenges are to be sure that students have the opportunity to master whatever knowledge and understanding they need to be able to analyze the things we want them to understand, and that they receive guided practice in exercising their analytical thought processes.
What do students need to come to know and understand in order to be ready to demonstrate that they can meet this standard when the time comes to do so?
What patterns of reasoning, if any, must they gain mastery of on their journey to this standard?
What performance skills, if any, are called for as building blocks beneath this standard?
What products must students become proficient at creating, if any, according to this standard?
Tags:
Assessment,
Book Excerpts
6.07.2012
Summary Sheets
I'm teaching one week of summer school and the students are rotating between doing board work, worksheets, and a computer program. I decided to create a summary sheet so that after each rotation they could stop, summarize their learning, and hopefully remember it before moving on.
And surprisingly, one student said that we should use these during the regular school year. So now I'm thinking maybe this could be a good addition to our notes as an exit slip type exercise.
I'm already planning to include a unit summary sheet in our math portfolios where students have to go back through each section of the unit and rewrite summaries as a way of review.
Seems like they could go hand in hand. Maybe I should modify the unit summaries to include example problems as well?
What modifications would you make so that writing summaries becomes a more natural and useful process?
Tags:
Literacy
6.01.2012
End of Course Exams
Hi everybody.
I spent all last week working on my end of course exams and pacing guides for Algebra I, Geometry, and Algebra II. We have never used end of course exams and I wanted to get some feedback from you on how we are planning to use them, possible setbacks, and also how you use them.
The plan is that we will no longer have semester exams. Before, each semester ended with a comprehensive exam covering that semester, but not the entire year.
Now, at the end of the year, students will have to have a passing grade as well as pass the end of course exam in order to pass the entire course. This prevents two things. One, a student can't slack off all year, then do awesome on the EOC and pass the class. Two, a student can't do awesome all year, then slack off on the EOC and pass the class. I like both of those.
Also, the test is pass/fail and will not go in the grade book to avoid it being a double whammy on a student's grade. The test will be given at the end of April. For students who don't pass, there will be remediation during class time while the students who did pass are doing some sort of project. The test will be given again in May. If students don't pass the second time, then they will come to summer school, re-take the course, and try their third attempt at passing the EOC. If they still don't pass, then they will repeat the course in the following school year.
The intention of the EOC is to help stop the cycle of passing students who aren't ready to move on. We spend too much time reviewing the previous course because we feel students are unprepared and then we get behind in teaching the course itself. I'm not sure how well this will combat that problem because the student has three chances, and if they do pass, will they be prepared enough to start the next course?
I suppose that is where our rigor on the test comes in. The test is a two-day test and the math is set up so that session I is all multiple choice and session II is all open-ended involving at least one writing piece.
To kill two birds with one stone, we will also be using the EOC to show student growth over the course of the year (a key part of our teacher evaluations). We will give it at the end of each quarter so that by the 'official' time they take it, they will have seen it three previous times.
I have two options here. One, I can count it in the grade book as a regular test and grade it according to the quarter: 25% right in the first quarter would be an A, 50% in the second quarter would be an A, and 75% in the third quarter would be an A. Two, I could not put it into the grade book at all but just keep a separate record of their scores each time to show growth over the year. I'm leaning toward the second option just because it would be an easy document to turn in for my summative evaluation. Also, if I did it in Excel then I could create fancy impressive graphs. Yay me!!
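For option two, here's a rough sketch of what that separate record (and the fancy graphs) might look like, using made-up scores and Python/matplotlib instead of Excel; the student names and numbers are placeholders:

```python
import matplotlib.pyplot as plt

# Made-up quarterly EOC percent-correct scores, kept outside the grade book.
eoc_scores = {
    "Student A": [22, 41, 63, 78],
    "Student B": [30, 48, 70, 85],
}
quarters = [1, 2, 3, 4]

fig, ax = plt.subplots()
for name, scores in eoc_scores.items():
    ax.plot(quarters, scores, marker="o", label=name)

ax.set_title("Algebra I EOC Practice Scores by Quarter")
ax.set_xlabel("Quarter")
ax.set_ylabel("Percent correct")
ax.set_xticks(quarters)
ax.legend()
plt.savefig("eoc_growth.png")
```

One chart per class section would be an easy artifact to hand in for a summative evaluation.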
I am worried because my students can't even pass my semester exams which are easier than the EOC and I've had to curve grades every semester. But the students will be more familiar with the EOC after taking it three times. In addition, I built my pacing guide so that I can see which EOC question matches up to each concept. My hope is that when I create my lessons and assessments, I will remember to include questions similar to the ones on the EOC so that hopefully, nothing will be a surprise!
This is the closest I've been to actually using backward design/UbD so it will be interesting to see how it all plays out.
What drawbacks do you see to this process? How does your school use EOC's differently? How do you show student growth?
Tags:
Assessment,
Planning