I'm sort of in "assessment mode" here in the homeschool, nine weeks into the school year. We're just finishing up a grammar text and assessing before we start a new one; my 10-y-o just finished a unit in his physics-kit-based activity; I just gave a quiz to all three of the oldest kids to see what they remembered from American history so far; and last week I made them a Latin test. Next week a proctor will come to my house to administer the required annual standardized test to my 10- and 7-year-olds.
Some of these assessments are created by third parties, but a lot of them I write myself. I expect that won't change much as the kids grow older, unless I radically change my style of homeschooling. Which always makes me think about the purposes of assessment, and consider how third-party assessments and my own homegrown style achieve those purposes.
Let's set aside the derivative purposes of (1) meeting state-mandated requirements and (2) teaching children how to perform on various assessments to demonstrate accurately their mastery of material. The primary reasons for assessment are these:
(1) use by the teacher to find out how well students have mastered material, so that the teacher can either move on to new material or adjust her teaching to present material more effectively
(2) use by the teacher or student as a practical tool to aid retention through motivated reviewing of material and working under external constraints (e.g. without outside help, or with time limits)
(3) use by third-party gatekeepers (e.g. potential employers, educational admissions staff) as a purportedly objective credential.
By the way: Not just the teacher, but the student, can use self-assessment techniques for purposes (1) and (2). It must be said that the student can also evaluate his assessments as credentials, and in so doing tailor his search for employment, educational opportunities, and other benefits that are controlled by third parties.
As the teacher who's also the parent, I have a certain dilemma.
As the teacher (and to some extent, as the scientist I will always be), I want to measure something. I need the information so I can tailor my teaching to my young students. But I'm aware, as that teacher, as that scientist, that I'm not an unbiased observer -- because I'm the mother of the student, there's always a result I'm hoping to see, a hope that my child will "do well." I want my child to show himself that he is learning, and to be pleased with his progress, and to confidently attack new material. I want to believe, and I want him to believe, that he's smart and studious and capable.
And because I'm the teacher, I am also hoping to see that same "good" result because it will demonstrate that I'm a pretty darn good teacher. Since I only have a few students, by the way, it isn't acceptable for "almost all" or "all but one" of my students to pass muster. They all have to do it -- some slower, some faster, but they all have to do it -- or else I have to face an unpleasant truth: inadequate teaching or parenting. (Yes, as kids get older their schooling decisions become more their own; but the responsibility of choosing the school is mine until they are adult.)
And because I know each of my own kids pretty well, I could (if I wished) design a test that I know he would complete at an apparently excellent level. Or I could design one to trip him up on purpose. It all depends on how I select the questions.
Choosing only objective assessments written by someone else is one way to get around the bias, and of course it takes little time or effort. I do this with most of our purchased curricula -- for example, I use the tests that come with our math program and some of the tests that come with the Latin book. The downside is that it may not measure exactly what I am interested in knowing. The math assessments I choose are pretty objective and matched to the material, so I'm comfortable with those (especially in combination with the annual standardized test). But the Latin tests don't quite cover everything I teach, because I've tweaked the material so much and also added some material that I thought was being covered too slowly. And then of course I sometimes want to know more subtle things, like the difference between remembering detailed objective facts and understanding how causes led to effects, or the difference between failures of understanding and failures to avoid careless oversights. Sometimes you have to look deeper than the test-in-a-box to see those.
I don't use school-in-a-box much; I write or design or improvise most of my own curriculum. And if I want an assessment, I have to come up with the assessment myself. If all I cared about was leaving a paper trail proving I had done something that looks like assessment -- and I respect the views of parents who choose to assess only to provide the state with its required paper trail -- it would not be so hard. But I really do want to measure how the kids are doing. And the truth is, when I sit down to write an assessment, I can feel the temptation to write questions that have an outcome I already know.
That's not to say that there's no purpose in writing some questions that I am sure will be answered correctly, because those questions may help the children remember what they've learned or make them confident to attempt harder material. And of course it generates a paper trail for the state. But it's not really "assessment" if it tells me only what I already know. I'm wasting my time unless I go beyond demonstrating and really start evaluating.
This has never been more clear to me than now that I am writing assessments for other people's children as well as my own. I carefully designed a Latin test for three different children a couple of weeks ago, and I'm pretty sure that was a fair assessment, although I knew well how my own son would perform and couldn't predict quite as closely how the other two would do. I decided on impulse to write a history quiz yesterday, and wrote it in about fifteen minutes, and I am less confident that I designed it well.
The difference is how much time I spent on them. With the Latin test, I began by asking myself pretty objectively, "What have I been trying to teach for the last few months? What do I want the children to know before I can move on?" I listed those teaching goals, and then I tried to write questions that would measure them. As I wrote it, I had a pretty good guess as to which children would do well on which parts of the test, but I didn't know exactly what the extent of their knowledge was. I tried to suppress an impulse to trip them up on purpose. I needed the test to see what to emphasize in upcoming lessons; I also wanted to use it to demonstrate to my friends, their mothers, how well they were mastering the material.
I wrote the history test more on impulse, and, not wanting to make it very long, used just a few questions. The first part, where I listed eleven historical events (1492 through 2001) and asked them to put them in order -- that was definitely a well-designed question. But for the other ones, I just sort of thought quickly about a few of the books we had read, and then asked a question or two about the time periods they covered. Two questions asked for recall of details that I knew quite well my own son would know easily and that I had a pretty good idea the other children wouldn't. That's not why I picked them; I picked them because they were objective-sounding questions that came quickly to mind, but I didn't think very hard about whether it was the kind of question I really needed to be asking. The other two questions were meant to be essay- or list-type questions, and while I think the questions covered good material, I didn't write them well -- it wasn't obvious to the children that I wanted them to write as much as they could remember about each subject (they each wrote a few words and had to be sent back to write more, and only one answer to one question turned out to be anything like what I wanted them to write).
So, that history test created a paper trail for the state, and demonstrated mastery of one question, but I'm not really satisfied with it. As an assessment of myself it was pretty darn effective. I now know that I need to take more time writing tests. I can also use the essay questions to help teach the kids how to answer essay questions. So: not useless, but needs more work.
I haven't written yet about the gatekeeper/credential function of assessment. Unfortunately, the traditional use of "grades" (as assessed by the teacher) as credentials -- for employment or higher education admission -- is obviously absurd for the homeschool student. If higher grades will help my kid get a job or a scholarship, wouldn't I be insane not to give him higher grades? Or rather, wouldn't it be insane to imagine that I could assess performance objectively? On the other hand, maybe it's not unfortunate that it's obviously absurd, because maybe the whole grades-as-credentials scheme is absurd, and homeschooling only makes that absurdity more obvious. How can letter grades be objective credentials at all, when different schools teach at different levels of difficulty to different pools of students? When the student is also a "customer" paying for the credential? Do credentials such as letter grades really predict success in all the endeavors for which they serve as proxy qualifications?
The bottom line is that a single method of assessment cannot serve two masters, i.e., the teacher and the third-party gatekeeper. As my children approach high school age, when the assessments and the credentials start to converge, I'm going to have to decide how to navigate this problem.
Can I suggest something that I have tried successfully with my grad students?
Ask the students themselves to write questions (and answers) for the test. Probably not useful for the youngest, but for older students I think that the idea of re-evaluating material with the question of 'what would I ask' can help you see what they think you think are the important points. Also can be a nicely cooperative means of honoring everyone's perspective and contribution to learning. Also, then they know that there will be at least one familiar question on the test and learn to appreciate how hard it can be to write a good question. Once again, depending on age, YMMV.
Posted by: Christy Porucznik | 27 October 2010 at 01:56 PM
Hey, Christy, that is a brilliant idea. You're right that it probably won't work for the younger kids, but I'm sure I could try that with the ten- and twelve-year-olds.
Posted by: bearing | 27 October 2010 at 03:28 PM
Glad to be of service. Web 2.0!
Posted by: Christy Porucznik | 27 October 2010 at 04:06 PM
Thanks Christy! I will also try this with my 12-year-old boy.
Posted by: Judy Kingston-Smith | 28 October 2010 at 06:55 AM
I struggle with the 'what do I need to have confidence that the child has mastered the material so we can move forward' vs 'what does the state need' vs 'what will he need as credentials for college' as well... more so now that I have a senior!
Riley Lark at Point of Inflection blog (http://larkolicio.us/blog/) has been having a number of posts over the past month on the topic of assessments and weighting, especially as they relate to the standards covered during the course. It's mostly math-focused, and as we learn at home we may not be quite as standards-focused, but the conversations - especially in the comments - have been very interesting and given much to think about.
Posted by: Shirley | 28 October 2010 at 08:09 AM
Having slept on it, I'm increasingly convinced of the assertion in my last paragraph: "a single method of assessment cannot serve two masters." I think the homeschool teacher would do best to separate the functions with different assessments.
In other words, you have one kind of assessment for credentialing (including for communicating to the student where he is or isn't capable, where he has or has not passed external milestones); another for measuring and correcting the effectiveness of the teaching/learning dyad midstream; and maybe another as a retention tool.
Then you can decide which of these, if any, you'll retain as part of your paper trail.
Schools sort of do this when they give low-stakes quizzes throughout a course as well as high-stakes exams or projects that serve the credentialing function. We could do this, but why should we behave like schools when our strengths and weaknesses are so different from those of schools?
We can never measure our own children with objectivity like a school can, but a school can never *know* them like we can. We should be really, really good at doing the assessment-for-improvement functions our own way. And then we should look for some other way to assess for credentialing.
Posted by: bearing | 28 October 2010 at 08:52 AM
I tend to differentiate sometimes between assessments and tests. Assessments to me literally "assess" where my daughter is on a subject, whether I have formally exposed her to it or not. They don't involve studying. (For instance, at the end of first grade I had her take a series of first and second grade spelling tests cold on Spelling City. We had never studied spelling.)
To me "tests" involve review and studying and a certain amount of pressure (time constraints, extremely limited assistance). Let me just say that at this point I am very glad that testing is not mandatory in IL. Otherwise my very bright daughter (almost 8) would completely bomb. Her mind completely shuts down when exposed to the tiniest amount of real or perceived pressure.
In the past I've given her "final exams" for math (we use Singapore, so a lot of people use the placement tests for this). I did it as much for early preparation for standardized testing as for assessment. But it's been very hard for me to know what level of "help" is appropriate to give, not to mention dealing with her screaming and crying from the stress.
I just pray that she outgrows this before it is time for her ACT/SAT, and in the meantime I am debating whether I will really give her the next final when she finishes her current math level.
Posted by: Barbara C. | 28 October 2010 at 02:49 PM
I also know what you mean about wanting your child to do well on an "objective" test so that it will reflect better on you as a teacher. After the last "final exam" I went through a real period of self-doubt and self-examination as a teacher and a parent.
While I realized that some of her less-than-stellar performance was due to behavioral and psychological issues, I had to reassess what my goals are for her age. Somewhere along the way, and for some unknown reason, I had slipped from trying to "plant seeds" to expecting complete mastery.
I had forgotten that sometimes if I just plant the seeds, they may not germinate until days or even months later. There are some lessons (formal or informal) that she doesn't seem to pay attention to at all or is actively rejecting (especially during Religion), but then she'll pull out an amazing insight, or I will overhear something she told her Granny about it much later.
So, I think I am becoming more and more leery of doing any testing until my kids are at least 10. And even then it would probably be more to prepare them on how to take tests than to truly assess their learning.
Posted by: Barbara C. | 28 October 2010 at 03:07 PM