Grade 3 Math Assessments: “meeting standards” or “struggling”?

Peter Rukavina

The Prince Edward Island Department of Education released the Primary Mathematics Assessment results yesterday. These are the results for students who were in Grade 3 in 2009-2010 and were tested on their mathematics skills in October 2010.

There were 1,295 students tested, and the results were as follows:

  • met expectations: 68%
  • approached expectations: 12%
  • experienced difficulty: 20%

Here are three news headlines that resulted from the news release announcing the results, “Results from Primary Math Assessments available online”:

There was a similar story in Nova Scotia in 2008, reporting on that province’s Grade 3 math assessment; in that case the headline on the CBC was “Nova Scotia Grade 3 students struggle with math.”

In that story the lede was “One-third of the children tested last year did not meet expectations in the province’s first Early Elementary Mathematical Literacy Assessment.” That one-third turns out to be almost exactly the same proportion of students not meeting expectations in Prince Edward Island’s assessment: the 12% who approached expectations plus the 20% who experienced difficulty, or 32%.
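For my own peace of mind, here is the back-of-the-envelope arithmetic behind that comparison, as a small Python sketch; the percentages and the 1,295 figure are from the news release, everything else is my own calculation:

    # Back-of-the-envelope check of the PEI results against Nova Scotia's "one-third"
    tested = 1295        # Grade 3 students tested in PEI
    approached = 0.12    # "approached expectations"
    difficulty = 0.20    # "experienced difficulty"

    not_meeting = approached + difficulty                # 0.32, i.e. 32%
    students_not_meeting = round(tested * not_meeting)   # about 414 students
    print(f"{not_meeting:.0%} not meeting expectations, "
          f"roughly {students_not_meeting} of {tested} students")
    # Nova Scotia's "one-third" is about 33%, essentially the same share.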

So in Prince Edward Island the minister is “pleased,” the “numbers add up” and students are “meeting standards,” whereas, with the same result, in Nova Scotia students are “struggling” with math.

My problem, as a parent of a student in this group, is that I have no idea who’s right on this: should I be pleased with the results, or concerned? Are our children being well-schooled in math or not? Do the results mean that 32% of 9- and 10-year-olds in the province can’t add, or do they mean that 68% could go on to become mathematicians?

And this reveals the problem with standardized testing, especially when the results are wrapped in fuzzy words like “met expectations” that are useful for public relations but of little utility to parents. I remain unconvinced that there’s any utility at all in testing students like this, especially given the resources and attention the testing takes away from actual education, and given that the results can be spun to mean anything you want them to mean.

Comments

Submitted by Clark on


Whose expectations? The “fuzzy words” are ambiguous at best, but the scores found in the report seem extremely poor. A group of kids achieving an average of 56 (St. Jeans) on a math test should be alarming to someone.

Submitted by Ritchie Simpson on


I don’t see that the problem here is so much with standardized testing as with the classification and interpretation of the results, the positioning of the goal posts as it were. There is a large empirical element to the education of our children, and as parents and a society we have to know that our efforts are successful.
We cannot simply rely on the assertion of those hired to do the job that they have been successful, particularly when empirical testing shows otherwise. We hear too often that the results are not significant or are skewed in some way, as in the criticism of “teaching to the test”. Of course, the reason this is put forward is that the results can and do expose poor performers at the blackboard as well as those seated in the classroom.

Submitted by Bon on


Ritchie, there’s a lot more to the critique of standardized tests than that they expose poor performers: many educators who have scored well on large-scale instruments, both in their student years and their professional careers, take issue with them.

The fact that we still have large empirical components to our curriculum may or may not be of benefit to our students: that’s a separate conversation and one I don’t have a black and white view on. But the fact that we have them, and want or even need to know that our efforts to teach them are successful, does NOT mean that standardized tests actually give us this information. They often tell us how kids take tests more than they provide useful information for better teaching.

The standardized instruments are part of one ideology of education, and if you start from the premise that the way to judge learning is to test it, then they appear natural and necessary. There are other ways to view education and the learning process.

Submitted by ashleyjohnston on


I’m not sure why they would group students into three coarse groups in this age of instant data visualization. But I wouldn’t object to drawing the arbitrary grouping bars on the distribution graph.

Even then the data only seems useful when comparing it to other jurisdictions.
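A minimal sketch of the kind of graph described above, with made-up scores and hypothetical cut-offs (the report only publishes the three category percentages, so none of these numbers come from it):

    # Illustrative only: synthetic scores and hypothetical "grouping bars" on a
    # score distribution; the real report publishes only three percentages.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    scores = np.clip(rng.normal(loc=62, scale=15, size=1295), 0, 100)
    cutoffs = [50, 58]  # hypothetical boundaries between the three reported groups

    plt.hist(scores, bins=30, color="lightgray", edgecolor="black")
    for c in cutoffs:
        plt.axvline(c, color="red", linestyle="--")
    plt.xlabel("Assessment score")
    plt.ylabel("Number of students")
    plt.title("Score distribution with grouping cut-offs (illustrative)")
    plt.show()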
