U.S. DOE Report Shows Smarter Balanced Lagging Behind


A two-year report on the Smarter Balanced Assessment Consortium, written by the U.S. Department of Education and published in May 2013, shows the consortium lagging behind its stated goals and behind on technology preparedness in local schools.

Page 15 of the report makes it evident that Smarter Balanced does not have a handle on the technology readiness of the states in the consortium.

[Image: technology readiness data reported by member states, from pg. 15 of the report]

Seven of the member states don’t have any data listed at all. Only Delaware and Connecticut reported 100% readiness.

On page 7 of the report, they say they developed 5,000 items for the Spring 2013 pilot. This is a reduction from the original 10,000-item goal.

Recognizing the importance of having strong quality control measures, the consortium revised its processes for monitoring and reviewing alignment to the CCSS and quality throughout future item development cycles. In order to provide a focus on alignment and quality, and to increase the percentage of machine-scored items, the consortium reduced the number of items developed for the pilot test from 10,000 to 5,000, (pg. 9).

That is report-speak for “we had a crappy product, so we needed to tweak it.” The Department of Education saw this as a challenge:

While Smarter Balanced item development is well underway, the consortium also experienced several challenges. During the item review process in fall 2012, the consortium recognized that the review process for ensuring the quality of the items was not sufficient. As a result, the consortium revised the number of items that were developed for the pilot test (from 10,000 to 5,000) so that an additional review could occur to provide a clearer focus on quality and alignment to the CCSS. Moving forward, the consortium is going to be developing 38,000 items in year three for the field test in spring 2014. It is essential that Smarter Balanced maintain a strong process for determining the quality of the items being developed. This will require that the consortium monitor and evaluate the processes for writing and reviewing items as well as for reviewing the quality of the items themselves, and that the consortium include external content experts in English language arts and mathematics as a component of the item development processes. The RFP for field test item writing, released in December 2012, included several of these components to strengthen the consortium’s quality control measures, (pg. 23).

They came up with only 5,000 assessment items in two years. We are to believe they will develop 38,000 items for the field test by Spring 2014? Right.

You can read the report for yourself here, here or below.

Comments

  1. Richard_Innes says

    This raises even more questions about sustainability, a topic I addressed to the Indiana legislative commission examining CCSS on October 1, 2013. Sustainability became a big problem with Kentucky’s KIRIS assessment Performance Events in the early 1990s and those events crashed in just four years (KIRIS was remarkably similar to S-B proposals).

    It’s not enough just to create a first round of questions. Questions have to be changed out over time to avoid teaching to the tests, and generating new questions that cover the same academic material to the same level of difficulty gets really challenging when we are talking about either open-response questions or the even more problematic performance items.

    One other question: are the 5,000 questions spread out across both English language arts and math and across all Elementary and Secondary Education Act testable grades from 3 to 8 plus high school? If so, that works out to an average of less than 400 questions per subject per grade, which probably isn’t enough to sustain the assessments for very long.
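
    A quick back-of-the-envelope check of that figure, assuming (purely as a sketch, not anything the report specifies) an even split of the 5,000 items across the two subjects and seven tested grade spans:

        # Rough check of the per-subject, per-grade average described above.
        # Assumption (not from the report): the 5,000 pilot items are spread
        # evenly across 2 subjects (English language arts, mathematics) and
        # 7 grade spans (grades 3 through 8, plus one high school test).
        total_items = 5000
        subjects = 2
        grade_spans = 7
        items_per_cell = total_items / (subjects * grade_spans)
        print(round(items_per_cell))  # about 357 items per subject per grade -- under 400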

    With these tests slated for use in multiple states, question compromise is pretty much a certainty, so they will have to be changed out. Also, some of the questions will probably prove unsuitable during pilot testing and will have to be discarded, as well.