Tag: assessment

Can MOOCs Become System-Builders?

July 1, 2015
Image from The Atlantic website

Something really special happens in the second or third year of implementation in schools that are applying competency education with the spirit of learning and the spirit of empowerment – educators develop a deep sense of urgency to improve their skills so they are in a better position to help students learn.

In the first year or so, a shared purpose takes hold: the goal is to make sure students learn, not to cover the curriculum. Educators have figured out the new infrastructure for learning; the understanding of what proficiency means at each academic level has been calibrated; everyone knows where the school is strong and where it is weak in helping students learn; and if a strong information system has been put into place, everyone also knows exactly how students are progressing and which ones need more help. With this transparency about how the school is performing, educators focus on improving their instructional tool kits: deepening their knowledge of how to teach their discipline, upgrading instruction and assessment toward higher order skills, integrating language and literacy practices, organizing learning opportunities so students are engaged in robust learning, coaching students more effectively in building habits of learning… and the list goes on.

It’s a tremendous lift in instruction and assessment, led by educators themselves, who realize that their own professional skills need to improve if they are going to help students achieve. I think of this as the transition toward the Finnish model. Teachers have explained that this stage of the transition is both the most challenging and the most rewarding. However, as a country, we are challenged to provide adequate professional development and learning opportunities for teachers that are rooted in the values and practices of competency-based education and are available in just-in-time modules. (more…)

When Diplomas and Credits Send False Signals

June 12, 2015

This post originally appeared at the Foundation for Excellence in Education on June 11, 2015.

Last month Achieve launched its #HonestyGap campaign. The effort highlights the gap between the percent of students deemed proficient on state exams versus the percent of students deemed proficient on the National Assessment of Educational Progress (NAEP). Not surprisingly, the gaps are wide and pervasive.
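In arithmetic terms, the gap is a simple subtraction: the share of students a state’s own exam labels proficient minus the share NAEP labels proficient. Here is a minimal sketch, using entirely made-up numbers rather than any real state’s results:

```python
# Hypothetical illustration of the "honesty gap": the difference between
# the percent proficient on a state's own exam and the percent proficient
# on NAEP. All numbers below are invented for illustration only.
state_results = {
    "State A": {"state_exam_pct": 80, "naep_pct": 35},
    "State B": {"state_exam_pct": 65, "naep_pct": 40},
}

for state, pct in state_results.items():
    gap = pct["state_exam_pct"] - pct["naep_pct"]
    print(f"{state}: honesty gap = {gap} percentage points")
```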

The NAEP is considered to be the gold standard of assessments, and this Achieve report clearly demonstrates how parents, students and, quite frankly, educators are being misled by inconsistent expectations of proficiency. In many states when a student passes a state exam, it may not mean he has mastered the content. Often the tests are too easy or the passing scores too generous.

This proficiency gap is decried by the education reform community, but the NAEP isn’t a test most parents are even aware of because it has no impact on individual students or state accountability systems.

Parents typically rely on the most familiar aspect of American education to understand how their student is performing in school—the report card.

The real miscommunication happens when students earn passing grades in required courses yet struggle with end-of-year assessments. Students may accumulate all the required credits, but what is their diploma worth if they haven’t mastered the content? (more…)

Making Sense (or Trying to) of Competencies, Standards and the Structures of Learning

June 9, 2015
Image from Building 21

States, districts, and schools are developing a range of ways to structure their instruction and assessment systems (the set of learning goals describing what schools want students to know and be able to do; the way they can identify whether students have learned them; and, if not, how they can provide feedback to help them learn). I’m having difficulty describing the differences, let alone their implications. The question of how we design the descriptions of what students should learn, and have learned, has come up over the past month in meetings about assessment, about learning progressions (instructional strategies that are based on how students learn and are designed to help students move from one concept to the next), and with the NCAA.

So I’m doing the only thing I know how to do—which is to try to identify the different issues or characteristics that are raised to see if I can make some sense of it. For example, here are a number of questions that help me understand the qualities of any set of standards and competencies:

Is it designed to reach deeper levels of learning?

Some structures clearly integrate deeper learning and higher order skills, whereas others seem to depend solely on the level of knowledge implied by how the standard is written. We could just use standards and forgo an overarching set of competencies. However, the competencies ask us to ensure that students can transfer their learning to new situations. That requirement drives us toward deeper learning.

Is it meaningful for teachers as they teach and for students as they learn?

As I understand it, much of the Common Core State Standards was developed by backward planning: starting from what we want kids to know and be able to do upon graduation, then figuring out what that would look like at younger ages. Much less attention was paid to structuring the standards around how students learn and around meaningful ways to get them there. The learning progression experts are emphatic that it is important to organize the units of learning in a way that is rooted in the discipline and helps teachers recognize where students’ understanding is and how to help them tackle the next concept. That means the structures are going to be different in different disciplines. Thus, we need to understand how helpful the structures of standards, competencies, and assessments are in actually helping students learn. (more…)

Needed: Partners with Assessment Expertise

June 2, 2015

I had a sense of dread as I flew to Colorado to join the National Center for the Improvement of Educational Assessment for its annual Colloquium on Assessment and Accountability Implications of Competency and Personalized Learning Systems. A room full of experts on measurement? I was prepared to have any ideas I might have about what assessment looks like in a fully developed competency-based system destroyed in a Terminator-like fashion.

Instead what I found was a room of incredibly thoughtful, creative, forward-thinking people who are willing to explore along with all of us how we might organize a system that keeps the focus on learning while using discrete and meaningful mechanisms to ensure rigor and equity. Ephraim Weisstein, founder of Schools for the Future; Maria Worthen, Vice President for Federal and State Policy at iNACOL; Laura Hamilton of RAND; and I were invited to the Colloquium to kick off the conversation. My brain started churning as I listened to the presentations from Kate Kazin, Southern New Hampshire University; Samantha Olson, Colorado Education Initiative; Christy Kim Boscardin, University of California, San Francisco; and Eva Baker, CRESST.

And then my brain went into overdrive listening to the insights of the team of assessment experts as they sorted through the conversation, explored different options, and identified where there was opportunity to create a system that generated consistency in determining levels of learning. It would be a system in which credentialing learning generates credibility, a system that allows us to trust when a teacher says a student is proficient, providing us with real confidence that they are, in fact, ready for the next set of challenges.

Some Big Take-Aways

Below are some of the big take-aways that Ephraim, Maria, and I came away with.

1. Get Crystal Clear on the Goal: It’s critical for the field and competency-based districts and schools to be explicit about their learning targets (however they might be defined and organized) so results can be evaluated and measured. There are a variety of ways of structuring competencies and standards, and we need to think about the ways in which we can measure them (or not).

2. Consider Applying Transparency to Designing Assessments: We all operate with the assumption that summative assessment items have to be hidden in a black box. However, we could make test items transparent – not their answers, of course, but the questions themselves. Consider the implications: lower costs, more sharing, more opportunity for stakeholders to understand the systems of assessments. It’s worth having an open conversation about the trade-offs of introducing transparency as a key design principle for the system of assessments that supports competency education. (more…)

Credibility Starts with Consistency with Common Assessments

May 26, 2015

The more I think about what the key elements of a competency system might be — those elements that, if they aren’t working properly, allow the system to weaken or be corrupted — the more I focus on ensuring that the system is calibrated or tuned. When a district or school puts on a transcript that a student is proficient, we need to have absolute trust that there is agreement on what that means and that the next school or college will have a pretty darn close understanding of proficiency as well. Basically, we want our system to be credible and trustworthy. That’s what accountability is all about.

And that’s why we need to do everything we can to build this capacity into our districts and schools as fast as we can. Our traditional system doesn’t expect this, nor does it have the mechanisms in place to make it happen. That’s why we’ve had to turn to NAEP and state accountability assessments to tell us how we are doing at helping our kids learn.

And that’s why the webinar Ensuring Consistency When Using Common Assessments, sponsored by Great Schools Partnership, is so important. It’s tomorrow, Wednesday, May 27, from 3-4 p.m. EST.

Here’s the description: Ensuring consistency when using common assessments requires collaboration with colleagues to calibrate scoring, refine assessment tasks and scoring criteria, and collectively reflect on the results. This process ensures a constant practice of evaluating and refining scoring criteria, assessment tasks, and the instructional practices that lead up to them. Ultimately, more trustworthy judgments enable teachers to better align instructional strategies to student needs, provide more consistent feedback to students, and create opportunities for deeper learning. In this webinar, we will present protocols and processes to create a system that supports teachers in making consistent judgments about the quality of students’ work.

Presenters
Jon Ingram, Senior Associate, Great Schools Partnership
David Ruff, Executive Director, Great Schools Partnership
Becky Wilusz, Senior Associate, Great Schools Partnership

FYI — it’s free but registration is limited.
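As a footnote on what “consistent judgments” can look like in practice: one common way to quantify scorer calibration (my illustration, not something the webinar description mentions) is an inter-rater agreement statistic such as Cohen’s kappa, which measures how often two scorers agree beyond what chance alone would produce. A minimal sketch with hypothetical teacher scores:

```python
from collections import Counter

# Hypothetical scores two teachers gave the same ten pieces of student
# work on a 4-level proficiency scale (1 = beginning ... 4 = exceeds).
teacher_a = [3, 2, 4, 3, 1, 3, 2, 4, 3, 2]
teacher_b = [3, 2, 3, 3, 1, 3, 2, 4, 4, 2]
n = len(teacher_a)

# Observed agreement: fraction of work samples scored identically.
p_o = sum(a == b for a, b in zip(teacher_a, teacher_b)) / n

# Agreement expected by chance, from each scorer's score distribution.
counts_a, counts_b = Counter(teacher_a), Counter(teacher_b)
p_e = sum((counts_a[k] / n) * (counts_b[k] / n)
          for k in counts_a.keys() | counts_b.keys())

# Cohen's kappa: agreement above chance, where 1.0 is perfect calibration.
kappa = (p_o - p_e) / (1 - p_e)
print(f"observed agreement = {p_o:.2f}, kappa = {kappa:.2f}")
```

Numbers like these, tracked across successive calibration sessions, would show whether scoring protocols are actually converging on shared judgments.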

The End of the Big Test: Moving to Competency-Based Policy

May 25, 2015

This post originally appeared at Getting Smart on May 19, 2015.

What’s next? With all the frustration surrounding NCLB and big end of year tests, what’s the new policy framework that will replace grade level testing? For a state ready to embrace personalized and competency-based learning, what are the next steps?

This post suggests the use of assessment pilots and innovation zones where groups of schools apply and become authorized to operate alternative assessment systems. But first, some background.

Jobs to be done. To get at the heart of value creation, Clayton Christensen taught us to think about the job to be done. Assessment plays four important roles in school systems:

  1. Inform learning: continuous data feed that informs students, teachers, and parents about the learning process.
  2. Manage matriculation: certify that students have learned enough to move on and ultimately graduate.
  3. Evaluate educators: data to inform the practice and development of educators.
  4. Check quality: dashboard of information about school quality particularly what students know and can do and how fast they are progressing.

Initiated in the dark ages of data poverty, state tests were asked to do all these jobs. As political stakes grew, psychometricians and lawyers pushed for validity and reliability, and the tests got longer in an attempt to fulfill all four roles.
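The psychometric logic behind “longer means more reliable” is the standard Spearman-Brown prophecy formula: lengthening a test by a factor k raises its reliability from r to kr / (1 + (k - 1)r). A quick sketch, with illustrative reliability figures rather than any real test’s:

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Predicted reliability when a test is lengthened by `length_factor`,
    per the Spearman-Brown prophecy formula."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# Illustrative numbers only: a one-hour test with reliability 0.70,
# stretched to three hours of comparable items.
print(round(spearman_brown(0.70, 3.0), 3))  # 0.875 -- longer looks more reliable
```

That arithmetic is exactly why the push for reliability translated into more testing time.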

With so much protest, it may go without saying, but the problem with week-long summative tests is that they take too much time to administer; they don’t provide rapid and useful feedback for learning and progress management (jobs 1 and 2); and test preparation, rather than preparation for college, careers, and citizenship, has become the mission of school. And with no student benefit, many young people don’t try very hard and increasingly opt out.

But it is no longer necessary or wise to ask one test to do so many jobs when better, faster, cheaper data is available from other sources.

What’s new? There have been six important assessment developments since NCLB was enacted in 2002: (more…)

What’s New in K-12 Competency Education?

May 21, 2015


Achieve released a new paper titled Assessment to Support Competency-Based Pathways which addresses the role of summative assessment, clarifies key assessment challenges, and provides examples and recommendations that are useful to those who wish to design and implement assessment systems to support competency-based pathways.

Additionally, Springpoint is sharing a new set of resources, “Inside Mastery Based High Schools: Profiles and Conversations.” These resources — which include profiles, artifacts, and interview transcripts with school leaders — are drawn from visits to six competency-based high schools last year. Together, they provide a vivid picture of what competency-based learning looks like in a variety of contexts.

Springpoint began this project to address a need for concrete examples of competency-based learning in practice. Given the novelty of this work, they realized that many new school designers know the theory behind competency-based learning but would benefit from a deeper understanding of its day-to-day practicalities.

They visited the following six schools: (more…)

Using Technology-Enhanced Items Effectively to Close Student Achievement Gaps

April 15, 2015 by Aditya Agarkar

This post was originally published at Getting Smart on January 17, 2015.

Don’t you think it’s time we retired those Scantron machines? Since the 70s, they’ve been trusted in hundreds of school districts across the country to tally the scores of students who filled pink ovals with #2 pencils. The Scantron machine heralded the pervasive use of multiple-choice questions (MCQs) in the decades that followed. Today, with all that we know about how to assess a student’s mastery of a topic, MCQs are an anachronism — like cassette tapes and typewriters. As readers of this blog are well aware, the education sector is undergoing the same technological innovation that has swept through businesses and households — and the rate of change is accelerating.

With all this technological progress underway, why are MCQs still in use? One reason is that they are the default question format for many of the technology-assisted tools that, when introduced, made the assignment and grading process much more efficient and scalable. However, MCQs are simply not the best way to test a student’s knowledge because they shed little light on the student’s ability to apply, integrate, and synthesize it. Information gleaned from an MCQ test can often be misleading because students can guess the right answer or game the system by eliminating the obviously wrong choices – even if they don’t fully understand the question or the material. (more…)
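To put a number on the guessing problem: with four choices per item, a student who knows nothing still gets about 25% of items right on average, and eliminating even one obviously wrong distractor pushes that floor toward 33%. A minimal simulation (a hypothetical test, not data from this post):

```python
import random

random.seed(0)
N_QUESTIONS, N_CHOICES, N_TRIALS = 20, 4, 10_000

# Simulate a student with zero knowledge guessing at random; choice 0
# is arbitrarily designated the correct answer for every question.
total = 0
for _ in range(N_TRIALS):
    total += sum(random.randrange(N_CHOICES) == 0 for _ in range(N_QUESTIONS))

mean_score = total / N_TRIALS
print(f"mean score from pure guessing: {mean_score:.1f} / {N_QUESTIONS}")
# Expected value is N_QUESTIONS / N_CHOICES = 5 correct (25%),
# which is why a passing MCQ score can overstate what a student knows.
```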
