Standard 5 Meta-reflection: Assessment [EDU 6613 Standards-based Assessment]

Standard 5 Assessment

Assesses students’ mastery of curriculum and modifies instruction to maximize learning. 


Beginning at the End

When I initially put to paper a plan for my two-year Curriculum and Instruction program at Seattle Pacific University (SPU), I was mindful of the order in which I would take classes. I felt it made sense to tackle what I considered the core components of my learning in a particular sequence; I intended to take classes such as Curriculum Design and Survey of Instructional Strategies first. I reasoned that it made sense to first design an instructional unit, then learn how to incorporate instructional practices, and finally tackle the behemoth that is assessment. This seemed sequential and logical. Nevertheless, because of the terms in which courses are scheduled, I found myself in the seemingly incongruous situation of “working backwards” by taking Standards-Based Assessment (EDU 6613) my first quarter. Not surprisingly, as I near the end of this course and reflect on all I have learned, I realize the error of my initial thinking.

I have come to appreciate the innate logic of beginning at the end. In other words, whether I am designing an instructional unit for my students or engaged in the teaching and learning core courses at SPU, I must first have a clear picture of my intended purpose and learning targets, both as a teacher and as a student; being able to answer up front where I am going allows me the luxury of reflecting on where I am now and how I can close the gap. Whether I am differentiating for exceptional students, utilizing technology, communicating and collaborating with parents and colleagues, or engaged in any of the myriad roles an educator assumes daily, understanding and applying the philosophy of quality classroom assessment practices through EDU 6613 will set the tone and direction for all my teaching practices. The artifacts and blog entries contained in this meta-reflection highlight my growth in five key areas of classroom assessment. Additionally, I address the role of reflection in student success. I speak to a renewed philosophy of grading that has come about through reading O’Connor’s (2009) work on the link between grading practices and student achievement. Finally, I provide examples of ways that I plan to utilize technology.


From Self-doubt to Confidence

I haven’t always felt this level of confidence in utilizing sound assessment practices. In my initial discussion post, I wrote the following as I reflected on my lack of confidence in assessment at that time:

(C & I Discussion Board, September 26, 2012)

Fortunately, one of the initial practices I learned from Chappuis et al. (2012) is that educators should design assessment as either for learning (formative assessment that provides guidance for further learning) or of learning (summative assessment that measures what was learned). Whether an assessment is for learning or of learning is best determined by applying Chappuis et al.’s (2012) five Keys to Quality Classroom Assessment:

  • Ensuring that there are clear purposes for assessment
  • Providing clear learning targets for both educators and students
  • Safeguarding that assessments are created around a sound design that will not bias assessment results
  • Creating effective means of communication for all stakeholders (students, families, other faculty and administrators)
  • Bringing student involvement into the forefront of assessment practices as a means to create intrinsic motivation to achieve while engaging in self and peer assessment

The second gem of sound classroom assessment that I learned is the importance of deconstructing standards to create clear learning targets. This is a key element of good assessment and should occur in a three-step process:

  • Determine the target type (knowledge, reasoning, skill, product, and disposition)
  • Identify the prerequisite or underlying knowledge, reasoning and/or skills
  • Check work for alignment and reasonableness (Chappuis, et al., 2012).

Through this three-part deconstruction process, educators are able to answer the question, “Are the learning targets for each standard clear to teachers?” and, in turn, drive instruction and assessment for learning. My blog entry Considering Learning Targets to Ensure Standards-Based Assessment is an example of how I deconstructed Grade 5 standards at an International Baccalaureate (IB) school to create learning targets that had a clear purpose and were written in student-friendly language. The blog entry details a unit titled “Persuasion” and includes student work samples from two summative assessments. One set of examples is graded using a skill and product rubric, demonstrating my ability to evaluate student performance.

Chappuis et al. (2012) emphatically remind us that “In all contexts and cases, the driving force behind the selection of an assessment method must be matched to the learning target,” as this will yield accurate assessment on which to base learning decisions (p. 98). Further, I must consider the diverse strengths and needs of my learners when designing assessments; it is important to find balance. The key to finding that balance begins with the systematic use of the three-part assessment development cycle:

Planning

  • Determine who will use the results and how
  • Identify learning targets
  • Select appropriate method(s)
  • Determine sample size

Development

  • Select items, exercises, tasks, and scoring
  • Review overall quality before implementation

Use

  • Conduct and score
  • Revise as needed (Chappuis et al., 2012)

Within the development cycle, careful attention must be paid to the selection of an assessment method. Indeed, Chappuis et al. (2012) remind us that “The accuracy of any classroom assessment depends on selecting the appropriate assessment method that matches the achievement target to be assessed” (p. 93). There are four main assessment methods I will utilize in my classroom: selected response, written response, performance assessment, and personal communication (Chappuis et al., 2012, pp. 88-89). However, each of these methods exists on a continuum from strong to weak, depending upon the achievement target, which highlights the importance of intent in decision-making. The examples below indicate my growing proficiency in selecting the appropriate assessment method to match the learning target.

This is an example of an Assessment Blueprint_counselorLtr that I created last year for a Grade 8 language arts assignment on the novel Speak. The blueprint maps a summative assessment in which students write a formal business letter from the point of view of the ‘school guidance counselor’ to the main character.


The Power of Involving Students

At the heart of assessment for learning is the involvement of students. Students need to feel a sense of empowerment and accomplishment in relation to their learning; when assessment is demystified, students are able to put into practice the learner dispositions that will guide them towards becoming lifelong learners.

I believe that self-awareness of our strengths and areas for improvement is a powerful tool that can be translated across subjects and throughout life. When students know what they do well and what they need to give additional effort to, they are better able to become resilient learners. A bad grade does not have to mean failure; it can merely mean that there is an opportunity to dig deeper and, in turn, begin to foster intrinsic motivation. When “failures” are framed as opportunities, we are better able to bounce back and improve.

It is my job to provide the guidance and opportunities for students to engage in self-reflective analysis of their learning. Chappuis et al. (2012) remind us that “any activity that requires students to reflect on what they are learning and to share their progress both reinforces the learning and helps them develop insight into themselves as learners” (p. 157). Chappuis et al. (2012) recommend that students focus on three essential questions when engaged in reflective thought about their learning: “Where am I going?”; “Where am I now?”; and “How can I close the gap?” (p. 148). These core, guiding questions can be answered using seven strategies explained in detail by Chappuis et al. (2012, pp. 148-157):

  1. Provide students with a clear and understandable vision of the learning target.
  2. Use examples and models of strong and weak work.
  3. Offer regular descriptive feedback.
  4. Teach students to self-assess and set goals.
  5. Design lessons to focus on one learning target or aspect of quality at a time.
  6. Teach students focused revision.
  7. Engage students in self-reflection and let them keep track of and share what they know.

This is a link to my blog entry titled The Potential of Selected Response, which details how I have utilized (and envision using) student self-reflection to help students analyze their learning.

Finally, building student self-awareness depends largely upon educators similarly being willing to take a look at their own successes and areas for improvement. Self-reflective thought about what works and does not work in our own teaching is crucial to creating a partnership for learning. I must be willing to ask what my shortcomings and areas for growth are and share that information in a frank discussion with my students. They are the experts on what I, as an educator, am doing right or wrong; we both benefit greatly by my asking.

Different Types of Assessment

Selected Response

In the past, I always felt a little bit guilty using selected response questions when assessing students; I had a notion that this type of testing was only appropriate for math and science content areas and was the ‘lazy way’ of assessment coming from an English/social studies teacher. Yet, Chappuis et al. (2012) point out that learning targets at the knowledge level and some reasoning patterns are strongly demonstrated through selected response formats. When teachers keep the Keys to Quality Classroom Assessment in mind (clear purpose, clear targets, sound design, effective communication, and student involvement), follow the Assessment Development Cycle (planning, development, and use), and develop and use propositions to guide assessment with student involvement, selected response can be an engaging means to determine student acquisition of knowledge and reasoning learning targets.

This is an example of a selected response formative assessment from the aforementioned Speak unit; it indicates my proficiency in creating an assessment tool aimed at students’ knowledge and reasoning learning targets. This task would be used as a novel “warmer” and would take approximately five minutes for students to complete. We would then spend between ten and fifteen minutes discussing student results through a process of “stand up if you agree, disagree, or are undecided.”

Anticipation_Reaction

In my blog entry titled The Potential of Selected Response, I provide additional examples of how I have increased my proficiency in using selected response as an assessment method both of and for learning. The narrative of my blog sets the artifact context as a Grade 7 unit on folktale writing and includes an Assessment Map with student-friendly learning targets and a Vocab-Types_Quiz, an updated version of the original selected response assessment. Vocabulary quizzes typically take students between ten and fifteen minutes to complete and are given at the beginning of class. English language learners and students with learning disabilities are given as much extra time as they need, including coming in after school or during lunch if necessary.

Written Response

While it may seem comparatively easy to generate written response assessments in relation to creating complex sets of selected response items, I must take an equal amount of care when crafting this assessment method. Chappuis et al. (2012) remind us that because we cannot “see” student reasoning patterns, we must ask students to describe their thinking (p. 171). When properly designed, written response assessments are a strong match for both reasoning and knowledge learning targets. Determining whether to develop a short answer or an extended response assessment is key.

The following artifacts from my blog entry The Potential Genius of Written Response Assessments indicate how my ability to craft a written response assessment has evolved since I wrote this prompt:

Original written response

Modified written response

Performance Task Assessment

The hallmark of a sound performance task assessment is content, followed by structure and sampling (Chappuis et al., 2012). At the heart of performance task assessments is the creation of a task that asks students to apply their learning in as authentic a context as possible; it is up to me to ensure that the task produces accurate evidence by avoiding bias and other factors (such as lack of internet access or missing prerequisite skills) that may distort student achievement results. I personally believe it is the authenticity piece that provides students an opportunity not only to showcase their learning, but also to create connections; if they are engaged in a task that is realistic and relevant to their reality, enduring understandings are molded and reinforced.

At times, performance task assessments need to include a detailed explanation of knowledge application and task creation, especially in the language arts classroom (Chappuis et al., 2012). As a language arts teacher, I was especially heartened by the Role, Audience, Format, Topic, and Purpose (RAFTS) Writing Task Design (Figure 7.4, p. 218) coupled with Specifying Purpose in a Writing Task: Strong Verbs (Figure 7.5, p. 220). RAFTS is a simple, formulaic approach to generating language arts performance task assessments that ensures the assessment is written as clearly and explicitly as possible. The “Strong Verbs” table breaks down writing task purposes (to narrate, to inform, to persuade) by verb.

This is an example of how I used RAFTS to re-imagine a summative performance task assessment from my Speak unit. Students spent varying amounts of time outside of class preparing for their presentation. Those who were finished with other work were encouraged to practice with a friend who would critique their performance. In hindsight, I would set aside a portion of a class period, perhaps 45 minutes, in which small groups would use a rubric to score one another while practicing.

However, Chappuis et al. (2012) admit that “Performance assessment is essentially subjective and can take more time to conduct and score than do the other methods we have studied so far” (p. 204). They go on to indicate that care must be taken to ensure objectivity and maximize instructional value. This is where my creation of a clear rubric with which to assess student performance is essential. This is an example of the Oral Presentation Rubric I created to help me assess student oral presentations in a way that was objective and easier to score.

Personal Communication

More than any other assessment method, personal communication can fuel a provocative and transformative potential in students. Personal communication includes discussions, conferences and interviews, oral examinations, and student journals and logs (Chappuis et al., 2012). I believe that it is through these intrapersonal and interpersonal dialogues with, among, and within students that the prospective richness of our respective content areas comes to life.

According to Chappuis et al. (2012), instructional questions are really just an oral version of selected or written response assessment. Chappuis et al. (2012) affirm that “…through careful questioning, we can access prior knowledge, pique curiosity, check for understanding, provoke and explore thinking, and create new learning” (p. 264). Effective questioning techniques should leave our students with a lasting impression of the happenings in class and an impetus to keep coming back. I feel that I effectively create and support opportunities for relevant class discussion; this is a well-practiced component of my assessment for learning practices. This is an example of the pre-novel discussion prompts for my unit on the novel Speak. I would have students spend five to ten minutes discussing these questions with their table partner or in small groups before participating in a whole-class discussion.

Class discussion as a summative tool requires my being aware of whether I am “evaluating the content of students’ contributions – the knowledge, understanding, or reasoning demonstrated – or the form of their contribution – how they communicate – or some combination of both” (Chappuis et al., 2012, pp. 276-277). Moreover, scoring based on a rubric or checklist will aid in the development of an effective record-keeping system, which will, in turn, help ensure that student grades are controlled for bias and accurately reflect achievement. I could revisit these same discussion prompts (slightly modified) to evaluate the content of student contributions after finishing the novel Speak. The revised discussion prompts would look like the following:

The final form of personal communication, written, has offered some of the richest interaction and insight into my students’ knowledge and reasoning, yet it too has presented challenges. I have used both journals and logs in each of my content areas (middle school language arts and US history) and in Grade 5 classrooms. Practicalities such as how much and how often I should read and comment on entries, how often students should make entries, and what role journals and logs play in assessment are just a few of the questions I have had. Chappuis et al. (2012) do provide practical guidance, saying that these methods are best suited as assessment for learning and should be focused on students’ knowledge and reasoning learning targets. This method can be particularly difficult for my ELL students. I will teach expectations regarding length and quality of response through the use of a scoring rubric, and I will involve all students in their own self-assessment as ways to support student achievement.

These are examples of Personal Communication writing prompts that I created to guide students’ thinking in my Speak unit. Originally, this was an in-class pop quiz of sorts, used to ensure that students were keeping up with their reading. However, in hindsight, I can see that these prompts would best be used as formative assessments and would not be graded, but would instead serve as a means to check student knowledge and reasoning skills. Journal entry length and time to complete vary by student. In the future, I will have students create their own WordPress site at the beginning of the year; this is where they would engage in formative assessment tasks such as journal writing. I would teach length expectations through a discussion of the components of a quality journal entry, supported by a rubric. I would have students practice scoring themselves and others until they felt confident that they had a sense of when “enough is enough.”

Marking Period 1_journal, Marking Period 2_journal, Marking Period 3_journal, Marking Period 4_journal

Portfolios as Assessment

Portfolios have the capacity to package student learning in ways that direct and support metacognitive thinking; when students habitually consider how they best learn and why, and practice communicating their understanding, they are better able to employ their own set of “best practices” as learners. Chappuis et al. (2012) support this reasoning by telling us that “Collecting, organizing, and reflecting on their own work builds students’ understanding of themselves as learners and nurtures a sense of accomplishment” (p. 364). However, it is not enough that I merely collect student work in an undifferentiated folder; this is what Chappuis et al. (2012) refer to as a working folder. Instead, I will lead my students through the process of reviewing, sorting, selecting, and deconstructing tangible evidence of their learning to gain insight.

There are five distinct portfolio types, and each serves a particular purpose:

  • Growth – to show progress towards one or more learning goals
  • Project – to document the trajectory of a project
  • Achievement – to demonstrate current level of achievement over a collection of learning targets
  • Competence – to provide evidence of having attained competence in one or more areas
  • Celebration – to showcase best work or what the student is most proud of

This is a link to a description of my past and present practices while teaching Grade 5 regarding the use of student portfolios as assessment for learning and of achievement.

Effective Assessment for ELL and LD Students

I feel the most conflicted about my past assessment practices when it comes to English language learners (ELL) and students with learning disabilities (LD); my assessment practices were not balanced in the sense that I could not reconcile learner diversity with appropriate assessment methodology. As a language arts teacher, many of my learning targets were knowledge and reasoning based; my preferred summative assessment methodology was typically written response (short answer, essay), while my formative assessments were both written response and personal communication (participation, student journals and logs, oral exams, interviews, and questioning during instruction). However, as an international teacher, I find that a proportion of my students typically have English skills so limited that they have difficulty writing and articulating their understanding of learning targets in meaningful ways. In contrast, many of my other students are highly proficient in their English language skills.

I can see that by relying primarily on written response and personal communication for formative assessments, I placed my ELL and LD students at a disadvantage both in engaging with the taught curriculum and in preparing for summative assessment. In hindsight, if my ELL and LD students could not readily demonstrate what they knew (and did not know) orally or textually, how could I effectively reteach them? Indeed, the first potential source of bias common to all assessment methods is language barriers (Chappuis et al., 2012).

In retrospect, I can see that selected response (multiple choice, true/false, matching, and fill-in-the-blank questions), while not a “strong” method, is still a “good” match for both knowledge and reasoning learning targets (Chappuis et al., 2012); this is an assessment method both for learning and of achievement that I will utilize more in the future to support my ELL and LD students.

This is an updated example of a summative performance assessment from my Grade 5 Persuasion unit. The following excerpt is from the full description of the context, learning targets, and timing of this assignment, which is detailed in my blog entry Considering Learning Targets to Ensure Standards-Based Assessment:

  “The summative assessment was an assignment that asked students to design a parody of an advertisement; I was looking to assess their understanding of the inquiries: how the media forms perception and techniques that are used to persuade. Selecting the advertisement they wanted to work with, the identification of the persuasive message, and an initial sketch of their parody were completed in my class. Their final parody was finished in art class”. 

I have included it here because I believe it is a good example of how I have modified, and could further modify, summative assessments to meet the needs of ELL and LD students. Chappuis et al. (2012) tell us that performance assessments are only a partial match for assessing students’ reasoning and are dependent on the overall context of the task. However, I think that this performance task, coupled with a written response for students proficient in English, would be a strong match. For my ELL and LD students, the written response assessment could be modified to a personal communication assessment through an oral “check for learning” conference. Further, after creating a reasoning synthesis rubric to score this work, I can see that the ELL student who struggled the most with oral and written expression in English actually scored a “Strong” in her ability to demonstrate a synthesis of reasoning. This student is particularly gifted in her ability to visually represent her understanding, so this summative assessment, for her, was an excellent match. I found this to be true with another of my ELL students, who has a strong ability to represent understanding visually but more difficulty articulating thoughts in writing or orally. However, I do think it is important to provide ELL and LD students opportunities to practice articulating their reasoning as well.

ELL student work scored as Strong


This example is from a student who was considered proficient in English language skills. I scored their work as Developing in the ability to synthesize reasoning.


This is an example of work from an LD student. This student was better able to articulate their thinking during our whole-class discussion and in one-on-one informal conversation.

LD student work scored as Beginning in Reasoning Synthesis assessment.



Rubrics in Assessment

Rubrics serve many purposes, both “communicative and evaluative” (Chappuis et al., 2012, p. 226). It is through the systematic creation and use of high-quality rubrics that performance tasks can be evaluated objectively rather than subjectively; good rubrics have the following characteristics:

  • Content – target alignment and focus on essential elements
  • Structure – number and independence of criteria, grouping of descriptors, and number of levels
  • Descriptors – kind of detail, content of performance levels, and formative usefulness

(Chappuis et al., 2012, p. 231)

Best practices draw student involvement into the assessment process in meaningful ways. Generating a general rubric with student input is a great way to begin providing opportunities for students to develop a sense of ownership in learning. Chappuis et al. (2012) recommend developing a generic rubric that can be used in most cases: “…a rubric is a detailed description of the features of work that constitute quality” (p. 183). If students have a hand in determining what quality is, they have a yardstick by which to measure their own achievement; this is even more powerful if that yardstick of quality maintains a measure of consistency over time rather than being assessment-specific. While Chappuis et al. (2012) maintain that there is a time and place for task-specific rubrics, they enumerate their potential deficits (p. 184):

  • Can’t be handed out to students in advance because they give away the “answer.”
  • You have to create a new one for each assignment.
  • It’s easy to award points to specific idiosyncratic (yet unessential) features of the assignment rather than the learning target itself.

Creating and using rubrics as assessment for learning and of achievement is one of the areas of my growth as an educator that most excites me. Here are examples of rubrics that I created for two assessment pieces from my Speak unit. The first is the generic Business Letter Writing Rubric with which I would assess students’ counselor letters. The second is a Combined Reasoning Rubric for the same summative assessment.

Grading Policy

Chappuis et al. (2012) confess that “…of all the things we do as teachers, few have the potential for creating more conflict and communication problems than grading” (p. 335). I thought that because middle school grades do not typically determine university entrance or scholarship eligibility, nor are they reported to prospective employers, they lacked the potential to harm. I thought, erroneously, that they just do not count that much, at least not comparatively, and that I was somehow inoculated from these types of conflicts. However, I have come to realize that there are sound guidelines I can use to guide my assessment and, hence, grading practices to ensure that unnecessary subjectivity is removed from the grading equation. I have also learned that grades, at any level, do count. I can adopt a philosophy of grading that includes the following guidelines offered by Chappuis et al. (2012):

  • Use grades to communicate, not to motivate
  • Report achievement and other factors separately
  • Reflect only current level of achievement in the academic grade

In the past, I used grades to motivate, or to punish; I can see how faulty this thinking was! I am reminded of these failures by Chappuis et al.’s (2012) statement that “…no studies find that reducing grades as punishment increases efforts…lowering grades more often causes students to withdraw from further learning” (p. 337). In my future school, I will continue the same process of communication with parents, the student, and administration that I always have, but I will consider the suggestions by Chappuis et al. (2012) that I issue an incomplete (no basis for a grade) rather than an F on reports, or that I not allow a student to take a summative assessment if their prior work is not complete. This puts the responsibility squarely back on the student and communicates that other issues, which must be addressed, are getting in the way of achievement.

An important aspect of effective communication is “when we convert summative assessment information into grades that accurately reflect achievement at a point in time” (Chappuis et al., 2012, p. 8). I wonder how to deal with the logistics of including a large enough sample size, my time at the end of the quarter, allowing the grading of late assessments, and a myriad of other factors, too numerous to enumerate here, that affect a final grade. My blog entry New Ways of Thinking About Achievement and Grading details the initial shifts in my grading philosophy. These are specifics that I will have to work out when I am in a traditional classroom setting again, but what I can hold on to is what Chappuis et al. (2012) hold to be true: “…it’s not where a student starts on a given learning target that matters, but where the student finishes” (p. 341). This is clearer to me with every chapter I read.

This is an example of my refreshed, updated, and revised Student Grading communication philosophy, which has been influenced by O’Connor’s (2009) text How to Grade for Learning, K-12. This is a document that I would provide to students on the first day of class. We would read through my policy, answering questions that students may have. This document would be posted on my class webpage, shared with parents during back-to-school night, and a copy would be given to administration following our middle school meeting on grading policies and procedures. My hope is that this would provide an initial springboard to a larger and ongoing discussion (with students and colleagues) about the purpose of assessment in my classroom: how I will communicate learning targets to students, my procedures and purposes for communicating assessment results to stakeholders (students, parents, administration, the larger school community), and the ways that students will be involved in their own self-assessment.
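To make the guideline of reflecting only current achievement concrete, here is a minimal sketch in Python, using invented scores rather than data from my actual grade book, that contrasts a straight average of all evidence with a grade based on the most recent evidence for a single learning target:

```python
# Hypothetical illustration (invented numbers): one student's scores on a
# single learning target across a quarter, on a 4-point scale, earliest to latest.
scores = [1, 2, 2, 3, 4, 4]

# Averaging every attempt mixes early practice with current performance,
# penalizing the student for where they started.
average_of_all = sum(scores) / len(scores)                    # 2.67

# Basing the grade on the most recent evidence reflects current achievement,
# consistent with reporting where the student finishes, not where they began.
recent_evidence = scores[-3:]                                 # last three attempts
current_level = sum(recent_evidence) / len(recent_evidence)   # 3.67

print(f"Average of all evidence: {average_of_all:.2f}")
print(f"Current level of achievement: {current_level:.2f}")
```

If the proficiency cut score were 3.0, the same body of evidence would read as below standard when averaged but as proficient when only current evidence counts; the choice of calculation, not the student’s learning, would decide the grade.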

Technology as an Assessment Tool

I am intrigued by the potential of utilizing technology in the classroom. I recently completed a Web 2.0 class that provided not only myriad tools and applications I can use to enrich my assessment for learning practices, but also ways to communicate student achievement to parents, administrators, and the school community at large. This is an example of my thinking about the potential of thoughtful use of technology in the classroom.

Link to full-text Reflection Web 2.0

Google Docs is one way that students can engage in sharing, editing, and reflecting on their own and their peers’ writing. This is a short explanation of my imagined uses of Google Docs in the classroom.

Through my studies at SPU and my Web 2.0 class, I have come to see the value of creating a webpage/blog to document and reflect on one’s learning. This is a link to the WordPress website Mrs.Rayl’s English Portal that I created for student and parent use through my Web 2.0 class. This is a screenshot of the front page of my blog. Since I am not teaching in a traditional classroom this year, it is not currently being maintained, but is nonetheless a good example of how I can utilize technology in assessment for learning.

I have also been using Google Hangouts for the first time; I meet weekly with a fellow SPU student who is also taking EDU 6613 to discuss artifacts and expectations and to share insights as we work towards completing our meta-reflection and final blog. This Peer Communication Log documents our meetings: when we met, what we discussed, artifacts shared, and items for further thought. I envision introducing students to this technology as a means to share documents when collaborating on a group project.


A Final Look

Predictably, my expectations have been raised not only for my own assessment practices, but for my instructors at SPU as well. I now look at assessment practices with a discerning and critical eye. Is there a clear purpose for the assessments I must engage in? Have the learning targets been communicated to me in graduate-student-friendly language? Are the assessment tasks I am completing of sound design, free of bias, and of sufficient quantity to accurately measure my growth and/or achievement? Are my assessment results communicated to me and the other stakeholders involved with the Curriculum and Instruction program at SPU? How have I been invited to be involved in the assessment process, both for my learning and of my achievement? As I reflect on the growth I have undergone, both professionally and as a second-time-around graduate student, I am amazed at the transformation.

Despite the substantial commitment of both time and money, I know that learning and growth do not occur in a vacuum. The beauty of engaging in a graduate program to heighten the sophistication of my teaching practices is that I am able to draw from the duality of being both teacher and student. I am able to ponder my experiences from multiple perspectives; this has created a rich and supportive intellectual and practical environment. While I have much to learn, assessment is no longer the lurking monster in my closet. I have sound, research-based practices to guide my continued exploration and growth. Next quarter, I will dive into EDU 6526 Survey of Instructional Strategies; I look forward to continuing this journey as I enter the next plane of competence as an educator.

References

Chappuis, J., Stiggins, R., Chappuis, S., & Arter, J. (2012). Classroom assessment for student learning: Doing it right, using it well. Upper Saddle River, NJ: Pearson Education.

O’Connor, K. (2009). How to grade for learning, K-12. Thousand Oaks, CA: Corwin.

The International Baccalaureate. (2012). The IB Primary Years Programme. Retrieved from http://www.ibo.org/pyp/


One thought on “Standard 5 Meta-reflection: Assessment [EDU 6613 Standards-based Assessment]”

  1. Kim,
    I enjoyed your thorough and thoughtful summary on the standard of assessment. Your artifacts and reflections show a strong demonstration of your learning and practice related to assessment. One element you spent time focusing on in your reflection was the idea of student involvement in assessment. It is so important that students see assessment as a tool for learning and not as something that is just “done” to them. You captured a key component by highlighting those three questions that students should be able to answer. However, the reality is that as a future administrator, you will most likely at some time (or at numerous times) be called upon to lead a staff where some teachers do not see assessment in this way. How can you effectively and systematically bring a staff to this new understanding of assessment (formative assessment, grading policies, student involvement and self-assessment, etc.)? It seems like a big feat, but as you know it is imperative to effective instruction and student learning. How can you utilize teacher leaders in your buildings to help with this adoption of a “new” way of approaching assessment?
