by TeachThought Staff

This post has been updated and republished.

If curriculum is the 'what' of teaching and learning models are the 'how,' assessment is the puzzled 'Hmmmm'–as in, "I assumed this and this about student learning, but after giving this assessment, well…'Hmmmm.'"

So what are the different types of assessment of learning? The next time someone says 'assessment,' you can say, "Which type, and what are we doing with the data?"

In The Difference Between Assessment Of Learning And Assessment For Learning, we explained that "assessment for learning is commonly referred to as formative assessment–that is, assessment designed to inform instruction."

Below, we identify types of assessment of learning–very briefly, with simple ways to 'think about' each so that you hopefully walk away with a better grasp of each type.

6 Types Of Assessment Of Learning

1. Diagnostic Assessment (as Pre-Assessment)

One way to think about it: Assesses a student's strengths, weaknesses, knowledge, and skills prior to instruction

Another way to think about it: A baseline to work from

Tip: Done at the beginning–of the school year, of a unit, of a lesson, etc.

2. Formative Assessment

One way to think about it: Assesses a student's performance during instruction, and usually occurs regularly throughout the instruction process

Another way to think about it: Like a doctor's 'check-up' to provide data to revise instruction

Tip: Using digital exit ticket tools like Loop can be an easy means of checking whether students have understood lesson content, while also promoting student reflection.

3. Summative Assessment

One way to think about it: Measures a student's achievement at the end of instruction. It's like talking to someone about a movie after the movie is over. : )

Another way to think about it: It's macabre, but if formative assessment is the check-up, you might think of summative assessment as the autopsy. What happened?
Now that it's all over, what went right and what went wrong?

Tip: By using measurements of student performance, summative assessments can help teachers improve units and lessons year over year because they are, in a way, as much a reflection on the quality of the units and lessons themselves as they are on the students.

4. Norm-Referenced Assessment

One way to think about it: Compares a student's performance against that of other students (a national group or other 'norm')

Another way to think about it: Place, group, or 'demographic' assessment. Many standardized tests are used as norm-referenced assessments.

Tip: These kinds of assessments are useful over time in student profiles or for placement in national-level programs, for example.

5. Criterion-Referenced Assessment

One way to think about it: Measures a student's performance against a goal, specific objective, or standard

Another way to think about it: A bar to measure all students against

Tip: These can be a kind of formative assessment and should be integrated throughout your curriculum to guide the adjustment of your teaching over time. Mastery- or competency-based learning would use criterion-referenced assessments.

6. Interim/Benchmark Assessment

One way to think about it: Evaluates student performance at periodic intervals, frequently at the end of a grading period. Can predict student performance on end-of-year summative assessments. A benchmark assessment is a type of interim assessment; the two function in similar ways, but the terms aren't interchangeable.

Another way to think about it: Bar graphs or charts of growth throughout a year, often against specific 'benchmarks'

Tip: Benchmark assessments can be useful for communicating important facts and data to parents, district officials, and others to, among other goals, inform the allotment of resources (time and money) in response to that data.
Once assessment has taken place, you will need to make a decision as to your learner's progress and achievements, and then provide them with feedback. You must always remain objective, i.e. make your decision based on your learner's competence against the set criteria. You should not be subjective, i.e. base your decision on your own opinions or other factors such as your learner's personality.

When making a decision, you will need to base it on everything you have assessed. If you are observing a learner's skills, you could follow this up by asking questions to check their knowledge and understanding. You may find, when assessing, that your learner hasn't achieved everything they should have. You need to base your decision on all the information and evidence available to you at the time. If your learner has not met all the requirements, you need to give constructive feedback, discuss any inconsistencies or gaps, and give advice on what they should do next. If your learner disagrees with the assessment process or your decision, they are entitled to follow your organisation's appeals procedure.

Don't get so engrossed in your administrative work or form filling when making a decision that you forget to inform your learner of what they have achieved.

Feedback can be given informally, for example during a discussion, or formally after an assessment activity. It can be verbal or written, depending upon the type of assessment you have carried out. If you are with your learners, for example observing an activity, you can give verbal feedback immediately (but do keep written records too). If you are assessing work which has been handed in for marking, you can give written feedback later; written feedback can also be provided electronically.

It's always useful to start by asking your learner how they think they have done. This gives them the opportunity to identify their own mistakes before you have to tell them.
Comments which specifically focus on the activity or work produced, rather than the individual, will be more helpful and motivating to your learners. The advantages of providing feedback are that it:
You should always provide feedback in a way which makes it clear how your learner has met the requirements, what they have achieved (or not), and what they need to do next. For example:

"Peter, you have given a really interesting and informative presentation which engaged everyone in the group. However, you could consider providing a handout which covers the key areas you have explained, so that your learners can read it in their own time as a recap. Well done, and I look forward to seeing your next presentation."

This is known as the 'feedback sandwich': a developmental point is sandwiched between two positive points. If you have time, you could expand on 'what' was interesting and 'how' it was informative.

Two other feedback methods are known as WWW (what went well) and EBI (even better if). You can state what went well first, and then follow it up with how it could be even better.

You will need to try out what works for you and your learners. Just remember not to demoralise or upset your learner with the words you use or the tone of your voice. It's also helpful if you can relate your feedback to any specific criteria which your learner has met, and to what they need to do next.
Assessment literacy involves understanding how assessments are made, what types of assessments answer what questions, and how the data from assessments can be used to help teachers, students, parents, and other stakeholders make decisions about teaching and learning.

Assessment designers strive to create assessments that show a high degree of fidelity to the following five traits:

1. Content validity
2. Reliability
3. Fairness
4. Student engagement and motivation
5. Consequential relevance

In this blog post, we'll cover the first characteristic of quality educational assessments: content validity.

Understanding content validity

One of the most important characteristics of any quality assessment is content validity. Simply put, content validity means that the assessment measures what it is intended to measure for its intended purpose, and nothing more. For example, if an assessment is designed to measure Algebra I performance, then reading comprehension issues should not interfere with a student's ability to demonstrate what they know, understand, and can do in Algebra I.

Content validity is evidenced at three levels: assessment design, assessment experience, and assessment questions, or items.

The assessment design is guided by a content blueprint, a document that clearly articulates the content that will be included in the assessment and the cognitive rigor of that content. The content standards the test is designed to assess determine what content makes it into the test's item pool.

The next level where content validity matters is the assessment experience itself: when the student sits down to take the assessment, what items do they see? In a fixed-form, grade-level test, most or all students at a given grade level see the same item set, namely those assessing the grade-level standards to which the student is assigned.
In a cross-grade, computer-adaptive test, an item selection algorithm presents each student with items sampled from a broad range of standards and adapts to the in-the-moment performance of the test taker. Each student sees items at the difficulty level that’s appropriate for them, based on their previous responses. This adaptivity enables test developers to provide very precise information about a student’s learning and performance in a domain area.
Content validity is also germane to the building-block level of MAP® Growth™: the questions, or items, themselves. Experts in both content and assessment design craft items to measure the concepts and skills in the standards at the indicated levels of cognitive complexity. Every item in a high-quality assessment goes through a rigorous development process with several levels of review, which ensures that item content is clear, accurate, and relevant. The result is a robust and aligned item pool that provides the most accurate information possible about a student.

Content validity is supported in a number of ways in educational assessments, including:
One way to check the content validity of an assessment is to ask these guiding questions:
In closing

Content validity is foundational to making accurate inferences. If an educator is unclear about what an assessment is measuring, then the inferences made will be uninformative. In other words, the assessment will have failed in its prime directive: to provide valuable information about what the test taker knows and can do.

An assessment can have all sorts of bells and whistles, incorporate cutting-edge technology and functionality, and have a great suite of reports that tell a compelling assessment narrative, but if the test lacks content validity, it is not worth much. What's more, when data from an assessment that lacks content validity is used to inform instruction, the result can include wasted time and inappropriate growth expectations for students. For these reasons, content validity is central to a high-quality educational assessment.

Learn more about validity in our guide, Not all assessment data is equal: Why validity and reliability matter.

In my next post on characteristics of quality educational assessments, I'll explore the importance of reliability.