About Assessment and Data



= Purposes: =
 * assessment that **informs** learning
 * assessment that **certifies** learning

= Uses of Data: =
 * to answer questions about students
 * to make decisions about teachers, students, classes, and programs

= Types of Data: =
 * __**Formative Assessment**__ (classroom strategies, common formative assessments)
 * __Interim and Benchmark Assessment__ (common formative assessments). These count as large-scale assessments, even though they are teacher developed, because they are typically administered across more than one classroom. This type of assessment data is used as a checkpoint to see whether students are on track in a sequence of intended learning outcomes. They are often administered in 6-9 week cycles, 2-4 times a year. Assessments administered at checkpoints are typically intended to: (1) inform instruction, (2) predict future results on tests (e.g., the SAT), or (3) evaluate curriculum. In order to provide useful information for adjusting instruction, interim/benchmark assessments need smaller domains than "reading" or "mathematics."
 * __Summative Assessment__ (grading, report cards, school, state, and national assessments). Large-scale assessments are used for policy decisions and accountability. They include "domain assessments" that test broad areas such as reading, mathematics, and science (e.g., Smarter Balanced, PARCC). When looking at the percentages of students in achievement categories, keep in mind that they reflect only ONE POINT IN TIME.

= Data Available: =
 * Commercial Assessments with __Item Banks__: ACT Aspire (grades 3-10), Pearson Formative Item Bank (K-12), Data Director (K-12), Edusoft (K-12), Galileo (K-12), Measures of Academic Progress (grades 2-12), STAR (K-12), TerraNova Math and Reading Assessments (K-12).

= Validity of Information: =
Does your classroom assessment actually give you the information you think it does and support your next instructional move? Does your classroom assessment give your students an accurate picture of what they need to focus on next in order to improve?
 * What were the test questions about? Did students have the opportunity to learn the material? Was there a mix of the things students were supposed to learn about?
 * Were the questions well written and clear? Were they at the appropriate level of difficulty? Did students understand all the words?
 * Was any specialized knowledge needed to answer the questions besides what the questions were trying to test?
 * Did the questions ask students to use thinking skills in the manner the learning targets intended (e.g., recall information, analyze a problem, evaluate)?
 * Is the scoring reflective of the information you are aiming to get? Should some questions be worth more than others? Are there enough questions to indicate what students know?

= Methods and Results: =
 * __Norm Referenced Scores__: Scores are interpreted by comparing a test score with the scores of individuals in some identifiable group, known as the norm group. Example: comparing a student's score to a representative sample of other students in the USA and in the same grade leads to one interpretation of his or her score. If the student's score is compared to students from private schools in a particular region of the US, the interpretation might be different.
 * __Scale Scores__: Raw scores converted onto a common reporting scale so that results can be compared across test forms and administrations.
 * __Percentile Ranks__: The percentage of students in the norm group whose scores fall below a given score. A percentile rank of 70 means the student scored higher than 70% of the norm group.
 * __Criterion Referenced Scores__: Scores are interpreted against a fixed standard or set of learning criteria (e.g., a cut score for proficiency) rather than against other students.
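The norm-referenced ideas above can be sketched in Python. This is a minimal illustration only: the norm-group scores and the 500/100 reporting scale are made-up assumptions, not any published test's actual norms or scale.

```python
# Hypothetical norm group of raw scores (illustrative numbers, not real norming data).
norm_group = [12, 15, 18, 18, 20, 22, 22, 23, 25, 28]

def percentile_rank(score, norm):
    """Percentage of the norm group scoring below the given score."""
    below = sum(1 for s in norm if s < score)
    return 100 * below / len(norm)

def scale_score(raw, raw_mean, raw_sd, scale_mean=500, scale_sd=100):
    """Linear transformation of a raw score onto an assumed common reporting scale."""
    return scale_mean + scale_sd * (raw - raw_mean) / raw_sd

# A raw score of 23 beats 7 of the 10 norm-group scores.
print(percentile_rank(23, norm_group))  # 70.0
```

Note that changing the norm group changes the percentile rank even though the raw score stays the same, which is exactly the point made above about comparing a student to a national sample versus a regional private-school sample.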