Assessment is happening at several levels all the time:
- are these students learning what I expect them to?
- do my assessments provide me with the evidence I need?
- does my course (class session, degree program, academic unit,
institution, ...) have coherence, i.e. is there a "chain of
connection" linking context, goals,
design (content and method), assessment, and student performance?
Top-down goal-setting, design, and the like are excellent
professional practice: they provide direction and coherence. But they
suffer from many of the same problems as the waterfall software
development lifecycle.
Using bottom-up processes informed by empirical evidence from
students' and our own behaviors and artifacts can allow us to
discover emergent goals, refactor our designs, and realign our
assessments. For example,
- we discover a goal that we didn't articulate from inquiring about
the purpose of existing design elements; or
- we discover that a particular class session doesn't lead to
expected learning, by systematically examining student work; or
- we discover a tacitly but deeply held value by investigating
feedback we give to students; or
- we discover that we need to probe student understanding more
formatively by assessing our assessments.
Assessment is not primarily about assigning grades to students.
More importantly, it can make every part of the teaching
cycle more effective and meaningful.