To rubrics or not to rubrics? An experience using rubrics for monitoring, evaluating and learning in a complex project

In this Practice Note Samantha Stone-Jovicich shares her experience using a monitoring and evaluation approach called ‘rubrics’ to assess a complex and dynamic project’s progress towards achieving its objectives. Rubrics are a method for aggregating qualitative performance data for reporting and learning purposes. In M&E toolkits and reports, rubrics look very appealing. They appear capable of meeting accountability needs (i.e. collating evidence that agreed-upon activities, milestones, and outcomes have been achieved) while also contributing to a richer understanding of what worked, what was less successful, and why. Rubrics also seem able to communicate all of this in the form of comprehensive yet succinct tables. Our experience using the rubrics method, however, showed that it is far more difficult to apply in practice. At the same time, its value-add for supporting challenging projects – where goal-posts are often shifting and unforeseen opportunities and challenges are continuously emerging – is understated. This Practice Note describes the process of shaping the method into something that seems to be the right fit for the project (at the time of writing, the project is ongoing and insights are still emerging).