Digital Principles Monitoring, Evaluation, & Learning Part 2: The Indicator Library
This is the second installment of a three-part series on the Digital Principles’ new suite of monitoring and evaluation resources.
In August 2021, the Digital Impact Alliance (DIAL) announced the phased release of a new, three-part suite of monitoring and evaluation resources built around a basic conceptual monitoring and evaluation framework. The first resource was the Organizational Self-Assessment, which is intended to help the Digital Principles-endorsing community understand how they currently engage with the Digital Principles at their respective organizations.
Today, we’re sharing the second installment, the Digital Principles Common Metrics Indicator Library. This catalog of over 290 process indicators for activities and outputs associated with each individual Digital Principle is designed to support an organization’s in-depth analysis of its adherence to individual Principles.
The principles Understand the Existing Ecosystem and Reuse and Improve guided our work as we researched common metric indicator databases created by other organizations. The Digital Principles Common Metrics were patterned in form after the IRIS+ metrics, though their intended function is broader than informing external stakeholders’ socially responsible investing decisions.
Building on that research, DIAL conducted a comprehensive review and synthesis of the Digital Principles literature for clear actions and recommendations, ultimately identifying over 400 potential actions organizations could take. Each action statement was then converted into an indicator that measures the recommended action.

For example, a core tenet in Digital Principles literature around Design with the User is: “Incorporate multiple user types and stakeholders in each phase of the project lifecycle.” The Digital Principles team converted this statement into an indicator measuring the “number of user types and stakeholders that have been involved in each phase of the project lifecycle.”
Once actions were converted into indicators, DIAL developed tags to make it easier for users to find the indicators suited to their purpose. These tags include the indicator type (e.g., activity or output), the stage of the project lifecycle the indicator measures, the potential measurement theme (e.g., Learn, Assess, Engage), and additional comments and notes. Following the Design with the User Principle, these indicators were shared with monitoring and evaluation professionals across the digital development ecosystem for feedback and iteration. Now pared down to 291 indicators, an average of 32 per Digital Principle, the list is comprehensive enough for most organizations’ monitoring needs.
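To make the tagging scheme concrete, here is a minimal sketch of how an endorsing organization might represent library entries in its own tooling and filter them by tag. The field names, sample values, and filter_indicators helper are illustrative assumptions, not the library’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """Illustrative record for one library entry; field names are assumed, not the official schema."""
    principle: str        # e.g. "Design with the User"
    statement: str        # the indicator text itself
    indicator_type: str   # tag: "activity" or "output"
    lifecycle_stage: str  # tag: project lifecycle stage the indicator measures
    theme: str            # tag: measurement theme, e.g. "Learn", "Assess", "Engage"
    notes: str = ""       # additional comments and notes

def filter_indicators(library: list[Indicator], **tags: str) -> list[Indicator]:
    """Return entries whose tag fields match every keyword given (case-insensitive)."""
    return [
        ind for ind in library
        if all(getattr(ind, key).lower() == value.lower() for key, value in tags.items())
    ]

# Hypothetical entry, paraphrasing the Design with the User example above.
library = [
    Indicator(
        principle="Design with the User",
        statement=("Number of user types and stakeholders involved "
                   "in each phase of the project lifecycle"),
        indicator_type="output",
        lifecycle_stage="design",
        theme="Engage",
    ),
]

matches = filter_indicators(library, principle="Design with the User", indicator_type="output")
print(len(matches))  # 1
```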
To help users navigate the library and select the most relevant indicators, the team developed a supplementary guidance document. It invites organizations to be precise in the terminology they use, distinguishing among values, principles, criteria, and standards, and offers caveats for use and against misuse.
Users will also find references to a variety of indicator quality frameworks that endorsing organizations can use and adapt to determine which indicators best meet their needs for measuring adherence to the Digital Principles.
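As one way such a quality framework might be applied, the sketch below screens a candidate indicator against a simple checklist in the spirit of the widely used SMART criteria (specific, measurable, achievable, relevant, time-bound). The prompts and checks are placeholder heuristics of our own, not a method prescribed by the guidance document.

```python
# Placeholder quality screen inspired by the SMART criteria; these prompts are
# illustrative assumptions, not taken from the Indicator Library's guidance.
SMART_CHECKLIST = {
    "specific": "Does the indicator name one well-defined thing to count or observe?",
    "measurable": "Can the value be collected from available data sources?",
    "achievable": "Is collection feasible within the project's budget and timeline?",
    "relevant": "Does it speak to adherence to the Digital Principle in question?",
    "time-bound": "Is the measurement period or frequency defined?",
}

def screen_indicator(answers: dict[str, bool]) -> list[str]:
    """Return the criteria an indicator fails, given yes/no answers per criterion."""
    return [criterion for criterion in SMART_CHECKLIST if not answers.get(criterion, False)]

# Example: this candidate indicator lacks a defined measurement period.
failures = screen_indicator({
    "specific": True, "measurable": True, "achievable": True,
    "relevant": True, "time-bound": False,
})
print(failures)  # ['time-bound']
```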
As the Indicator Library focuses solely on process indicators of adherence, omitting outcome indicators, it is not exhaustive, nor should it be the sole resource that organizations use for Digital Principles-Focused Evaluation. It is a living document that the Digital Principles team will update periodically, based on feedback from the Digital Principles community.
Use our Common Metrics Indicator Library to see how you’re connecting principle to practice, and watch this space as we launch the final MEL resource later this week!