FeedbackLogs

Summary

Even though machine learning (ML) pipelines affect an increasing array of stakeholders, there is little work on how input from stakeholders is recorded and incorporated. We propose FeedbackLogs, addenda to existing documentation of ML pipelines, to track the input of multiple stakeholders. Each log records important details about the feedback collection process, the feedback itself, and how the feedback is used to update the ML pipeline. In this paper, we introduce and formalise a process for collecting a FeedbackLog. We also provide concrete use cases where FeedbackLogs can be employed as evidence for algorithmic auditing and as a tool to record updates based on stakeholder feedback.

Figure 1: (Left) Existing documentation uses static snapshots of a model to document an ML pipeline. (Right) In contrast, we propose FeedbackLogs to track the iterative development process: practitioners engage stakeholders for feedback and update the ML pipeline accordingly.

What are FeedbackLogs?

A FeedbackLog is constructed throughout the development and deployment of the ML pipeline. While the FeedbackLog contains a starting point and final summary to document the start and end of stakeholder involvement, the primary components of a FeedbackLog are the records that document practitioners' interactions with stakeholders. Each record contains the content of the feedback provided by a particular stakeholder and how it was incorporated into the ML pipeline. The process for adding records to the FeedbackLog is shown in purple in Figure 1 (Right). Over time, a FeedbackLog reflects how the ML pipeline has evolved as a result of these interactions between practitioners and stakeholders.
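
As a rough illustration, the structure described above could be modelled as follows. This is a minimal sketch in Python; the class and field names (Record, FeedbackLog, starting_point, and so on) are our own illustrative choices, not a prescribed schema, and the accompanying tools may store logs differently.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Record:
    """One self-contained practitioner-stakeholder interaction."""
    elicitation: str    # how the stakeholder was asked for feedback
    feedback: str       # the stakeholder's response
    incorporation: str  # how the practitioner updated the pipeline


@dataclass
class FeedbackLog:
    """An addendum to existing ML documentation tracking stakeholder input."""
    starting_point: dict                         # pipeline state before outreach
    records: list = field(default_factory=list)  # grows as feedback is collected
    final_summary: Optional[dict] = None         # filled in when collection ends
```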

Why Use a FeedbackLog?

Stakeholders who interact with or are affected by machine learning (ML) models should be involved in model development. Their unique perspectives, however, may be overlooked by practitioners, the people responsible for developing and deploying models (e.g., ML engineers, data scientists, UX researchers). Indeed, we notice a gap in the existing literature around documenting how stakeholder input was collected and incorporated into the ML pipeline. A lack of documentation makes it difficult for practitioners to justify why certain design decisions were made throughout the pipeline: such justification may be important for compiling defensible evidence of compliance with governance practices, anticipating stakeholder needs, or participating in the model auditing process. While existing documentation literature (e.g., Model Cards and FactSheets) focuses on providing static snapshots of an ML model, as shown in Figure 1 (Left), we propose FeedbackLogs, a systematic way of recording the iterative process of collecting and incorporating stakeholder feedback.

How do I use FeedbackLogs?

  1. This website provides an interface to view and update FeedbackLogs, designed for use by experts and machine learning practitioners. An example log based on TV content recommendation is accessible here, while a blank log for you to fill out is available here.
  2. A command line interface (CLI), designed for use by practitioners to track updates to code based on expert feedback, is available in the GitHub repo here.

Components of a FeedbackLog

The starting point describes the state of the ML pipeline before the practitioner reaches out to any relevant stakeholders. It might contain information on the objectives, assumptions, and current plans of the practitioner. More generally, a starting point may consist of descriptions of the data, such as Data Sheets; metrics used to evaluate the models; or policies regarding deployment of the system. A proper starting point allows auditors and practitioners to understand when in the development process the gathered feedback was incorporated, and to demonstrate defensibly how specific feedback led to changes in the metrics.
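
Concretely, a starting point for the TV content recommendation example mentioned above could be recorded as plain key-value entries. The fields and values below are hypothetical, assuming the minimal sketch from earlier:

```python
# A hypothetical starting point for a TV content recommendation pipeline.
starting_point = {
    "objectives": "Recommend TV content that users rate highly",
    "assumptions": ["Historical ratings reflect current preferences"],
    "data": "User viewing history (documented in a Data Sheet)",
    "metrics": {"precision_at_10": 0.41},  # offline evaluation metric
    "deployment_policy": "Offline evaluation only; no live traffic yet",
}
```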
Each record in a FeedbackLog is a self-contained interaction between the practitioner and a relevant stakeholder. It records how the stakeholder was asked for feedback (elicitation), the stakeholder's response (feedback), and how the practitioner used the stakeholder input to update the ML pipeline (incorporation).
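
Continuing the sketch, a single record could capture these three parts as free-text fields; the content here is invented purely for illustration:

```python
# One record: elicitation, feedback, and incorporation for one stakeholder.
record = Record(
    elicitation="Semi-structured interview walking a domain expert "
                "through the top-10 recommendations for sample users",
    feedback="Recommendations over-represent recently released titles",
    incorporation="Added a recency-debiasing term to the ranking score "
                  "and re-ran the offline evaluation",
)
```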
The final summary consists of the same questions as the starting point. This component provides completeness by encapsulating the net effect of the feedback from all the relevant stakeholders. Proper documentation of the final summary allows reviewers to establish clearly how feedback led to concrete and quantifiable changes.
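
To make that concrete, a complete log assembled from the sketches above lets a reviewer compare metrics at the starting point against the final summary; the numbers below are invented for illustration:

```python
# Assemble the log and surface quantifiable changes after feedback.
log = FeedbackLog(starting_point=starting_point)
log.records.append(record)
log.final_summary = {**starting_point, "metrics": {"precision_at_10": 0.46}}

for name, before in log.starting_point["metrics"].items():
    after = log.final_summary["metrics"][name]
    print(f"{name}: {before} -> {after}")  # precision_at_10: 0.41 -> 0.46
```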