FeedbackLogs
Summary
Even though machine learning (ML) pipelines affect an increasing array of stakeholders, there is little work on how input from stakeholders is recorded and incorporated. We propose FeedbackLogs, addenda to existing documentation of ML pipelines, to track the input of multiple stakeholders. Each log records important details about the feedback collection process, the feedback itself, and how the feedback is used to update the ML pipeline. In this paper, we introduce and formalise a process for collecting a FeedbackLog. We also provide concrete use cases where FeedbackLogs can be employed as evidence for algorithmic auditing and as a tool to record updates based on stakeholder feedback.
What are FeedbackLogs?
A FeedbackLog is constructed throughout the development and deployment of the ML pipeline. While the FeedbackLog contains a starting point and a final summary to document the beginning and end of stakeholder involvement, its primary components are the records that document practitioners' interactions with stakeholders. Each record captures the content of the feedback provided by a particular stakeholder and how it was incorporated into the ML pipeline. The process for adding records to the FeedbackLog is shown in purple in Figure 1 (Right). Over time, a FeedbackLog reflects how the ML pipeline has evolved as a result of these interactions between practitioners and stakeholders.
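To make this structure concrete, below is a minimal sketch of a FeedbackLog as a Python data structure. The class and field names are illustrative assumptions based on the description above, not the schema used by our tooling.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema: the components (a starting point, a series of
# records, and a final summary) come from the description above, but
# these names are an illustrative assumption, not a specification.

@dataclass
class FeedbackRecord:
    stakeholder: str    # who provided the feedback
    feedback: str       # the content of the feedback
    incorporation: str  # how the feedback updated the ML pipeline

@dataclass
class FeedbackLog:
    starting_point: str  # documents the start of stakeholder involvement
    records: List[FeedbackRecord] = field(default_factory=list)
    final_summary: str = ""  # documents the end of stakeholder involvement

    def add_record(self, record: FeedbackRecord) -> None:
        """Append a new practitioner-stakeholder interaction."""
        self.records.append(record)
```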
Why Use a FeedbackLog?
Stakeholders who interact with or are affected by machine learning (ML) models should be involved in model development. Their unique perspectives, however, may be overlooked by practitioners, who are responsible for developing and deploying models (e.g., ML engineers, data scientists, UX researchers). Indeed, we notice a gap in the existing literature around documenting how stakeholder input is collected and incorporated into the ML pipeline. A lack of documentation can create difficulties when practitioners attempt to justify design decisions made throughout the pipeline: this may be important for compiling defensible evidence of compliance with governance practices, anticipating stakeholder needs, or participating in the model auditing process. While existing documentation literature (e.g., Model Cards and FactSheets) focuses on providing static snapshots of an ML model, as shown in Figure 1 (Left), we propose FeedbackLogs, a systematic way of recording the iterative process of collecting and incorporating stakeholder feedback.
How do I use FeedbackLogs?
- This website provides an interface for viewing and updating FeedbackLogs, designed for use by experts and machine learning practitioners. An example log based on TV content recommendation is accessible here, while a blank log for you to fill out is available here.
- A command line interface (CLI), designed for use by practitioners to track updates to code based on expert feedback, is accessible in the GitHub repo here. A sketch of the kind of record such a tool might track is shown below.
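For illustration, here is how a record might be appended and persisted using the sketch above. The file name, workflow, and example content (drawn from the TV content recommendation example) are assumptions and do not reflect the actual CLI's commands or storage format.

```python
import json
from dataclasses import asdict

# Hypothetical workflow: build a log, record one round of feedback,
# and persist it alongside the code so updates are version-controlled.
log = FeedbackLog(starting_point="Initial TV content recommender")
log.add_record(FeedbackRecord(
    stakeholder="UX researcher",
    feedback="Recommendations over-represent a single genre.",
    incorporation="Added a diversity re-ranking step to the pipeline.",
))

# The file name is an assumption for this sketch.
with open("feedback_log.json", "w") as f:
    json.dump(asdict(log), f, indent=2)
```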