No Black Boxes (Procedural Transparency): Refers to the transparency necessary to understand and measure the effects of an algorithm's outcomes and outputs. This requires thorough documentation of the algorithm, as well as retention policies that ensure system designs, database schemas, and major product decisions are defined in detail. Ethical concerns raised by inconclusive evidence for algorithmic outputs are compounded by the difficulty of tracing the decision process of machine intelligence. Documentation that explains the algorithm's decision logic makes responsibility traceable by attributing each action to designers, circumstances, and/or the machine intelligence itself.
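One way to picture this kind of procedural transparency is a decision audit log, where every output is recorded alongside the documented rule that produced it. This is only a minimal sketch: the record fields (model_version, inputs, rule, output) and the example lending rule are illustrative assumptions, not a standard schema.

```python
import json
import datetime

def log_decision(log, *, model_version, inputs, rule, output):
    """Append one decision record so the action can later be attributed
    to designers (model_version), circumstances (inputs), or the
    algorithm's own logic (rule)."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,  # which design produced this output
        "inputs": inputs,                # the circumstances of the decision
        "rule": rule,                    # the documented logic that fired
        "output": output,
    }
    log.append(json.dumps(record))       # retained per the retention policy
    return record

audit_log = []
rec = log_decision(audit_log,
                   model_version="v1.2",
                   inputs={"score": 640},
                   rule="score < 650 -> manual review",
                   output="manual_review")
print(rec["output"])  # manual_review
```

Because every record names the rule that fired, a reviewer can later decide whether a harmful output traces back to the design, the input data, or the algorithm's learned behavior.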
Predictability and Explainability: The transparency described above helps attribute agency for algorithmic outputs, but the designers and technical teams who support the algorithm must also show that their design consistently produces the expected outputs, along with evidence explaining the logic the algorithm used to reach its decisions. In unsupervised machine learning especially, it is difficult to distinguish a designed outcome from an unexpected one based on what the algorithm learned independently. As Safiya Noble notes, algorithms are “automated decisions” that we must trust, and that trust is most easily earned by inspecting the decision-making process and measuring its predictability.

Aggressive Collection of Feedback (user and technical): Solicit input from the most varied group of stakeholders possible. Much like a Works Council in Germany, which includes a representative from every facet of the business, from janitorial staff to senior leadership, any update to the algorithm should be authorized by large, broadly representative groups of users and stakeholders. Users often have more detailed product knowledge than designers, like the poet in this week’s video, Joy Buolamwini, who noticed that facial recognition technology failed on Black skin. Stakeholders often possess a broader awareness of extra-contextual factors that may influence the algorithm. Both groups must be consulted frequently by the teams that design, build, and maintain the algorithm.

Ongoing maintenance: Systematic collection of feedback from users will identify problems with the current design, as well as opportunities for improvement based on new technology and the wisdom gained through experience.
Sometimes an algorithm will harm its users, so it is essential to continuously solicit feedback, analyze the reporting the algorithm produces, and aggressively monitor performance trends. Observations or reports of harm should trigger an immediate maintenance response from the design team and a fair evaluation of whether the algorithm should be suspended, based on the scale and correctability of the harm. Since most algorithms are more complex than a checkers game (which, as we learned, has been fully process-mapped so that an algorithm need never lose a match), maintaining the proper balance of accuracy and efficiency, a core value of algorithms, will almost always require further updates.

Data Security: Require a minimum standard for the protection of data created by the algorithm, stored by technical teams, and used by additional parties. Meeting this standard means using the best available hardware, firmware, and software, and, most importantly, the right technology to minimize the risk of a data breach or any misuse of the data. Privacy is a legal right in some countries, protecting individual autonomy by limiting both the unauthorized collection of data and its improper use. Most algorithms create more data than a human mind can monitor, so implementing the right tools, and building them when special requirements demand it, will protect data throughout the end-to-end process. Processes should be designed to protect data, and all relevant work teams should be fully trained in data security.
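The maintenance response described above, monitoring performance trends and weighing suspension against the scale of harm, can be sketched as a rolling monitor. This is a hedged illustration only: the class name, window size, and the review and suspension thresholds are assumptions chosen for the example, not values the essay prescribes.

```python
from collections import deque

class HarmMonitor:
    """Track a rolling window of outcomes and escalate when the rate
    of reported harm crosses illustrative thresholds."""

    def __init__(self, window=100, review_rate=0.05, suspend_rate=0.20):
        self.outcomes = deque(maxlen=window)  # True = harm was reported
        self.review_rate = review_rate
        self.suspend_rate = suspend_rate

    def record(self, harm_reported: bool) -> str:
        self.outcomes.append(harm_reported)
        rate = sum(self.outcomes) / len(self.outcomes)
        if rate >= self.suspend_rate:
            return "suspend"   # harm is widespread: halt the algorithm
        if rate >= self.review_rate:
            return "review"    # trigger an immediate maintenance response
        return "ok"

monitor = HarmMonitor(window=10)
status = "ok"
for report in [False] * 8 + [True, True]:  # 2 harm reports in 10 outcomes
    status = monitor.record(report)
print(status)  # suspend
```

The point of the sketch is the escalation logic, not the numbers: a real system would set its thresholds through the broad stakeholder consultation the essay calls for.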
Author: Student of Education, English, and Learning Technology at UMN (May 2022)