Monday, April 9, 2018

Legal Tech Needs to Abandon UX

Originally published by Greg Lambert.

[Ed. Note: Please welcome guest blogger, Casandra M. Laskowski from FirebrandLib blog. Cas is a Reference Librarian and Lecturing Fellow at Duke University School of Law, and a total geek – so she fits in well here! I was happy that she reached out to talk about how UX design facilitates discrimination and inhibits legal tech from achieving ethical goals. – GL]

In 2015, Google faced a scandal when the image-tagging feature in its Photos app labeled photos of Black users as gorillas. Despite promising to “work on long-term fixes,” Wired revealed earlier this year that the “fix” was simply to remove gorillas from the image recognition system, which remains blind to them to this day. This chain of events seems almost cliché at this point: software is released with a predictably offensive or harmful flaw, the company expresses shock and shame, and a hasty patch is made that fails to address the root cause.

With most mainstream technology, this is troubling but not a threat to our democracy. Users are harmed, they voice their outrage, and companies must do something to repair the relationship if they want continued business. Additionally, as in the example above, the people affected are the end users. User-centered design can help prevent these issues, provided developers think of all possible users. With legal technology, however, the problem is far graver and harder to solve, because the person with the most to lose is often not the user.

When a person is denied bail because of an algorithm, they cannot call customer support to dispute the result and request restitution. With the reasoning locked tightly inside a black box, even the user may be unable to explain why a decision was made. User-centered design fails here because it puts the people who are ultimately affected out of focus.

By not looking beyond the user during the development stage, we increase the chance that the technology we create will have a negative impact once implemented. If the legal technology community truly wants to build more equitable and accountable systems, we need to replace the design philosophy that guides most software development.

It needs to be said that I am using the term legal technology broadly, to mean any technology implemented by a government or the legal community that affects the legal rights of individuals. This definition includes e-discovery platforms, automated legal decision systems (e.g., parole determinations), and even the TSA’s body scanners. When legal technology goes awry, it can affect people’s rights and liberty, and we cannot merely wait to solve the problem ex post facto.

In her opening keynote at ER&L 2016, Anna Lauren Hoffmann recounted a recurring experience she has going through TSA checkpoints. She steps into the body scanner, the operator presses the button for “Female” to start the scan, and the machine flags Anna’s genitalia as an anomaly. Anna is transgender, and the machine declares her body dangerous each time, all because the system was designed with only two gender options. It is a stunning keynote, and you should take the time to watch it.

What should be more shocking, but is not, is that Anna’s experience is not a unique one. Shadi Petosky live-tweeted a similar event in Orlando. It is a common enough experience that the National Center for Transgender Equality published a resource to help transgender individuals prepare for airport security.

These stories stand in striking contrast to the Google scandal. While being so offensively labeled is reprehensible, it does not involve government use of force to encroach on a person’s liberty and subject them to invasive body searches because the developers did not consider the possible impact the system would have on transgender individuals. Legal software can prolong jail stays, increase racial disparities in pretrial detention, and cause unlawful arrests. We need a new design philosophy to help mitigate these problems.

Recently, conversations have shifted to the term human-centered design, rather than user-centered design, to highlight the humanity of the user in the hope of imbuing the design process with ethical considerations. Human-centered design has been used for some time to emphasize the human in the human-computer interface equation, but the focus is still on the user and their needs. This change in language does not go far enough to address the problems we face. We need to look beyond the human-computer interface, for those whose legal rights are affected may never interface with the technology at all.

I suggest the adoption of what I am calling impact-conscious design (ICD). ICD would not center the design process on impact; rather, it would remain mindful of impact in a way that user-centered design is not.

Adopting impact-conscious design would require developers to look beyond the user and ask who might be affected, and how.

The idea that we should consider the impact of the technology we create is not new. Brad Smith and Harry Shum have proposed a Hippocratic oath for AI practitioners, in which they promise to consider the impact of what they design and the people it affects. Hoffmann has written multiple articles about adding ethical considerations to the design-planning phase. There are conversations in industry journals, mainstream publications, and at conferences about bias in AI and the failures of legal technology. The community wants to do better. So why do we need a new design philosophy? Because naming this philosophy grounds all the ideas floating in the ether and unifies them under one banner.

Giving it a name allows for greater accountability, visibility, adaptability, and engagement on the topic. It places ethics concretely as a goal of the design process, not an afterthought. Future discussions will give the term more meaning and weight, and it will come to serve as shorthand for a mostly agreed-upon set of ideas. Naming it would also allow a discipline to develop, with researchers studying best practices and testing design theories that reduce harmful impacts.

What might some of the principles of ICD be? While the legal community will be the ultimate arbiter, here are my suggestions, offered in addition to the established principles of UX, like usability and accessibility. (A brief sketch of how the first two might look in practice follows the list.)

  • Transparency & Accountability
    • Provide mechanisms for non-users to question the system.
    • Make decision reasoning discoverable to allow adjustment for error or unconsidered mitigating circumstances.
  • Equitability
    • Ensuring that the software produces equitable results should be a goal from the moment development commences.
    • Review and test using as diverse a sample pool as possible.
  • Impact testing
    • Test potential impacts of implementation to discover problems before they affect real people.
  • Reactivity
    • Build in safeguards that allow for quick responses when there are unforeseen consequences that impact people’s legal rights (e.g., software downtime that prevents prisoner release).
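
To make the first two principles a bit more concrete, here is a minimal sketch, in Python, of what discoverable decision reasoning and a crude equitability check might look like for a hypothetical pretrial risk tool. Everything in it (the DecisionRecord structure, the score_defendant weights, the group_of lookup, and the disparity threshold) is my own illustrative assumption, not a description of any existing system.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class DecisionRecord:
        """Keeps the reasoning behind a score discoverable, so the person
        affected (or their counsel) can later question and correct it."""
        subject_id: str
        score: float
        factors: dict        # factor name -> contribution to the score
        data_sources: list   # where each input value came from

    def score_defendant(inputs):
        """Toy risk score; the factors and weights are purely illustrative."""
        weights = {"prior_failures_to_appear": 0.6, "pending_charges": 0.4}
        factors = {name: w * float(inputs.get(name, 0)) for name, w in weights.items()}
        return DecisionRecord(
            subject_id=inputs["id"],
            score=sum(factors.values()),
            factors=factors,
            data_sources=inputs.get("sources", []),
        )

    def equitability_check(records, group_of, threshold=0.05):
        """Flag any group whose average score diverges from the overall mean
        by more than the threshold -- a crude stand-in for real disparity testing."""
        overall = sum(r.score for r in records) / len(records)
        by_group = defaultdict(list)
        for record in records:
            by_group[group_of(record)].append(record.score)
        return {group: abs(sum(scores) / len(scores) - overall) > threshold
                for group, scores in by_group.items()}

The point is not the arithmetic. It is that the reasoning lives in a structure a non-user can question, and that disparity testing happens as part of the build, not as a post-release apology.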

There are calls to regulate AI, or to find ways to hold programmers accountable, in order to incentivize better behavior through negative reinforcement. Is it not better to use the carrot before the stick? Let’s begin discussing impact-conscious design and its parameters so that we can start to develop our own social norms (see #3) and expectations, allowing for better behavior through collaboration instead of the threat of punishment.

 



