Last week, both the DOJ and the EEOC issued technical assistance documents detailing their concerns about using AI in employment. It definitely made big news. For those of us who know individuals who have gone through AI-driven hiring processes, these guidances are not surprising, as one just had to figure that AI was being used to screen out people with disabilities. This blog entry is going to be organized a bit differently. The categories are: DOJ AI document key takeaways; and EEOC technical assistance document on AI. My thoughts/takeaways on the EEOC document appear in a "my thoughts" section underneath each part of the guidance where I have them.
I
DOJ AI Document Key Takeaways
- In employment matters, DOJ enforces disability discrimination laws with respect to state and local government employers.
- Still a good idea to exhaust administrative remedies with EEOC first.
- DOJ will look seriously at whether the AI screens out persons with disabilities.
- Employers must use accessible tests measuring the applicant’s job skills and not the disability, or they must make other adjustments to the hiring process so that a qualified person is not eliminated because of a disability.
- Know what a reasonable accommodation is.
- Starting line analogy.
- DOJ Guidance on AI is here.
- Don’t forget about the EEOC guidance on AI in employment, here and immediately below.
II
EEOC Technical Assistance Document on AI
Employers now have a wide variety of computer-based tools available to assist them in hiring workers, monitoring worker performance, determining pay or promotions, and establishing the terms and conditions of employment. Employers may utilize these tools in an attempt to save time and effort, increase objectivity, or decrease bias. However, the use of these tools may disadvantage job applicants and employees with disabilities. When this occurs, employers may risk violating federal Equal Employment Opportunity (“EEO”) laws that protect individuals with disabilities.
The Questions and Answers in this document explain how employers’ use of software that relies on algorithmic decision-making may violate existing requirements under Title I of the Americans with Disabilities Act (“ADA”). This technical assistance also provides practical tips to employers on how to comply with the ADA, and to job applicants and employees who think that their rights may have been violated.
The Equal Employment Opportunity Commission (“EEOC” or “the Commission”) enforces, and provides leadership and guidance on, the federal EEO laws prohibiting employment discrimination on the basis of race, color, national origin, religion, and sex (including pregnancy, sexual orientation, and gender identity), disability, age (40 or older), and genetic information. This publication is part of an ongoing effort by the EEOC to educate employers, employees, and other stakeholders about the application of EEO laws when employers use employment software and applications, some of which incorporate algorithmic decision-making.
Background
As a starting point, this section explains the meaning of three central terms used in this document—software, algorithms, and artificial intelligence (“AI”)—and how, when used in a workplace, they relate to each other.
- Software: Broadly, “software” refers to information technology programs or procedures that provide instructions to a computer on how to perform a given task or function. “Application software” (also known as an “application” or “app”) is a type of software designed to perform or to help the user perform a specific task or tasks. The United States Access Board is the source of these definitions.
There are many different types of software and applications used in employment, including: automatic resume-screening software, hiring software, chatbot software for hiring and workflow, video interviewing software, analytics software, employee monitoring software, and worker management software.
- Algorithms: Generally, an “algorithm” is a set of instructions that can be followed by a computer to accomplish some end. Human resources software and applications use algorithms to allow employers to process data to evaluate, rate, and make other decisions about job applicants and employees. Software or applications that include algorithmic decision-making tools may be used at various stages of employment, including hiring, performance evaluation, promotion, and termination.
- Artificial Intelligence (“AI”): Some employers and software vendors use AI when developing algorithms that help employers evaluate, rate, and make other decisions about job applicants and employees. In the National Artificial Intelligence Initiative Act of 2020 at section 5002(3), Congress defined “AI” to mean a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” In the employment context, using AI has typically meant that the developer relies partly on the computer’s own analysis of data to determine which criteria to use when making employment decisions. AI may include machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and autonomous systems. For a general discussion of AI, which includes machine learning, see National Institute of Standards and Technology Special Publication 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.
Employers may rely on different types of software that incorporate algorithmic decision-making at a number of stages of the employment process. Examples include: resume scanners that prioritize applications using certain keywords; employee monitoring software that rates employees on the basis of their keystrokes or other factors; “virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements; video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test. Each of these types of software may include AI.
My Thoughts: a nice job of describing the background for the guidance document and providing definitions of key terms.
ADA Basics
- What is the ADA and how does it define “disability”?
The ADA is a federal civil rights law. Title I of the ADA prohibits employers, employment agencies, labor organizations, and joint labor-management committees with 15 or more employees from discriminating on the basis of disability. Other parts of the ADA, not discussed here, ensure that people with disabilities have full access to public and private services and facilities.
The ADA has a very specific definition of a current “disability.” A physical or mental impairment meets the ADA’s definition of a current “disability” if it would, when left untreated, “substantially limit” one or more “major life activities.” Major life activities include, for example, seeing, reaching, communicating, speaking, concentrating, or the operation of major bodily functions, such as brain or neurological functions. (There are two other definitions of “disability” that are not the subject of this discussion. For more information on the definition of “disability” under the ADA, see EEOC’s Questions and Answers on the ADA Amendments Act.)
My Thoughts: I am not sure why the focus is on “current disability.” The ADA prongs are: actual disability, record of a disability, and being regarded as having a disability. You could have a disability under the ADA even if it is not current, such as where you have a record of a disability or are regarded as having a disability. Certainly, an actual disability needs to be current, but that isn’t how this document is explaining things.
A condition does not need to be permanent or severe, or cause a high degree of functional limitation, to be “substantially limiting.” It may qualify as substantially limiting, for example, by making activities more difficult, painful, or time-consuming to perform as compared to the way that most people perform them. In addition, if the symptoms of the condition come and go, the condition still will qualify as a disability if it substantially limits a major life activity when active. Many common and ordinary medical conditions will qualify.
My Thoughts: interesting that the EEOC refers to “painful.” See this blog entry as to why I found that interesting.
- How could an employer’s use of algorithmic decision-making tools violate the ADA?
The most common ways that an employer’s use of algorithmic decision-making tools could violate the ADA are:
- The employer does not provide a “reasonable accommodation” that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm. (See Questions 4–7 below.)
- The employer relies on an algorithmic decision-making tool that intentionally or unintentionally “screens out” an individual with a disability, even though that individual is able to do the job with a reasonable accommodation. “Screen out” occurs when a disability prevents a job applicant or employee from meeting—or lowers their performance on—a selection criterion, and the applicant or employee loses a job opportunity as a result. A disability could have this effect by, for example, reducing the accuracy of the assessment, creating special circumstances that have not been taken into account, or preventing the individual from participating in the assessment altogether. (See Questions 8–12 below.)
- The employer adopts an algorithmic decision-making tool for use with its job applicants or employees that violates the ADA’s restrictions on disability-related inquiries and medical examinations. (See Question 13 below.)
An employer’s use of an algorithmic decision-making tool may be unlawful for one of the above reasons, or for several such reasons.
My Thoughts: for those in the disability rights field who are aware of how AI is used in the hiring process, the first thing that immediately comes to mind is screen out. This particular section of the document lets you know that there may be other issues as well, such as the disability-related inquiries and medical examinations scheme, which we discussed here among other places.
- Is an employer responsible under the ADA for its use of algorithmic decision-making tools even if the tools are designed or administered by another entity, such as a software vendor?
In many cases, yes. For example, if an employer administers a pre-employment test, it may be responsible for ADA discrimination if the test discriminates against individuals with disabilities, even if the test was developed by an outside vendor. In addition, employers may be held responsible for the actions of their agents, which may include entities such as software vendors, if the employer has given them authority to act on the employer’s behalf.
My Thoughts:
- As we discussed here, the ADA is a nondelegable duty. Accordingly, indemnification agreements may be problematic. However, an employer may strongly wish to consider a reimbursement agreement with the AI vendor.
- Don’t forget about 29 C.F.R. §1630.6, which provides: “It is unlawful for a covered entity to participate in a contractual or other arrangement or relationship that has the effect of subjecting the covered entity’s own qualified applicant or employee with a disability to the discrimination prohibited by this part.”
Algorithmic Decision-Making Tools and Reasonable Accommodation
- What is a reasonable accommodation?
A reasonable accommodation is a change in the way things are done that helps a job applicant or employee with a disability apply for a job, do a job, or enjoy equal benefits and privileges of employment. Examples of reasonable accommodations may include specialized equipment, alternative tests or testing formats, permission to work in a quiet setting, and exceptions to workplace policies. These are just examples—almost any change can be a reasonable accommodation—although an employer never has to lower production or performance standards or eliminate an essential job function as a reasonable accommodation.
My Thoughts:
- I like to think of reasonable accommodations as anything that gets the person with a disability to the same starting line as a person without a disability. Once you have that, it is then up to the person with the disability to demonstrate what they can do.
- It is a good idea to keep your job descriptions, and in particular the essential functions of the job they list, current.
- May an employer announce generally (or use software that announces generally) that reasonable accommodations are available to job applicants and employees who are asked to use or be evaluated by an algorithmic decision-making tool, and invite them to request reasonable accommodations when needed?
Yes. An employer may tell applicants or employees what steps an evaluation process includes and may ask them whether they will need reasonable accommodations to complete it. For example, if a hiring process includes a video interview, the employer or software vendor may tell applicants that the job application process will involve a video interview and provide a way to request a reasonable accommodation. Doing so is a “promising practice” to avoid violating the ADA.
My Thoughts: I am not sure that this is a common practice as of this moment. The problem likely to arise is a debate over whether the reasonable accommodation requested would fundamentally alter the nature of the AI. However, that isn’t the end of the story because the very nature of the AI may be screening out people with disabilities. So, there is a tension between the utility of the AI altogether and the screen out prohibitions of the ADA.
- When an employer uses algorithmic decision-making tools to assess job applicants or employees, does the ADA require the employer to provide reasonable accommodations?
If an applicant or employee tells the employer that a medical condition may make it difficult to take a test, or that it may cause an assessment result that is less acceptable to the employer, the applicant or employee has requested a reasonable accommodation. To request an accommodation, it is not necessary to mention the ADA or use the phrase “reasonable accommodation.”
My thoughts:
- “Medical condition” is an interesting turn of phrase, as the ADA uses the term “physical or mental impairment.” I suppose a physical or mental impairment is a “medical condition,” but that isn’t the statutory language.
- Magic words, as we have discussed numerous times, such as here, are not required for seeking a reasonable accommodation.
Under the ADA, employers need to respond promptly to requests for reasonable accommodation.
If it is not obvious or already known whether the requesting applicant or employee has an ADA disability and needs a reasonable accommodation because of it, the employer may request supporting medical documentation.
My Thoughts:
- You don’t have an automatic right to request medical documentation. That right exists only if it is not obvious or already known whether the requesting applicant or employee has an ADA disability. That said, “obvious” and “already known” can be very elastic terms.
- Keep any request for medical documentation reasonable and narrowly focused to the situation at hand.
- I never like people referring to “undue hardship” as involving only significant difficulty or expense because there is a lot more to it than just that statement. For example, the concept of undue hardship includes both logistical undue hardship as well as financial undue hardship. Logistical undue hardship is akin to the title II and title III concept of fundamental alteration, which basically requires that your business be turned upside down. Financial undue hardship means looking to the entire resources of the entity.
When the documentation shows that a disability might make a test more difficult to take or that it might reduce the accuracy of an assessment, the employer must provide an alternative testing format or a more accurate assessment of the applicant’s or employee’s skills as a reasonable accommodation, unless doing so would involve significant difficulty or expense (also called “undue hardship”).
For example, a job applicant who has limited manual dexterity because of a disability may report that they would have difficulty taking a knowledge test that requires the use of a keyboard, trackpad, or other manual input device. Especially if the responses are timed, this kind of test will not accurately measure this particular applicant’s knowledge. In this situation, the employer would need to provide an accessible version of the test (for example, one in which the applicant is able to provide responses orally, rather than manually) as a reasonable accommodation, unless doing so would cause undue hardship. If it is not possible to make the test accessible, the ADA requires the employer to consider providing an alternative test of the applicant’s knowledge as a reasonable accommodation, barring undue hardship.
Other examples of reasonable accommodations that may be effective for some individuals with disabilities include extended time or an alternative version of the test, including one that is compatible with accessible technology (like a screen-reader) if the applicant or employee uses such technology. Employers must give individuals receiving reasonable accommodation equal consideration with other applicants or employees not receiving reasonable accommodations.
The ADA requires employers to keep all medical information obtained in connection with a request for reasonable accommodation confidential and to store all such information separately from the applicant’s or employee’s personnel file.
My Thoughts: the confidentiality requirement for all medical information is an easy one to forget about. Don’t do that.
- Is an employer responsible for providing reasonable accommodations related to the use of algorithmic decision-making tools, even if the software or application is developed or administered by another entity?
In many cases, yes. As explained in Question 3 above, an employer may be held responsible for the actions of other entities, such as software vendors, that the employer has authorized to act on its behalf. For example, if an employer were to contract with a software vendor to administer and score on its behalf a pre-employment test, the employer likely would be held responsible for actions that the vendor performed—or did not perform—on its behalf. Thus, if an applicant were to tell the vendor that a medical condition was making it difficult to take the test (which qualifies as a request for reasonable accommodation), and the vendor did not provide an accommodation that was required under the ADA, the employer likely would be responsible even if it was unaware that the applicant reported a problem to the vendor.
My Thoughts: this is a very respondeat superior type approach. Don’t forget about 29 C.F.R. §1630.6, which makes it clear that an employer cannot discriminate against employees or prospective applicants by way of contracting.
Algorithmic Decision-Making Tools That Screen Out Qualified Individuals with Disabilities
- When is an individual “screened out” because of a disability, and when is screen out potentially unlawful?
Screen out occurs when a disability prevents a job applicant or employee from meeting—or lowers their performance on—a selection criterion, and the applicant or employee loses a job opportunity as a result. The ADA says that screen out is unlawful if the individual who is screened out is able to perform the essential functions of the job with a reasonable accommodation if one is legally required.[1] Questions 9 and 10 explain the meaning of “screen out” and Question 11 provides examples of when a person who is screened out due to a disability nevertheless can do the job with a reasonable accommodation.
My Thoughts: this is an easy-to-understand explanation of the term “screen out.”
- Could algorithmic decision-making tools screen out an individual because of a disability? What are some examples?
Yes, an algorithmic decision-making tool could screen out an individual because of a disability if the disability causes that individual to receive a lower score or an assessment result that is less acceptable to the employer, and the individual loses a job opportunity as a result.
My Thoughts: proving up that a person with a disability got a lower score on an AI assessment is probably not all that difficult. However, the EEOC makes it clear that you also have to show that the individual lost a job opportunity as a result of that, which would be much harder to show.
An example of screen out might involve a chatbot, which is software designed to engage in communications online and through texts and emails. A chatbot might be programmed with a simple algorithm that rejects all applicants who, during the course of their “conversation” with the chatbot, indicate that they have significant gaps in their employment history. If a particular applicant had a gap in employment, and if the gap had been caused by a disability (for example, if the individual needed to stop working to undergo treatment), then the chatbot may function to screen out that person because of the disability.
My Thoughts: many labor and employment management side attorneys are saying now in their blogs and on social media that using gaps in employment as a negative factor for an applicant is just a really bad idea, especially with what has happened during the Covid-19 pandemic.
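To make concrete how a seemingly neutral chatbot rule can operate as a disability screen out, here is a minimal Python sketch of gap-based rejection logic of the kind the EEOC describes. The function name, threshold, and data are hypothetical illustrations, not any actual vendor's product.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Job:
    start: date
    end: date

# Hypothetical rule: reject anyone with an employment gap over six months.
MAX_GAP_DAYS = 180

def has_significant_gap(history: list[Job]) -> bool:
    """Return True if any gap between consecutive jobs exceeds the threshold."""
    ordered = sorted(history, key=lambda j: j.start)
    return any((nxt.start - prev.end).days > MAX_GAP_DAYS
               for prev, nxt in zip(ordered, ordered[1:]))

# An applicant who stopped working for a year to undergo treatment for a
# disability is rejected, even though the gap says nothing about whether
# they can perform the essential functions of the job.
applicant = [Job(date(2015, 1, 5), date(2019, 3, 1)),
             Job(date(2020, 4, 1), date(2022, 5, 1))]
print(has_significant_gap(applicant))  # True -> screened out
```

Notice that the rule never asks about disability at all; the screen out happens because the criterion correlates with disability-related life events.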
Another kind of screen out may occur if a person’s disability prevents the algorithmic decision-making tool from measuring what it is intended to measure. For example, video interviewing software that analyzes applicants’ speech patterns in order to reach conclusions about their ability to solve problems is not likely to score an applicant fairly if the applicant has a speech impediment that causes significant differences in speech patterns. If such an applicant is rejected because the applicant’s speech impediment resulted in a low or unacceptable rating, the applicant may effectively have been screened out because of the speech impediment.
My Thoughts: AI that uses speech patterns to reach conclusions about prospective candidates’ abilities is terribly problematic for persons with disabilities. For example, I have a slight deaf accent. That accent is imperceptible to most hearing people unless they have worked with deaf individuals or have a background in speech therapy. Nevertheless, voice dictation technology, which I have used for years due to joint issues, is a lot harder for me to use because of that accent. Voice dictation simply takes a lot longer to get used to my accent than it does for hearing people. Also, keep in mind that many disabilities have speech impediments associated with them. Finally, a culturally deaf individual quite often doesn’t use their voice at all. In short, if an AI tool is using speech patterns to influence the results, the vendor would do well to eliminate that factor altogether because too many people with disabilities have speech patterns that are not typical.
- Some algorithmic decision-making tools may say that they are “bias-free.” If a particular tool makes this claim, does that mean that the tool will not screen out individuals with disabilities?
When employers (or entities acting on their behalf such as software vendors) say that they have designed an algorithmic decision-making tool to be “bias-free,” it typically means that they have taken steps to prevent a type of discrimination known as “adverse impact” or “disparate impact” discrimination under Title VII, based on race, sex, national origin, color, or religion. This type of Title VII discrimination involves an employment policy or practice that has a disproportionately negative effect on a group of individuals who share one of these characteristics, like a particular race or sex.[2]
To reduce the chances that the use of an algorithmic decision-making tool results in disparate impact discrimination on bases like race and sex, employers and vendors sometimes use the tool to assess subjects in different demographic groups, and then compare the average results for each group. If the average results for one demographic group are less favorable than those of another (for example, if the average results for individuals of a particular race are less favorable than the average results for individuals of a different race), the tool may be modified to reduce or eliminate the difference.
The steps taken to avoid that kind of Title VII discrimination are typically distinct from the steps needed to address the problem of disability bias.[3] If an employer or vendor were to try to reduce disability bias in the way described above, doing so would not mean that the algorithmic decision-making tool could never screen out an individual with a disability. Each disability is unique. An individual may fare poorly on an assessment because of a disability, and be screened out as a result, regardless of how well other individuals with disabilities fare on the assessment. Therefore, to avoid screen out, employers may need to take different steps beyond the steps taken to address other forms of discrimination. (See Question 12.)
My Thoughts: the very last paragraph of Question 10 should be a very big cautionary note for the use of AI in hiring.
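For readers curious about what the group-comparison audit described above actually looks like, here is a minimal sketch using hypothetical selection data. It applies the four-fifths rule of thumb from the Uniform Guidelines as the benchmark; as footnote 4 below notes, those guidelines address Title VII and do not apply to disability discrimination, which is exactly the EEOC’s point.

```python
# Hypothetical applicant counts per demographic group.
selections = {
    "group_a": {"applied": 200, "selected": 120},
    "group_b": {"applied": 150, "selected": 60},
}

rates = {g: d["selected"] / d["applied"] for g, d in selections.items()}
top_rate = max(rates.values())

for group, rate in rates.items():
    # Four-fifths rule of thumb: a selection rate below 80% of the highest
    # group's rate is commonly treated as evidence of adverse impact.
    flag = "possible adverse impact" if rate < 0.8 * top_rate else "ok"
    print(f"{group}: selection rate {rate:.2f} "
          f"({rate / top_rate:.0%} of top group) -> {flag}")
```

Note what such an audit cannot catch: it compares groups, so a tool can pass it and still screen out one particular applicant whose disability is unique. That is why the EEOC says different steps are needed for disability bias.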
- Screen out because of a disability is unlawful if the individual who is screened out is able to perform the essential functions of the job, with a reasonable accommodation if one is legally required. If an individual is screened out by an algorithmic decision-making tool, is it still possible that the individual is able to perform the essential functions of the job?
In some cases, yes. For example, some employers rely on “gamified” tests, which use video games to measure abilities, personality traits, and other qualities, to assess applicants and employees. If a business requires a 90 percent score on a gamified assessment of memory, an applicant who is blind and therefore cannot play these particular games would not be able to score 90 percent on the assessment and would be rejected. But the applicant still might have a very good memory and be perfectly able to perform the essential functions of a job that requires a good memory.
Even an algorithmic decision-making tool that has been “validated” for some purposes might screen out an individual who is able to perform well on the job. To say that a decision-making tool has been “validated”[4] means that there is evidence meeting certain professional standards showing that the tool accurately measures or predicts a trait or characteristic that is important for a specific job. Algorithmic decision-making tools may be validated in this sense, and still be inaccurate when applied to particular individuals with disabilities. For example, the gamified assessment of memory may be validated because it has been shown to be an accurate measure of memory for most people in the general population, yet still screen out particular individuals who have good memories but are blind, and who therefore cannot see the computer screen to play the games.
An algorithmic decision-making tool also may sometimes screen out individuals with disabilities who could do the job because the tool does not take into account the possibility that such individuals are entitled to reasonable accommodations on the job. Algorithmic decision-making tools are often designed to predict whether applicants can do a job under typical working conditions. But people with disabilities do not always work under typical conditions if they are entitled to on-the-job reasonable accommodations.
My Thoughts: the question is whether a person can perform the essential functions of the job with or without reasonable accommodations. If the AI tool is only measuring how a person can perform the essential functions of the job without reasonable accommodations, that tool has a problem.
For example, some pre-employment personality tests are designed to look for candidates who are similar to the employer’s most successful employees—employees who most likely work under conditions that are typical for that employer.
My Thoughts: for what can happen when an employer uses personality tests to evaluate whether a person can do the essential functions of the job or to evaluate whether a person should be promoted, see Karraker v. Rent-A-Center, Inc., 411 F.3d 831 (7th Cir. 2005).
Someone who has Posttraumatic Stress Disorder (“PTSD”) might be rated poorly by one of these tests if the test measures a trait that may be affected by that particular individual’s PTSD, such as the ability to ignore distractions. Even if the test is generally valid and accurately predicts that this individual would have difficulty handling distractions under typical working conditions, it might not accurately predict whether the individual still would experience those same difficulties under modified working conditions—specifically, conditions in which the employer provides required on-the-job reasonable accommodations such as a quiet workstation or permission to use noise-cancelling headphones. If such a person were to apply for the job and be screened out because of a low score on the distraction test, the screen out may be unlawful under the ADA. Some individuals who may test poorly in certain areas due to a medical condition may not even need a reasonable accommodation to perform a job successfully.
My Thoughts: is it the disability that is being accommodated or is it the essential functions of the job that are being accommodated? You wind up in two different places depending upon which question you ask. For a discussion of this issue, see this blog entry.
- What could an employer do to reduce the chances that algorithmic decision-making tools will screen out someone because of a disability, even though that individual is able to perform the essential functions of the job (with a reasonable accommodation if one is legally required)?
First, if an employer is deciding whether to rely on an algorithmic decision-making tool developed by a software vendor, it may want to ask the vendor whether the tool was developed with individuals with disabilities in mind. Some possible inquiries about the development of the tool that an employer might consider include, but are not limited to:
- If the tool requires applicants or employees to engage a user interface, did the vendor make the interface accessible to as many individuals with disabilities as possible?
My Thoughts: it is not a legal defense to my mind to say that the interface is accessible to many individuals with disabilities but not to a particular employee or applicant with a disability. Remember, the ADA requires an individualized analysis in every case.
- Are the materials presented to job applicants or employees in alternative formats? If so, which formats? Are there any kinds of disabilities for which the vendor will not be able to provide accessible formats, in which case the employer may have to provide them (absent undue hardship)?
My Thoughts: don’t forget about 29 C.F.R. §1630.6.
- Did the vendor attempt to determine whether use of the algorithm disadvantages individuals with disabilities? For example, did the vendor determine whether any of the traits or characteristics that are measured by the tool are correlated with certain disabilities?
My Thoughts: this should be a mandatory item on any AI vendor checklist.
If an employer is developing its own algorithmic decision-making tool, it could reduce the chances of unintentional screen out by taking the same considerations into account during its development process. Depending on the type of tool in question, reliance on experts on various types of disabilities throughout the development process may be effective. For example, if an employer is developing pre-employment tests that measure personality, cognitive, or neurocognitive traits, it may be helpful to employ psychologists, including neurocognitive psychologists, throughout the development process in order to spot ways in which the test may screen out people with autism or cognitive, intellectual, or mental health-related disabilities.
My Thoughts:
- You want to make sure that such individuals are not practitioners of ableism. That is, do they believe, as persons without disabilities, that they know what is best for persons with disabilities? The focus should be on whether the person with the disability can do the essential functions of the job with or without reasonable accommodations.
- Beta testing utilizing persons with disabilities is always a good idea for any AI tool.
Second, regardless of whether the employer or another entity is developing an algorithmic decision-making tool, the employer may be able to take additional steps during implementation and deployment to reduce the chances that the tool will screen out someone because of a disability, either intentionally or unintentionally. Such steps include:
- clearly indicating that reasonable accommodations, including alternative formats and alternative tests, are available to people with disabilities;
- providing clear instructions for requesting reasonable accommodations; and
- in advance of the assessment, providing all job applicants and employees who are undergoing assessment by the algorithmic decision-making tool with as much information about the tool as possible, including information about which traits or characteristics the tool is designed to measure, the methods by which those traits or characteristics are to be measured, and the disabilities, if any, that might potentially lower the assessment results or cause screen out.
My Thoughts: the final bullet in this section is very interesting because of the proprietary information involved. One wonders what kind of resistance the AI company will put up with respect to this bullet. It seems to me there would be an argument that proprietary information is involved. Even so, I am not sure that approach will work in the face of a lawsuit alleging screen out as this information would certainly be related to whether screen out is occurring.
Taking these steps will provide individuals with disabilities an opportunity to decide whether a reasonable accommodation may be necessary. For example, suppose that an employer uses an algorithm to evaluate its employees’ productivity, and the algorithm takes into account the employee’s average number of keystrokes per minute. If the employer does not inform its employees that it is using this algorithm, an employee who is blind or has a visual impairment and who uses voice recognition software instead of a keyboard may be rated poorly and lose out on a promotion or other job opportunity as a result. If the employer informs its employees that they will be assessed partly on the basis of keyboard usage, however, that same employee would know to request an alternative means of measuring productivity—perhaps one that takes into account the use of voice recognition software rather than keystrokes—as a reasonable accommodation.
My Thoughts: I am delighted to see that voice recognition software is specifically mentioned in this document because voice recognition software often gets lost in favor of screen readers. They both work on coding technology, but the results aren’t always the same. So, you need to evaluate for screen reading capabilities and separately for voice dictation capabilities.
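To illustrate the keystrokes example above, here is a minimal sketch of a productivity rating that penalizes an employee who dictates rather than types. All function names, benchmarks, and numbers are hypothetical assumptions for illustration only.

```python
# Hypothetical monitoring metric: rate productivity by keystrokes per minute.
def keystroke_rating(keystrokes_per_min: float, benchmark: float = 200.0) -> float:
    """Score 0.0-1.0 based solely on typing speed relative to a benchmark."""
    return min(keystrokes_per_min / benchmark, 1.0)

# An employee who uses voice recognition software instead of a keyboard
# produces few keystrokes, so this metric rates them poorly no matter how
# much work they actually produce.
print(keystroke_rating(210))  # ~1.0 for a fast typist
print(keystroke_rating(15))   # ~0.08 for a dictation user -> wrong conclusion

# One possible accommodation: measure output directly instead of input
# method, e.g., words of finished work per hour.
def output_rating(words_per_hour: float, benchmark: float = 900.0) -> float:
    """Score 0.0-1.0 based on completed work, regardless of input method."""
    return min(words_per_hour / benchmark, 1.0)

print(output_rating(950))  # ~1.0 -> same employee rated accurately
```

The design point is the one the EEOC makes: a metric tied to the input method measures the disability, while a metric tied to the finished work measures the job.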
Another way for employers to avoid ADA discrimination when using algorithmic decision-making tools is to try to ensure that no one is screened out unless they are unable to do the job, even when provided with reasonable accommodations. A promising practice is to only develop and select tools that measure abilities or qualifications that are truly necessary for the job—even for people who are entitled to an on-the-job reasonable accommodation. For example, an employer who is hiring cashiers might want to ensure that the chatbot software it is using does not reject applicants who are unable to stand for long periods. Otherwise, a chatbot might reject an applicant who uses a wheelchair and may be entitled to a lowered cash register as a reasonable accommodation.
My Thoughts:
- This is excellent advice. That is, AI should not screen out anyone unless they are unable to do the job with or without reasonable accommodations.
- Same question as earlier. That is, is it the disability being accommodated or the essential functions of the job?
As a further measure, employers may wish to avoid using algorithmic decision-making tools that do not directly measure necessary abilities and qualifications for performing a job, but instead make inferences about those abilities and qualifications based on characteristics that are correlated with them. For example, if an open position requires the ability to write reports, the employer may wish to avoid algorithmic decision-making tools that rate this ability by measuring the similarity between an applicant’s personality and the typical personality for currently successful report writers. By doing so, the employer lessens the likelihood of rejecting someone who is good at writing reports, but whose personality, because of a disability, is uncommon among successful report writers.
My Thoughts: as a preventive law matter, I would definitely avoid using algorithmic decision-making tools that do not directly measure necessary abilities and qualifications for performing a job, but instead make inferences about those abilities and qualifications based on characteristics correlated with them. It is just a bad idea. It also leads to ableism interfering with employment decisions.
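To make the direct-versus-inferred distinction concrete, here is a minimal sketch under hypothetical assumptions: one scorer grades an actual writing sample, while the other infers writing ability from similarity to the personality profile of current report writers. Every name, trait, and number below is invented for illustration.

```python
# Hypothetical scoring of report-writing ability.

def direct_score(writing_sample_grade: float) -> float:
    """Measure the ability itself: grade an actual report the applicant wrote."""
    return writing_sample_grade  # 0.0 - 1.0

def proxy_score(personality: dict[str, float],
                typical_writer: dict[str, float]) -> float:
    """Infer ability from similarity to current report writers' personalities."""
    distance = sum(abs(personality[k] - typical_writer[k]) for k in typical_writer)
    return max(0.0, 1.0 - distance / len(typical_writer))

typical = {"extraversion": 0.7, "openness": 0.6}
# An applicant whose personality profile is atypical because of a disability,
# but who writes excellent reports:
applicant = {"extraversion": 0.1, "openness": 0.9}

print(direct_score(0.95))              # 0.95 -> strong on the actual skill
print(proxy_score(applicant, typical)) # 0.55 -> screened out by the proxy
```

The proxy penalizes being unlike current employees, not being unable to do the job, which is precisely the screen out risk the guidance warns about.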
Algorithmic Decision-Making Tools and Disability-Related Inquiries and Medical Examinations
- How could an employer’s use of algorithmic decision-making tools violate ADA restrictions on disability-related inquiries and medical examinations?
An employer might violate the ADA if it uses an algorithmic decision-making tool that poses “disability-related inquiries” or seeks information that qualifies as a “medical examination” before giving the candidate a conditional offer of employment.[5] This type of violation may occur even if the individual does not have a disability.
My Thoughts:
- For a discussion of the medical exam/disability related inquiries scheme, see this blog entry.
- You do not have to be a person with a disability to benefit from violations of the disability-related inquiries and medical examination scheme.
An assessment includes “disability-related inquiries” if it asks job applicants or employees questions that are likely to elicit information about a disability or directly asks whether an applicant or employee is an individual with a disability.
My Thoughts: no argument from me.
It qualifies as a “medical examination” if it seeks information about an individual’s physical or mental impairments or health.
My thoughts: this is an oversimplification. See this blog entry for example.
An algorithmic decision-making tool that could be used to identify an applicant’s medical conditions would violate these restrictions if it were administered prior to a conditional offer of employment. Not all algorithmic decision-making tools that ask for health-related information are “disability-related inquiries or medical examinations,” however. For example, a personality test is not posing “disability-related inquiries” because it asks whether the individual is “described by friends as being ‘generally optimistic,’” even if being described by friends as generally optimistic might somehow be related to some kinds of mental health diagnoses.
My Thoughts: but see Karraker.
Note, however, that even if a request for health-related information does not violate the ADA’s restrictions on disability-related inquiries and medical examinations, it still might violate other parts of the ADA. For example, if a personality test asks questions about optimism, and if someone with Major Depressive Disorder (“MDD”) answers those questions negatively and loses an employment opportunity as a result, the test may “screen out” the applicant because of MDD. As explained in Questions 8–11 above, such screen out may be unlawful if the individual who is screened out can perform the essential functions of the job, with or without reasonable accommodation.
My Thoughts: see Karraker.
Once employment has begun, disability-related inquiries may be made and medical examinations may be required only if they are legally justified under the ADA.
For more information on disability-related inquiries and medical examinations, see Pre-Employment Inquiries and Medical Questions & Examinations, and Enforcement Guidance on Disability-Related Inquiries and Medical Examinations of Employees under the ADA.
Promising Practices for Employers
- What can employers do to comply with the ADA when using algorithmic decision-making tools?
- As discussed in Questions 4–7 above, employers must provide reasonable accommodations when legally required. Promising practices that may help employers to meet this requirement include:
- Training staff to recognize and process requests for reasonable accommodation as quickly as possible, including requests to retake a test in an alternative format, or to be assessed in an alternative way, after the individual has already received poor results.
- Training staff to develop or obtain alternative means of rating job applicants and employees when the current evaluation process is inaccessible or otherwise unfairly disadvantages someone who has requested a reasonable accommodation because of a disability.
My thoughts: regular training by competent and knowledgeable individuals is always a good idea.
- If the algorithmic decision-making tool is administered by an entity with authority to act on the employer’s behalf, such as a testing company, asking the entity to forward all requests for accommodation promptly to be processed by the employer in accordance with ADA requirements. Alternatively, the employer could seek to enter into an agreement with the third party requiring it to provide reasonable accommodations on the employer’s behalf, in accordance with the employer’s obligations under the ADA.
My Thoughts: don’t forget about 29 C.F.R. §1630.6.
- Employers should minimize the chances that algorithmic decision-making tools will disadvantage individuals with disabilities, either intentionally or unintentionally. Promising practices include:
- Using algorithmic decision-making tools that have been designed to be accessible to individuals with as many different kinds of disabilities as possible, thereby minimizing the chances that individuals with different kinds of disabilities will be unfairly disadvantaged in the assessments. User testing is a promising practice.
My Thoughts: don’t forget that the ADA is an individualized analysis with no exceptions.
- Informing all job applicants and employees who are being rated that reasonable accommodations are available for individuals with disabilities, and providing clear and accessible instructions for requesting such accommodations.
My thoughts: always a good idea.
- Describing, in plain language and in accessible formats, the traits that the algorithm is designed to assess, the method by which those traits are assessed, and the variables or factors that may affect the rating.
My Thoughts: I am a big believer in plain language. The rest of this particular paragraph sets up the tension between proprietary information and proving up a screen out claim.
- Employers may also seek to minimize the chances that algorithmic decision-making tools will assign poor ratings to individuals who are able to perform the essential functions of the job, with a reasonable accommodation if one is legally required. Promising practices include:
- Ensuring that the algorithmic decision-making tools only measure abilities or qualifications that are truly necessary for the job—even for people who are entitled to an on-the-job reasonable accommodation.
- Ensuring that necessary abilities or qualifications are measured directly, rather than by way of characteristics or scores that are correlated with those abilities or qualifications.
My Thoughts: both of the bullets immediately above are excellent preventive law approaches.
- Before purchasing an algorithmic decision-making tool, an employer should ask the vendor to confirm that the tool does not ask job applicants or employees questions that are likely to elicit information about a disability or seek information about an individual’s physical or mental impairments or health, unless such inquiries are related to a request for reasonable accommodation. (The ADA permits an employer to request reasonable medical documentation in support of a request for reasonable accommodation that is received prior to a conditional offer of employment, when necessary, if the requested accommodation is needed to help the individual complete the job application process.)
My thoughts: this assumes that the disability is not obvious or known, which as mentioned before are elastic terms.
Promising Practices for Job Applicants and Employees Who Are Being Assessed by Algorithmic Decision-Making Tools
- What should I do to ensure that I am being assessed fairly by algorithmic decision-making tools?
If you have a medical condition that you think might qualify as an ADA disability and that could negatively affect the results of an evaluation performed by algorithmic decision-making tools, you may want to begin by asking for details about the employer’s use of such tools to determine if it might pose any problems related to your disability.
My Thoughts: it will be interesting to see how receptive the AI vendor is to this approach because of the worry about disclosing proprietary information. Vendors and employers want to be careful about retaliating against any individual that seeks this information, especially since the EEOC is suggesting that the information should be sought out in the first place.
If so, you may want to ask for a reasonable accommodation that allows you to compete on equal footing with other applicants or employees.
For example, if an employer’s hiring process includes a test, you may wish to ask for an accessible format or an alternative test that measures your ability to do the job in a way that is not affected by your disability. To request a reasonable accommodation, you need to notify an employer representative or official (for example, someone in Human Resources) or, if the employer is contracting with a software vendor, the vendor’s representative or the employer, that you have a medical condition, and that you need something changed because of the medical condition to ensure that your abilities are evaluated accurately.
Note that if your disability and need for accommodation are not obvious or already known, you may be asked to submit some medical documentation in support of your request for accommodation.
My Thoughts: keep any request for medical documentation reasonable and narrowly focused.
To find out more about asking for reasonable accommodations, see Enforcement Guidance on Reasonable Accommodation and Undue Hardship under the ADA, available at https://www.eeoc.gov/laws/guidance/enforcement-guidance-reasonable-accommodation-and-undue-hardship-under-ada.
If you only discover that an algorithmic decision-making tool poses a problem due to your disability after the evaluation process is underway, you should notify the employer or software vendor as soon as you are aware of the problem and ask to be evaluated in a way that accurately reflects your ability to do the job, with a reasonable accommodation if one is legally required.
If you have already received a poor rating generated by an employer’s use of an algorithmic decision-making tool, you should think about whether your health condition might have prevented you from achieving a higher rating. For example, might a disability have negatively affected the results of an assessment, or made it impossible for you to complete an assessment? If so, you could contact the employer or software vendor immediately, explain the disability-related problem, and ask to be reassessed using a different format or test, or to explain how you could perform at a high level despite your performance on the test.
- What do I do if I think my rights have been violated?
If you believe that your employment-related ADA rights may have been violated, the EEOC can help you decide what to do next. For example, if the employer or software vendor refuses to consider your request for a reasonable accommodation to take or re-take a test, and if you think that you would be able to do the job with a reasonable accommodation, you might consider filing a charge of discrimination with the EEOC. A discrimination charge is an applicant’s or employee’s statement alleging that an employer engaged in employment discrimination and asking the EEOC to help find a remedy under the EEO laws.
If you file a charge of discrimination, the EEOC will conduct an investigation. Mediation, which is an informal and confidential way for people to resolve disputes with the help of a neutral mediator, may also be available. Because you must file an EEOC charge within 180 days of the alleged violation in order to take further legal action (or 300 days if the employer is also covered by a state or local employment discrimination law), it is best to begin the process early. It is unlawful for an employer to retaliate against you for contacting the EEOC or filing a charge.
If you would like to begin the process of filing a charge, go to our Online Public Portal at https://publicportal.eeoc.gov, visit your local EEOC office (see https://www.eeoc.gov/field-office for contact information), or contact us by phone at 1-800-669-4000 (voice), 1-800-669-6820 (TTY), or 1-844-234-5122 (ASL Video Phone).
For general information, visit the EEOC’s website (https://www.eeoc.gov).
This information is not new policy; rather, this document applies principles already established in the ADA’s statutory and regulatory provisions as well as previously issued guidance. The contents of this publication do not have the force and effect of law and are not meant to bind the public in any way. This publication is intended only to provide clarity to the public regarding existing requirements under the law. As with any charge of discrimination filed with the EEOC, the Commission will evaluate alleged ADA violations involving the use of software, algorithms, and artificial intelligence based on all of the facts and circumstances of the particular matter and applicable legal principles.
[1] To establish a screen out claim, the individual alleging discrimination must show that the challenged selection criterion screens out or tends to screen out an individual with a disability or a class of individuals with disabilities. See 42 U.S.C. § 12112(b)(6); 29 C.F.R. § 1630.10(a). To establish a defense, the employer must demonstrate that the challenged application of the criterion is “job related and consistent with business necessity,” as that term is understood under the ADA, and that “such performance cannot be accomplished by reasonable accommodation.” 42 U.S.C. §§ 12112(b)(6), 12113(a); 29 C.F.R. §§ 1630.10(a), 1630.15(b); 29 C.F.R. pt. 1630 app. §§ 1630.10, 1630.15(b) and (c). A different defense to a claim that a selection criterion screens out or tends to screen out an individual with a disability or a class of individuals with disabilities is available when the challenged selection criterion is safety-based. See 42 U.S.C. § 12113(b); 29 C.F.R. § 1630.15(b)(2).
My Thoughts:
- These concepts blur into each other, making it really complicated to figure out the burden of proof. What the EEOC is saying is that the person with the disability has to allege that the criterion screens out or tends to screen out an individual with a disability or a class of individuals with disabilities. Then, the employer has to demonstrate that the challenged application of the criterion is job-related and consistent with business necessity as those terms are understood under the ADA, and that such performance cannot be accomplished by reasonable accommodations. For what is job-related and business necessity, see this blog entry for example.
- “Safety-based” can be so vague that you could drive a semi-truck through it. That is certainly what the PHP industry is trying to do to circumvent the requirements of the ADA. If safety-based is the claim, plaintiffs’ attorneys need to thoroughly analyze such claims so as to keep them in check.
[2] 42 U.S.C. § 2000e-2(a)(2), (k).
[3] When applying the tool to current employees or other subjects, there will generally be no way to know who has a disability and who does not.
[4] When employers or vendors claim that a tool designed to help employers decide which job applicants to hire has been “validated,” or that such a tool is a “valid predictor” of job performance, they may mean that there is evidence that the tool measures a trait or characteristic that is important for the job, and that the evidence meets the standards articulated in the Uniform Guidelines on Employee Selection Procedures (“UGESP”), 29 C.F.R. §§ 1607.5–9. UGESP articulates standards for compliance with certain requirements under Title VII. UGESP does not apply to disability discrimination. 29 C.F.R. pt. 1630 app. § 1630.10(a) (“The Uniform Guidelines on Employee Selection Procedures . . . do not apply to the Rehabilitation Act and are similarly inapplicable to this part.”).
[5] Note, however, that the ADA permits employers to request reasonable medical documentation in support of a request for reasonable accommodation, when necessary. This may be done prior to a conditional offer of employment if the request is for a reasonable accommodation that is needed to help the individual complete the job application process.