Originally published by Adam Faderewski.
With the growing presence of artificial intelligence in daily life, from self-driving cars and facial recognition to personalized Amazon search results and smart personal assistants like Amazon's Alexa, questions are mounting about what rules should govern the technology.
Andrew Burt, chief privacy officer and legal engineer at Immuta, discussed the laws that currently regulate AI and how future laws governing AI should be crafted during his session "Regulating AI: How to Control the Unexplained" at SXSW in Austin.
A new European Union law, the General Data Protection Regulation (GDPR), goes into effect in May. Among other provisions, the GDPR gives consumers the right not to be subject to significant decisions made solely by automated processing, including AI, without human involvement.
The Future of AI Act, proposed in Congress, would establish a federal advisory committee to examine how technologies like automation and machine learning affect society. Senators Maria Cantwell, Ed Markey, and Todd Young introduced the bill in the Senate, and Representatives John Delaney and Pete Olson sponsored a companion bill in the House.
In January, New York City formed a committee charged with making information about the automated processes and AI that city agencies use available to the public.
Burt said these measures have their upsides, but that no single regulation should or could be the answer to AI.
"We absolutely should not set up a Federal Department of AI today," Burt said. "AI is simply too many different applications of technologies in too many different areas to have one meaningful solution."
As an example, Burt said the laws guiding AI in medicine should not apply to how AI works in spam filtering or in recommending news articles on one's Facebook feed. Instead, he said, AI laws need to be tailored to specific industries and specific use cases.
These laws would share common ground, addressing issues such as the choices AI systems make, the tradeoff between accuracy and transparency, data management, the maturity and volume of input data, and how output data is used. Burt said these areas would be central to creating "clearer standards for data scientists and developers who create these models."
Burt also said that the makers of software need to have some liability for the programs they create and the decisions those programs make.
"The makers of software, I think, need to be held clearly liable when that software causes specific harms," Burt said. "They need to understand where this liability exists before they start creating these models."