Artificial intelligence and the law

morris

by Jeremy Elman

Laws govern the conduct of humans, and sometimes the machines that humans use, such as cars. But what happens when those cars become human-like, as in artificial intelligence that can drive cars? Who is responsible for any laws that are violated by the AI?

This article, written by a technologist and a lawyer, examines the future of AI law.

The field of AI is in a sort of renaissance, with research institutions and R&D giants pushing the boundaries of what AI is capable of. Although most of us are unaware of it, AI systems are everywhere, from bank apps that let us deposit checks with a picture, to everyone’s favorite Snapchat filter, to our handheld mobile assistants.

Currently, one of the next big challenges that AI researchers are tackling is reinforcement learning, which is a training method that allows AI models to learn from their past experiences. Unlike other ways of building AI models, reinforcement learning feels more like sci-fi than reality. With reinforcement learning, we create a grading system for our model, and the AI must determine the best course of action in order to get a high score.
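(For readers unfamiliar with the technique, here is a minimal sketch of that idea; the names and numbers are invented for illustration and are not from the article. The "grading system" is a reward function, and the agent simply drifts toward whichever action has earned the best score.)

```python
import random

# Minimal illustrative sketch of reinforcement learning (invented example).
# The "grading system" is the reward function; the agent tries actions,
# observes the score, and gradually prefers the highest-scoring action.

ACTIONS = ["wait", "go"]
q_values = {a: 0.0 for a in ACTIONS}  # running estimate of each action's score
LEARNING_RATE = 0.1
EXPLORATION = 0.2                     # how often to try something random

def reward(action):
    """Made-up grading system: 'go' usually pays off, 'wait' never does."""
    return 1.0 if action == "go" and random.random() > 0.3 else 0.0

for step in range(1000):
    if random.random() < EXPLORATION:
        action = random.choice(ACTIONS)           # explore
    else:
        action = max(q_values, key=q_values.get)  # exploit best-known action
    q_values[action] += LEARNING_RATE * (reward(action) - q_values[action])

print(q_values)  # 'go' ends up with the higher estimated score
```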

Research into complex reinforcement learning problems has shown that AI models are capable of finding varying methods to achieve positive results. In the years to come, it might be common to see reinforcement learning AI integrated with more hardware and software solutions, from AI-controlled traffic signals capable of adjusting light timing to optimize the flow of traffic to AI-controlled drones capable of optimizing motor revolutions to stabilize videos.

How will the legal system treat reinforcement learning? What if the AI-controlled traffic signal learns that it’s most efficient to change the light one second earlier than previously done, but that causes more drivers to run the light and causes more accidents?
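(To make that worry concrete, here is a hypothetical version of the traffic signal's grading system; the function and numbers are invented for illustration. If the score counts only cars moved per cycle, nothing in it pushes back when a shorter light causes accidents.)

```python
# Hypothetical reward for the traffic-signal example (invented numbers).
# Throughput is all that gets graded, so accidents never hurt the score.

def naive_reward(cars_through_per_cycle, accidents):
    return cars_through_per_cycle  # accidents are invisible to the agent

# Shaving a second off the light moves one more car but causes an accident;
# the agent still sees it as an improvement:
print(naive_reward(41, accidents=1) > naive_reward(40, accidents=0))  # True
```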

Traditionally, the legal system's treatment of software and robotics finds liability only where the developer was negligent or could foresee harm. For example, in Jones v. W + M Automation, Inc., a New York state case from 2007, the court did not find the defendant liable when a robotic gantry loading system injured a worker, because it found that the manufacturer had complied with regulations.

But in reinforcement learning, there is no fault by humans and no foreseeability of such an injury, so traditional tort law would say that the developer is not liable. That certainly will pose Terminator-like dangers if AI keeps proliferating with no responsibility.

The law will need to adapt to this technological change in the near future. It is unlikely that we will enter a dystopian future where AI is held responsible for its own actions, given personhood and hauled into court. That would assume that the legal system, developed over more than 500 years of common law in courts around the world, could simply be adapted to the new situation of an AI defendant.

An AI is by design artificial, and thus ideas such as liability or a jury of peers appear meaningless. A criminal courtroom would be incompatible with AI (unless the developer intends to create harm, which would be its own crime).

But really the question is whether the AI should be liable if something goes wrong and someone gets hurt. Isn't that the natural order of things? We don't regulate non-human behavior, like animals or plants or other parts of nature. Bees aren't liable for stinging you. After considering the capabilities of the court system, the most likely reality is that the world will need to adopt a standard for AI in which manufacturers and developers agree to abide by general ethical guidelines, such as through a technical standard mandated by treaty or international regulation. And this standard will be applied only when it is foreseeable that the algorithms and data can cause harm.

This likely will mean convening a group of leading AI experts, such as OpenAI, and establishing a standard that includes explicit definitions for neural network architectures (a neural network contains instructions to train an AI model and interpret an AI model), as well as quality standards to which AI must adhere.

Standardizing what the ideal neural network architecture should be is somewhat difficult, as some architectures handle certain tasks better than others. One of the biggest benefits that would arise from such a standard would be the ability to substitute AI models as needed without much hassle for developers.
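(As a rough picture of what "substituting AI models without much hassle" could mean in practice, here is a hypothetical common interface; it is not an existing standard, and the names are invented. If every certified model exposes the same contract, the surrounding application code never needs to change when one model is swapped for another.)

```python
from typing import Any, Protocol

# Hypothetical interface such a standard might define (invented for illustration).
class StandardModel(Protocol):
    def load(self, path: str) -> None: ...
    def predict(self, inputs: Any) -> Any: ...

def run_application(model: StandardModel, batch: Any) -> Any:
    # The application relies only on the shared contract, so one vendor's
    # model can be replaced by another's without rewriting this code.
    return model.predict(batch)
```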

Currently, switching from an AI designed to recognize faces to one designed to understand human speech would require a complete overhaul of the neural network associated with it. While there are benefits to creating an architecture standard, many researchers will feel limited in what they can accomplish while sticking to the standard, and proprietary network architectures might be common even when the standard is present. But it is likely that some universal ethical code will emerge as conveyed by a technical standard for developers, formally or informally.

The concern for “quality,” including avoidance of harm to humans, will increase as we start seeing AI in control of more and more hardware. Not all AI models are created the same, as two models created for the same task by two different developers will work very differently from each other. Training an AI can be affected by a multitude of things, including random chance. A quality standard ensures that only AI models trained properly and working as expected would make it into the market.
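(A hedged sketch of what such a "quality standard" check might look like; the threshold and names are hypothetical. The idea is simply that a trained model must clear a minimum score on held-out test data, with training randomness pinned by a fixed seed, before it can ship.)

```python
# Hypothetical pre-market acceptance check (threshold and names invented).
REQUIRED_ACCURACY = 0.95  # minimum score on held-out test data
TRAINING_SEED = 42        # training would fix its random seed so the run is reproducible

def evaluate(model, test_inputs, test_labels):
    """Fraction of held-out examples the model predicts correctly."""
    predictions = [model(x) for x in test_inputs]
    correct = sum(p == y for p, y in zip(predictions, test_labels))
    return correct / len(test_labels)

def certify(model, test_inputs, test_labels):
    accuracy = evaluate(model, test_inputs, test_labels)
    if accuracy < REQUIRED_ACCURACY:
        raise RuntimeError(f"Rejected: accuracy {accuracy:.3f} is below the standard")
    return accuracy
```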

For such a standard to actually have any power, we will most likely need some sort of government intervention, which does not seem too far off, considering recent discussions in the British Parliament regarding the future regulation of AI and robotics research and applications. Although no concrete plans have been laid out, Parliament seems conscious of the need to create laws and regulations before the field matures. As the House of Commons Science and Technology Committee stated, "While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now." The report also mentions the need for "accountability" when it comes to deployed AI and the associated consequences.
 

Prevail

reinforcement learning, which is a training method that allows AI models to learn from their past experiences
Oh, supervised learning doesn't?:shaq:
What if the AI-controlled traffic signal learns that it’s most efficient to change the light one second earlier than previously done, but that causes more drivers to run the light and causes more accidents?
Really this should never happen and it should never be learning in place. Also how the fukk would it be more efficient to cause more accidents? Did the person who wrote the article program this? Does he realize you can punish the AI for behavior that causes more accidents?:stopitslime:
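In other words (a made-up sketch of that point, numbers invented): put accidents into the grading system with a heavy penalty and shaving a second off the light stops looking "efficient" to the agent.

```python
# Made-up sketch of "punishing the AI" for accidents (numbers invented).
ACCIDENT_PENALTY = 100.0

def shaped_reward(cars_through_per_cycle, accidents):
    return cars_through_per_cycle - ACCIDENT_PENALTY * accidents

# The extra car no longer pays for the accident it caused:
print(shaped_reward(41, accidents=1) < shaped_reward(40, accidents=0))  # True
```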
establishing a standard that includes explicit definitions for neural network architectures (a neural network contains instructions to train an AI model and interpret an AI model), as well as quality standards to which AI must adhere.
Nevermind decision trees/SVMs/state-action maps/KNNs/etc exist:mjgrin:

Currently, switching from an AI designed to recognize faces to one designed to understand human speech would require a complete overhaul of the neural network associated with it.
Except CNNs are at the top of both text classification and computer vision right now because they focus on how near things are to each other (features of objects for object detection/words for sentence detection)
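Roughly what that locality point looks like in code (a minimal PyTorch-style sketch, assuming torch is installed; shapes are just examples): the same convolution idea slides over neighboring words in 1-D for text and over neighboring pixels in 2-D for images.

```python
import torch
import torch.nn as nn

# Same locality idea, two shapes of data (illustrative sketch, assuming PyTorch).
text_conv = nn.Conv1d(in_channels=300, out_channels=64, kernel_size=3)  # spans 3 neighboring word vectors
image_conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3)   # spans 3x3 patches of neighboring pixels

sentence = torch.randn(1, 300, 20)    # 1 sentence: 20 words, 300-dim embeddings
image = torch.randn(1, 3, 224, 224)   # 1 RGB image, 224x224 pixels

print(text_conv(sentence).shape)   # torch.Size([1, 64, 18])
print(image_conv(image).shape)     # torch.Size([1, 64, 222, 222])
```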


This article, written by a technologist and a lawyer
And this is why people need to stop quoting shyt by "technologist" for their idea of AI.
 

morris

But then the damage is done. Time for the Law to puni$h someone
 

Prevail

That's why you teach the agent before you put it out. If failure isn't accounted for, whoever trained it is at fault.

In layman's terms, you cannot just throw a "child" into a driver's seat and expect him to drive. You have to teach him to drive; if he's fukking up constantly, apparently you aren't a good teacher or didn't teach him long enough.
 

morris

Oh, I agree. Which is why the laws will have to be changed, since developers are not held accountable.
 

Prevail

The developers need not be held accountable; they won't be feeding the AI the application-specific data.
The data scientist will.
The entity that should be held accountable is the one that makes their methods available. It's just like any other tech. If a plane's avionics fukk up, Lockheed/Northrop/etc. are held responsible. The same should and will apply to the entity that trained the AI and sold it.
Again, AI/machine learning doesn't work where you just throw in an untrained agent and expect it to work; any agent on the market will be pretrained or in continuous training, like Elon Musk's Tesla system.
 

morris

I'm with you, but big corporations always find a fall guy. Just like oil spills.

But you are correct
 