Artificial intelligence itself can hardly be “to blame” for its “actions” and their consequences. Yet AI can cause events that encroach on the property of other parties. Who, then, bears responsibility for the actions of artificial intelligence in such cases?
AI can be involved in offenses such as:
- an accident caused by an unmanned vehicle;
- a window broken by an AI-controlled drone;
- injuries inflicted in a factory by an AI-driven robot;
- spoiled contents of a smart refrigerator caused by an error in automatic temperature selection;
- an AI system sending a bailiff to the wrong person, etc.
The fundamental questions that need answering are these:
What is the basis of liability for such offenses, and who should bear it? How should we legally classify offenses related to the functioning of artificial intelligence?
Contrary to appearances, these questions are not new: the legal debate on them has been going on for at least several years, and there are many possible answers. The stakes keep growing, for example, for anyone running a business.
Let’s consider specific cases of responsibility for the actions of artificial intelligence.
- Damage caused by AI and the person who uses it
- “Responsibility gap”
- Responsibility for the actions of artificial intelligence: choosing policies and laws
- Responsibility of the owner of an autonomous vehicle
- Who is to blame?
- Responsibility for AI: an analogy with pets
- AI “supervision”
- Production powered by natural energy
- Responsibility for the actions of artificial intelligence: the principle of contractual responsibility
- AI sanctions
- Liability for hazardous products
- Responsibility for the actions of artificial intelligence: how does the hazardous product regime work?
- Legal responsibility for the actions of artificial intelligence: what should be our approach?
Damage caused by AI and the person who uses it
The potential harms caused by the use of artificial intelligence are varied, and so are their causes. Keep in mind that AI can act as:
- a “pure software” tool for working with data;
- an operator of autonomous machines and vehicles (cars, drones, robots, etc.).
The damage can be:
- material (damage to property);
- non-material (harm to a person);
- caused by a person using artificial intelligence (AI as a tool);
- caused by AI “autonomously”, without direct human involvement.
Offenses caused by humans, such as a hacker attack carried out with the help of artificial intelligence, do not pose a big problem from the standpoint of liability analysis: they can be classified as traditional offenses committed with a certain tool. The global problem arises when artificial intelligence “independently”, without the knowledge and will of a person, causes an event that leads to damage.
“Responsibility gap”
There is a concept of a “responsibility gap”: a situation in which the law offers no basis for imposing liability. Can such a situation really exist in a legal system?
First, at first glance there are so many actors who could potentially be held responsible that it is difficult to identify the right “culprit”:
- the algorithm developer;
- the product manufacturer;
- the entity that implements or trains the AI model;
- the end user, etc.
Second, remember that reality is always one step ahead of the law, so loopholes inevitably appear. In such cases, civil law has for many years relied on its main weapon: analogy. Analogy is an absolutely basic, common technique in civil law and an indisputably useful one. So even if we were dealing with something like a “responsibility gap”, that gap can (and should) be filled by analogy. The civil law system does not tolerate a vacuum.
Responsibility for the actions of artificial intelligence: choosing policies and laws

The key issue in the context of civil liability for AI is the choice of the principle on which that liability rests:
- fault (often framed in civil law as non-performance; it places the heaviest burden of proof on the victim);
- fault in supervision;
- risk (much more favorable for the injured party);
- equity;
- guarantee (in the broader sense, it appears mainly in insurance contracts).
For simple (or “weak”) AI designed to perform tasks of relatively low complexity, there are no barriers to applying the fault principle, as long as it really is only a tool in a person's hands (i.e., its level of autonomy is low). Using artificial intelligence in this way is no different from, say, deliberately setting a dog on a passerby (the owner giving the attack command) or using a crowbar to break in.
The problem arises when a person does not control the artificial intelligence, i.e., the level of AI autonomy is high. The fault principle then starts to feel inadequate: an AI operator or manufacturer should not be able to “wash their hands of it” just because the artificial intelligence did not behave as expected.
In practice, the entrepreneur will be liable for all consequences of the AI's work, unless those consequences were caused by circumstances outside the AI's operation, for example, when:
- someone jumps under the wheels of an unmanned vehicle;
- an AI error results from a hacker attack that was impossible to defend against, etc.
A consumer would be liable for damage caused by artificial intelligence if they used it incorrectly (“not in accordance with the instructions”). It is an interesting concept, but for now just a concept: there are no legal grounds for it yet. Responsibility for the actions of artificial intelligence will keep transforming in this respect.
Responsibility of the owner of an autonomous vehicle

Existing law already provides for the liability of the owner of a motor vehicle. An autonomous vehicle has all the functions of a traditional one, plus an additional software-based control layer.
For an unmanned vehicle, the question is who the responsible person should be: the software developer, the car manufacturer, or perhaps the user?
Today, responsibility lies with the user (driver), the person who actually drives the car, and not only with its owner. Such liability is based on the risk principle: it is excluded only by force majeure or the exclusive fault of the victim or a third party. This is justified by the significant danger involved in operating such a vehicle.
It seems, however, that in the future AI-controlled cars may pose less danger on the roads than human drivers.
The essence of a self-driving car is that the “operator's” role, beyond choosing the destination, is reduced to that of a passive passenger. So why should they bear the risk of liability for a bug in the software that controls the car or in its sensors?
Legislative intervention is needed here: one that assigns responsibility for incidents involving driverless cars to their manufacturers or distributors. The debate is complicated by the fact that the expected future dominance of driverless cars on city streets could profoundly change the model of car ownership.
As the creators of, and the entities that control, the AI models that drive these cars (and, to some extent, the physical sensors installed in them), manufacturers and distributors are the most appropriate bearers of these obligations.
Who is to blame?
The obligations under consideration are not absolute and are subject to exceptions. These cover situations where the damage was caused:
- by force majeure;
- solely by the fault of the aggrieved party;
- solely by the fault of a third party for whom the owner is not responsible.
What immediately draws attention is the requirement of exclusive fault of the victim or the third party. The liability may look nearly absolute; in practice, the chances of proving that the damage was caused solely by the fault of one of these two parties are slim.
There is clearly a need to change the laws before self-driving vehicles are allowed into general use. This also matters for the compulsory insurance system: the laws on compulsory insurance set compensation within the “limits of civil liability” of the vehicle's owner or driver.
In addition, it is unclear how this will apply to drones, i.e., objects moving through the air that are used for:
- entertainment;
- taking photos;
- recording video;
- transporting parcels.
In our opinion, AI-controlled drones should be regulated, in terms of liability, in the same way as self-driving cars.
Responsibility for AI: an analogy with pets

It is not obvious, but the possibility of applying the same legal provisions that establish liability for animals deserves attention. Comparing AI to an animal provokes a certain internal conflict, because the analogy seems both apt and inaccurate:
- on the one hand, artificial intelligence is an intangible, artificial creation of man, the result of complex programming and mathematical work;
- on the other hand, an animal is a living being, with a level of consciousness and intelligence limited compared to humans.
At the same time, both artificial intelligence (at its current level of development) and animals share certain features that can become legally significant. First of all, both are characterized by at least some degree of autonomy: they are able to learn, acquire new skills and act independently. Yet one cannot speak of their awareness (in the full sense) or intent. People have enormous influence over them, but cannot determine or predict their every behavior.
AI “supervision”
Animals and AI models remain under human control, which in itself suggests the term “supervision”. For these reasons, neither an animal nor artificial intelligence can currently be assigned anything like fault for its actions. That fault, however, can be placed on the person who controls it. Can liability for AI behavior therefore be based on the same legal provisions as liability for pets?
Anyone who keeps or uses an animal is obliged to compensate for the damage it causes, regardless of whether the animal was under supervision, went astray or ran away, unless neither the keeper nor a person for whom the keeper is responsible is at fault. Even when the person using the animal is not at fault, the injured party may still demand full or partial compensation from them.
Applied to AI, this would mean that the AI operator is responsible for its actions unless, despite carefully following the instructions, the AI got out of control. Future legislation may explicitly provide a similar mechanism for AI.
Production powered by natural energy
Liability is sometimes proposed:
- for an AI administrator who is not an entrepreneur, based on fault;
- for an AI administrator who is an entrepreneur, based on risk.
It is an interesting concept, but for now just a concept.
Strict liability is borne by anyone who operates, at their own expense, an enterprise or installation powered by the forces of nature:
- steam;
- gas;
- electricity;
- liquid fuel, etc.
This “power from natural energy” is understood to mean that the enterprise directly uses the forces of nature, converting natural forms of energy through machines or other devices that those forces set in motion, and that the operation of the enterprise depends on the use of this energy.
Thus, it appears that companies using robots or machines controlled to some extent by artificial intelligence may be covered by this regime, and in time by dedicated artificial intelligence laws.
This will affect power plants, mines, factories, automated warehouses and airports. In these examples, the premise of “converting” energy to “set in motion” the enterprise is satisfied, and with it the enterprise's functioning through the “work” of robots or machines.
It is worth noting that the question of AI control is irrelevant here; what matters is the enterprise's energy source.
What about an enterprise in which software is an important driving part? Or one whose activity is confined entirely to the virtual realm (for example, a virtual power plant controlled by artificial intelligence)? Applying the risk principle to the operation of such an enterprise would mean the following: its administrator is liable for any damage caused to a third party, even without fault (for example, when something unexpected happens despite good-faith supervision of the enterprise).
The limit of liability here is force majeure, or the exclusive fault of the aggrieved party or of a third party for whom the administrator is not responsible (so that damage caused, for example, solely by an employee's violation of obvious safety rules is exempt).
In our opinion, it would be desirable to create a similar mechanism for AI in future legislation.
Responsibility for the actions of artificial intelligence: the principle of contractual responsibility

Contractual liability (for non-performance or improper performance of a contract) does not require special changes in response to the development of AI.
This type of obligation is based on the principle of the parties' freedom of contract:
If you can agree on what will be done, you had better also agree on the consequences of it not being done.
This freedom is exercised through the contract itself and is subject to certain statutory limits.
For many years, the IT market has had a set of standard terms repeated in almost every contract, including:
- exclusion of liability for lost profits (for example, if a bank's transaction system “breaks down” due to the fault of the IT supplier, the supplier is liable up to the cost of repairing the “broken” elements, but not for the value of the transactions the bank would have made had the system not failed);
- a cap on total liability for damages equal to the contract value (for example, 100% of the fee for implementing and licensing the system); a numerical sketch of how such a cap operates follows this list;
- exceptions to this cap, which currently most often cover (beyond intentional fault, which is excepted by law): gross negligence, damage caused by a legal defect in the product, disclosure of confidential information, personal data breaches, and violations of important cybersecurity principles;
- exclusion of warranty for physical defects.
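To make the interaction of these clauses concrete, here is a minimal sketch in Python with purely hypothetical figures; the contract value, loss amounts and exception flag are all invented for illustration, and nothing here reflects the wording of any specific statute or contract:

```python
# Illustrative only: hypothetical numbers showing how a lost-profits
# exclusion and a liability cap interact. Not legal advice.

CONTRACT_VALUE = 100_000  # hypothetical fee for implementation and licensing

def recoverable_damages(direct_loss: float,
                        lost_profits: float,
                        cap_exception_applies: bool = False) -> float:
    """Amount a customer could claim under the clause pattern above."""
    claim = direct_loss  # lost profits are excluded from the claim entirely
    if cap_exception_applies:
        # e.g. gross negligence or a personal-data breach lifts the cap
        return claim
    return min(claim, CONTRACT_VALUE)  # cap at 100% of the contract value

# A system failure causes 250,000 in repair costs and 1,000,000 in missed
# transactions: only the repair costs count, and the cap then limits them.
print(recoverable_damages(250_000, 1_000_000))        # -> 100000
print(recoverable_damages(250_000, 1_000_000, True))  # -> 250000
```

The point is simply that the order of operations matters: excluded categories are removed from the claim first, and only then is the cap applied, unless one of the listed exceptions lifts it.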
This standard is expected to be maintained for AI contracts as well. Changes brought by future AI law may include expanding the list of exceptions to the liability cap.
AI sanctions
The sanctions provided for in the contract should be properly matched with well-described requirements for the AI model, including quality parameters, in the contract itself. These requirements and parameters cannot be a simple copy of existing IT contract provisions with a new AI label; they must be adapted to the goals pursued with the artificial intelligence.
Regardless of who is, or will in the future be, responsible for damage caused by specific AI systems or models, anyone entering into contracts today for their supply, training, development and maintenance needs to show imagination and protect themselves accordingly.
Entities that are (or may be) liable as manufacturers or distributors of AI products (or services) are not necessarily classic suppliers. They can be:
- transport companies providing autonomous taxis;
- banks using AI;
- medical institutions using AI for diagnostics;
- factories using artificial intelligence to control industrial robots, and so on.
When “buying” AI that will be used for customer service, companies should put appropriate contractual mechanisms in place. If a consumer reports a loss, these mechanisms will allow the company to “pass on” the costs incurred to the supplier (so-called recourse) or to involve the supplier in active defense against third-party claims.
Liability for hazardous products

When considering whether the provisions on hazardous products can apply to artificial intelligence, we face a fundamental problem: a product is defined as a movable thing, even if it is connected to another thing (animals and electricity are also treated as products).
Can provisions concerning things, i.e., material objects within the meaning of the Civil Code, be applied to artificial intelligence? For an AI model or software as such, the answer is no: regulations on hazardous products apply to things as tangible objects, not to software.
However, there is no barrier to bringing AI within the product liability regime when it is an element of a tangible object, i.e., software that controls that object (for example, a robot). What justifies this qualification is that the artificial intelligence is “anchored” in the material world and “connected” to the thing. “Pure” software, by contrast, cannot be covered by this regime.
Responsibility for the actions of artificial intelligence: how does the hazardous product regime work?
A product is dangerous when it does not provide the safety one would expect from its normal use; it may behave unexpectedly, for example, a phone overheats and causes burns, or a food processor explodes. In such cases, the manufacturer of the product (and, to some extent, the distributor or importer) is liable for the damage the dangerous product causes.
This seems like an interesting legal avenue for AI used in robots, drones and other vehicles, household appliances, toys, sensors and other devices.
Please note that liability for a dangerous product is subject to numerous restrictions (a sketch of this exemption logic follows the list), including when:
- the dangerous properties of the product became apparent only after it was placed on the market, unless they stem from a cause already inherent in the product;
- the dangerous properties of the product could not have been foreseen given the state of science and technology at the time it was placed on the market (or when those properties arose from compliance with legal provisions);
- with respect to items destroyed by the dangerous product, they were not intended mainly for personal use (meaning the protection against property damage may in practice offer little to entrepreneurs).
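As a rough illustration, the restrictions above can be read as a chain of exemption tests. The sketch below is a simplification with invented field names, not a statement of any statute's actual wording:

```python
from dataclasses import dataclass

@dataclass
class Incident:
    defect_present_at_market_entry: bool  # did the cause pre-date market placement?
    foreseeable_by_state_of_art: bool     # could science/tech at the time reveal it?
    item_mainly_for_personal_use: bool    # relevant only for property-damage claims

def producer_liable(i: Incident) -> bool:
    """True if none of the listed exemptions applies (simplified)."""
    if not i.defect_present_at_market_entry:
        return False  # dangerous property appeared only after market placement
    if not i.foreseeable_by_state_of_art:
        return False  # "state of the art" (development-risk) exemption
    if not i.item_mainly_for_personal_use:
        return False  # property-damage protection barely covers entrepreneurs
    return True

print(producer_liable(Incident(True, True, True)))   # -> True: no exemption applies
print(producer_liable(Incident(True, False, True)))  # -> False: state-of-the-art defense
```

For the victim, the practical consequence is that each exemption is a separate line of defense for the manufacturer: all of them must fail for the claim to succeed.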
Despite these limitations, the hazardous product liability regime seems to offer real protection in the realm of “everyday life” with AI.
Legal responsibility for the actions of artificial intelligence: what should be our approach?
The issue of civil liability for AI is highly fragmented. Some existing institutions seem ready for the new challenges posed by artificial intelligence. Others require minor modifications, sometimes merely an opening toward AI. There are also areas that may undergo significant transformation, above all through legislative activity.
The accelerating presence of artificial intelligence in the world around us will produce new losses caused by its action or inaction. The growing number of AI-related problems will require legal responses:
- at the level of law-making;
- at the level of application (courts can shift the boundaries of civil law institutions through new interpretations of existing rules).
The status quo is complex, but it provides a good foundation for dealing with AI.
Organizations considering investing in artificial intelligence solutions should not treat civil liability as a “blocking factor”. On the contrary, in this respect the law needs only minor parameterization and reconfiguration, not a structural rewrite.
Nevertheless, you should expect surprises and be prepared for the situation to develop in different directions.
Certain risks may arise from the actions of legislators and courts regarding tort liability for AI. These risks need to be anticipated and built into contracts between AI solution providers and their customers, and then into the contracts through which products or services using artificial intelligence are further distributed.
Overall, imagination, legal intuition and knowledge of AI are essential qualities for lawyers dealing with AI contracts.
Legal responsibility for the actions of artificial intelligence will keep transforming. In the near future, we can expect many changes and improvements in the legislation regulating artificial intelligence.
What do you think about it? Leave your opinion in the comments below ☟.