Legal Status of Artificial Intelligence - II: Legal Responsibility Arising from the Use of Artificial Intelligence
In the law of liability, the rule is that a person is liable for the damage caused by his or her own unlawful act. In the law of obligations, fault liability is the main rule, and its sanction takes the form of an obligation to pay compensation. Liability for damage caused by an unlawful act thus rests on fault; in other words, the damage must have been caused by negligence or by an act involving intent. However, certain social considerations and the principle of equity require that persons who are not at fault in the occurrence of the damage nevertheless be held responsible for it. In such cases, an exceptional type of responsibility arises, referred to as 'strict liability'. What is meant here is not that the person is entirely blameless for the resulting damage; the person is liable even in the absence of fault. Moreover, if a person subject to strict liability by law is also at fault in the concrete case, the consequences of that liability are further aggravated and the responsibility increases.
Strict Liability in the Turkish Legal System
In the Turkish legal system, strict liability for damages arising from the use of artificial intelligence has not yet been accepted. In other words, neither the Turkish Code of Obligations (TCO) nor any other law provides for strict liability for machines equipped with artificial intelligence. For this reason, under current law, the opinions expressed about the strict liability of artificial intelligence cannot be applied in Turkish law until a concrete legal arrangement is made on this issue. Nevertheless, it may be possible to bring artificial intelligence within an existing type of strict liability.
At this point, the first option may be to regard artificial intelligence as an 'auxiliary person' and therefore to consider the employer's strict liability for persons working under him. However, for this type of liability to exist, there must be an employment relationship. People can use and operate smart machines, but for this to constitute an employment relationship, the operating machine would itself have to be accountable for its wrongful act; in other words, it would first have to be a person. This is because it is an employment relationship between persons that is contemplated by Article 66 of the TCO. In this context, a second option may be the liability of the animal keeper, since animals are likewise not accepted as persons in Turkish law, and animals also possess a certain intelligence, impulses and instinctive features. However, the liability arising from the use of artificial intelligence is a distinct situation that cannot be compared with the liability of the animal keeper, because artificial intelligence has highly developed cognitive characteristics and an extremely strong capacity for adaptation.
Regarding responsibility for damages caused by artificial intelligence, what perhaps first springs to mind in the current legal order is to evaluate the use of machines with artificial intelligence within the scope of producers' liability and the principle of liability for danger. Holding the person who manufactures or operates the artificial intelligence responsible towards consumers, especially where the artificial intelligence fails, seems to be a tangible solution. Nevertheless, none of these solutions offers anything new, and they are not sufficient to respond to the advanced types of artificial intelligence. In this respect, the liability question raised by artificial intelligence must be solved by a new arrangement, not by the existing rules.
Strict Liability of Artificial Intelligence
The first concrete legal text addressing the legal responsibility arising from the use of artificial intelligence is the European Parliament Report. This report first envisages granting artificial intelligence a personality, namely an 'electronic personality', and then proposes a new type of strict liability governing who will be responsible for the damages caused by artificial intelligence.
According to the report, more intelligent types of artificial intelligence point to a new industrial revolution that will affect every segment of society. The task of the legislator in such a conjuncture is to review the legal effects of artificial intelligence in a way that does not hinder new technological developments. As artificial intelligence becomes increasingly autonomous, it becomes difficult to regard it as a simple, ordinary technological device in the hands of producers, providers, sellers, users and similar actors. The existing rules of liability law are therefore not sufficient, especially in cases where the damage cannot be traced directly to the harmful action of a particular person. Although the report points out that artificial intelligence has many aspects, determining its legal responsibility is especially important, and legal regulation of artificial intelligence should therefore begin with the issues of private law. To this end, the Report proposes a new kind of strict liability for the compensation of damages caused by intelligent autonomous robots, under which it suffices to prove the causal link between the harm and the action or inaction of the artificial intelligence.
Under the existing liability laws in force around the world, it is not possible to hold a person responsible for damages caused to third parties by the harmful actions or negligence of the artificial intelligence itself. Where artificial intelligence malfunctions, fails or gets out of control, responsibility belongs in particular to its manufacturer or user. However, as artificial intelligence becomes autonomous and interacts with its environment in line with its unpredictable adaptive characteristics, the applicable rules on liability will no longer be sufficient. The Report therefore provides for a specific type of strict liability.
The regulation foreseen by the report is considered a special kind of strict liability. For this reason, strict liability for artificial intelligence differs from the existing types of liability in the law of obligations. There are two main reasons for this qualification.
- First, according to the report, liability should be proportionate to the actual level of instructions given to the artificial intelligence and to its degree of decision-making autonomy. Accordingly, as the learning capacity or autonomy of artificial intelligence increases, the responsibility of the natural or legal person answerable for the damages resulting from its actions or inaction should decrease. In fact, theoretically, once this development is completed in the future, natural or legal persons may bear no responsibility at all for damages caused by completely autonomous robots that are aware of their own consciousness. This strict liability therefore stands as a transitional form: with the development of technology, when highly intelligent forms of artificial intelligence are created, it has the potential to evolve into a situation where the artificial intelligence alone is responsible. In this sense, the proposal intends that no matter how far artificial intelligence develops in the future, this new principle of strict liability will not become outdated or dysfunctional.
- Second, this type of strict liability differs in another respect from the existing strict liability arrangements. According to the Report, a compulsory insurance system supported by a special compensation fund should be established in order to compensate the damages caused by artificial intelligence. This last proposal is groundbreaking in terms of insurance law and existing insurance systems; however, it is evident that practice will need to become widespread before it is clear how this system will be implemented.
The debate on granting personality to artificial intelligence has changed dimension especially after the European Parliament's proposal of an electronic personality and a new kind of strict liability. Furthermore, the new strict liability the report proposes does not provide for a specific ground of exculpation; it derives from the missing aspects of the existing legal system and aims to eliminate them. However, these issues are still being discussed intensively in the doctrine, and opinions that regard artificial intelligence as property or as a legal person still persist. The solution chosen will also affect how responsibility for damage caused by artificial intelligence is determined.
In Turkey, there are no concrete legal regulations regarding artificial intelligence. However, it is possible to follow the developments of artificial intelligence in the world, to keep up with technological innovations, and not to overlook the legal aspects of these issues. It is therefore essential for Turkey to make legal regulations regarding artificial intelligence and to respond to the issues concerning its legal liability and status.