I. INTRODUCTION

With technological developments, machines powered by artificial intelligence have become an indispensable part of human life. As is known, robotic vehicles produced with artificial intelligence technology have begun to be used in many sectors, such as the farming industry, the food industry, and education.

Based on the definition of artificial intelligence as "software and hardware systems with many abilities such as displaying human behaviour, numerical logic, motion, speech and sound perception", it is important to determine who will bear legal responsibility for damages caused by artificial intelligence created with software coded by humans, and whether artificial intelligence will be included in the concept of "person" in the world of law.

In this context, we would like to focus on whether artificial intelligence, as defined in the world of law, can have an electronic personality ("e-personality"), and then on whether legal responsibility can be imposed on artificial intelligence in the light of European Parliament decisions.

II. THE CONCEPT OF PERSONALITY IN ARTIFICIAL INTELLIGENCE AND THE LEGAL RESPONSIBILITY OF ARTIFICIAL INTELLIGENCE ARISING FROM TORTS

It is controversial that artificial intelligence, which has human-specific abilities, can reason like a human, acquires many capabilities from the data it collects, and may in the future become a decision-making mechanism for humans, is nevertheless not granted any legal status under current legal regulations, and that rapidly changing technological developments may therefore remain outside the scope of legal norms.

Although responsibility for an error caused by negligence during the use of robotic tools developed with artificial intelligence technology is likely to fall on the software programmers and/or the persons involved in developing the robotic systems, the question arises, with the progress of robot technology, whether the robot itself can share in that responsibility. As is known, in order to speak of the criminal and/or civil liability of an entity under the legal disciplines, it must have both the capacity to hold rights and the capacity to act. Accordingly, the subject of the rights and obligations regulated by the rules of law is accepted to be the human being, and the concept of the person means being able to hold rights and owe debts.

With this definition, the concept of personality comprises all of a person's values, such as physical values including life and health, as well as honour, freedom, and professional and commercial reputation.1

Thus, the capacity to act regulated under Article 9 of the Turkish Civil Code ("TCC") No. 4721 essentially means that a person is able to produce legal results through his own decisions, that is, to acquire rights and incur debts by his own acts.

Based on the above-mentioned definitions and the TCC, although it is accepted that robots cannot currently have the capacity to act or a legal personality under this framework, the issue is the subject of various propositions in terms of both private and criminal law. Today, many countries are building their own military defence systems by developing Unmanned Aerial Vehicles ("UAVs") that make wide use of artificial intelligence technology.

As is known, UAVs are accepted as autonomous vehicles that move on their own and are produced to carry out critical tasks in military operations by means of artificial intelligence and derivative algorithms. Unlike semi-autonomous vehicles, an autonomous aircraft can perform its tasks through its pre-formed algorithms without being bound to human command and/or intervention.

Under these circumstances, it is not yet clear whether, in the long term, artificial intelligence tools and/or robots that can directly affect our daily lives, as well as the defence industry, will reach the highest level of autonomy in which they can take all decisions themselves, nor who will bear responsibility for damages arising from decisions taken without human intervention. In terms of local legislation, in addition to the TCC, if the responsibility of artificial intelligence is evaluated under the "Defect Liability" provisions of Article 49 of the Turkish Code of Obligations ("TCO") No. 6098 and the "Equity Liability" provisions of Article 69 of the TCO, liability may be attributed to the company officials who code the artificial intelligence, turn the coded technology into machinery, or sell the product in its final form.

However, thanks to studies and technological developments in the field of artificial intelligence, it has become possible for artificial intelligence to do many things of its own will: it can develop prejudices like people, it can replicate itself, it can learn new information and process that information without human help, and it can decide to set a target and strike that target on its own. With these developments, should artificial robots and other autonomous vehicles come to have human characteristics, artificial intelligence may fall within the concept of person defined in the TCC.

When the subject is evaluated in terms of criminal law, apart from private law norms, if artificial intelligence becomes the subject rather than the object of legal norms, it will not be equitable to hold the people who developed the autonomous systems responsible, as if they had committed the crimes, in all of these processes.

However, many legal regulations regarding artificial intelligence remain controversial because robots do not have the capacity to act under criminal law or private law norms, and unjust acts arising from artificial intelligence should be evaluated separately under criminal provisions, apart from private law norms. As is known, under criminal law, in order for a person to be held responsible for an act accepted as a crime, the person must have criminal capacity; and for criminal capacity to be accepted, the person must have the capacity to act, which is assessed as a whole together with qualities such as age, perception, and the ability to direct one's behaviour.

In this respect, criminal capacity is evaluated as a whole together with the conditions of "Ability to Distinguish" and "Majority", which relate to the capacity to act under the TCC. Accordingly, although robots developed with artificial intelligence are not fully expected to be granted a capacity to act as defined in the TCC or in criminal laws, an alternative concept of capacity to act must be defined, and the limits of legal responsibility must be clearly drawn as the limits of what artificial intelligence can do expand in the long term.

III. ASSESSMENT OF THE PERSONALITY AND LEGAL RESPONSIBILITY OF ARTIFICIAL INTELLIGENCE UNDER EUROPEAN PARLIAMENT DECISIONS

In statements made on 19.02.2020, the European Commission announced that new strategies had been developed for the future use of artificial intelligence and robots within the European Union and that, in this context, it had been decided to allocate 200 billion euros for the development of artificial intelligence and robot technology by 2030. In addition, in past years the European Parliament has prepared proposals on "Electronic Personality" and certain regulations on robot rights to detail the rights and responsibilities regarding robots and devices with artificial intelligence technology.

In these regulations, it is proposed that robots be granted a legal personality, as in the acquisition of personality by companies, so that robots can have the capacity to sue and be sued in legal proceedings concerning damages caused by them.2 Through these studies, the aim is to determine more clearly who is responsible in criminal cases arising from unfair acts caused by artificial intelligence.

Since legal entities bear civil and criminal responsibility, as natural persons do, the view that artificial intelligence can likewise have a capacity to act under criminal law is becoming more and more common.

However, contrary to this idea, many opinions defend the view that it is inappropriate to grant robots a legal personality from a legal and ethical perspective. Indeed, it is argued that if artificial intelligence is recognised as a legal person, the legal responsibilities of manufacturers and other actors could be completely eliminated, and this situation may be misused in practice.3

In addition, where the decision-making processes of machines with artificial intelligence become impossible to trace, there is a risk of being unable to reach a definitive conclusion about who can be held responsible.

Therefore, if robots were granted a legal personality, in the same way that companies are granted certain legal rights and responsibilities, an approach such as the ability to sign contracts or the capacity to sue could be adopted. Thus, the rules regarding the criminal liability of legal entities could also be adapted into new regulations developed for artificial intelligence.

IV. CONCLUSION

Along with technological developments, the use of artificial intelligence-based robots across all areas of industry, and the changes they may bring to human life, have become widely discussed issues.

In particular, with the development of increasingly capable and even armed models in the military field, the responsibility arising from torts caused by artificial intelligence, which is expected to become capable enough to carry out all tasks for defence systems, is a matter of debate today.

Given that artificial intelligence is developed to save time and money in work that would otherwise be performed by human beings, and that it essentially serves people, care should be taken to ensure that it does not conflict with human rights and interests. Although studies have been conducted on whether artificial intelligence can be included in the concept of the person under private law norms, or whether a new legal personality can be created through alternative norms, all of these studies should be evaluated together with the losses of rights they may cause in the future.

In particular, artificial intelligence practices need to be subjected to significant testing before artificial intelligence and/or robot laws are introduced, especially considering the torts that autonomous vehicles with decision-making mechanisms free of human intervention may cause in the future.

Footnotes

1. Helvaci, Serap, Lawsuits Protecting the Right to Personality in Turkish and Swiss Laws, Beta Yay., Istanbul 2001, p. 3 ff.

2. European Parliament, Civil Law Rules on Robotics, see: https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html?redirect#title1

3. https://www.politico.eu/article/europe-divided-over-robot-ai-artificial-intelligence-personhood/

Originally published 17 June 2020

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.