In our first article2, we briefly examined some of the intriguing and hotly debated legal issues around ownership in the context of Artificial Intelligence (AI) generated work product. We looked at ownership and AI, juxtaposed with the areas of Intellectual Property Rights (IPR), Contractual Agreements, and Liability & Accountability. As we noted, "the established construct of ownership by a 'person' will continue to be necessary for some time to enable the development of proper systems to exploit the potential of AI".

In August 2023, the United States District Court for the District of Columbia, in Thaler v. Perlmutter (Case No. 1:22-cv-01564 (D.D.C. 2022)), held that an AI system cannot be listed or recognized as the author of an autonomously created artwork. In December 2023, the United Kingdom (UK) Supreme Court, in Thaler v Comptroller-General of Patents, Designs and Trade Marks ([2023] UKSC 49), held that an AI system cannot be recognized and listed as an inventor in a patent application. These decisions follow a general trend in which AI systems are not being recognized as "persons" capable of holding IPR ownership. Indeed, even in India, the Copyright Office and the Patents Office have taken a similar position.

Today, the use and applications of AI systems have become almost ubiquitous across industries and geographies. It is virtually impossible, and certainly impractical, to bring the AI juggernaut to a complete halt (or even a temporary pause). More than 30,000 technology leaders and researchers (as of the date of writing of this article) have urged artificial intelligence labs to pause the development of more advanced AI systems, warning in an open letter3 that "AI systems with human-competitive intelligence can pose profound risks to society and humanity". Moreover, with increasing and alarming instances of AI hallucinations and misuse of AI (such as celebrity deepfakes and AI-fuelled financial scams), there is an urgent need to hasten the conversations around creating frameworks and regulations within which AI systems should operate.

While laws and regulations around the world are playing catch-up, there are already examples of AI developers being taken to court for IPR breaches. Getty Images has sued Stability AI, and The New York Times has sued OpenAI, in both cases for copyright infringement over the unauthorized use of published work to train artificial intelligence technologies.

In Thaler v Comptroller-General of Patents, Designs and Trade Marks, the UK Supreme Court did observe that patents can be granted to inventions/innovations that are powered by AI systems. Recognition of the contributions made by AI systems is not really the issue. The real issue was that DABUS, the AI system at the centre of the dispute, is a machine and not a "person" under the law. Steps are being taken around the world to try and legislate for, and regulate, AI.

The European Union AI Act, which has been in the works since April 2021, saw noteworthy progress in December 2023, when the European Parliament and the Council of the European Union, the two co-legislative bodies, reached a political agreement. This agreement is a significant milestone in the European Union's ambition to become the first region in the world to adopt comprehensive legislation on AI. This proposed legislation, the world's first on AI, aims to create a regulatory framework for AI, mitigate the risks associated with AI systems, and establish clear guidelines for developers, users, and regulators. In India, there are ongoing conversations, at various levels of the Government, on recognizing AI rights and addressing issues emanating from the use and impact of AI. However, India presently has no enforceable guidelines on the development or deployment of AI.

The question of whether an AI system can hold IPR ownership rights is not just an IPR issue. It goes to the heart of whether an AI system can be recognized as a "person" under the law. Currently, Indian law, and indeed most legal systems around the world, recognize only a natural person or a legal entity (such as a company or partnership) as a "person" in the eyes of the law. AI is not going away; it is only going to get bigger, more complex, and more pervasive in daily applications.

As we asserted in our previous article, and given the developing regulatory models focussing on risk, an ownership construct will remain necessary to attenuate the use and impact of AI. Whether AI can be a juridical person, i.e., an entity treated in law as a person which it otherwise is not, will turn on questions such as whether AI can sue and be sued, and whether it can possess and transfer property.

Gods, corporations, rivers, and animals have all been treated as juristic persons by courts in India. Since these juristic persons are voiceless, they perform the functions mentioned above through representatives or guardians, who are natural persons. With AI, however, there is unlikely to be a need for a natural person to give voice to AI or AI's actions. Hence, merely treating AI as a juristic person, a model that was more convenient for the non-natural persons alluded to above because such persons relied on natural persons, may not be as straightforward to apply in the context of AI.

A model that de-links a natural person from AI and its development, use and impact neither immediately suggests itself nor appears to be on the horizon. The dilemma of how a disembodied, yet potentially immortal, construct adheres to principles of ownership, rights, responsibilities, powers, and duties that were created in a system where such disembodied immortality was never envisioned remains to be solved.

From an IPR perspective, humans have historically been at the heart of innovation and creation. Even where the owner of the IPR has been a legal entity, there has been a human brain behind it. AI challenges this concept, even though a human may have created the base training set. An AI system that is constantly self-learning has the potential to create IP at any point in time, with minimal cost and effort. Where, then, does one draw the line between where human input stopped and AI self-learning took over, from an IP creation standpoint?

Drawing this line, assuming it is relevant, may be time- and cost-inefficient and may not be sufficiently determinative in resolving whether AI can independently seek IPR protection and exploit IPRs. There also remain post-IPR-ownership issues, such as liability and the ability to claim damages, some of which we examined in the first article.

An interesting point of debate, at least from a copyright law standpoint, is whether moral rights can be attributed to an AI system in a copyrighted work. Since there is no registration of moral rights (or the right of attribution) with the Copyright Office, the question that arises is whether an AI system may be given attribution rights. In this way, some recognition would be given to the contribution of the AI system to a copyrighted work. That said, if moral rights are accorded (and in India, for example, the law provides that moral rights cannot be waived), can an AI system maintain a legal claim if such a right is not accorded? Under the present circumstances, we would argue probably not.

Another oft-considered conundrum concerning ownership of AI and AI work product is whether ownership rights of an AI system can be recognized or provided for in a contract. Even if this is done, can such a contract be enforced if an IPR dispute arises? If the law, as it stands, does not recognize an AI system as a "person", can a clause in a contract be relied upon to circumvent that accepted position? Again, probably not.

The path to granting AI juristic personhood is not a paved highway but a winding mountain trail, fraught with ethical and legal hurdles. Current legal frameworks, crafted for beings with finite lifespans and distinct biological, social, and spiritual needs, cannot simply be stretched to encompass the potentially ageless, autonomous entities that advanced AI promises to become. The very notion of rights, duties, and penalties requires recalibration when applied to artificial constructs that transcend human concepts of mortality, culpability, and even sentience. This journey demands a re-evaluation of what it means to be a "legal entity" or a "juristic person" in a world increasingly populated by intelligent machines.

The potential rewards of forging a path through this uncharted territory are immense. By grappling with the complexities of AI's legal status, we may not only ensure a more just and equitable future for both humans and machines but also unlock the full potential of this transformative technology for the benefit of all. The challenge lies not just in rewriting the laws, but in reimagining the very fabric of our legal and ethical landscape to accommodate a new kind of intelligence. It is a challenge we must embrace, because the future of our relationship with AI, and perhaps even our own, hangs in the balance.

Footnotes

1 By Kartik Ganapathy, Founding & Senior Partner, IndusLaw and Bharadwaj Jaishankar, Partner, IndusLaw. The views expressed in this article are the personal views of the authors, and do not reflect the views of IndusLaw.

2 https://www.lexology.com/library/detail.aspx?g=c46647ef-e4fb-4679-8a77-7d8ae7d1b436

3 https://futureoflife.org/open-letter/pause-giant-ai-experiments/

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.