2023 will likely be remembered as the year that AI went 'mainstream'. Although it launched only at the end of 2022, the AI chatbot ChatGPT quickly became the fastest-growing application in the history of the Internet, reaching 100 million active users within two months of its launch. To put this in context, TikTok took nine months to reach the same mark; going back further, Facebook and Google took roughly five years to reach that benchmark.

ChatGPT's astounding growth rate is indicative of both the promise and the peril of AI technology. AI has the potential to cause seismic changes in every aspect of human endeavour, at a scale and pace that defy previous models. Machine learning is already pervasive in everything from medical research to customer service chatbots, and will become even more ubiquitous for businesses once 'enterprise AI' takes off as a service. The impact of deep learning models will be multiplied manifold if large entities such as banks, utilities, or even governments start using them at scale.

The sheer speed at which AI has developed has also caused regulation to lag, even more than usual. While it is now clear that AI will be regulated, the questions confounding regulators worldwide are what to regulate and, indeed, how to regulate it. In India, the view on regulating AI seems to fluctuate between "no regulation" and "regulate based on harms". Indian regulators have undertaken a few consultations around AI regulation, but the Government has not yet adopted a settled position on regulating AI. India's unique position as a source of low-cost skilled workers may keep it on this "wait and watch" approach; at least until someone else gets AI regulation right!

There are a number of interlocking approaches in play in various jurisdictions around the world. The EU is developing legislation built on a "risk-based" approach to regulation, with "high-risk" activities regulated more stringently than "low-risk" ones. Regulation in the US has taken a slightly different tack, placing more grounded issues of fixing liability and accountability at the forefront. Regulations proposed in the UK and China encompass related concerns around fairness, explainability, data security, and transparency in flagging false or damaging information. A discussion of these regulatory developments around the world forms the anchor for this 'State of Play' report.

While AI regulation develops and coalesces, certain aspects of regulation are being 'accelerated' by market forces and pressures. An initial battleground pits human and AI actors against each other in the field of content creation. AI needs to be 'fed' data in order to create; what protection, if any, to give to AI-generated content is therefore a pertinent question. The issue has also been at the centre of real-world battles, such as the ongoing SAG-AFTRA strike in the US. We have included a couple of think-pieces on this thorny topic.

Finally, the real-world impact of AI is contingent on curbing its misuse in discrete use cases. Ascribing regulatory costs to the harmful conduct of AI-based systems will be the need of the hour.

