We've discussed the responsibility of leaders to carefully select how AI is integrated into their business. It is not just about defining the appropriate use case and assessing ethical and secure usage, but also considering the long-term impact on society and future generations. It can also be fun.

Whilst we are not yet at the stage of fearing AI taking over the world in the way depicted in science fiction movies, our knowledge is already being shaped by AI in one way or another. For example, internet search requests are analysed by AI, and the results retrieved are those that AI deems most relevant to us.

Below, we set out steps that a business can take to start preparing for an AI-embedded world without yet committing to implementation.

A clear AI strategy from leadership is key – take a cautious approach

A needs analysis can be conducted today. Counterbalancing the very real risks of security breaches and IP loss, businesses must stay competitive. Providing your employees with productivity tools such as LLMs will no doubt raise the bar for creativity and speed of output. The starting point is getting the right minds together to consider the risk and investment appetite.

Building a custom solution internally or in partnership with a vendor can provide the greatest benefit and control over data requirements, but incurs significant investment. Conversely, using a pre-trained open-source or vendor AI model may reduce costs, but it may not be tailored to your specific needs nor provide the same transparency on how your data is being used.

Five small steps that can be taken now:

  • Speak with various stakeholders across different departments to identify any pain points that can be addressed with LLMs.
  • Strategise on the options for implementing LLMs in the future and start building relationships with trusted partners.
  • Consider how to incorporate ethics into the design process, for example through training on the ethical considerations involved in implementing AI and by assessing potential bias against certain groups in the training data and outputs.
  • Consider the current and potential future AI strategy against any large upcoming investments (technology, staffing, retail space).
  • Work with LLMs to help brainstorm ideas and draft the generic needs analysis assessment – just don't tell them any company secrets. Work with in-house and external legal counsel before finalising it.

Small change management steps – because no one likes big changes anyway

Employees will be worried about how AI may impact their jobs, whether there is a strategy in place or not. It is often said that employees will not be replaced by AI, but that employees who embrace AI will replace those who do not. When email replaced posting and faxing, businesses did not need to replace employees experienced in mailing letters with those skilled in emailing. It was a natural change. Some took it up faster than others and so reaped the rewards sooner. The same applies to other big disrupters, such as the internet and video conferencing.

Five small steps that can be taken now:

  • Communicate clearly and often about the positive impact AI could have and encourage employees to read about the advancement of AI in their industry.
  • Embed other tried-and-tested, secure technologies into the day-to-day functioning of the business.
  • Set up an environment for employees to experiment with LLMs on demo (non-confidential) data, to get a view of where they may assist and to help employees familiarise themselves with how they work. Set up processes to analyse the questions employees are asking, so as to identify the pain points they are looking to address through LLMs (a minimal logging sketch follows this list).
  • Run interactive games using AI: for example, Friday drinks in the metaverse with prizes for the best avatars, engaging employees in creating video content from text about non-confidential matters like holiday planning, or workshops to code games.
  • Employees performing tasks that can be easily automated are at the greatest risk. The most creative and agile employees will prosper the most. Consider this when assessing skillsets in new recruitment drives. Another option is to consider outsourcing and flexible workers until the new dawn is clearer.
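
One way to capture the questions employees ask in such a sandbox is to route every prompt through a thin wrapper that records it before passing it to the model. The sketch below is purely illustrative: the log file name and the send_to_llm stub are our own placeholders, not any particular product's interface.

```python
# Illustrative sketch only: log each sandbox prompt so recurring themes
# (pain points) can be analysed later. send_to_llm() is a placeholder
# for whichever model or API the sandbox actually uses.
import csv
import datetime
from collections import Counter

LOG_FILE = "sandbox_prompts.csv"  # assumed location for the prompt log


def send_to_llm(prompt: str) -> str:
    # Placeholder stub: swap in the sandbox's own model or API call.
    return "(model response placeholder)"


def log_prompt(user: str, prompt: str) -> None:
    """Append a timestamped record of who asked what."""
    with open(LOG_FILE, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([datetime.datetime.now().isoformat(), user, prompt])


def ask(user: str, prompt: str) -> str:
    """Log the prompt, then pass it to the sandbox model."""
    log_prompt(user, prompt)
    return send_to_llm(prompt)


def recurring_themes(top_n: int = 20):
    """Crude view of recurring topics across all logged prompts."""
    words = Counter()
    with open(LOG_FILE, newline="", encoding="utf-8") as f:
        for _timestamp, _user, prompt in csv.reader(f):
            words.update(w.lower().strip(",.?!") for w in prompt.split() if len(w) > 4)
    return words.most_common(top_n)
```

Even a crude word count over the logged prompts can reveal which departments are reaching for LLMs and for what, which feeds directly back into the needs analysis described above.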

Preparing for the new dawn – without taking the plunge

Businesses do not need to have invested in LLM products for the risks to be real today, because these tools are freely accessible to individuals. Parallels can be drawn with the use of other public productivity tools, such as Google Translate. Employees are still translating sensitive information using free internet tools without any consideration of where that data is being transferred and stored, and how it is being used.

To get the best results from tools such as ChatGPT, you need to be able to ask the right questions. This is referred to as prompting. Prompt engineering roles in the AI industry attract large salaries, demonstrating the crucial role they are anticipated to play in this new dawn.
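
As a minimal sketch of what prompting looks like in practice, the snippet below sends a vague prompt and a more structured prompt for the same task through the OpenAI Python SDK. It assumes the v1+ SDK, an OPENAI_API_KEY set in the environment, and that the named model is available on your account; none of these are specific to any one business's setup.

```python
# Minimal prompting sketch using the OpenAI Python SDK (v1+).
# Assumes OPENAI_API_KEY is set in the environment and that the model
# named below is available on your account -- both are assumptions.
from openai import OpenAI

client = OpenAI()

# A vague prompt and a structured prompt for the same task,
# illustrating why phrasing ("prompting") matters.
vague_prompt = "Tell me about contracts."
structured_prompt = (
    "You are assisting a commercial lawyer. In five bullet points, list the "
    "clauses most commonly negotiated in a software licensing agreement, "
    "and note why each matters. Use plain language."
)

for prompt in (vague_prompt, structured_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("-" * 40)
```

The only difference between the two calls is the wording of the prompt, which is precisely the skill that prompt engineering describes.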

Three steps that can be taken now:

  1. Train your employees and update your policy and procedures - no matter your appetite

It is always critical to provide a clear message to employees on the business's approach to the use of public productivity tools. The legal, commercial and reputational risks associated with ChatGPT have resulted in businesses developing and implementing training, monitoring, and policies and procedures specific to this tool, including its acceptable and prohibited uses. This includes adopting a risk-based approach to individual use cases and, in some businesses, removing access to it entirely. However, if a business prohibits or severely restricts the use of LLMs in the workplace, it runs the risk of a shadow IT infrastructure emerging within the organisation, where employees use LLM technologies without the knowledge or approval of the company's relevant stakeholders. Unregulated use of LLMs in the workplace significantly increases the company's risk of cyberattacks and data breaches. Businesses should document an incident response plan and engage the support of external legal counsel.

Legal experts who are working with multiple businesses to protect and help leverage the benefits of new technology will be best placed to provide guidance.

  2. Information governance – getting your electronic house in order

According to Wikipedia, "Information governance, or IG, is the overall strategy for information at an organization. Information governance balances the risk that information presents with the value that information provides".

AI reads and learns from text. If the information you want to train and use it on is in hard copy, saved on desktops or lost in a folder somewhere in your infrastructure, you will not benefit from it. Whether your business is working in the metaverse or using collaboration tools such as MS Teams, you are no longer simply checking project folders, emails and hard-copy files in offices to gather your information. Organising data across multiple business communication channels can be very difficult. It is anticipated that chat messages will soon overtake emails as the most prevalent form of business communication.

Businesses can prepare by focusing on a centralised and organised digital transformation project.

  3. Knowledge management – digitising subject matter expertise

AI reads and learns from text. If the information you want to train and use it on is sitting in your former and current employees' minds, you will not benefit from it. There are various ways to digitise subject matter expertise, from creating knowledge repositories to building reusable AI models.

The emerging role of the "legal engineer", for instance, combines technical and legal skills: the ability to draw information from a non-technical but highly skilled legal professional (the lawyer) and translate that subject matter expertise into a variety of digital formats.

In matters such as disputes, forensic investigations, data breaches and regulatory responses, the legal technology industry has been using AI for many years. This includes creating reusable AI models. On one legal matter, subject matter experts (lawyers, chartered accountants, etc.) train the machine to find helpful and harmful evidence, as well as company-wide irrelevant communication (eg, all-staff emails, bot mails, junk mail, etc.), and then apply the same trained models to expedite the identification of evidence on similar, future matters. The ability to train AI to address industry- and culture-specific language, particularly on a diverse continent such as Africa, is crucial.
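
As a purely illustrative sketch of what a reusable model of this kind can look like, the snippet below trains a simple text classifier on documents labelled by subject matter experts on one matter, saves it, and reloads it for a similar future matter. It uses scikit-learn and joblib, and the example documents and labels are invented; it is not the methodology of any particular provider.

```python
# Illustrative sketch: train a simple relevance classifier on expert-labelled
# documents from one matter, save it, and reload it for a similar future matter.
# Uses scikit-learn and joblib; the example documents and labels are invented.
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Documents reviewed and labelled by subject matter experts on matter one.
documents = [
    "Please approve the revised payment schedule before Friday",
    "All-staff reminder: the car park will be closed on Saturday",
    "Attached is the amended supplier agreement for your signature",
    "Join us for the quarterly wellness webinar",
]
labels = ["relevant", "irrelevant", "relevant", "irrelevant"]

# Train a basic text-classification pipeline (TF-IDF features + logistic regression).
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(documents, labels)

# Persist the trained model so it can be reused on a similar, future matter.
joblib.dump(model, "relevance_model.joblib")

# On the next matter, reload the model and score newly collected documents.
reloaded = joblib.load("relevance_model.joblib")
print(reloaded.predict(["Invoice query regarding the disputed payment terms"]))
```

In practice such models are trained on far larger, expert-reviewed document sets, but the pattern is the same: expert labels in, a saved model out, and that saved model applied to expedite review on the next matter.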

Many AI models have been trained predominantly on general data from the US or Europe. Whether you are training a chatbot to respond in colloquial or technical language or using AI to find relevant evidence across information generated locally, it is imperative that you work with legal engineers who understand your company's culture, dialect and technical expertise to enhance the performance and quality of the results.

Conclusion

There are still many unknowns. The question is not if LLMs will be incorporated into business, but when and how.

Leaders are faced with a great responsibility to carefully select how AI is integrated within their business. It is not just about defining the appropriate use case and assessing the ethical and secure usage but also the long-term impact it will have on society and future generations.

ENSafrica's legal experts work closely with our in-house technology experts to help businesses start their digital journey.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.