Regulation and compliance in the post-Hayne world are an absolute priority for directors and officers of companies and for trustee directors. Inattention to these evolving and complex duties can have career-destroying consequences. Even with internal compliance regimes to manage risk and protect directors, the present regulatory environment is spirit-crushing. This is far removed from the concept of a company as an entrepreneurial, risk-seeking enterprise, which for many directors is the reason to accept a board position.
Complexity arises from statutory accretion; regulators adopting policy-making postures through regulation; a profusion of Inquiries and Commissions; Parliamentary responses to egregious behaviours (which can be ill-informed or subject to lobbying); conflicting provisions in the general law; and the globalisation of investment flows, with conflicting interpretations of statutory and general law duties across multiple bespoke regulatory environments. Different statutes interpret commonly used legal language in different ways, with different results. Regulators in recent senior court proceedings against directors and trustees have adopted postures which do not reflect the law.
Regular themes include the broadening of director and trustee duties, the abolition of corporate veils, a ‘litigate first’ regulatory posture, and the subsuming of accepted general law by statutory intrusion. The pace of change has increased and may result in director and trustee flight unless additional tools are embedded within the corporate governance of organisations to better manage these risks. D&O insurance providers will inevitably review organisational competencies to cope with increased compliance risks. These themes will require organisational change and a demand for more legal guidance, mostly at senior level. This will be costly and, for some, unaffordable without efficiencies in the legal process.
Whilst there has been a global increase in the legal-tech start-up community, few data analytics companies service the legal profession, and fewer still use Artificial Intelligence (AI) to manage unstructured data sets in deep learning environments. Often, what is termed AI is Robotic Process Automation (RPA), which is useful in itself but mainly aimed at organisational efficiencies in the legal process. It does not resolve the compliance conundrum.
There are myriad applications and vast numbers of documents, mostly in unstructured datasets. What is required are embedded digital robots operating in unsupervised cognitive learning mode, supporting highly trained legal professionals to increase speed and productivity, manage complexity, reduce client costs and enable scale. Ultimately, this will be delivered in AIaaS (AI as a Service) mode: a commoditised cognitive computing platform integrated with in-house RPA. Australia is presently a long way from this outcome.
Artificial Intelligence is a concept rather than one technology. Its definition is an important starting point for those wishing to take advantage of it. At its most basic, it is the development of computer systems able to perform tasks normally requiring human intelligence such as visual perception, speech recognition, decision making and translation between multiple languages concurrently. AI is a basket – six is oft-quoted – of complementary technologies. These include Natural Language Interfaces, emulation of manual processes (RPA), Smart Workflows (automation management and supervision), Machine Learning (supervised and unsupervised), Natural Language Generation (creation of text from data), and Cognitive Agents, being digital robots – the simulation of professional endeavour. Simulation involves the creation of multilayered Neural Networks which enable Deep Learning, a function of dataset size and computing power. Simulation creates a virtual workforce.
Supervised learning uses training data sets of labelled, real-world examples with iterative prediction outcomes; some of these systems have been demonstrably more efficacious than their human equivalents. Unsupervised learning evaluates unstructured data without labels and has been used in forensic investigations.
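The distinction can be illustrated with a toy sketch, assuming the scikit-learn library and synthetic data (the values and labels are hypothetical, standing in for something like a document-relevance score):

```python
# Illustrative sketch only: supervised vs unsupervised learning
# with scikit-learn on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised: labelled examples (X, y) drive iterative prediction.
X = np.array([[0.1], [0.2], [0.8], [0.9]])
y = np.array([0, 0, 1, 1])  # known labels, e.g. "not relevant" / "relevant"
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.85]]))  # → [1]

# Unsupervised: no labels; structure is inferred from the data itself,
# as when clustering unstructured documents in a forensic review.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)  # two groups emerge without any labelled training
```

The supervised model can only be as good as its labelled training set, which is why bias in training data (discussed below) is a governance issue in its own right.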
Present day technologies include eDiscovery platforms with embedded text analytics to identify relevant documents, citation analysis of legal precedents globally enabling empirical predictive data, contract review, due diligence investigations, case predictions, competitive analysis, matter pricing, case and information management, search and retrieval, and compliance check-lists.
There are several legal-tech contract review start-ups. Their tools analyse and manage basic documents, Non-Disclosure Agreements for example, checking for insertions and omissions of desired clauses. Some determine semantic equivalence (‘textual entailment’).
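At their simplest, such insertion-and-omission checks are rule-based rather than cognitive. A minimal sketch, assuming an illustrative clause list (the clause names and document text below are hypothetical):

```python
# Hypothetical sketch of a rule-based clause check for an NDA,
# of the kind contract-review tools automate. Clause names are
# illustrative only; real tools use richer matching than substrings.
REQUIRED_CLAUSES = ["confidentiality", "term", "governing law"]
FORBIDDEN_CLAUSES = ["non-compete"]

def review_nda(text: str) -> dict:
    """Flag missing required clauses and unwanted insertions."""
    lowered = text.lower()
    return {
        "missing": [c for c in REQUIRED_CLAUSES if c not in lowered],
        "unwanted": [c for c in FORBIDDEN_CLAUSES if c in lowered],
    }

nda = "This agreement covers confidentiality and governing law for a 2-year term."
print(review_nda(nda))  # → {'missing': [], 'unwanted': []}
```

Determining semantic equivalence (textual entailment) is harder: two clauses can have the same legal effect with no words in common, which is where trained language models rather than keyword rules are required.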
Due diligence investigations are often costly, time-constrained and sometimes lacking in the required accuracy. Most current applications are RPA, but a properly trained cognitive robot can deepen the investigation and may be of use in the auditing profession.
All of these developments will result in significant governance issues for organisations and the law firms that service them. Generally accepted law pre-dates AI, if not RPA. Directors will need to attend at least to — intellectual property, liability and risk management, causation, data privacy, discrimination and bias, competition law, licensing traps including warranties and insurance, artificial personhood and directors’ powers and duties.
Intellectual property issues include value and ownership, protection and security, ownership of derivative works, learned behaviours and information residuals, and ‘fake news’ or data including facial data.
Data privacy is a global problem. For example, in AI what is learned cannot be unlearned, and the General Data Protection Regulation (GDPR) now proscribes some AI capabilities, yet robot-learned know-how and behaviours are retained. AI is not immune from machine-learned bias, and directors will need bias warranty protection from the in-licensed AIaaS provider. Controversy over its application in criminal profiling (there are US cases) has resulted in regulatory interventions, usually requiring consumer warnings and/or ‘informed’ consent. Data held on a person or organisation can be more than observed data — it can also be derived, inferred or predicted data. Again, regulatory intervention (for example the Data Protection Act 2018 (UK), the EU GDPR and some Canadian regulation) seeks to limit the power of AI applications. In some cases, directors can bear personal criminal liability for misuse of personal information.
The law has not kept up to date with AI technology. There are multiple risk management governance issues in causation, contract, tort, consumer protection, insurance, licensing and certification of AI systems. Similarly, the use of AI in investment decisions by funds managers is already fraught with risk since Australian law has not kept up to date with modern portfolio theory.
Concepts of artificial personhood are being explored — to date with mainly unanswered questions rather than legal solutions. All organisations which commence this journey will need board-level discussion and analysis of its impact on their director and trustee duties. RPA and AI should be used as a tool — the cognitive slave — with control over delegation and adherence to the rule against fettering directors’ discretion. At all times, a natural person must retain responsibility: a digital robot cannot be arrested; a director can.
There are examples of a cognitive robot being appointed a board observer and a member of a management team, even though such robots do not have their own legal personality. They cannot therefore be appointed as a director or trustee director. Constitutions and trust deeds will almost certainly require amendment to empower directors and trustees to use AI, with its attendant and often unknown legal risks.
The author has created a global database of entrants in the legal tech space and maintains an ongoing interest in the relevant case law in multiple jurisdictions.