This article highlights predictions from Global KPMG Legal Services leadership from around the world on how data, privacy and cyber security issues will affect the future of legal functions and legal practice. As predictions, they are not intended to guarantee any future outcomes.
Today’s legal teams are challenged by rapid technological innovations. Generative artificial intelligence (gen AI) and other new technologies are being adopted across legal functions and broader businesses at breakneck speed. While productivity is being pushed to new heights, organizations are being exposed to a new range of risks, including data privacy breaches, loss of attorney-client privilege, heightened regulatory scrutiny, ransomware and related reputational damage.
At the same time, new abilities to access, manipulate and analyze huge pools of data are compelling legal professionals, regulators and policy innovators to balance technology’s potential to drive positive social change against the dangers of exposing large swathes of sensitive personal information.
How will these trends reshape the legal functions of the future? Here are KPMG professionals’ top five predictions:
1. As gen AI becomes ever more embedded in legal function processes, legal teams will need to understand how and when to keep humans in the loop so they retain the skills needed to guard against the related risks.
The application of gen AI and other new technologies to legal work will significantly increase efficiency and productivity. These gains will grow as legal professionals get more comfortable with these powerful solutions and continue to develop more constructive ways to employ them.
Dependence on gen AI will grow apace, however, and legal teams will need to stay vigilant about the attendant risks. For example, using gen AI to inform legal advice could lead to data breaches that compromise attorney-client privilege. And eventually, as gen AI subsumes ever more routine legal activities previously done by junior lawyers and paralegals, there will be fewer people in the organization with the skills to do that type of work.
Legal professionals will need to avoid the tendency to simply accept that a computer’s output is correct without questioning the reasoning behind it. They will need to develop the skills to work backwards from the output to explain how a legal conclusion was derived and independently verify whether it is accurate. Attorneys will also need to be purposeful in determining which processes are a good fit for AI and where they still need to maintain the skills to verify the legitimacy and accuracy of AI output.
2. A raft of new legislation will emerge to address a wide array of AI-related issues.
As new AI legislation is enacted, legal teams will move beyond building AI for their own use cases to advising their businesses on implementing it. Legal departments will need to understand these varied rules so they can establish legal frameworks that enable the organization to innovate and use AI in a safe and trusted way, in line with ever-evolving laws and regulations.
Within these frameworks, legal teams will need to set business-optimized guardrails so they can make the most of business opportunities while protecting their organizations from undue risk.
Smart use of technology will be key to managing these new compliance obligations. Gen AI and large language models can ingest, decipher and summarize data, regulatory requirements and compliance rules, and automate related processes, to a far greater degree than any previous technology. Legal professionals who learn to use technology both to improve productivity and to police its use will have a distinct competitive advantage.
3. Privacy laws and approaches to open data innovation will continue to diverge. As reliance on AI grows, so do the risks, prompting more rigorous requirements that protect personal data on one hand while enabling its use for productivity gains and positive social change on the other.
Revolutionary AI systems have enormous potential to help solve various societal problems, such as disease and vendor diversity-based discrimination. However, these systems require copious amounts of personal data to draw reliable statistical conclusions, raising questions about whether the right permissions and safeguards are in place for processing that data.
Regulatory restrictions on data usage, such as data localization and data sovereignty rules, will continue to increase. However, there will be some push and pull as some jurisdictions, such as the UK, attempt to simplify those rules in order to encourage innovation, data sharing and open data. For example, the EU Data Act aims to allow public authorities to make public data available for the benefit of the wider community via a public data trust.
Legal teams are likely to increase their use of AI-enabled privacy technology to demonstrate compliance as new data protection legislation comes into force. This technology can also make legal data analysis more efficient and ultimately help make legal decisions more consistent.
4. With gen AI’s ability to create and transform, data sources will become more opaque and harder to trace, leading to more data privacy and intellectual property disputes.
As machine learning, large language models and gen AI continue to advance and collect huge volumes of data, it will become increasingly difficult to trace and verify the sources used to train these technologies. Already, disputes have arisen over AI’s use of copyrighted texts and artworks in generating new works. The inability to prove who “owns” a source of original data could frustrate attempts to gain intellectual property protection for AI-generated results.
Challenges in tracing data could also cause companies to run afoul of data privacy legislation by hampering their ability to comply with legislated data subject rights, such as access or erasure requests.
In-house privacy teams will need to expand their focus to streamline processes and controls and adapt to AI-related risks and regulations. Legal departments will also need to be able to develop internal policies, procedures and controls quickly enough to keep pace with new uses of these technologies.
5. Legal departments will be on the front lines of defending against cyber attacks and upholding organizational resilience.
Cyber security threats are likely to multiply in the future as cyber criminals become adept at using gen AI for writing ransomware, bypassing protections, spreading misinformation and other offences. Legal teams will be called on to respond to these risks on a number of fronts by:
- advising companies on consistent policies for responding to and dealing with ransomware attacks
- working with in-house technology or operational teams to implement or adopt appropriate cyber security technology to protect the organization’s data (in compliance with stricter data protection and cyber security laws)
- educating people across the company on cyber risks, including the guardrails needed to mitigate those risks and what red flags to watch out for
- ensuring that the people responsible for complying with data security and privacy legislation:
  - have the skills to understand the sources of cyber risks and related safeguards
  - maintain their human connections within the organization so they can ensure AI uses remain safe and secure.
Governments can also be expected to get involved to ensure businesses in their jurisdictions have appropriate cyber security policies and governance in place. In the near future, we are likely to see legislation that mandates organizational resilience, requiring stronger cyber security technology and efficient responses to cyber security breaches. Legal professionals will need to help their organizations develop approaches to complying with these rules.