
Last month, the UK Government introduced new regulatory measures to strengthen the cybersecurity of AI models and software. Designed to fortify digital systems against hacking and sabotage, the changes are intended to foster confidence in the use of AI across various industries - confidence that is very much needed.

Given that the cost of cybercrime is already projected to reach $13.82 trillion by 2028 - and could grow even faster as a new generation of cybercriminals gains access to increasingly sophisticated AI - trust in the technology is understandably beginning to weaken.

While these new measures represent significant progress in addressing current cybersecurity challenges, there are still questions and concerns around the future adaptability and efficacy of regulatory frameworks, particularly among the developer community.

In a recent survey of developers across the UK and US, 72% said that regulations made to protect privacy are not built for the future, and 56% believe that dynamic regulatory structures - which are meant to adapt to technological advances - could themselves pose a threat. A particularly alarming aspect is the security risk associated with AI systems that require vast training datasets, which often include sensitive personal information.

With this in mind, changing or inconsistent regulations could create vulnerabilities or gaps in how this sensitive data is protected, increasing the risk of data breaches or misuse.

As regulations evolve, ensuring the security and privacy of the personal information used in AI training looks set to become increasingly difficult, which could lead to severe consequences for both individuals and organisations.

The same survey went on to reveal that 30% of developers believe there is a general lack of understanding among regulators, who are not equipped with the right skills to comprehend the technology they're tasked with regulating.

How to Design Adaptable and Effective Regulatory Frameworks

With skills and knowledge in question, alongside rapidly advancing AI and cybersecurity threats, what exactly should regulators keep in mind when creating regulatory frameworks that are both adaptable and effective?

It’s my view that, firstly, regulators should know all the options on the table when it comes to privacy-enhancing technologies (PETs). While some PETs are already being used to minimise the risk of data breaches, others are evolving as I write, with immense potential for securing sensitive data and protecting privacy. Knowing the advantages and limitations of each supports a flexible approach to adopting them, rather than trying to write one policy to cover everything at once. For example:

  • Authentication technologies: Multi-factor authentication (MFA) - commonly integrated by developers into authentication systems to provide an additional layer of security - is used in applications ranging from online banking to enterprise software (a minimal sketch of how its one-time codes are generated follows this list). Biometric authentication is another advanced and secure method in use today, relying on unique physical traits such as fingerprints or facial recognition. Looking ahead, the adoption of federated identity mechanisms, such as FIDO (Fast Identity Online) or OpenID Connect, holds promise. These mechanisms not only enhance security but also streamline user authentication across platforms, offering a unified and secure approach to identity management.
  • End-to-End Encryption (E2EE): This technology provides robust security by ensuring data is encrypted from sender to recipient, preventing unauthorised access even by service providers (a minimal key-agreement sketch follows this list). However, implementing E2EE can be complex and resource-intensive, often requiring significant computational power and sophisticated key management. And because E2EE prevents service providers from accessing the data, it can also hinder their ability to assist with data recovery or to comply with lawful requests for information - a potential issue in criminal investigations.
  • Fully Homomorphic Encryption (FHE): Although FHE is still in the early stages of being fully realised, it has made significant advances in recent years. An encryption scheme that supports computation on data without ever decrypting it, it is a natural fit for AI and data security, allowing organisations to wield the power of the technology without compromising users' privacy expectations (a toy example follows this list). For example, financial institutions can use FHE to confidentially train fraud-detection AI models across banks without exposing any personal data, and healthcare providers can perform predictive diagnostics without exposing their patients' private information.
  • Multi-Party Computation (MPC): This technology complements FHE by giving an end-user the ability to decrypt encrypted data once it has been verified that they have the right to access it. MPC allows a quorum of designated entities to engage in a collaborative protocol that reaches consensus on access control before re-encrypting the data from the protocol's public encryption key to the end-user's public encryption key, thus granting the end-user access to the clear data. Each entity in the quorum holds only a piece of the protocol's private decryption key and is therefore unable to decrypt any data on its own; the clear data is never made available to anyone but the end-user (a secret-sharing sketch of this quorum idea follows this list).
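
To make the first of these concrete: the one-time codes produced by many MFA apps are generated with the TOTP algorithm (RFC 6238). Here is a minimal Python sketch using only the standard library; the base32 secret shown is a made-up example of the kind provisioned once via a QR code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32)
    # Counter = number of 30-second periods since the Unix epoch.
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at a digest-derived offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical shared secret; both server and authenticator app hold it.
print(totp("JBSWY3DPEHPK3PXP"))  # e.g. "492039" - changes every 30 seconds
```

Because both sides derive the code from the same secret and the current time, the server can verify a code without it ever travelling over the network in reusable form.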
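For E2EE, the core pattern is a key agreement between the two endpoints followed by authenticated encryption, so the relaying service only ever sees ciphertext. Below is a minimal sketch using the widely used `cryptography` package (X25519 key exchange plus AES-GCM); real protocols such as Signal's add key ratcheting and identity authentication on top of this.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each endpoint generates a keypair; only public keys are exchanged.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Both sides independently derive the same shared secret.
alice_shared = alice_priv.exchange(bob_priv.public_key())
bob_shared = bob_priv.exchange(alice_priv.public_key())
assert alice_shared == bob_shared

# Derive a symmetric key from the shared secret.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"e2ee-demo").derive(alice_shared)

# Alice encrypts; the relaying server sees only nonce + ciphertext.
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"meet at noon", None)

# Bob, holding the same derived key, is the only one who can decrypt.
assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"meet at noon"
```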
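To illustrate the FHE workflow, here is a toy sketch using Zama's open-source Concrete compiler for Python. The function and inputs are invented for illustration, and the exact API may differ between versions; the point is that the server evaluates the circuit on ciphertexts and never sees the inputs.

```python
from concrete import fhe  # pip install concrete-python

# Toy stand-in for one step of, say, a fraud-scoring model.
@fhe.compiler({"amount": "encrypted", "history": "encrypted"})
def score(amount, history):
    return 2 * amount + history

# Compile against an input set covering the expected value ranges.
circuit = score.compile([(3, 1), (0, 0), (100, 50)])

# Client encrypts, server computes on ciphertexts, client decrypts.
assert circuit.encrypt_run_decrypt(10, 5) == 25
```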
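The quorum idea behind that MPC access-control protocol rests on threshold secret sharing: the protocol's private decryption key is split so that no single entity holds it, yet any large-enough subset can act on it together. A self-contained Shamir-sharing sketch follows; it is illustrative only, since production MPC systems compute with the shares directly rather than ever reconstructing the key in one place.

```python
import secrets

PRIME = 2 ** 127 - 1  # prime field, large enough for a demo secret

def split(secret: int, threshold: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any `threshold` of them recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def poly(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate the polynomial at x = 0 to recover the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)           # the protocol's private key
shares = split(key, threshold=3, n=5)    # five entities, quorum of three
assert reconstruct(shares[:3]) == key    # any three shares suffice
assert reconstruct(shares[:2]) != key    # two shares reveal nothing useful
```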

Collaborative Policy-Making

Once regulators have a good and current understanding of PETs - and there are many more than those listed above - the next step is for policymakers to ensure regulations don't stifle technological advancement while still protecting against cyber threats.

To craft nuanced and effective privacy policies that evolve alongside technological advancements, it's key to remember that policymakers don't operate in a vacuum - and they shouldn't be the only ones responsible. Instead, they should work alongside the creators of the technology, who in turn should design their tech with existing frameworks in mind rather than expecting new ones to adapt.

Incorporating continuous learning within the organisation is also crucial, as is allowing employees to participate in industry events and conferences to stay up to speed with the latest developments and to meet experts. Where possible, we should collaborate with industry, for example by inviting representatives of tech companies to give internal seminars or demonstrations.

I strongly believe that all of the above should be factored in as we integrate increasingly complex systems like AI, IoT, and advanced data analytics into our daily lives, and as the potential for cyber threats grows.

By future-proofing regulations, we can ensure that we’re not constantly playing catch-up with cybercriminals but proactively protecting our digital infrastructure. By adopting a dynamic and adaptive regulatory framework, we can better safeguard sensitive data, protect user privacy, and maintain public trust in digital technologies.


Ghazi Ben Amor

Ghazi Ben Amor is VP, Corporate Development at Zama. He has worked in cybersecurity for more than 20 years, with roles spanning engineering, strategy, investment, and finance. At Zama, he heads partnership development, with a focus on cloud providers, hardware accelerators, and financial institutions.