

Artificial intelligence is advancing at breakneck speed, reshaping industries and decision-making processes. Yet, regulatory frameworks and compliance systems are struggling to adapt in real time.
Stacey English, director of regulatory intelligence at Theta Lake, believes that while AI tools are rapidly reshaping compliance, companies can’t afford to lose sight of how these technologies align with tomorrow’s regulatory expectations.
She said, “Industry research of 500 compliance and IT leaders shows that nearly two-thirds of firms are already using AI in supervision—but 62% face data and implementation challenges. This highlights concerns about whether current tools are flexible and transparent enough to meet the compliance demands of tomorrow. The best-in-class solutions are designed with the future in mind.
“They can already accommodate multiple purposes such as enabling keyword-based detections in parallel with AI classifiers to detect compliance risks. This helps firms to detect both known red flags as well as less obvious clues where contextual information like emojis and reactions are used.”
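To make the parallel-detection idea concrete, here is a minimal Python sketch showing deterministic keyword rules and an ML classifier running side by side over the same message. The rule names, threshold and model interface are illustrative assumptions, not Theta Lake’s actual implementation:

```python
import re

# Hypothetical red-flag phrases; a deterministic pass catches known risks.
KEYWORD_RULES = {
    "off_channel_hint": re.compile(r"\b(whatsapp|text me|personal cell)\b", re.I),
    "guarantee_claim": re.compile(r"\bguaranteed returns?\b", re.I),
}

def keyword_detections(text: str) -> list[dict]:
    """Deterministic pass: flags phrases compliance already knows to watch."""
    return [
        {"source": "keyword", "label": name, "confidence": 1.0}
        for name, pattern in KEYWORD_RULES.items()
        if pattern.search(text)
    ]

class StubModel:
    """Stand-in for a trained classifier (assumed interface, for the demo)."""
    def predict_risk(self, text: str, emojis: list[str]) -> float:
        return 0.8 if "🚀" in emojis else 0.1

def classifier_detections(text: str, emojis: list[str], model) -> list[dict]:
    """ML pass: scores contextual cues (emojis, reactions) rules can't enumerate."""
    score = model.predict_risk(text, emojis)
    return ([{"source": "classifier", "label": "contextual_risk",
              "confidence": score}] if score >= 0.7 else [])

def detect(text: str, emojis: list[str], model) -> list[dict]:
    # Both passes run on every message; neither replaces the other.
    return keyword_detections(text) + classifier_detections(text, emojis, model)

print(detect("text me on my personal cell", ["🚀"], StubModel()))
```

The point of running both passes is that neither replaces the other: the rules catch known red flags with certainty, while the classifier scores the contextual clues the rules cannot enumerate.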
Despite this, adopting AI isn’t a ‘set it and forget it’ exercise, in the view of English. “Continuous maintenance is essential. Models must be regularly updated to reflect changing regulatory requirements, emerging risks, and shifts in communication norms—for instance, our classifiers have evolved to identify conversations that may signal off-channel activity or determine whether an AI notetaker bot or meeting assistant is present in a meeting.”
Without regular calibration, English underlines, firms risk over-relying on outdated models that may fail to detect relevant issues or generate false positives that erode trust in the tools.
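One way to picture the calibration discipline English describes is a periodic drift check over reviewer verdicts. The sketch below is an illustrative assumption (the window size, tolerance and field names are invented, and no vendor’s tooling is implied):

```python
# If reviewers keep overturning the model's flags, the false-positive rate
# drifts upward and the model is due for retraining or threshold tuning.

def needs_recalibration(recent_flags: list[dict],
                        fp_tolerance: float = 0.30,
                        min_sample: int = 200) -> bool:
    """recent_flags: reviewer-labelled detections, newest first."""
    window = recent_flags[:min_sample]
    if len(window) < min_sample:
        return False  # not enough labelled evidence yet
    false_positives = sum(1 for f in window
                          if f["reviewer_verdict"] == "benign")
    return false_positives / len(window) > fp_tolerance
```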
“The AI compliance tools that will endure are those that invest not just in AI capabilities, but in the governance and model management to ensure they accommodate evolving risks and regulatory expectations,” she said.
In order to strike the right balance between AI innovation and regulatory safety, transparency is key, states English.
She detailed, “Compliance teams and regulators alike need to understand how decisions are made by AI tools. Features such as Theta Lake’s classifier audit reports and detection explanations and annotations are now essential, not optional, features that enable auditability and confidence in machine-driven decisions. As the use of AI for compliance continues to grow, it will be matched by growing demand for explainable AI. The future of compliance isn’t just about embracing AI’s efficiencies—it’s about making sure fast-evolving AI capabilities keep pace with shifting compliance risks and regulatory demands.”
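As an illustration of what auditability can mean in practice, the record below sketches the kind of metadata that would let a reviewer or regulator reconstruct a machine-driven decision. The schema is hypothetical, not Theta Lake’s actual audit-report format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DetectionRecord:
    """Illustrative shape for an auditable, explainable detection."""
    message_id: str
    label: str                 # e.g. "off_channel_hint"
    confidence: float          # rule or model score in [0, 1]
    model_version: str         # which model/ruleset produced the flag
    evidence: list[str]        # matched phrases, emojis, reactions
    explanation: str           # human-readable reason for the flag
    detected_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
```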
Fast-paced
It’s undeniable that AI is advancing at a pace that few could have predicted even a couple of years ago. John Byrne, CEO and founder at Corlytics, said that while this acceleration creates enormous opportunities for firms to improve efficiency, decision-making and customer outcomes, it also exposes a fundamental tension: the gap between AI capabilities and the pace at which regulatory frameworks evolve.
He said, “The deployment of AI is not regulated through the AI Directive and similar frameworks. Many large, regulated firms already have their own security and safety standards regarding the deployment of AI; many of these go beyond the scope of any current regulatory requirement. It is rare for large, regulated firms to accept the latest, untested models deployed by vendors. Navigating this requires proactive engagement with regulators, investment in explainable and transparent AI systems, and building internal governance structures that can adapt as standards evolve.”
However, Byrne believes there is a risk that AI will outpace regulation, and in many ways, already has.
“Rather than viewing this as a constraint, it should be seen as a call to lead,” said Byrne. “The latest models can do things that were not possible six months ago; however, there needs to be a more rigorous, science-based approach, as many reliable models are already producing accurate results. Firms that embed compliance into their AI design processes will not only avoid regulatory pitfalls but will also build trust with customers, partners and society at large. The benefits are greater accuracy, consistency, immediate up-to-date traceability in compliance systems and greater efficiency.”
How is this challenge dealt with? For Byrne, adopting and championing international standards that drive accountability and trust in AI deployment is one way.
“When it comes to today’s AI tools, there is a lot of change happening, but it holds true that accurate models need to be as simple and efficient as possible, and they need to be explainable,” said Byrne. “Models and platforms must be designed to evolve, to accommodate new data privacy and provenance requirements, emerging robustness standards and jurisdiction-specific rules. Static AI systems will struggle; adaptive, modular architectures will thrive.”
He concluded, “Ultimately, firms must balance the drive for AI innovation with an equally strong commitment to rigorous testing and regulatory safety. This is not a choice between the two – the real competitive advantage lies in achieving both. Those who innovate responsibly will be best positioned for sustainable success in the AI era, it just needs a disciplined and science-based approach.”
Meanwhile, Oisin Boydell, chief data officer at Corlytics, said that as AI capabilities continue to evolve at such a rapid pace, it is critical that firms embed rigorous governance, transparency, and adaptability at the core of their AI strategies.
“Regulatory frameworks may take time to catch up, but responsible AI deployment cannot wait,” he stated succinctly.
Aligning capabilities with demands
Charmian Simmons – financial crime and compliance expert at SymphonyAI – said that firms can align AI capabilities with regulatory demands by adopting a proactive approach to compliance.
This involves a number of factors, claims Simmons. “Firstly, maintaining an understanding of current AI techniques, capabilities and proven use cases to know what to consider and implement, and be comfortable with any decisions. Secondly, working with solution providers and industry specialists to embed compliance considerations into AI model design and deployment, to ensure concerns on ethics, outcomes and transparency are addressed during development and training. Lastly, engaging with regulatory supervisors in a cadence that allows expectations and new requirements to be better understood.”
A key step to ensuring alignment for Simmons is creating cross-functional teams that include legal, compliance and AI experts. This, she states, allows for the integration of expertise, improves AI know-how, aids AI strategic direction and therefore enhances decision-making, and fosters an innovation culture consistent with corporate governance and societal norms.
She said, “As AI continues to evolve rapidly, it is possible that the gap between AI’s capabilities and the regulatory frameworks designed to govern them will grow. Typically, regulation is a reactive response, whereas innovation is a proactive approach. When regulatory frameworks lag, they fail to adequately address new ethical, legal, and social implications posed by advanced AI systems. This gap may result in misuse of technology (such as by criminals and scam artists), privacy breaches, and other risks.”
For Simmons, mitigation is twofold. “Responsibility lies with AI development firms (and internal functions) to design and develop with responsible AI principles and practices in mind, to ensure safe, secure results that are aligned with practical outcomes.
“Also, firms should implement internal controls, a governance framework and ethical guidelines that exceed the minimum compliance requirements. This assists with training, retraining and new development aspects of AI innovation. Moreover, continuous training and development programs for staff on the implications of new AI technologies can help maintain control over AI applications and ensure they are used responsibly.”
The flexibility of current AI tools to meet future compliance needs depends largely on their design and the foresight of developers, with insight from practitioners and industry experts, states Simmons, who explains that technology does not change a firm’s regulatory obligations.
She remarked, “AI systems built using responsible AI principles, with adaptability and transparency in mind – such as modular architectures, build-your-own-model functionality and simulation capabilities that can be updated without extensive overhauls – are more likely to accommodate future regulations. Investing in AI systems that incorporate explainability, traceability, and auditability upfront prepares firms for potential compliance needs related to data usage, decision-making processes, and regulatory expectations.”
The growing importance of AI model risk management, Simmons details, cannot be overstated for firms looking to be future-forward.
She said, “As firms rely more heavily on AI solutions to make decisions, improve efficiencies, and reduce operating costs, they must manage the potential risks associated with AI models. Proper AI model governance will ensure accountability remains a practical focus within the compliance function, while aspects of fairness, accuracy and reliability are balanced with AI/data privacy and AML regulations.
“Balancing innovation with regulatory safety requires a dual approach: fostering experimentation while anchoring it within a strong governance framework. This is especially important in highly regulated industries, such as financial services, where both customer trust and systemic stability are paramount,” said Simmons.
How can these be balanced? Simmons suggests some ideas: for example, firms should utilise AI sandboxes for piloting new AI capabilities; run proofs-of-concept with trusted solution providers to assess performance, outcomes and regulatory alignment; and embed regulatory compliance into model development, factoring in data quality, data privacy and model training considerations, as well as bias, fairness and ethical aspects, to ensure safe outcomes.
For AI governance, firms can, in her view, establish an AI governance framework that may include an AI policy, AI experts, and cross-functional team(s) to aid with use-case expansion. They can also adopt AI model risk management practices to develop, train and manage AI models, which can be tested, re-tested, monitored and retired as required. Another potential key step is to utilise responsible AI principles to ensure reliability and safety, security, accountability, transparency/traceability, and privacy exist in an ongoing manner.
She finished, “Lastly, firms can stay engaged with their key regulatory bodies on their current AI journey as well as what they plan to do in the future. This proactive approach builds regulatory goodwill and avoids surprises later.”
Surprisingly consistent
The rapid evolution of AI capabilities for regulatory and compliance teams has often been discussed. However, for Supra Appikonda, co-founder and COO at 4CRisk.ai, the critical factors for deploying AI technology successfully are surprisingly consistent.
He said, “AI isn’t just scanning documents faster or flagging anomalies—it’s transforming how we think about legal, compliance, security and risk. First, understand the key use cases that are important for your organization and the true value AI can bring to the end-to-end process. Second, measure the benefits, to ensure you are getting the most out of your AI solutions.
“Moreover, it’s critical to check that AI solutions provide security, privacy and alignment with your organization’s AI governance principles. Third, ensure you’ve got the right team in place to deploy AI smartly, with human-in-the-loop reviews at the right intervals. Most teams find that AI can provide 80% of the analysis, but professionals need to be hands-on 20% of the time to ensure the process is truly optimized.”
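The 80/20 split Appikonda describes is, in effect, a confidence-based routing policy. A minimal sketch, with invented thresholds rather than anything 4CRisk.ai ships, might look like this:

```python
# Let the model clear or flag the clear-cut majority automatically and
# queue the ambiguous remainder for a human professional.
AUTO_CLEAR, AUTO_FLAG = 0.10, 0.90

def route(item_id: str, risk_score: float) -> str:
    if risk_score <= AUTO_CLEAR:
        return "auto_cleared"   # model is confident the item is benign
    if risk_score >= AUTO_FLAG:
        return "auto_flagged"   # model is confident the item is risky
    return "human_review"       # ambiguous: a person makes the call

print(route("doc-123", 0.55))   # -> "human_review"
```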
Appikonda remarked that AI can bring ‘amazing’ benefits – completing tasks 2, 30 or 50 times faster than manual methods – adding that AI can be an ‘always on’ AI analyst who never sleeps, tirelessly checking regulations, monitoring, parsing, comparing and analysing content and transactions, whilst identifying trends and gaps in near real-time.
He said, “AI produces analysis that is only as good as the models it uses for information. Bias and hallucinations are notorious in AI, so authoritative source frameworks will need to be relied on for some time to come. Increasingly, organizations are realizing that smaller, specialized language models, trained explicitly on the regulatory and compliance corpus, reduce bias and are highly accurate.
“As time goes on, authors of the regulatory frameworks themselves will be in dialogue with AI to continuously improve regulations, rules, laws and standards, based on changing realities in the business world, supply chains and threats.”
Appikonda also detailed that compliance is improving with the help of AI, which can ingest, parse and analyse vast amounts of information, be it structured or unstructured.
“Organizations are realizing the power of this kind of analysis, using AI to see the gaps in compliance quickly and to rationalize and streamline controls across groups. For example, security, IT/Cloud, business and third parties may all be using slightly different versions of controls for cyber-security, but in fact, could be immensely more efficient by removing duplicate controls and providing a control hierarchy that links to cyber standards and regulatory frameworks.”
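The control-rationalisation example can be sketched in a few lines: group near-duplicate controls from different teams under one canonical control and record every framework it satisfies. The sample data and the naive text matching below are illustrative assumptions; a production system would use semantic matching:

```python
from collections import defaultdict

controls = [
    {"id": "SEC-01", "owner": "security", "text": "Encrypt data at rest.",
     "frameworks": ["ISO 27001"]},
    {"id": "IT-07", "owner": "it_cloud", "text": "encrypt data at rest",
     "frameworks": ["NIST CSF"]},
    {"id": "BUS-12", "owner": "business", "text": "Review vendor access quarterly.",
     "frameworks": ["SOC 2"]},
]

def canonicalise(text: str) -> str:
    # Normalise case, punctuation and whitespace for naive matching.
    return " ".join(text.lower().strip(" .").split())

merged = defaultdict(lambda: {"ids": [], "frameworks": set()})
for c in controls:
    entry = merged[canonicalise(c["text"])]
    entry["ids"].append(c["id"])
    entry["frameworks"].update(c["frameworks"])

for text, entry in merged.items():
    print(text, "<-", entry["ids"], "satisfies", sorted(entry["frameworks"]))
# The two "encrypt data at rest" controls collapse into one canonical
# control linked to both ISO 27001 and NIST CSF.
```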
He continued by stating that it’s ‘nearly impossible’ to do this manually, but with AI, organizations can do this analysis in minutes rather than months and make amazing gains in compliance.
Also, whilst AI innovation is growing in leaps and bounds, Appikonda sees that it is important to stay aligned with AI principles of trustworthiness, privacy and sustainability.
He said, “Trying to deploy AI regulatory and compliance products without these principles thought through beforehand means your teams may need to ‘rewire the airplane in flight’. It can be very difficult to agree on AI principles mid-deployment if, for example, transparency, security or bias haven’t been defined or agreed. Teams that have already invested in a specific product or deployment may be resistant to changing products mid-stream when a failure to meet AI trustworthiness requirements is revealed that cannot be mitigated.
“For example, many organizations have limited the use of LLMs such as ChatGPT as their policies restrict the flow of company data into a public model. If a product vendor cannot establish upfront that company data is secure and not used to train public LLMs, and that this safeguard is a fundamental part of their product architecture, a deployment may need to be abandoned mid-stream.”
Technology and automation
How can firms align fast-evolving AI capabilities with shifting regulatory demands? From the standpoint of Emil Kongelys, CTO at Muinmos, firstly it is key to make sure that you are ready to handle more complex cases than ever before.
He said, “Today, you can create a fake passport on several web pages using AI, and there are countless other tools that can make it very difficult to identify forgery. This can’t be solved by hiring more people; the only solution is to embrace technology and automation. Which brings us to the other aspect: using AI and new technologies in your own business, for example to speed up your onboarding process and ensure compliance. Today’s generation expects immediate feedback, so creating an account needs to be ‘a click of a button’.”
What happens when AI outpaces regulatory frameworks? In the view of Kongelys, it already has – and AI is evolving faster every day. “We now see new models and agents coming out at a rapid pace, every model and agent is moving the limits for what is deemed acceptable to share your data with,” he said.
As for whether current AI tools are flexible enough for tomorrow’s compliance, Kongelys notes that many different AI tools are available today. “That said, a lot of them are at very early stages, and in many cases are just prompt optimizations for certain tasks. On top of that, the primary focus today is on GenAI and LLMs; these are essentially huge probability calculations that will get to the correct answer over 90% of the time for the better models. This can work very well in some cases, but in compliance you need not only 100% accuracy but also the ability to explain why a decision was made. This can only be done using explainable AI, on a system built for this purpose.”
To balance AI innovation and regulation – it is key in the mind of the Muinmos CTO to do your due diligence.
He said, “Make sure the use case where the AI is to be used is suitable, then identify which model is the best for that use case and lastly make sure your data is safe. Most of the available tools will look amazing and deliver fast results that seem correct, so it can be easy to go for the first and most ‘shining’ tool you find. However, you shouldn’t compromise on accuracy or explainability.”
Proactive and adaptive
For Laurence Hamilton, CCO at Consilient, to align fast-evolving AI capabilities with shifting regulatory demands, financial institutions must adopt a proactive, adaptive and governance-driven approach.
He remarked, “While AI offers the advantage of rapidly processing and adapting to new data patterns, its deployment must be underpinned by robust compliance frameworks. To meet regulatory requirements, AI models must be explainable, auditable, and transparent. Regulators expect organisations to be able to justify the decisions AI makes.”
Secondly, Hamilton stated that compliance functions must work closely with technology teams to ensure that AI systems are continuously updated to reflect current regulatory guidance. This includes integrating regulatory change management into the AI development lifecycle—so models can evolve alongside laws, typologies, and expectations from bodies like FATF or local financial regulators.
Lastly, firms should implement flexible, modular systems that support regular AI model tuning and re-training. Hamilton detailed that this ensures that AI tools remain effective, accurate, and aligned with changing AML threats and compliance obligations.
“When AI outpaces regulatory frameworks, it creates a compliance gap that can expose firms to significant risk—both from a regulatory and reputational perspective. In AML, where trust, transparency, and accountability are critical, this gap can be particularly problematic,” said Hamilton.
Additionally, Hamilton explained that it is important for firms to take a principles-based approach, which means going beyond strict rule-following to align with the spirit of AML regulations, such as those around transparency, risk mitigation and due diligence.
He added, “And companies should not be afraid to engage in regulatory dialogue—proactively sharing their AI strategies with regulators, participating in industry groups, and staying attuned to global best practices. There is hesitation to disclose some concerns with AI, with the fear that this will prompt further investigation by the regulator, but ultimately this openness not only builds trust but can also help shape emerging guidance.
“AI tools can be flexible enough for tomorrow’s compliance—but that flexibility isn’t automatic,” continued Hamilton. “While AI, by its nature, is built to learn and evolve, this potential must be intentionally harnessed within a controlled and compliant framework.
“In the context of AML, compliance is not just about detection—it’s about governance, auditability, explainability, and risk alignment. So, the real question isn’t whether AI can evolve, but whether it can evolve responsibly and in alignment with regulatory expectations.”
Furthermore, Hamilton stressed his belief that AI has the potential to meet the demands of tomorrow’s compliance, but only if firms invest in building resilient, governed and adaptable frameworks around it.
“Flexibility isn’t just a feature of the AI model—it’s a characteristic of the entire compliance ecosystem in which that model operates. Balancing innovation and safety isn’t about slowing progress—it’s about building AI that is resilient, compliant, and trusted,” he said.
Not a magic bullet
RegTech firm AscentAI said that while AI capabilities continue to evolve and advance at an ‘incredible’ pace, it’s important to step back from the ‘AI as a magic bullet’ hype and instead focus on the core business process or challenge that AI can and should be applied to.
The firm remarked, “Getting the business process right – optimizing how you do change management for example – is of critical importance here. Making sure your people and compliance operations are configured and deployed properly then enables firms to work with RegTech vendors with proper business automation solutions powered by AI to create value and impact.”
With this said, AscentAI outlined that it is incumbent upon RegTech vendors to then leverage AI in their solutions in a way that appropriately and safely automates their business process.
“It’s not so much about AI flexibility or innovation as it is about applying the right AI tool for the right purpose or use case,” said the company. “In addition, more firms are interested in how RegTech vendor AI models are designed and function, to ensure they are educated and confident about the vendor’s AI strategy.”
Time efficient
Chaitanya Sarda, co-founder at AIPrise, understands that one of the key benefits of the AI revolution comes through its impact on compliance workload.
He expressed, “For ages, compliance work has basically been a ton of manual digging. Think of compliance analysts like detectives, always hunting through mountains of info – docs, websites, crime lists – trying to spot fraud or money laundering. It’s like searching for that one specific needle in a giant haystack!”
He stated that whilst tech has helped somewhat over the years, the gathering of evidence has still predominantly been left to people. However, things are now undoubtedly changing thanks to AI.
Sarda remarked, “AI is getting smart enough to browse the web and read through those huge documents all by itself, taking care of the tedious data-gathering part. So, what’s the cool part? It frees up analysts to focus less on the grunt work and more on making the important calls and decisions. Companies like AiPrise are right at the front of this big shift. They’re already showing how AI can speed up getting new customers onboard and catch fraud that might’ve slipped through before because of simple human mistakes. It’s kicking off a whole new way of doing compliance – faster and more accurate!”
Dual challenges
For Comply Exchange, as AI continues to evolve at an unprecedented pace, firms face the dual challenge of harnessing innovation whilst maintaining regulatory integrity.
“Bridging the gap between cutting-edge technology and ever-shifting compliance requirements starts with adaptability and intentional design,” said the firm. In order to align AI capabilities with changing regulations, organisations need to prioritise transparency, auditability and strong data governance from the outset – claims Comply Exchange.
It added, “This means building AI systems with explainability in mind and ensuring controls are in place to manage model outputs, especially in high-risk areas like financial services and tax compliance.”
When AI outpaces regulation, the risk isn’t just falling out of compliance, claims the company, it’s also about losing trust. “That’s why forward-thinking firms don’t wait for regulators to catch up. Instead, they actively participate in industry forums, anticipate guidance through scenario planning, and apply best practices from existing frameworks like GDPR, ISO, or NIST to stay ahead,” said the firm.
Current AI tools can be flexible if they’re built and deployed with modularity and control in mind. The key, Comply Exchange outlines, is to avoid black-box models in favor of solutions that can be updated and tuned as both business and regulatory contexts change.
“Ultimately, innovation and regulatory safety aren’t mutually exclusive. They just require a balanced approach: embed compliance into your AI lifecycle, involve cross-functional stakeholders early, and choose tech partners that understand both the power and responsibility of AI,” said the firm.
Moving fast
For Baran Ozkan, CEO of RegTech firm Flagright, AI is moving rapidly, but regulation is failing to do the same.
He said, “That’s the gap financial institutions are trying to straddle every day. The risk? If AI outpaces regulation, firms are left in the grey zone: legally exposed, ethically questioned, and operationally tangled. The key is to build AI with compliance in mind from day one, not as a bolt-on afterthought.”
For Ozkan, some of the current AI tools are ready for tomorrow, but most aren’t. The ones that will survive regulatory scrutiny will, in his view, be explainable, auditable and flexible.
He concluded, “The real balancing act? Pushing innovation while building in guardrails. At Flagright, that’s exactly how we approach AI. It’s not just about automation but about trust. And that trust has to be regulatory-grade.”
Meanwhile, Anthony Quinn, founder of Arctic Intelligence, highlighted that AI is evolving faster than most people can keep up with, including regulated entities, supervisors, consultants and tech vendors.
He said, “As RegTech solution providers, companies like Arctic Intelligence have a responsibility to learn, think and build solutions that leverage the powerful capabilities AI can deliver, but to do this in a way that embeds transparency, traceability and usability. Ultimately, the success of AI in regulated industries hinges on whether people feel they can trust and rely upon it, which is necessary for its long-term viability.”