

As technology reshapes industries, it also redefines governance. Can machine-readable regulations bridge the gap between regulators and institutions? This new approach promises faster compliance, clearer expectations, and smarter oversight. But can it truly transform their complex, evolving relationship?
In the view of Emil Kongelys, CTO of RegTech company Muinmos, machine-readable regulation will be a huge boost: the days of interpretation will be over, and there will no longer be an ‘excuse’ not to comply.
He said, “At Muinmos we have always believed in regulation as an API integration, and we have been advocating for one common protocol standard that all regulators can expose their regulation through. A FIX protocol for the regulators, if you will.”
Kongelys emphasised, however, that most regulators do not have the IT infrastructure to begin a project like this. Despite this, the industry is seeing many of them digitising and putting frameworks in place.
“However, once one common protocol, used by all regulators, becomes a requirement, it will be the FIX protocol of regulation,” stressed Kongelys.
Does Kongelys believe regulators are ready to trust AI-driven compliance systems? What do firms need to watch out for, and how can this trust be achieved?
He said, “If the results generated by the compliance system can be explained 100%, regulators will have to trust the result. That does challenge the use of GenAI and LLMs, as they operate on probabilities that can’t be explained. Yet there are areas where 100% explainability might not be needed; in screening, for example, fuzzy logic is commonly used, where matching is done on the percentage probability that there is a match. In the same way, using an AI agent to identify whether a document is forged, with a probabilistic result that can then be reviewed by a human, will also have to be accepted.”
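The percentage-based fuzzy matching Kongelys refers to can be sketched minimally with the standard library; the names, watchlist and 85% threshold below are illustrative assumptions, not any vendor’s actual screening logic.

```python
from difflib import SequenceMatcher

def match_score(candidate: str, watchlist_name: str) -> float:
    """Return a 0-100 similarity score between two names."""
    a, b = candidate.lower().strip(), watchlist_name.lower().strip()
    return SequenceMatcher(None, a, b).ratio() * 100

def screen(candidate: str, watchlist: list[str], threshold: float = 85.0) -> list:
    """Flag watchlist entries whose similarity meets the threshold."""
    return [(name, round(match_score(candidate, name), 1))
            for name in watchlist
            if match_score(candidate, name) >= threshold]

# A misspelt name still produces a high-probability hit for human review.
hits = screen("Jon Smithe", ["John Smith", "Jane Doe"])
```

The point of the sketch is the output shape: not a yes/no answer, but a probability a reviewer can accept or reject, which is the kind of explainable result Kongelys suggests regulators can live with.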
In addition, how might real-time data sharing redefine accountability? While real-time data sharing will mean that any regulatory change is immediately known to all, this will only really be true if the sector agrees on one uniform protocol that does not allow for interpretation.
Kongelys said, “There will always be a need for different regulators to have small differences, but the high-level protocol should be common and enough to set the expectations for accountability.”
End-to-end automation
From the standpoint of Mark Shead, product management at Regnology, the cost of regulation is not just in ‘business as usual’ but in dealing with change, and there is no constant but change when it comes to financial regulation.
He said, “For each change, the existing business and rule base must be well understood, as must the new rules, after which any required changes to business models, and any data or calculations required to show ‘compliance’, must be implemented and successfully operationalised.”
Shead remarked that if new regulation or rules can be written or coded such that they can be automatically ingested, mapped and assessed against a firm’s existing systems, then naturally this leans toward reducing the cost of compliance.
He continued, “Going one step further, if through such a process gaps are identified, automation could be used to fill these gaps, which extends to further automation such as automated test data and testing. Going further still, full end-to-end automation from rules to reporting, both for change and ‘run the bank’, may not seem so far-fetched in today’s world of AI and GenAI, and certainly seems attractive.”
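The automated ingest-map-assess loop Shead describes can be sketched as rules expressed as data rather than prose; the rule schema, identifiers, field names and thresholds below are entirely hypothetical, not any regulator’s actual format.

```python
# A minimal sketch: machine-readable rules evaluated against a firm's data,
# with missing data points surfaced as gaps for further automation to fill.
RULES = [
    {"id": "LCR-01", "field": "liquidity_coverage_ratio", "op": ">=", "value": 1.0},
    {"id": "LR-02",  "field": "leverage_ratio",           "op": ">=", "value": 0.03},
]

OPS = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b, "==": lambda a, b: a == b}

def assess(firm_data: dict, rules: list) -> dict:
    """Map each rule onto the firm's data and report compliance or gaps."""
    report = {}
    for rule in rules:
        if rule["field"] not in firm_data:
            report[rule["id"]] = "gap: data point not sourced"
        elif OPS[rule["op"]](firm_data[rule["field"]], rule["value"]):
            report[rule["id"]] = "compliant"
        else:
            report[rule["id"]] = "breach"
    return report

result = assess({"liquidity_coverage_ratio": 1.2}, RULES)
```

Because the rules are structured data, a regulatory change is just a new entry in the rule set, which is what makes the cost-of-change argument bite.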
There is a strong belief held by Shead that the industry has made great strides towards machine-readable regulation, particularly when it comes to reporting. “After the last crisis, the industry saw a huge increase in obligations, and some regulators were quick to use this to usher in more structured, machine-parsable reporting formats such as the XBRL (eXtensible Business Reporting Language) standard.”
Despite potentially seeming like a small step, XBRL taxonomies and supporting data point models can be hugely rich in content by allowing for the expression of semantics, and they can be ingested with ease, as well as supporting easier and more rapid assessment of firms’ data.
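The ease of ingestion is down to XBRL being structured XML. As a rough sketch using only the standard library, the instance below shows how reported facts can be pulled out programmatically; the `ex:` taxonomy namespace and concept names are made up for illustration and are not a real regulatory taxonomy.

```python
# Ingesting facts from a minimal XBRL-style instance document.
import xml.etree.ElementTree as ET

INSTANCE = """
<xbrl xmlns="http://www.xbrl.org/2003/instance"
      xmlns:ex="http://example.org/taxonomy">
  <ex:Assets contextRef="c2024" decimals="0">1000000</ex:Assets>
  <ex:Liabilities contextRef="c2024" decimals="0">400000</ex:Liabilities>
</xbrl>
"""

def extract_facts(xml_text: str, taxonomy_ns: str) -> dict:
    """Pull each reported concept and its numeric value from the instance."""
    root = ET.fromstring(xml_text)
    facts = {}
    for el in root:
        if el.tag.startswith("{" + taxonomy_ns + "}"):
            concept = el.tag.split("}", 1)[1]
            facts[concept] = float(el.text)
    return facts

facts = extract_facts(INSTANCE, "http://example.org/taxonomy")
```

Real XBRL filings carry far more semantics (contexts, units, dimensions), but the principle is the same: once the report is structured, assessment of a firm’s data becomes a query rather than a reading exercise.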
Shead continued, “However, though standardisation of reporting outputs is a critical step in the move to automated compliance, there is still the effort required by Banks to first identify, then source and calculate the appropriate data points.
“Aside from providing calculations based on a ‘Standardised Approach’, for the vast majority of regulators the level of standardisation stops at the report output, veering away from trying to standardise the granular data banks first need to source, sitting beneath existing reports.”
This, however, Shead remarks, is changing. Regulators are now starting to bring in new granular data reporting requirements, either as an extension of reports or as separate collections.
“One such example is IReF (Integrated Reporting Framework). This ECB (European Central Bank) initiative is to collect banks’ balance sheet and interest rate statistics, securities holdings statistics and credit data at a granular level defined using the BIRD (Banks’ Integrated Reporting Dictionary), thus potentially replacing a number of reports produced today and shifting the level of standardisation. But it is early days, and with the roll-out yet to start, benefits are far from realised.”
Importantly, is machine-readable regulation going mainstream? While the above efforts deserve to be lauded, Shead believes there is still a huge hill to climb.
He said, “The above examples represent just a handful of initiatives, there are volumes of regulation in place which must be covered both at national and international levels, and this is where hills become mountains.
“Irrespective of how quickly machine-readable regulation goes mainstream, which no doubt through the advances in AI it will, firms need to remember that this is just one part of the puzzle and they too play an important role in shifting the dial. As discussed earlier, understanding new rules is one piece; the second, and arguably more complex, piece is understanding the existing landscape and the changes required to it,” said Shead.
“This requires firms to deploy the same techniques as for the rulebooks, but across a more diverse set of domains, including the use of process, data and ultimately semantic modelling: a huge ask, but one that is perhaps becoming more achievable through AI,” finished Shead.
Reducing reliance
Alongside this impact, Alex Mercer, head of innovation lab at Zeidler Group, believes that by making rules machine-readable, the industry could significantly reduce the reliance on other intermediary processes to get rules and regulations digitised and usable for compliance reporting. This, he claims, also has the added benefit of reducing the rate of errors and ensuring that any processes using the underlying rules conform to the information at a higher level.
What blocks such a change from going mainstream? In the view of Mercer, the biggest challenges are the lack of clear standards on how these rules could be drafted, and the fact that most regulations are drafted with precision around the legal language used.
He remarked, “In a world where the placement of a comma can dramatically shift the meaning of a regulation, the stakes for accuracy are high, and bridging the gap between technological progress and regulatory aims may be difficult.”
There is a strong belief that regulators are still in the exploratory phase of determining how best to understand and utilise AI-driven compliance systems.
Mercer agrees with this point, explaining, “For example, we’ve seen great enthusiasm for our AI-powered marketing material review tool from financial regulators, who often are thinking about how the industry will evolve in the coming years. As the industry moves towards and adopts AI-powered tools, we think that the regulators will follow along, especially as more established names with experience in regulatory compliance enter the area. While there isn’t an exact milestone on when trust will be achieved, it will likely be done on the backs of industry investing time and resources to implement AI-driven compliance solutions.”
As for real-time data sharing and how it may redefine accountability, such a practice, in Mercer’s mind, shuffles surface-level accountability, changing which party might be more accountable for parts of the process, but it does not fundamentally change accountability.
He remarked, “For example, switching from sending a vendor a weekly Excel file to a consistent data feed reduces the accountability around sending the actual information on a regular cadence, but ensuring the underlying data is accurate and truthful remains the responsibility of the sending party. In a way, switching from regular data sharing to real-time sharing is an evolution of existing processes but not a total revolution.”
Game changer
Madhu Nadig, co-founder at Flagright, believes that machine-readable rules could be a game changer. He explains that this kind of automation would mean, amongst other things, faster reporting, real-time adaptation and less manual overhead.
Despite this, Nadig remarked that we’re not quite there yet. Why is this? He explained, “There is a lack of standardisation, as every regulator speaks a different data language. In addition, legacy systems are a challenge – as most firms aren’t ready to plug in rules like an API. There is also a trust gap, as regulators want a human in the loop.”
To build that trust, Nadig said that RegTech platforms like Flagright need to lead by example: transparent, traceable, and aligned with regulatory thinking.
He finished, “The bigger play? Real-time data sharing. If institutions and regulators have the same live view, accountability becomes proactive, not reactive. That’s the future we’re building toward.”
Importance of data
Machine-readable regulations have the potential to streamline compliance and enhance relationships between regulators and institutions, in the view of LEI CEO Darragh Hayes.
However, he believes that this is contingent on the use of standardised and reliable data.
He said, “Take the example of the Digital Operational Resilience Act (DORA) in the EU, where the Legal Entity Identifier (LEI) was proposed as the primary identifier. The LEI is globally recognized, machine-readable, and regularly updated, making it ideal for regulatory purposes. It would enable regulators to access accurate, up-to-date information and assess risks efficiently.
“However, the introduction of a second identifier, the European Unique Identifier (EUID), adds another layer of processing time and complexity to the entire process,” claims Hayes.
He said that the EUID contains outdated data, is not machine-readable and is geographically limited; it also slows down the entire data collection and processing exercise, whereas the LEI offers system integration, automation and mapping capabilities.
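Part of what makes the LEI machine-readable in the way Hayes describes is its fixed ISO 17442 structure: 20 alphanumeric characters ending in two check digits computed under ISO 7064 MOD 97-10, so any system can verify a code without calling out to a registry. A minimal sketch, assuming a made-up 18-character base code purely for illustration:

```python
# LEI (ISO 17442) check-digit computation and validation, ISO 7064 MOD 97-10.
def _to_number(code: str) -> int:
    """Map letters to 10-35 (A=10 ... Z=35) and read the result as an integer."""
    return int("".join(str(int(ch, 36)) for ch in code))

def lei_check_digits(base18: str) -> str:
    """Compute the two check digits for an 18-character LEI prefix."""
    return f"{98 - _to_number(base18.upper() + '00') % 97:02d}"

def validate_lei(lei: str) -> bool:
    """A well-formed LEI is 20 alphanumeric characters whose MOD 97-10 value is 1."""
    lei = lei.upper()
    return len(lei) == 20 and lei.isalnum() and _to_number(lei) % 97 == 1

# Illustrative, fabricated base code; the check digits make it self-verifying.
base = "5493001KJTIIGC8Y1R"
code = base + lei_check_digits(base)
```

This self-checking structure is the kind of data consistency Hayes argues a second, non-machine-readable identifier undermines.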
“Relying on a dual identifier system introduces errors and inefficiencies, complicating compliance and diminishing the effectiveness of DORA’s objectives,” said Hayes.
To truly benefit from machine-readable regulations, Hayes believes that data consistency and accuracy are paramount. Dual identifiers like the LEI and EUID create disharmony within systems, he believes, thus increasing the burden on financial institutions and diluting the regulatory process and end result.
Hayes concluded, “For the regulator-institution relationship to work effectively, regulators must consider that the regulations must be compatible with suitable technology and means first and foremost.”
Copyright © 2025 FinTech Global