AI in Chemical Compliance: A Case for Caution
As Chief Technology and Product Officer, I spend a lot of time thinking about where technology can create real value for our clients, and where it risks creating more noise than progress. AI is one of those areas where both are true.
There is clearly enormous potential in it. There is also a huge amount of hype, a huge amount of superficiality, and a worrying tendency to treat fluent answers as if they were the same thing as reliable ones. In some industries, that may lead to inefficiency or frustration. In chemical compliance, particularly in the context of EU REACH, it can lead to something far more serious. That is why my own view is both optimistic and cautious.
I do believe AI has an important role to play in chemical compliance. In fact, I think it could become a genuinely valuable part of how compliance and product stewardship teams work. But I only believe that if it is applied in a way that is controlled, transparent, evidence-led, and built around expert review. I do not believe in using AI for its own sake, and I do not believe this is a domain where general-purpose models should be trusted simply because they are fast, impressive, or easy to use.
For me, the real opportunity is not in replacing chemists, toxicologists, or regulatory specialists. It is in reducing the manual effort involved in navigating the growing volume of information they have to deal with every day. The challenge is not just scientific. It is informational.
When people think about chemical compliance, they often think first about the scientific and regulatory complexity, and rightly so. But from a technology and product perspective, I think there is another equally important dimension to it: chemical compliance is also an information management challenge.
Teams are working across regulatory updates, substance data, supplier documentation, internal product data, literature, customer-specific requirements, hazard information, use information, prior assessments, and internal decisions made over time. The final expert judgement may rest on chemistry, toxicology, regulation and experience, but the path to that judgement is often heavily manual.
A great deal of effort is spent finding the right documents, reading large volumes of text, extracting the relevant points, cross-referencing them with internal data, and identifying where a potential impact may exist. That is where I think AI can help most.
Not by pretending to “know” compliance in some abstract sense, but by helping expert users search, summarise, connect, and triage information more efficiently. In other words, helping people spend less time digging and more time assessing.
Why I understand the scepticism
If you come from a chemistry or regulatory background and your instinct is to be sceptical about AI, I think that is entirely justified.
A lot of AI tooling is designed to feel quick and effortless. It produces polished language, decisive answers, and neat summaries. But in a domain like chemical compliance, those qualities can be deceptive. A polished answer is not necessarily a complete one. A confident answer is not necessarily a correct one. And a quick answer that misses a critical detail may be worse than no answer at all.
One of my biggest concerns is that AI often misses key information when it sits in the middle of a long and technical article, report, study summary, or regulatory text. Anyone who has worked in this field knows exactly how damaging that can be. The meaning of a document can hinge on a qualification, a caveat, a condition, an exemption, a methodological note, or a detail buried halfway through a dense section of text. If that gets missed, the output is not just incomplete. It can materially distort the conclusion.
That is why I am cautious about the casual use of general-purpose models, especially those optimised for speed over thoroughness. Fast is attractive. Fast demos well. Fast looks productive. But in a compliance context, fast and incomplete can be dangerous.
So when I talk about AI in this space, I am not talking about asking a general model whether a product is compliant and taking the answer at face value. I am talking about using AI in narrower, more grounded, more defensible ways, where it supports the workflow but does not replace the judgement.
Where I think AI can genuinely help
This is where I think the conversation becomes more useful. The question is not whether AI can do everything. It clearly cannot. The better question is where it can reduce effort safely and meaningfully.
One area is regulatory monitoring and impact screening. When new developments appear, the first task is often understanding what has changed, what substances are involved, what obligations may be affected, and whether any part of a product portfolio is potentially impacted. That initial triage can be highly manual. AI can help summarise the development, extract the relevant terms and substances, and cross-check them against known internal data to identify where expert attention is needed.
Another area is document navigation and review. Compliance teams routinely work across long, dense and highly technical material. AI can help extract the main points, compare related documents, identify relevant passages, and point a user back to the sections that appear most significant. Used properly, that does not remove the need to read critically, but it can reduce the time spent trying to find the right part of the haystack.
I also see value in connecting external intelligence with internal product and substance data. For example, if a new regulatory update mentions a set of substances, an AI-supported workflow could help identify which products in a portfolio include those substances, what uses may be relevant, and which accounts, dossiers, or internal stakeholders may need to be involved. Again, the value is not in a final decision being made automatically. The value is in getting the right information in front of the right expert faster.
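As a rough illustration of that kind of cross-check, here is a minimal sketch in Python. It assumes substances are identified by CAS numbers and that the portfolio is a simple in-memory list; the names, structures, and data are all my own illustrative assumptions, not a description of any real product or API.

```python
# Hypothetical sketch: cross-check substances mentioned in a regulatory
# update against an internal product portfolio, keyed by CAS number.
# All names and data here are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Product:
    name: str
    cas_numbers: frozenset  # substances the product contains

def screen_portfolio(update_cas, portfolio):
    """Return products containing any substance named in the update,
    with the matching CAS numbers, flagged for expert review."""
    hits = {}
    for product in portfolio:
        matches = product.cas_numbers & update_cas
        if matches:
            hits[product.name] = sorted(matches)
    return hits

# Illustrative data only.
portfolio = [
    Product("Coating A", frozenset({"7439-92-1", "108-88-3"})),
    Product("Adhesive B", frozenset({"50-00-0"})),
]
update_cas = {"7439-92-1", "1234-56-7"}

print(screen_portfolio(update_cas, portfolio))
# -> {'Coating A': ['7439-92-1']}
```

The point of the sketch is the shape of the workflow, not the code itself: the screen surfaces candidates and the evidence for the match, and decides nothing on its own.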
There is also a strong case for using AI to support recurring internal and external questions. In many businesses, a lot of time is spent assembling responses from existing knowledge, documents, data and previous assessments. If AI can help retrieve that information, structure a first draft, and link it back to evidence, then it can remove a meaningful amount of repetitive manual work.
That, in my view, is where the commercial opportunity really sits. Not in replacing expertise, but in improving the efficiency and responsiveness of expert teams without diluting the quality of their work.
A practical example
To make that more concrete, imagine a new regulatory update with potential relevance to EU REACH.
A sensible AI-supported workflow might summarise the update, identify the substances referenced, recognise key regulatory concepts or obligations mentioned in the text, and compare those against the substances present in a company’s product portfolio. It might then surface a list of products that could warrant review and direct the user to the precise source passages that drove that screening.
That would be genuinely useful. It could save considerable manual effort and reduce the time it takes to get from publication of a change to the point where an informed expert review begins. But it would still only be the start of the process.
A regulatory specialist or chemist would still need to assess whether the apparent match is meaningful, whether the interpretation is correct, whether the context has been fully understood, and what action, if any, is actually required. That is exactly as it should be.
Where I do not think AI should be trusted alone
Just as importantly, there are boundaries that I do not think should be crossed.
I do not think AI should be treated as an autonomous authority on regulatory interpretation. I do not think it should be allowed to produce final compliance decisions without expert review. I do not think it should generate conclusions that cannot be traced back to evidence. And I do not think black-box outputs are acceptable in an area where the reasoning behind a conclusion can matter just as much as the conclusion itself.
That is especially true in a REACH context, where obligations, classifications, uses, communications and supporting rationale all sit inside a framework that demands care, traceability and defensibility.
So, for me, the model is simple: AI should assist, and experts should assess.
That is not a compromise. I think it is the only credible way to apply this technology in a scientifically demanding and regulated environment.
What safe use looks like from my perspective
When I talk about safe use of AI in chemical compliance, I am really talking about design discipline.
First, outputs should be tied back to source material wherever possible. If a summary, answer or screening result cannot show the user the underlying text, document or evidence it is based on, trust is weakened immediately.
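One way to honour that principle in a design is to make provenance a structural requirement rather than an afterthought: an answer object that simply cannot exist without its supporting passages. A minimal sketch follows; the field names and the validation rule are my own assumptions for illustration, not any specific product's schema.

```python
# Minimal sketch: an AI-generated answer that must carry its evidence.
# Field names and the validation rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class SourcePassage:
    document_id: str   # e.g. internal dossier or regulation reference
    section: str       # where in the document the passage sits
    text: str          # the verbatim passage the claim rests on

@dataclass(frozen=True)
class GroundedAnswer:
    summary: str
    sources: tuple     # of SourcePassage

    def __post_init__(self):
        # Refuse to construct an answer with no traceable evidence.
        if not self.sources:
            raise ValueError("answer has no supporting source passages")

# Illustrative usage.
answer = GroundedAnswer(
    summary="Substance X appears in the Annex listing; review obligations.",
    sources=(SourcePassage("REG-2024-001", "Article 3(2)", "..."),),
)
```

A design like this turns "show your evidence" from a guideline into something the system enforces by construction.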
Second, the quality of the workflow depends heavily on the quality and control of the knowledge sources. In this domain, provenance matters. Relevance matters. Scope matters. A model grounded in controlled, domain-specific information is far more useful than one operating as a generic answer engine.
Third, AI should be applied to clearly defined workflows. I have much more confidence in a system designed to help review regulatory updates, screen possible product impacts, navigate dossier content, or support evidence retrieval than in a broad assistant expected to answer anything about compliance on demand.
Fourth, the system must be designed with the assumption that models can miss things. That means workflows should favour traceability, reviewability, and thoroughness over speed alone. It also means being honest about uncertainty instead of hiding it behind polished wording.
And finally, expert review must remain central. In my mind, that is not a limitation of the approach. It is a requirement for using the technology responsibly.
Why I think REACH is such an important use case
EU REACH is one of the clearest examples of why this matters.
It creates a substantial and ongoing information burden. Businesses need to understand substances, hazards, uses, obligations, supplier inputs, documentation, communications, and regulatory developments over time. They need to connect external changes with internal reality. They need to do that repeatedly, at scale, and often under time pressure.
From a product and technology perspective, that is exactly the kind of environment where carefully designed AI support can make a real difference. There is a large volume of text and data. There are repeated patterns of review and screening. There is a continual need to connect information across sources. And there is still an essential role for human judgement at the end of the process.
That combination matters. It means the opportunity is real, but so is the need for discipline.
My view in simple terms
My view is not that AI will solve chemical compliance. It will not.
My view is that it can help reduce the manual burden involved in chemical compliance, particularly around information retrieval, summarisation, triage, screening and connecting external developments to internal product and substance data. If that is done carefully, I think it can make expert teams more effective, more responsive, and better supported. That is the part that interests me most.
So yes, we are taking AI seriously. But from my perspective, taking it seriously also means refusing to be casual about it. It means recognising where general-purpose tools can fall short. It means designing for evidence, transparency and control. And it means being very clear that, in this field, thoroughness matters more than speed.
I think the future here is not expert judgement versus AI. I think it is expert judgement strengthened by well-designed AI support.
And in chemical compliance, that feels like the right ambition: not automation for its own sake, but better decisions, supported by better tools, with less manual effort spent getting to them.