
Maritime AI: A good governance checklist


By Joanne Waters & Tim Ryan


Published 01 September 2025

Overview

AI is increasingly being used in the maritime industry to provide better situational awareness and better insight into a vessel's condition and performance, and to support better business decisions overall.

The safe and effective adoption of AI systems requires good governance, which goes beyond a standard IT or information security policy. The latest research from Thetius shows that while a majority of maritime companies are actively piloting AI use cases, only 11% have the necessary governance policies in place to support AI use.

 

AI governance checklist

Some of the issues maritime GCs and compliance officers may wish to consider when formulating a maritime AI governance strategy to enable successful pilots and broader adoption of AI are:

1. Setting and implementing an AI procurement policy, including:

a. How you will exercise and document due diligence in the selection and vetting of AI developers and products, including by reference to classification society rules, type approvals and internationally accepted standards for quality and assurance

b. How you will verify the vendor's statements regarding the system's capabilities

c. How you will ensure your data, and any new data produced by the system, remains secure, private and owned by you

d. How you will ensure any new systems, and the data they use and generate, can be fully and properly integrated with existing systems

e. Steps for understanding the allocation of risks and liabilities as between AI system developers, providers and users, avenues for recourse, and who is responsible for maintaining hardware and software and for how long

2. Steps for understanding any restrictions on the use of data generated by a product, and any fees for using APIs to allow that data to be integrated into your existing systems and data collection processes

3. Setting and implementing an AI use policy – for example, prohibiting crew and shore-side staff from using public generative AI for guidance on safety-critical tasks, or from putting private, confidential company data into public AI systems like ChatGPT

4. Ensuring there is a system for auditing and updating SMS manuals, crew training and crew familiarisation protocols to account for the use of AI systems – for example, are crew aware that generative AI should be seen as a cadet and not a captain, contributing information to decision-making but not making any decisions itself? Are crew aware of how much human oversight and intervention a system needs?

5. Where AI is used in safety-critical applications, ensuring there is a procedure in place so that crew are fully aware of, and competent in using, any non-AI manual back-up in the event the system fails

6. Updating the PMS to ensure it includes routine maintenance and calibration of sensors, and patching and updating of software

7. Maintaining up-to-date business continuity policies to address failure of the system, or of the sensors that feed data into it

8. Implementing a policy to prevent crew digital fatigue and ensuring that any new technology takes into account human factors

9. Updating cyber security policies to address and mitigate any new risks arising from the use of AI systems

10. Ensuring you have in place robust data management policies, infrastructure and expertise, and have mapped all sources of data being used by AI systems

11. Ensuring you have all appropriate data use rights for data being used to train, and operate, AI systems

12. Setting clear accountability rules regarding when and how AI systems can be allowed to operate autonomously (for example, for routine, repetitive, low-risk administrative tasks) and when a human must be kept in the loop

13. Reviewing standard contracts and clauses to ensure they are fit for purpose and address the use of the AI system and data

 

Who is responsible?

Having the right policies is only one part of good governance for maritime AI. Defining who is responsible for implementing each component, and recording and auditing implementation of those policies, is vital.

As AI begins to infiltrate all aspects of business and vessel operations, responsibility is likely to be spread across departments, and a cross-functional "AI Team" is likely to be needed, potentially in collaboration with external experts including classification societies. For example, where a shipowner is deploying a pilot of a collision avoidance system that combines computer vision with advanced data analytics on one of its vessels, the responsibilities of the "AI Team" might be organised as follows:

ISM Manager / Vessel's DPA

· Updates to PMS

· Updates to SMS

· Updates to crew training and familiarisation procedures

· Undertaking crew training

· Monitoring use of system and collating crew feedback

IT

· Vetting AI systems and vendors

· Ensuring interoperability and integration

· Input into data management policies and setting up necessary data infrastructure

· Cyber-security

· Ensuring the vessel has sufficient connectivity to meet the system's requirements

Legal, compliance and insurance

· Liaising with flag state for any necessary authorisations to deploy technology onboard

· Reviewing and updating contracts to account for use of new system

· Reviewing and updating policies

· Reviewing and updating insurance coverage

Operations

· Liaising with charterers, ports of call and pilots regarding use of system

· Analysing system performance versus status quo, benchmarking and providing feedback

Crew

· Making use of system, following training and policies

· Providing feedback

Overarching all of this is C-suite buy-in and leadership, setting the broader strategic goals and the business impact that the use of any particular AI tool is intended to achieve.

 

Why do I need an AI-specific policy?

AI, and the software and systems powered by it, are not "business as usual" systems. AI has the potential to fundamentally reshape the way maritime business and operations are done, particularly through systems capable of making decisions with varying degrees of autonomy. The ability of AI systems to spot hidden patterns in data and make decisions without human intervention is a paradigm shift, particularly for an industry like shipping that places a high emphasis on intuitive decision-making and personal relationships.

The way in which AI systems work is a step change from traditional IT. Traditional IT is driven by pre-determined, coded workflows mapping out the steps a system will take to provide a service in a predictable and static way. AI systems, and in particular those driven by machine learning, do not run on a pre-determined, coded workflow. They are trained on, and learn from, data, and their core strength is that they can adapt to new data. That adaptability means that AI software and systems do not behave like traditional IT and cannot be treated as such. They cannot be installed and then effectively ignored until they require patching or security updates. They require proactive oversight to ensure there is no "model drift" (i.e. that the performance of the model is not degrading over time as new data is introduced).
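
To make the idea of "proactive oversight" concrete, the sketch below shows one minimal way drift monitoring might be implemented. The baseline accuracy, tolerance and metric names are hypothetical illustrations, not taken from any particular system.

```python
from statistics import mean

# Illustrative figures only: in practice the baseline would be recorded when the
# system is accepted into service, and the tolerance agreed as part of AI governance.
BASELINE_ACCURACY = 0.94   # assumed acceptance-test accuracy
DRIFT_TOLERANCE = 0.05     # assumed acceptable degradation before review

def check_for_model_drift(recent_accuracy_scores: list[float]) -> bool:
    """Return True if recent performance has degraded beyond the agreed tolerance."""
    recent = mean(recent_accuracy_scores)
    drifted = (BASELINE_ACCURACY - recent) > DRIFT_TOLERANCE
    if drifted:
        # In a governance process this would trigger a review by the AI Team:
        # investigate the data, retrain the model, or revert to the manual back-up.
        print(f"Possible model drift: recent accuracy {recent:.2f} "
              f"vs baseline {BASELINE_ACCURACY:.2f}")
    return drifted

# Example: periodic accuracy figures collected from the system's performance logs
check_for_model_drift([0.91, 0.88, 0.86, 0.87])
```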

The growing use of AI also introduces entirely new risks that are unlikely to be covered by traditional IT and data security policies. AI has enabled new forms of malicious cyber-attack and increased the speed at which attackers can move within a company's IT ecosystem. It has also given cyber-criminals the opportunity to attack AI models themselves, by introducing "bad data" into a training set and so "poisoning" the model. The widespread availability of free, public LLMs such as ChatGPT also exposes companies to data leakage, where confidential business information entered into the system can become part of the data available to it when answering future queries.

Finally, the law regarding liability for damage caused by AI systems is in a very early stage of development. The recent Law Commission discussion paper on AI and the Law points to the long and often complex supply chains that are involved in AI development, and the difficulty with pinpointing where in that chain liability might lie.

This is even more so for maritime law, where liability for incidents involving vessels often devolves to questions of seaworthiness. As a matter of English law, a shipowner is responsible for, at a minimum, exercising due diligence to make a vessel seaworthy. Evidencing the exercise of due diligence is a challenge where machine learning systems are used, because by their very nature the way in which these systems work sometimes cannot be readily explained, even by those who developed them. If a shipowner is unable to understand how a system works, how are they to test its robustness, question the underlying assumptions used in the model, or test the propriety and completeness of the training data?

Further, where the duty is one of due diligence, a shipowner cannot delegate responsibility for unseaworthiness and remains responsible even where that unseaworthiness was caused by the negligence of third parties [1]. There has been no indication from the English courts or legislature that this fundamental principle is set to shift with the rise in maritime digitalisation [2], so for the near future shipowners must assume that it will continue to be applied, even where a vessel is found to be unseaworthy due to an error in a safety-critical AI system that the shipowner was unable to detect because of the "black box" nature of the system. It may be argued that such an error is a "latent" defect, being one that was not reasonably discoverable by the shipowner or the experts employed by him. The obvious counter-argument is that the shipowner ought to have made sufficient enquiries of the vendor to understand the system and its limitations, and that where the vendor could not provide clear answers, the proper exercise of due diligence should have led to an alternative vendor being chosen. It is also important to note that the exercise of due diligence in the selection of independent contractors is not, by itself, enough to fulfil the seaworthiness obligation; those contractors must themselves exercise due diligence in any work they carry out that relates to the seaworthiness of the vessel.

This underlines the need for implementing strong AI governance as a protection against possible claims, with policies and implementation records acting as a first line of defence in demonstrating responsible selection, use and oversight.

If you are drafting or updating your governance policies to account for the adoption of maritime AI and would like some assistance, our dedicated Maritime Technology team can help.

 

[1] The Muncaster Castle [1961] 1 Lloyd’s Rep. 57

[2] Compare the legislative approach taken in relation to autonomous vehicles, with the Automated Vehicles Act 2024 setting out clear divisions of liability between users and developers, and clear boundaries of liability depending on which mode a vehicle is operating in.
