Designing an efficient, cost-effective vessel requires extensive knowledge. As a designer and manufacturer of complex, high-value vessels and equipment, it is important for us to know how they perform once in the field. Do they operate as intended? Are there design flaws that need to be addressed? However, due to the competitive nature of the market in which our equipment is deployed, this information is not always readily available.
Within Royal IHC, we have looked at ways of analysing the operational performance of trailing suction hopper dredgers (TSHDs). Before taking you through our thought process, let's first look at the kind of work a TSHD generally does.
The dredge cycle
The work of a TSHD is cyclical in nature:
It sails to a location where it can load material (1) and proceeds to fill its hopper (2, 3). With a full hopper, it sails to an offloading location (4), where it offloads its material (5). There are three main offloading methods: shore pumping via hoses (5A), dumping via its bottom doors (5B) or rainbowing via its bow connection (5C). After offloading, the whole process starts again.
If you look at the dredge cycle in a speed-over-time track, you can see that both loading and offloading are done at low speeds while sailing generally happens at a higher speed.
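This speed pattern can be turned into a simple segmentation rule. The sketch below is illustrative only: the 2-knot threshold, the phase names and the helper functions are assumptions for this example, not the actual method or values used in our analysis.

```python
# Illustrative sketch: segmenting a speed-over-time track into phases.
# The 2-knot threshold and phase names are assumptions for this example.

def classify_phase(speed_knots, threshold=2.0):
    """Label a datapoint as 'working' (loading/offloading) or 'sailing'."""
    return "working" if speed_knots < threshold else "sailing"

def segment_track(speeds, threshold=2.0):
    """Group a speed track into contiguous (phase, speeds) segments."""
    segments = []
    for speed in speeds:
        phase = classify_phase(speed, threshold)
        if segments and segments[-1][0] == phase:
            segments[-1][1].append(speed)  # extend the current segment
        else:
            segments.append((phase, [speed]))  # start a new segment
    return segments

track = [0.5, 1.0, 1.2, 9.8, 10.1, 0.8, 0.6, 11.0]
print([phase for phase, _ in segment_track(track)])
# ['working', 'sailing', 'working', 'sailing']
```

In practice the threshold would have to be tuned per vessel, since loading is often done at a low but non-zero trailing speed.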
Location information and vessel specifications
You can only analyse results if you have context – you need to know something about the subject and have the relevant data. In our case, we need to know the specifications of a particular vessel and where it has been. Different types of vessels have different operational parameters: analysing a pipelay operation when you expect a dredging operation will yield unusable results. Understanding a vessel's specifications also helps with an initial plausibility check of the results. If a vessel's design speed is listed as 12 knots and you see it consistently going above 20 knots, something may be off.
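Such a plausibility check is easy to automate. This is a minimal sketch under assumed values: the 12-knot design speed matches the example above, but the margin factor and function name are illustrative.

```python
# Sketch of a plausibility check against vessel specifications.
# The margin factor of 1.5 is an illustrative assumption.

def implausible_speeds(speeds_knots, design_speed=12.0, margin=1.5):
    """Return datapoints exceeding the design speed by more than the margin."""
    limit = design_speed * margin
    return [s for s in speeds_knots if s > limit]

print(implausible_speeds([10.2, 11.8, 21.5, 9.9]))  # [21.5]
```

A non-empty result suggests either a data-quality problem or that the vessel specifications on record are wrong.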
To understand the nature of the operation and the type of dredging work, knowing the loading and offloading locations is vital. If the loading location is inside a port and the offloading location outside it, chances are the vessel is maintaining the port's depth. If it's the other way around and the vessel is bringing soil into the port, it may be constructing new land. This explains a great deal about what it has been doing.
Another important indicator of a vessel's activity is speed. It can also reveal whether a vessel is loaded or empty, and whether it is dredging, shore pumping or dumping via its bottom doors. All of these activities are done at different combinations of speed, time spent and location.
When looking at the location information of a 5,600 m³ TSHD, you can instantly tell that it has been working and at one point visited the port:
If we look at the low-speed locations, three locations remain:
One is in the port (1); the other two are the loading and offloading locations (2, 3). The loading and offloading locations can easily be confused, as they can be similar in shape, time and speed. In some cases the soil is unwanted (maintenance dredging) and must be transported away, while in others it is required (capital and reclamation dredging).
A manual classification of the loading and offloading locations is therefore required. In this example, 2 is the offloading location and 3 is the loading location. Once the datapoints have been labelled, the machine learning algorithm uses both the labels and the datapoints to create clusters. These clusters are then reclassified using a decision tree, after which the algorithm can produce the required results.
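The cluster-then-reclassify step can be sketched as follows. This is a minimal illustration assuming scikit-learn: the coordinates are synthetic, and the cluster count, labels and parameters are assumptions for the example, not the actual model or data behind the results shown here.

```python
# Minimal sketch of the cluster + decision-tree workflow, assuming
# scikit-learn. All coordinates and labels below are synthetic.
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

# Synthetic low-speed positions (lon, lat) around three sites
points = [
    [4.10, 52.00], [4.11, 52.01],  # near the port
    [4.30, 52.10], [4.31, 52.11],  # offloading area
    [4.50, 52.20], [4.51, 52.21],  # loading area
]

# Step 1: cluster the low-speed datapoints into candidate locations
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
clusters = kmeans.fit_predict(points)

# Step 2: manually label one datapoint per location, then train a
# decision tree on those labels
labelled_points = [[4.10, 52.00], [4.30, 52.10], [4.50, 52.20]]
labels = ["port", "offloading", "loading"]
tree = DecisionTreeClassifier(random_state=0).fit(labelled_points, labels)

# Step 3: reclassify each cluster by running its centre through the tree
roles = tree.predict(kmeans.cluster_centers_)
print(sorted(roles))
```

On this toy data each cluster centre lands cleanly in one region, so every cluster receives a distinct role; real tracks are noisier and need more labelled points.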
When we look at the processed data, we get the following image:
Orange is correctly classified as the offloading location and red as the loading location. When we look at the speed-over-time graph, we can see all cycles neatly segmented:
Based on these results, we can go further and produce the required metrics that enable us to analyse the performance of the vessel and optimise our designs.
Being able to generate these metrics is an important step. Another crucial aspect is being able to trust the results. In the next blog, we will discuss how we were able to validate the algorithm.