After a slow start in a notoriously conservative industry, the adoption of data science and data-driven decision making has accelerated in the maritime industry over the past couple of years. Indeed, when it comes to the availability and analysis of global spatial data, the industry is arguably far ahead of land-based freight transportation. New and demanding environmental regulation is likely to push this development further over the coming decade as customers, policymakers and the general public demand greater transparency and insist that the industry clean up its act. Despite this very encouraging trend, there are some high-level challenges that prevent us from unlocking the full value of maritime data science.
Economists love to use the bulk shipping markets as a textbook example of a perfectly competitive market, where freight rates are offered down to the marginal cost of transportation in times of oversupply, with little or no possibility of differentiation. For ship operators and owners conditioned to cut-throat competition, it is a scary proposition to open up and share proprietary data with perceived commercial value with outsiders, whether startups, academic institutions or other third parties. Even internally, there is often a lack of communication, data flow and learning between, for instance, the departments responsible for the technical and commercial management of a fleet.
In a fragmented, lean industry such as shipping, this is probably not the best approach. Firstly, it necessarily means that each company must spend resources and manpower to reinvent the wheel, while duplicating the efforts of everyone else in the process. Secondly, decisions made on a smaller set of data are almost certainly suboptimal, as you do not see the full picture. Thirdly, particularly in the world of machine learning, the more data you have available for calibration, the lower the probability of generating false signals. The question to ask is whether the economic benefits from co-operation beat the value of keeping your data for internal use only. In many cases, such as basic vessel performance analytics, the answer is almost certainly yes. Data science can probably even assist with this conundrum through the use of "shared learning" based on edge data, where models are trained locally and only their parameters are shared, as opposed to having to pool raw data from users to train models.
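The "shared learning" idea can be sketched in a few lines. The following is a minimal, illustrative example in the spirit of federated averaging: each company fits a simple speed-to-fuel model on its own private voyage data, and only the fitted coefficients are shared and averaged. The data, model form and company names are all hypothetical, not taken from any real fleet.

```python
# Sketch of "shared learning": each company fits a model on its own
# (private) voyage data; only the fitted parameters are shared and
# averaged, never the raw observations. All numbers are illustrative.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b on one company's data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Each fleet's private speed (knots) vs. fuel burn (tonnes/day) data.
fleet_a = ([10, 12, 14, 16], [20, 26, 33, 41])
fleet_b = ([11, 13, 15, 17], [23, 29, 36, 44])

local_models = [fit_linear(*fleet) for fleet in (fleet_a, fleet_b)]

# Federated averaging step: pool parameters, not data.
a_shared = sum(m[0] for m in local_models) / len(local_models)
b_shared = sum(m[1] for m in local_models) / len(local_models)

print(f"shared model: fuel/day ~ {a_shared:.2f} * speed {b_shared:+.2f}")
```

In a real deployment the averaging would happen on a neutral server over many rounds, so no participant ever sees a competitor's voyage data.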
While modern ships are undoubtedly increasingly sophisticated, the majority of the world fleet remains decidedly "analog", without the sensors and high-bandwidth communication channels to either be sufficiently connected or generate the required data streams. There is clearly a huge potential in the development of low-cost sensors and onboard data processing solutions that can be retrofitted to older tonnage, ultimately bringing the whole world fleet into the 21st century. Indeed, we see the physical-to-digital link as the most crucial, and the hardest, to get right as you otherwise risk being stuck with garbage-in/garbage-out models and output. While data science can help in identifying outliers in input data, this is not an ideal solution.
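As a concrete illustration of catching garbage before it reaches a model, one common screening technique is a median-absolute-deviation (MAD) filter on reported figures. The threshold and the noon-report numbers below are purely illustrative; this is a sketch of the idea, not a production data-quality rule.

```python
# Minimal sketch: flagging suspect noon-report fuel figures with a
# median-absolute-deviation (MAD) filter before they feed any model.
import statistics

def flag_outliers(readings, threshold=3.5):
    """Return indices of readings far from the median, in MAD units."""
    med = statistics.median(readings)
    mad = statistics.median(abs(r - med) for r in readings)
    if mad == 0:
        return []
    # 0.6745 rescales MAD to be comparable to a standard deviation
    return [i for i, r in enumerate(readings)
            if abs(0.6745 * (r - med) / mad) > threshold]

fuel_per_day = [28.1, 27.6, 28.4, 2.84, 27.9, 28.2]  # one decimal slip
print(flag_outliers(fuel_per_day))  # the 2.84 t/day entry stands out
```

The point in the text stands, though: a filter can only flag the bad reading, not recover the true one, which is why the physical-to-digital link has to be right at the source.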
The link to the real world is particularly important if there is to be progress in the application of smart contracts in the supply chain. For instance, if the charterparty for a grain cargo were to be autonomously executed, a question that would need to be objectively answered would be: Is this cargo hold clean enough for loading to commence? For now, this requires human inspection – maybe using drones at the cutting edge – yet this is something that technology ought to be able to solve.
When tech guys take a first peek at our venerable industry from the outside, the first conclusion is that there appear to be a lot of inefficiencies. For instance, if speed and fuel consumption data for a voyage is analyzed by any half-decent machine learning model, it is likely to conclude that the speed is too high. Why? Well, the vessel spent 48 hours waiting at anchorage before loading commenced. Seen in isolation this would be the correct conclusion, but of course, this typically did not occur because ships are operated poorly on purpose. Instead, a host of contractual constraints and incentives govern the behavior of ships (e.g. making the cancelling date, maximizing demurrage, getting in the lineup for a busy terminal), and any data-driven optimization must be able to account for this. However, this highlights a much more important issue: in order to drive efficiency gains and emission reduction in shipping, changes in contracts and port policies are as important as data and models.
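The gap between the naive and the constraint-aware answer can be made concrete. The sketch below assumes, as is common, that daily fuel consumption scales roughly with the cube of speed, so total voyage fuel always falls as the ship slows down, and the cancelling date then acts as a floor under the speed. The distance, deadline and consumption constant are invented for illustration only.

```python
# Sketch: why "just slow down" and real scheduling disagree. Daily
# burn is assumed cubic in speed (a standard approximation), so total
# voyage fuel falls monotonically with speed; the charterparty's
# cancelling date puts a floor under it. All numbers are illustrative.

def voyage_fuel(distance_nm, speed_kn, k=0.008):
    """Total fuel (t) for the leg; daily burn assumed k * speed**3."""
    days = distance_nm / (24 * speed_kn)
    return k * speed_kn ** 3 * days

def min_speed_for_deadline(distance_nm, days_to_cancelling):
    """Slowest speed that still makes the cancelling date."""
    return distance_nm / (24 * days_to_cancelling)

distance = 4000   # nautical miles to the load port
deadline = 12     # days until the cancelling date

naive = 10.0                                    # unconstrained optimum
required = min_speed_for_deadline(distance, deadline)
chosen = max(naive, required)                   # constraint binds

print(f"required speed {required:.1f} kn, "
      f"voyage fuel {voyage_fuel(distance, chosen):.0f} t")
```

A model trained only on observed speeds and consumption sees the extra burn but not the cancelling-date constraint that caused it, which is exactly why contract terms belong inside the optimization.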
As a final thought: Until (or unless) we ever get to a fully autonomous transportation system, data-driven decisions and optimization models are only as good as the people who implement them. Leaders in this field, therefore, need to get even better at selling how data science can be used to make our jobs more interesting and to empower the people who do them.
Roar Ådland joined the Norwegian School of Economics as a professor in 2012. He is the holder of the Bergen Shipowners’ Association Chair in shipping economics at the Center for Shipping and Logistics. He holds a Master of Science degree in marine technology from NTNU, an M.Phil. in Business Economics from NHH and a Master of Science in Ocean Systems Management from Massachusetts Institute of Technology (MIT). Professor Ådland received his Ph.D. in Ocean Systems Management from MIT in 2003.
Before becoming an academic, Roar Ådland had a career in the shipping industry, first as an analyst with Clarkson Research Ltd. in London and, from 2006, as a trader and portfolio manager of freight derivatives (FFAs) at Clarkson Fund Management Ltd., a shipping-focused hedge fund.
Roar Ådland’s research focuses on freight derivative pricing and trading, applications of AIS data, vessel valuation, shipping risk management and bulk freight market modeling.
This article was originally published in the ‘Sea: the Future’ conference magazine.