Earth Observation and its analytical counterpart, GeoAI, are a pair of promising technologies with potentially large benefits for insurance. By providing, quite literally, a different view of the world, they enable a whole new set of approaches to risk assessment and loss analysis. Unfortunately, though, the technology is improving far faster than the sector can adopt it. So far, at least, the most advanced EO-GeoAI remains aspirational tech for most in the market.
EO is shorthand for the practice of recording a bird’s-eye view of the ground and the things on it. GeoAI is the suite of computing tools necessary to analyse the information collected by various types of EO imaging systems. At its best, the combination is aerial reconnaissance with superpowers. It could be revolutionary for some underwriting and claims processes, and we do not use that word lightly.
Level up
EO has levels of sophistication, both in terms of sensors and the platforms which take them aloft. Still pictures from drone-mounted instruments are the most basic, yet their practical applications for insurers are many and often obvious. The primary advantage, alongside low costs, is the ability to inspect specific sites quickly in areas which cannot be seen from the ground. It is far easier to use a drone to photograph a roof than to climb up and have a look yourself.
The fact that drone-mounted cameras occupy the lowest level of EO tech hints at the enormous heights it has reached. Aircraft-mounted instruments are the next level of platform. True, people took pictures from planes almost from day one, but today the instruments used to snap objects and action on the ground are far more sophisticated than even the highest-quality Olympus.
Bat-like Synthetic Aperture Radar (SAR) moves beyond static photographs and surpasses grainy video. It reads the lie of the land by bouncing radio signals off what is below. Since it does not depend on reflected sunlight, it can literally draw you a picture of surface contours in the dark, and even through cloud. SAR data requires a lot of interpretation (which is where the GeoAI comes in), but it can tell us, for example, the extent of flooding after a deluge, since the inundated surface sits higher than it did pre-flood and is very smooth. It can also measure wetness through changes in soil conductivity, which can be very useful in underwriting, but it suffers from backscatter distortion, particularly in urban environments, where features such as lamp posts and cars muddy the picture.
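A minimal sketch of how that smooth-water effect can be exploited: calm floodwater reflects the radar pulse away from the sensor, so flooded pixels return unusually weak backscatter and can be flagged with a simple threshold. The grid, noise parameters and -15 dB cut-off below are illustrative assumptions, not values from any real SAR product.

```python
import numpy as np

# Synthetic SAR backscatter grid in decibels (dB). Real data would come from
# a calibrated product such as Sentinel-1; these values are illustrative.
rng = np.random.default_rng(0)
land = rng.normal(-8.0, 2.0, size=(4, 4))    # rough land: stronger return
water = rng.normal(-20.0, 1.5, size=(4, 4))  # smooth water: weak return
scene = np.concatenate([land, water], axis=0)

# Smooth open water reflects the radar pulse away from the sensor, so
# flooded pixels sit well below an assumed backscatter threshold.
THRESHOLD_DB = -15.0
flood_mask = scene < THRESHOLD_DB

flooded_fraction = flood_mask.mean()
print(f"Estimated flooded fraction: {flooded_fraction:.2f}")
```

In practice the threshold would be tuned per scene, and the GeoAI layer would filter out the urban backscatter distortion mentioned above.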
Another really useful imaging tool is LiDAR, an even more sophisticated ranging system. LiDAR fires laser pulses at the planet to create three-dimensional pictures of the surface. These are invaluable to insurers in areas such as assessment of flood and subsidence risk. Swiss Re subsidiary Fathom, for example, has just announced a new system that will apply GeoAI tools to existing LiDAR data to create what it claims will be the most accurate, bias-corrected global terrain model ever.
What can be stuck on a plane can be attached to a satellite, creating an altogether higher level of data collection. Imaging from space yields reach that permits the collection of immense data sets very quickly. Combining space-based collection with the most advanced sensing instruments marks the peak of the field. To that end, the most sophisticated end of the Earth Observation spectrum is hyperspectral sensing from space.
Engage the hyperdrive
Hyperspectral technology “sees” things that are invisible. It allows determination of the physical characteristics of objects from space, all the way down to their molecular composition. If data is the “new oil” (as has been said), and Earth Observation data is a “game-changing new frontier” for the insurance sector (as reported), then hyperspectral Earth Observation data should fire a revolution. So far, though, it is just potential. Existing agencies and some exciting start-ups hold much promise, but they are not yet delivering hyperspectral data from a stable orbit.
It works like this. Every substance, natural or man-made, has a reflective quality that is as unique as a fingerprint. Carbon (say) reflects light differently from quartz. Specialist hyperspectral sensors measure, with great granularity, the wavelengths of light reflected by specific types of matter across the electromagnetic spectrum, even at great distances.
In other words, light bounces differently off every different kind of matter. By reading the characteristics of the light a thing reflects, we can figure out what it is made of (there is the GeoAI again). For example, not only does light reflected off water have a fingerprint that is visibly different from space than that of light reflected off concrete, but water containing, say, fluorocarbons has a different signature than plain water.
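That matching step can be sketched in a few lines. A standard approach in remote sensing is to compare a pixel's spectrum against a library of reference signatures by spectral angle (smaller angle means more similar material). The four-band signatures below are hypothetical illustrations, not real laboratory spectra.

```python
import numpy as np

def spectral_angle(a, b):
    """Angle between two reflectance spectra; smaller means more similar."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical reflectance signatures over four wavelength bands.
library = {
    "water":      np.array([0.02, 0.03, 0.01, 0.005]),
    "concrete":   np.array([0.25, 0.30, 0.35, 0.40]),
    "vegetation": np.array([0.04, 0.08, 0.05, 0.50]),
}

# An observed pixel spectrum: close to water, with a little noise.
pixel = np.array([0.025, 0.028, 0.012, 0.006])

best = min(library, key=lambda name: spectral_angle(pixel, library[name]))
print(best)  # the closest-matching material in the library
```

A production GeoAI pipeline would run this kind of comparison over hundreds of bands and millions of pixels, but the principle is the same.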
The technology has been used by NASA on Mars rovers to figure out what our neighbouring planet is made of. A hyperspectral sensing instrument mounted on a satellite can do the same over distance.
Realistic use scenarios
The potential applications for such technology are almost too many to count. Imagine you underwrite oil refineries. Hyperspectral Earth Observation would allow you to determine if the tops of storage tanks show evidence of rust or leakage of fumes.
It is magnificently useful for claims, too. Perhaps you wish to know the extent of hail damage to a roof following a storm. Hyperspectral imagery, if sufficiently granular, can identify instances of roof substrate showing through the shingle layer (in theory, at least). The California start-up Matter Intelligence is currently experimenting in this area.
To add to the hyperbole, hyperspectral has the potential to change materially how natural catastrophes are detected, assessed and priced within the insurance value chain. Its impact is best understood by contrasting it with the incumbent approaches. These range from just having a look to multispectral imaging.
Before hyperspectral, which can divide the electromagnetic spectrum into more than 1,000 spectral bands, multispectral sensors typically captured 8 to 15 bands of light. The far higher spectral resolution of hyperspectral can therefore produce a much more detailed spectral signature for each “pixel” observed. That allows both compositional identification and the monitoring of subtle changes in conditions, such as stress and moisture content, even before they are otherwise visible.
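The difference in band count is easiest to see in the shape of the data itself. An EO scene is usually handled as a three-dimensional cube (rows × columns × bands), and each ground pixel yields one spectrum per sensor. The scene sizes and band counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# An EO scene is a 3D cube: rows x columns x spectral bands.
# Multispectral sensors capture roughly 8-15 bands;
# hyperspectral instruments can exceed 1,000.
multispectral = rng.random((50, 50, 10))     # 10 broad bands
hyperspectral = rng.random((50, 50, 1000))   # 1,000 narrow bands

# Each ground pixel yields one reflectance spectrum per sensor.
ms_signature = multispectral[25, 25, :]   # 10 samples of the spectrum
hs_signature = hyperspectral[25, 25, :]   # 1,000 samples of the same spectrum

print(ms_signature.size, hs_signature.size)
```

A hundred-fold more samples per pixel is what turns a coarse colour reading into a signature detailed enough to identify composition.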
Meanwhile, the pixels are shrinking. These blocks of perception were recently only as granular as a city block, but new instruments promise to bring that down to about a metre. On a practical level, that allows insurers to characterise individual asset vulnerability and hazard susceptibility.
Hyperspectral can confidently determine factors such as roofing type, degradation and coatings. It can show if a surface is asphalt or concrete or a composite, and can even determine vegetation types and moisture levels adjacent to risks. That can be useful to assess wildfire exposure or the risk of subsidence. The benefit is more granular vulnerability curves, differentiation of risks which currently look identical in underwriting models, and dynamic assessment of risk surfaces that evolve seasonally or even weekly.
When it comes to damage assessment, because hyperspectral data detects chemical and physical changes, it can identify such things as burn severity and combustion completeness in wildfires; oil, chemical, or hazardous material releases during floods or storms; crop and vegetation damage before canopy collapse; and many other post-event conditions. That allows earlier and more confident confirmation of loss-causing conditions, supporting faster reserve-setting and reinsurance notifications.
By determining how damage occurred, hyperspectral can facilitate improved causation attribution, which is critical for coverage assessment, sub-limit application, and litigation and subrogation support. It can aid the gradation of damage severity, for example through continuous severity scoring based on material alteration. It can detect hidden or latent damage like moisture ingress or chemical exposure. Those features allow more precise loss estimation at scale.
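As one hypothetical illustration of such a severity score (the formula and signatures below are our own assumptions, not an industry standard), the spectral change between pre- and post-event reflectance can be normalised into a continuous 0-to-1 index:

```python
import numpy as np

def severity_score(pre, post):
    """Normalised spectral change between pre- and post-event reflectance.

    0.0 means no measurable alteration; values near 1.0 indicate the
    material's signature has changed almost completely. By the triangle
    inequality the result always lies between 0 and 1.
    """
    change = np.linalg.norm(post - pre)
    scale = np.linalg.norm(pre) + np.linalg.norm(post)
    return float(change / scale) if scale > 0 else 0.0

# Hypothetical four-band roof signatures before and after an event.
intact  = np.array([0.30, 0.35, 0.40, 0.45])
charred = np.array([0.05, 0.06, 0.05, 0.07])  # heavily burnt surface
damp    = np.array([0.28, 0.31, 0.33, 0.36])  # slight moisture ingress

print(severity_score(intact, charred))  # high score: severe alteration
print(severity_score(intact, damp))     # low score: latent, subtle damage
```

Scored continuously rather than in bands such as "minor/major/total", this kind of index is what would allow loss estimation at scale.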
Where next?
Many further applications are conceivable, but while the promise of hyperspectral is great, we know of only a handful of projects pursuing the practical application of hyperspectral data for the insurance sector. Costs are high, and the barriers to entry are substantial, if not quite insurmountable.
One way ahead is the common-property approach. Companies such as McKenzie Intelligence Services and LexisNexis compile satellite data from multiple organisations to provide clients with the specific data and imagery they need at a given moment. In the Lloyd’s market, a subscription organised through the Lloyd’s Market Association makes such a service available to all managing agents trading there. The insurance sector’s cooperative Geospatial Insurance Consortium, which existed to collect EO data from aircraft, has merged into Vexcel, a company which collects still images and multispectral data from planes. Looking ahead, start-up Orbital Sentry proposes a new “Insurance Wildfire Consortium” to help EO adoption for that costly peril.
Still, in the words of Northeastern University lecturer and former Verisk geospatial product guru Tee Barr, “the insurance industry has been remarkably slow to operationalize EO data at scale.”
This is easy to explain. It has been only a couple of years since we could strap cameras to drones. It can take several years to incorporate the data they generate into established processes, and by the time that has happened – and it is happening now across the insurance sector – it is already time to take it to the next level. It is no wonder many take a wait-and-see approach, but we should be exploring the options. For early adopters, sophisticated EO systems paired with effective GeoAI processes promise a dramatic competitive advantage.
