Verisk cuts processing time and storage costs with Amazon Redshift and lakehouse | Amazon Web Services
This post is co-written with Srinivasa Are, Principal Cloud Architect, and Karthick Shanmugam, Head of Architecture, Verisk EES (Extreme Event Solutions).

Verisk, a catastrophe modeling SaaS provider serving insurance and reinsurance companies worldwide, cut aggregation processing time from hours to minutes and reduced storage costs by implementing a lakehouse architecture with Amazon Redshift and Apache Iceberg. If you’re managing billions of catastrophe modeling records across hurricanes, earthquakes, and wildfires, this approach eliminates the traditional compute-versus-cost trade-off by separating storage from processing power.

In this post, we examine Verisk’s lakehouse implementation, focusing on four architectural decisions that delivered measurable improvements:

  • Execution performance: Sub-hour aggregations across billions of records replaced long-running batch processes
  • Storage efficiency: Columnar Parquet compression reduced…
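The storage-efficiency point rests on columnar layout: storing all values of one field contiguously groups similar, low-cardinality data together, which compresses far better than interleaved row records. A minimal stdlib sketch of that effect, using synthetic, hypothetical data and plain zlib (real Parquet layers dictionary and run-length encodings on top of compression):

```python
import csv
import io
import random
import zlib

# Hypothetical stand-in for catastrophe modeling output: a low-cardinality
# peril column, a low-cardinality year column, and a high-entropy loss column.
random.seed(42)
rows = [
    (random.choice(["hurricane", "earthquake", "wildfire"]),
     random.randint(1990, 2023),
     round(random.uniform(1e3, 1e6), 2))
    for _ in range(10_000)
]

# Row-oriented layout: records serialized one after another, CSV-style.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
row_bytes = buf.getvalue().encode()

# Column-oriented layout: every value of a field stored contiguously,
# which is the storage model Parquet uses.
col_bytes = b"".join(
    "\n".join(str(v) for v in col).encode() + b"\n"
    for col in zip(*rows)
)

row_size = len(zlib.compress(row_bytes, 9))
col_size = len(zlib.compress(col_bytes, 9))
print(f"compressed row-oriented: {row_size} B, columnar: {col_size} B")
```

The columnar layout typically compresses to a noticeably smaller size here because the repetitive peril and year columns are no longer interleaved with near-random loss values; Parquet exploits the same property at scale.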

https://aws.amazon.com/blogs/big-data/verisk-cuts-processing-time-and-storage-costs-with-amazon-redshift-and-lakehouse/