Using queries derived from the standard TPC-H benchmark, we compared Bodo to Spark and Dask on data processing workloads. We used scale factor 1,000 (~1 TB dataset) on a cluster of 16 c5n.18xlarge AWS instances with 576 physical CPU cores and 3 TB of total memory. Bodo provided a median speedup of 23x over Spark (95%+ infrastructure cost savings) and 148x over Dask. (Note: equivalent Python versions of the TPC-H SQL queries were used to evaluate the Python systems.)
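As a minimal sketch of what an "equivalent Python version" of a TPC-H SQL query looks like, here is TPC-H Q6 (forecast revenue change) written as ordinary pandas code; under Bodo the same function would simply be decorated with `@bodo.jit` to compile and parallelize it. The column names follow the TPC-H schema; the date and discount constants are the standard Q6 parameters.

```python
import pandas as pd

def tpch_q6(lineitem):
    # TPC-H Q6: revenue from discounted shipments in 1994.
    # With Bodo, this function would be decorated with @bodo.jit.
    sel = (
        (lineitem.L_SHIPDATE >= pd.Timestamp("1994-01-01"))
        & (lineitem.L_SHIPDATE < pd.Timestamp("1995-01-01"))
        & (lineitem.L_DISCOUNT >= 0.05)
        & (lineitem.L_DISCOUNT <= 0.07)
        & (lineitem.L_QUANTITY < 24)
    )
    flt = lineitem[sel]
    return (flt.L_EXTENDEDPRICE * flt.L_DISCOUNT).sum()
```

Because the Python version is plain pandas, the same source runs unchanged on one laptop core or, when JIT-compiled, across the full cluster.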
Customer benchmark for data engineering (ETL and feature engineering): Bodo is 16.5x faster than optimized Spark on a 125-node cluster (AWS c5n.18xlarge) with 4,500 CPU cores; the input data is scale factor 40,000 with 52 billion rows (2.5 TB in compressed Parquet format). (Note: equivalent Python versions of the TPC-H SQL queries were used to evaluate the Python systems.)

Customer benchmark for data engineering: Bodo is 9x faster than optimized Spark on a 125-node cluster (AWS c5n.18xlarge) with 4,500 CPU cores; the input data is TeraSort at scale factor 10,000 with 100 billion rows (4 TB in compressed Parquet format).

Customer benchmark for filtering data using customized user-input filters and joining the resulting group back with the original dataset: Bodo is 11x faster than optimized PySpark on a 16-node cluster (AWS c5n.18xlarge) with 576 CPU cores; the input data is 120 GB in compressed Parquet format.
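The filter-then-join-back pattern in this workload can be sketched in a few lines of pandas. Everything here is illustrative rather than taken from the customer code: the `{column: allowed values}` filter format, the `record_id` key, and the column names are all assumptions; under Bodo the function would be compiled with `@bodo.jit`.

```python
import pandas as pd

def filter_and_join_back(df, filters, key="record_id"):
    # 'filters' is a hypothetical user-input format: {column: allowed values}.
    # Step 1: filter down to the rows matching every user filter.
    subset = df
    for col, allowed in filters.items():
        subset = subset[subset[col].isin(allowed)]
    # Step 2: join the matching keys back to the original dataset, so every
    # row that shares a key with a matching row is retained.
    keys = subset[[key]].drop_duplicates()
    return df.merge(keys, on=key, how="inner")
```

The join back to the original dataset is the expensive step at scale, since it shuffles the full 120 GB input across the cluster.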

Customer benchmark for an end-to-end ML pipeline including data load, data prep, feature engineering, ML training, and ML prediction: Bodo is 120x faster than PySpark on an r5d.16xlarge AWS node (32 CPU cores).

Customer benchmark for retail price image management using simulations: Bodo on a 4-node cluster (AWS m5.24xlarge) is 85x faster than multiprocessing Python on a single node.
