Point cloud benchmarks

Multi‑language point cloud performance tests

Project

This project benchmarks point cloud processing tools across multiple languages (TypeScript, C++, Rust, Python) and execution environments (browser WebAssembly and backend servers) to inform technology choices for point cloud applications. It focuses on fair comparisons by running identical algorithms with consistent optimisations across all implementations.


What it does

Implements the same algorithms (voxel downsampling, voxel debug visualisation, point cloud smoothing) in TypeScript, C++, Rust and Python/Cython.
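
As a rough illustration of one of these kernels, voxel downsampling reduces the cloud to one averaged point per occupied grid cell. The TypeScript sketch below is illustrative only (it assumes points packed as xyz triples in a Float32Array) and is not the benchmarked implementation.

```ts
// Illustrative voxel-grid downsampling: one averaged point per occupied voxel.
// Points are packed as [x0, y0, z0, x1, y1, z1, ...].
function voxelDownsample(points: Float32Array, voxelSize: number): Float32Array {
  // Accumulate per-voxel sums, keyed by quantised grid coordinates.
  const cells = new Map<string, { x: number; y: number; z: number; n: number }>();
  for (let i = 0; i < points.length; i += 3) {
    const x = points[i], y = points[i + 1], z = points[i + 2];
    const key = `${Math.floor(x / voxelSize)},${Math.floor(y / voxelSize)},${Math.floor(z / voxelSize)}`;
    const cell = cells.get(key);
    if (cell) {
      cell.x += x; cell.y += y; cell.z += z; cell.n += 1;
    } else {
      cells.set(key, { x, y, z, n: 1 });
    }
  }
  // Emit the centroid of each occupied voxel.
  const out = new Float32Array(cells.size * 3);
  let j = 0;
  for (const { x, y, z, n } of cells.values()) {
    out[j++] = x / n;
    out[j++] = y / n;
    out[j++] = z / n;
  }
  return out;
}
```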

Compares browser C++/Rust WASM (main thread and Web Worker) against native C++/Rust/Python backends via a binary WebSocket protocol.
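
The exact wire format is documented in the repository; the sketch below assumes a simple header-plus-payload layout purely to show how a request could be framed as a single binary WebSocket message (the opCode values and 8-byte header are assumptions for this example).

```ts
// Assumed request framing: uint8 opCode | 3 padding bytes | float32 param | packed xyz float32 points.
function encodeRequest(opCode: number, param: number, points: Float32Array): ArrayBuffer {
  const buffer = new ArrayBuffer(8 + points.byteLength);
  const view = new DataView(buffer);
  view.setUint8(0, opCode);
  view.setFloat32(4, param, true); // little-endian
  new Float32Array(buffer, 8).set(points); // payload starts on a 4-byte boundary
  return buffer;
}

// Send one request to a backend and resolve with the processed points.
function runOnBackend(socket: WebSocket, points: Float32Array): Promise<Float32Array> {
  return new Promise((resolve) => {
    socket.binaryType = "arraybuffer";
    socket.onmessage = (event: MessageEvent<ArrayBuffer>) => resolve(new Float32Array(event.data));
    socket.send(encodeRequest(1 /* e.g. voxel downsample */, 0.1 /* e.g. voxel size */, points));
  });
}
```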

Measures full end‑to‑end processing time, including data prep, compute and network I/O, not just inner loops.
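
How that time is taken matters: the clock starts before serialisation and stops only after the result has been decoded. A minimal sketch of such a harness (the benchmarkEndToEnd name is illustrative, not the project's actual code) might look like this.

```ts
// Measure the full round trip (encode + compute + decode), not just the kernel.
// `run` wraps whichever implementation is under test (WASM call, worker message,
// or WebSocket round trip) and resolves with the processed points.
async function benchmarkEndToEnd(
  label: string,
  run: () => Promise<Float32Array>,
): Promise<{ label: string; ms: number; points: number }> {
  const start = performance.now();      // before data preparation / serialisation
  const result = await run();           // compute plus any transfer
  const ms = performance.now() - start; // after the result has been decoded
  return { label, ms, points: result.length / 3 };
}
```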

Provides a React + Babylon.js UI to load LAZ/LAS point clouds, run all implementations and see benchmark results in real time.
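
On the rendering side, Babylon.js offers PointsCloudSystem for displaying large point sets; the sketch below assumes the LAS/LAZ file has already been decoded into a Float32Array of xyz positions and is not the project's actual viewer code.

```ts
import { Scene, Vector3, Color4, PointsCloudSystem, CloudPoint } from "@babylonjs/core";

// Display a decoded point cloud; positions are xyz triples from a LAS/LAZ loader.
async function showPointCloud(scene: Scene, positions: Float32Array): Promise<void> {
  const count = positions.length / 3;
  const pcs = new PointsCloudSystem("benchmarkCloud", 1 /* point size */, scene);
  pcs.addPoints(count, (particle: CloudPoint, i: number) => {
    particle.position = new Vector3(
      positions[3 * i], positions[3 * i + 1], positions[3 * i + 2],
    );
    particle.color = new Color4(1, 1, 1, 1);
  });
  await pcs.buildMeshAsync(); // builds a single mesh holding all points
}
```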

Shows that C++ WASM is roughly 2–5× faster than the backends for voxel operations, that C++ backends are about 7% faster for compute‑heavy smoothing, and that TypeScript is typically 1.5–4× slower, depending on the algorithm.


Results and further details

Full benchmark tables, methodology and detailed recommendations (including when to choose WASM vs backend at different dataset sizes) are documented in the README and benchmark reports on GitHub.


Tech stack: React, TypeScript, Babylon.js, Web Workers, WebAssembly (C++ via Emscripten, Rust via wasm‑bindgen), Node.js, Express, C++/Rust/Python (Cython) backends



Get in Touch

Interested in collaborating? Let's talk.