Entwine is a data organization library for massive point clouds, designed to conquer datasets of hundreds of billions of points as well as desktop-scale point clouds. Entwine can index anything that is PDAL-readable, and can read/write to a variety of sources like S3 or Dropbox. Builds are completely lossless, so no points will be discarded even for terabyte-scale datasets.
Check out the client demos, showcasing Entwine output with Cesium (see "Sample Data" section) and Potree clients.
Getting started with Entwine is easy with Conda. First, create an environment with the entwine package, then activate it:
conda create --yes --name entwine --channel conda-forge entwine
conda activate entwine
Now we can index some public data:
entwine build \
-i https://data.entwine.io/red-rocks.laz \
-o ~/entwine/red-rocks
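Entwine also accepts JSON configuration files, so the same build could be described in a file and passed to entwine build instead of using flags. A minimal sketch (the input and output keys are part of Entwine's configuration format; the file name config.json is arbitrary):

```json
{
    "input": "https://data.entwine.io/red-rocks.laz",
    "output": "~/entwine/red-rocks"
}
```

Such a file could then be run with a command like entwine build config.json.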
Now we have our output at ~/entwine/red-rocks. We could also have passed a directory like -i ~/county-data/ to index multiple files. Next, we can statically serve ~/entwine with a simple HTTP server:
docker run -it -v ~/entwine:/var/www -p 8080:8080 connormanning/http-server
And view the data with Cesium or Potree.
For detailed information about how to configure your builds, check out the configuration documentation. Here, you can find information about reprojecting your data, using configuration files and templates, enabling S3 capabilities, and all sorts of other settings.
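As one example of those settings, a build can reproject its input by adding a reprojection object to a configuration file. A sketch, assuming the documented in/out key names and an arbitrary target of EPSG:3857 (see the configuration documentation for the authoritative key set):

```json
{
    "input": "https://data.entwine.io/red-rocks.laz",
    "output": "~/entwine/red-rocks",
    "reprojection": {
        "out": "EPSG:3857"
    }
}
```

When the input's spatial reference cannot be inferred from the file headers, an in key can be added to state it explicitly.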
To learn about the Entwine Point Tile (EPT) file format produced by Entwine, see the file format documentation.
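An EPT dataset is rooted at an ept.json file describing the build. The snippet below is a sketch of reading that metadata; the key names follow the EPT specification, but the values here are invented placeholders rather than output from a real build:

```python
import json

# Illustrative ept.json content. Key names (points, span, schema, etc.)
# come from the EPT specification; all values are made up for this sketch.
text = """
{
    "version": "1.0.0",
    "points": 4004326,
    "span": 256,
    "dataType": "laszip",
    "hierarchyType": "json",
    "bounds": [-100, -100, -100, 100, 100, 100],
    "schema": [
        {"name": "X", "type": "signed", "size": 4},
        {"name": "Y", "type": "signed", "size": 4},
        {"name": "Z", "type": "signed", "size": 4}
    ]
}
"""

meta = json.loads(text)

# Pull out a few fields a viewer or client would typically inspect.
dims = [d["name"] for d in meta["schema"]]
print(meta["points"], meta["span"], dims)
```

In a real deployment the same document would be fetched from the served tree, e.g. under the red-rocks directory exposed by the HTTP server above.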
For an alternative method of generating EPT which can also generate COPC data, see the Untwine project.