geotrellis.spark.io.hadoop.HadoopRDDWriter
When a record being written would exceed the block size of the current MapFile, a new file is opened to continue writing. This allows a partition to be split into block-sized chunks without foreknowledge of how big it is.
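The roll-over behavior can be sketched in isolation. This is a hypothetical simplification, not the actual HadoopRDDWriter implementation: record sizes stand in for serialized key/value pairs, and each chunk stands in for one MapFile opened on HDFS.

```scala
// Sketch of the roll-over logic: records are appended to the current chunk
// until the next record would push it past the block size, at which point a
// new chunk (i.e. a new MapFile) is started. `chunkRecords` is a hypothetical
// helper name, not part of the GeoTrellis API.
def chunkRecords(recordSizes: Seq[Long], blockSize: Long): List[List[Long]] = {
  import scala.collection.mutable.ListBuffer
  val chunks = ListBuffer(ListBuffer.empty[Long])
  var bytesInChunk = 0L
  for (size <- recordSizes) {
    // "Open a new file" when this record would exceed the block size.
    if (bytesInChunk + size > blockSize && chunks.last.nonEmpty) {
      chunks += ListBuffer.empty[Long]
      bytesInChunk = 0L
    }
    chunks.last += size
    bytesInChunk += size
  }
  chunks.map(_.toList).toList
}
```

For example, with a block size of 100, records of sizes 40, 40, 40 would be split into two chunks: the first holding two records (80 bytes) and the second holding the remaining record, since a third record would exceed the block size. Note that a partition can be chunked this way in a single pass, which is why no foreknowledge of its total size is needed.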