Class MapReducer<X>

java.lang.Object
org.heigit.ohsome.oshdb.api.mapreducer.MapReducer<X>
Type Parameters:
X - the type that is returned by the currently set `mapper` function. The next added mapper function will be called with a parameter of this type as input
All Implemented Interfaces:
Serializable, Mappable<X>
Direct Known Subclasses:
MapReducerJdbcMultithread, MapReducerJdbcSinglethread

public abstract class MapReducer<X> extends Object implements Mappable<X>, Serializable
Main class of oshdb's "functional programming" API.

It accepts a list of filters and transformation `map` functions, and produces a result when calling the `reduce` method (or one of its shorthand versions like `sum`, `count`, etc.).

You can set a list of filters that are applied to the raw OSM data; for example, you can filter:

  • geometrically by an area of interest (bbox or polygon)
  • by OSM tags (key only or key/value)
  • by OSM type
  • by a custom filter callback

Depending on the used data "view", the MapReducer produces either "snapshots" of the matching raw OSM data or evaluates all of its modifications ("contributions").

These data can then be transformed arbitrarily by user-defined `map` functions (which take one of these entity snapshots or modifications as input and produce an arbitrary output) or `flatMap` functions (which can return an arbitrary number of results per entity snapshot/contribution). Any number of transformation functions can be chained together.

Finally, one can either use one of the pre-defined result-generating functions (e.g. `sum`, `count`, `average`, `uniq`), or specify a custom `reduce` procedure.

If one wants to get results that are aggregated by timestamp (or some other index), one can use the `aggregateByTimestamp` or `aggregateBy` functionality that automatically handles the grouping of the output data.

For more complex analyses, it is also possible to enable the grouping of the input data by the respective OSM ID. This can be used to look at the whole history of entities at once.
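
For illustration, a minimal end-to-end sketch of such a chain (a sketch only: the database path and coordinates are placeholders, OSHDBH2, OSMEntitySnapshotView and the OSHDBBoundingBox.bboxWgs84Coordinates factory are assumed from the oshdb-api packages, and the terminal call may throw Exception):

  // assumed imports: org.heigit.ohsome.oshdb.api.db.OSHDBH2,
  //   org.heigit.ohsome.oshdb.api.mapreducer.OSMEntitySnapshotView,
  //   org.heigit.ohsome.oshdb.OSHDBBoundingBox
  OSHDBH2 oshdb = new OSHDBH2("path/to/extract.oshdb");   // placeholder path to a local extract
  Integer buildings = OSMEntitySnapshotView.on(oshdb)     // "snapshot" view on the data
      .areaOfInterest(OSHDBBoundingBox.bboxWgs84Coordinates(8.65, 49.38, 8.72, 49.42))
      .timestamps("2020-01-01")                           // a single snapshot date
      .filter("building=* and type:way")                  // textual OSM filter
      .count();                                           // shorthand reduce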

  • Method Details

    • isCancelable

      public boolean isCancelable()
      Returns whether the current backend can be canceled (e.g. on a query timeout).
    • copy

      @NotNull protected abstract @NotNull MapReducer<X> copy()
    • tagInterpreter

      @Contract(pure=true) public MapReducer<X> tagInterpreter(TagInterpreter tagInterpreter)
      Sets the tagInterpreter to use in the analysis. The tagInterpreter is used internally to determine the geometry type of osm entities (e.g. an osm way can become either a LineString or a Polygon, depending on its tags). Normally, this is generated automatically for the user. But if one doesn't want to use the DefaultTagInterpreter, for example, it is possible to use this function to supply one's own tagInterpreter.
      Parameters:
      tagInterpreter - the tagInterpreter object to use in the processing of osm entities
      Returns:
      a modified copy of this mapReducer (can be used to chain multiple commands together)
    • areaOfInterest

      @Contract(pure=true) public MapReducer<X> areaOfInterest(@NotNull @NotNull OSHDBBoundingBox bboxFilter)
      Set the area of interest to the given bounding box. Only objects inside or clipped by this bbox will be passed on to the analysis' `mapper` function.
      Parameters:
      bboxFilter - the bounding box to query the data in
      Returns:
      a modified copy of this mapReducer (can be used to chain multiple commands together)
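
      For illustration, a minimal sketch (assuming a MapReducer<X> named mapReducer; the coordinates are placeholder WGS84 lon/lat values and the bboxWgs84Coordinates factory is assumed to exist on OSHDBBoundingBox):

        // limit the query to a small bounding box (minLon, minLat, maxLon, maxLat)
        mapReducer = mapReducer.areaOfInterest(
            OSHDBBoundingBox.bboxWgs84Coordinates(8.65, 49.38, 8.72, 49.42));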
    • areaOfInterest

      @Contract(pure=true) public <P extends org.locationtech.jts.geom.Geometry & org.locationtech.jts.geom.Polygonal> MapReducer<X> areaOfInterest(@NotNull P polygonFilter)
      Set the area of interest to the given polygon. Only objects inside or clipped by this polygon will be passed on to the analysis' `mapper` function.
      Parameters:
      polygonFilter - the polygon to query the data in
      Returns:
      a modified copy of this mapReducer (can be used to chain multiple commands together)
    • timestamps

      @Contract(pure=true) public MapReducer<X> timestamps(OSHDBTimestampList tstamps)
      Set the timestamps for which to perform the analysis.

      Depending on the *View*, this has slightly different semantics:

      • For the OSMEntitySnapshotView it will set the time slices at which to take the "snapshots"
      • For the OSMContributionView it will set the time interval in which to look for osm contributions (only the first and last timestamp of this list are contributing).
      Additionally, these timestamps are used in the `aggregateByTimestamp` functionality.
      Parameters:
      tstamps - an object (implementing the OSHDBTimestampList interface) which provides the timestamps to do the analysis for
      Returns:
      a modified copy of this mapReducer (can be used to chain multiple commands together)
    • timestamps

      @Contract(pure=true) public MapReducer<X> timestamps(String isoDateStart, String isoDateEnd, OSHDBTimestamps.Interval interval)
      Set the timestamps for which to perform the analysis in a regular interval between a start and end date.

      See timestamps(OSHDBTimestampList) for further information.

      Supplied times are assumed to be in UTC (and the only allowed timezone designator is 'Z'). If a date parameter does not include a time part, midnight (00:00:00Z) of the respective date is used.

      Parameters:
      isoDateStart - an ISO 8601 date string representing the start date of the analysis
      isoDateEnd - an ISO 8601 date string representing the end date of the analysis
      interval - the interval between the timestamps to be used in the analysis
      Returns:
      a modified copy of this mapReducer (can be used to chain multiple commands together)
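
      For illustration, a minimal sketch (assuming a MapReducer<X> named mapReducer):

        // yearly snapshots from 2014-01-01 up to and including 2020-01-01 (UTC)
        mapReducer = mapReducer.timestamps(
            "2014-01-01", "2020-01-01", OSHDBTimestamps.Interval.YEARLY);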
    • timestamps

      @Contract(pure=true) public MapReducer<X> timestamps(String isoDate)
      Sets a single timestamp at which to perform the analysis.

      Useful in combination with the OSMEntitySnapshotView when not performing further aggregation by timestamp.

      See timestamps(OSHDBTimestampList) for further information.

      Supplied times are assumed to be in UTC (and the only allowed timezone designator is 'Z'). If a date parameter does not include a time part, midnight (00:00:00Z) of the respective date is used.

      Parameters:
      isoDate - an ISO 8601 date string representing the date of the analysis
      Returns:
      a modified copy of this mapReducer (can be used to chain multiple commands together)
    • timestamps

      @Contract(pure=true) public MapReducer<X> timestamps(String isoDateStart, String isoDateEnd)
      Sets two timestamps (start and end date) for which to perform the analysis.

      Useful in combination with the OSMContributionView when not performing further aggregation by timestamp.

      See timestamps(OSHDBTimestampList) for further information.

      Supplied times are assumed to be in UTC (and the only allowed timezone designator is 'Z'). If a date parameter does not include a time part, midnight (00:00:00Z) of the respective date is used.

      Parameters:
      isoDateStart - an ISO 8601 date string representing the start date of the analysis
      isoDateEnd - an ISO 8601 date string representing the end date of the analysis
      Returns:
      a modified copy of this mapReducer (can be used to chain multiple commands together)
    • timestamps

      @Contract(pure=true) public MapReducer<X> timestamps(String isoDateFirst, String isoDateSecond, String... isoDateMore)
      Sets multiple arbitrary timestamps for which to perform the analysis.

      Note for programmers wanting to use this method to supply an arbitrary number (n>=1) of timestamps: You may supply the same time string multiple times, which will be de-duplicated internally. E.g. you can call the method like this: .timestamps(dateArr[0], dateArr[0], dateArr)

      See timestamps(OSHDBTimestampList) for further information.

      Supplied times are assumed to be in UTC (and the only allowed timezone designator is 'Z'). If a date parameter does not include a time part, midnight (00:00:00Z) of the respective date is used.

      Parameters:
      isoDateFirst - an ISO 8601 date string representing the start date of the analysis
      isoDateSecond - an ISO 8601 date string representing the second date of the analysis
      isoDateMore - more ISO 8601 date strings representing the remaining timestamps of the analysis
      Returns:
      a modified copy of this mapReducer (can be used to chain multiple commands together)
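
      For illustration, a sketch of the de-duplication trick mentioned above (assuming a MapReducer<X> named mapReducer; the dates are placeholders):

        // arbitrary number of timestamps taken from an array; the duplicated
        // first entry is removed again internally
        String[] dateArr = {"2015-01-01", "2017-01-01", "2019-01-01"};
        mapReducer = mapReducer.timestamps(dateArr[0], dateArr[0], dateArr);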
    • map

      @Contract(pure=true) public <R> MapReducer<R> map(SerializableFunction<X,R> mapper)
      Set an arbitrary `map` transformation function.
      Specified by:
      map in interface Mappable<X>
      Type Parameters:
      R - an arbitrary data type which is the return type of the transformation `map` function
      Parameters:
      mapper - function that will be applied to each data entry (osm entity snapshot or contribution)
      Returns:
      a modified copy of this MapReducer object operating on the transformed type (<R>)
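
      For illustration, a sketch in the context of the OSMEntitySnapshotView (assuming a MapReducer<OSMEntitySnapshot> named snapshotMapReducer and the Geo.areaOf helper from the oshdb geometry utilities):

        // transform each snapshot into the area (in m²) of its geometry
        MapReducer<Double> areas = snapshotMapReducer
            .map(snapshot -> Geo.areaOf(snapshot.getGeometry()));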
    • map

      @Contract(pure=true) protected <R> MapReducer<R> map(SerializableBiFunction<X,Object,R> mapper)
    • flatMap

      @Contract(pure=true) public <R> MapReducer<R> flatMap(SerializableFunction<X,Iterable<R>> flatMapper)
      Set an arbitrary `flatMap` transformation function, which returns a list with an arbitrary number of results per input data entry. The results of this function will be "flattened", meaning that they can, for example, be transformed again by setting additional `map` functions.
      Specified by:
      flatMap in interface Mappable<X>
      Type Parameters:
      R - an arbitrary data type which is the return type of the transformation `map` function
      Parameters:
      flatMapper - function that will be applied to each data entry (osm entity snapshot or contribution) and returns a list of results
      Returns:
      a modified copy of this MapReducer object operating on the transformed type (<R>)
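
      For illustration, a sketch returning zero or one result per input (assuming a MapReducer<OSMEntitySnapshot> named snapshotMapReducer, java.util.Collections and the Geo.areaOf helper):

        // keep only areas of at least 100 m²; smaller features yield no result
        MapReducer<Double> largeAreas = snapshotMapReducer.flatMap(snapshot -> {
          double area = Geo.areaOf(snapshot.getGeometry());
          return area >= 100.0
              ? Collections.singletonList(area)
              : Collections.<Double>emptyList();
        });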
    • flatMap

      @Contract(pure=true) protected <R> MapReducer<R> flatMap(SerializableBiFunction<X,Object,Iterable<R>> flatMapper)
    • filter

      @Contract(pure=true) public MapReducer<X> filter(SerializablePredicate<X> f)
      Adds a custom arbitrary filter that gets executed in the current transformation chain.
      Specified by:
      filter in interface Mappable<X>
      Parameters:
      f - the filter function that determines if the respective data should be passed on (when f returns true) or discarded (when f returns false)
      Returns:
      a modified copy of this mapReducer (can be used to chain multiple commands together)
    • filter

      @Contract(pure=true) public MapReducer<X> filter(FilterExpression f)
      Apply a custom filter expression to this query.
      Parameters:
      f - the FilterExpression to apply
      Returns:
      a modified copy of this mapReducer (can be used to chain multiple commands together)
      See Also:
      • oshdb-filter readme and org.heigit.ohsome.oshdb.filter for further information about how to create such a filter expression object.
    • filter

      @Contract(pure=true) public MapReducer<X> filter(String f)
      Apply a textual filter to this query.
      Parameters:
      f - the filter string to apply
      Returns:
      a modified copy of this mapReducer (can be used to chain multiple commands together)
      See Also:
      • oshdb-filter readme and org.heigit.ohsome.oshdb.filter for further information about how to write such a filter string.
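
      For illustration, a minimal sketch (assuming a MapReducer<X> named mapReducer):

        // keep only ways or relations carrying a building tag
        mapReducer = mapReducer.filter("building=* and (type:way or type:relation)");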
    • groupByEntity

      @Contract(pure=true) public MapReducer<List<X>> groupByEntity() throws UnsupportedOperationException
      Groups the input data (osm entity snapshot or contributions) by their respective entity's ids before feeding them into further transformation functions. This can be used to do more complex analyses on the osm data that require knowledge of the full editing history of individual osm entities, e.g., when looking for contributions which got reverted at a later point in time.

      The values in the returned lists of snapshot or contribution objects are returned in their natural order: i.e. sorted ascending by timestamp.

      This needs to be called before any `map` or `flatMap` transformation functions have been set. Otherwise a runtime exception will be thrown.

      Returns:
      the MapReducer object which applies its transformations on (by entity id grouped) lists of the input data
      Throws:
      UnsupportedOperationException - if this is called after some map (or flatMap) functions have already been set
      UnsupportedOperationException - if this is called when a grouping has already been activated
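
      For illustration, a sketch in the context of the OSMContributionView (assuming a MapReducer<OSMContribution> named contributionMapReducer; the terminal call may throw Exception):

        // count entities that received more than ten contributions in the queried time range
        Integer frequentlyEdited = contributionMapReducer
            .groupByEntity()
            .filter(contributions -> contributions.size() > 10)
            .count();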
    • aggregateBy

      @Contract(pure=true) public <U extends Comparable<U> & Serializable> MapAggregator<U,X> aggregateBy(SerializableFunction<X,U> indexer, Collection<U> zerofill)
      Sets a custom aggregation function that is used to group the output results by.
      Type Parameters:
      U - the data type of the values used to aggregate the output. has to be a comparable type
      Parameters:
      indexer - a function that will be called for each input element and returns a value that will be used to group the results by
      zerofill - a collection of values that are expected to be present in the result
      Returns:
      a MapAggregator object with the equivalent state (settings, filters, map function, etc.) of the current MapReducer object
    • aggregateBy

      @Contract(pure=true) public <U extends Comparable<U> & Serializable> MapAggregator<U,X> aggregateBy(SerializableFunction<X,U> indexer)
      Sets a custom aggregation function that is used to group the output results by.
      Type Parameters:
      U - the data type of the values used to aggregate the output. has to be a comparable type
      Parameters:
      indexer - a function that will be called for each input element and returns a value that will be used to group the results by
      Returns:
      a MapAggregator object with the equivalent state (settings, filters, map function, etc.) of the current MapReducer object
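
      For illustration, a sketch (assuming a MapReducer<OSMEntitySnapshot> named snapshotMapReducer and the getEntity()/getType() accessors of the snapshot and entity classes; the terminal call may throw Exception):

        // aggregate the counts by the OSM type of each snapshot's entity
        SortedMap<OSMType, Integer> countsByType = snapshotMapReducer
            .aggregateBy(snapshot -> snapshot.getEntity().getType())
            .count();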
    • aggregateByTimestamp

      @Contract(pure=true) public MapAggregator<OSHDBTimestamp,X> aggregateByTimestamp() throws UnsupportedOperationException
      Sets up automatic aggregation by timestamp.

      In the OSMEntitySnapshotView, the snapshots' timestamp will be used directly to aggregate results into. In the OSMContributionView, the timestamps of the respective data modifications will be matched to corresponding time intervals (that are defined by the `timestamps` setting here).

      Cannot be used together with the `groupByEntity()` setting enabled.

      Returns:
      a MapAggregator object with the equivalent state (settings, filters, map function, etc.) of the current MapReducer object
      Throws:
      UnsupportedOperationException - if this is called when the `groupByEntity()` mode has been activated
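
      For illustration, a minimal sketch (assuming a MapReducer<OSMEntitySnapshot> named snapshotMapReducer; the terminal call may throw Exception):

        // number of matching snapshots at each of the queried timestamps
        SortedMap<OSHDBTimestamp, Integer> countsPerTimestamp = snapshotMapReducer
            .aggregateByTimestamp()
            .count();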
    • aggregateByTimestamp

      @Contract(pure=true) public MapAggregator<OSHDBTimestamp,X> aggregateByTimestamp(SerializableFunction<X,OSHDBTimestamp> indexer) throws UnsupportedOperationException
      Sets up aggregation by a custom time index.

      The timestamps returned by the supplied indexing function are matched to the corresponding time intervals.

      Parameters:
      indexer - a callback function that returns a timestamp object for each given data entry. Note that if this function returns timestamps outside of the supplied timestamps() interval, results may be undefined
      Returns:
      a MapAggregator object with the equivalent state (settings, filters, map function, etc.) of the current MapReducer object
      Throws:
      UnsupportedOperationException - if this is called when the `groupByEntity()` mode has been activated
    • aggregateByGeometry

      @Contract(pure=true) public <U extends Comparable<U> & Serializable, P extends org.locationtech.jts.geom.Geometry & org.locationtech.jts.geom.Polygonal> MapAggregator<U,X> aggregateByGeometry(Map<U,P> geometries) throws UnsupportedOperationException
      Sets up automatic aggregation by geometries.

      Cannot be used together with the `groupByEntity()` setting enabled.

      Type Parameters:
      U - the type of the identifiers used to aggregate
      P - a polygonal geometry type
      Parameters:
      geometries - a map of identifiers to the polygon geometries to aggregate by
      Returns:
      a MapAggregator object with the equivalent state (settings, filters, map function, etc.) of the current MapReducer object
      Throws:
      UnsupportedOperationException - if this is called when the `groupByEntity()` mode has been activated
      UnsupportedOperationException - when called after any map or flatMap functions are set
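
      For illustration, a sketch (assuming a MapReducer<OSMEntitySnapshot> named snapshotMapReducer; districtA and districtB are placeholder org.locationtech.jts.geom.Polygon objects, and the terminal call may throw Exception):

        // aggregate by named sub-regions, keyed by a String identifier
        Map<String, Polygon> regions = new TreeMap<>();
        regions.put("districtA", districtA);
        regions.put("districtB", districtB);
        SortedMap<String, Integer> countsPerRegion = snapshotMapReducer
            .aggregateByGeometry(regions)
            .count();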
    • reduce

      @Contract(pure=true) public <S> S reduce(SerializableSupplier<S> identitySupplier, SerializableBiFunction<S,X,S> accumulator, SerializableBinaryOperator<S> combiner) throws Exception
      Generic map-reduce routine.

      The combination of the used types and identity/reducer functions must make "mathematical" sense:

      • the accumulator and combiner functions need to be associative,
      • values generated by the identitySupplier factory must be an identity for the combiner function: `combiner(identitySupplier(),x)` must be equal to `x`,
      • the combiner function must be compatible with the accumulator function: `combiner(u, accumulator(identitySupplier(), t)) == accumulator.apply(u, t)`

      Functionally, this interface is similar to Java11 Stream's reduce(identity,accumulator,combiner) interface.

      Type Parameters:
      S - the data type used to contain the "reduced" (intermediate and final) results
      Parameters:
      identitySupplier - a factory function that returns a new starting value to reduce results into (e.g. when summing values, one needs to start at zero)
      accumulator - a function that takes a result from the `mapper` function (type <R>) and an accumulation value (type <S>, e.g. the result of `identitySupplier()`) and returns the "sum" of the two; contrary to `combiner`, this function is allowed to alter (mutate) the state of the accumulation value (e.g. directly adding new values to an existing Set object)
      combiner - a function that calculates the "sum" of two <S> values; this function must be pure (have no side effects), and is not allowed to alter the state of the two input objects it gets!
      Returns:
      the result of the map-reduce operation, the final result of the last call to the `combiner` function, after all `mapper` results have been aggregated (in the `accumulator` and `combiner` steps)
      Throws:
      UnsupportedOperationException - if the used oshdb database backend doesn't implement the required reduce operation.
      Exception - if during the reducing operation an exception happens (see the respective implementations for details).
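
      For illustration, a sketch of a custom reduce that collects all distinct OSM ids into a set, functionally equivalent to the uniq() shorthand below (assuming a MapReducer<OSMEntitySnapshot> named snapshotMapReducer and the getEntity()/getId() accessors; the call may throw Exception):

        Set<Long> distinctIds = snapshotMapReducer
            .map(snapshot -> snapshot.getEntity().getId())
            .reduce(
                HashSet::new,                               // identity: a fresh, empty set
                (set, id) -> { set.add(id); return set; },  // accumulator: may mutate `set`
                (a, b) -> {                                 // combiner: pure, merges two sets
                  Set<Long> merged = new HashSet<>(a);
                  merged.addAll(b);
                  return merged;
                });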
    • reduce

      @Contract(pure=true) public X reduce(SerializableSupplier<X> identitySupplier, SerializableBinaryOperator<X> accumulator) throws Exception
      Generic map-reduce routine (shorthand syntax).

      This variant is shorter to program than `reduce(identitySupplier, accumulator, combiner)`, but can only be used if the result type is the same as the current `map`ped type <X>. Also this variant can be less efficient since it cannot benefit from the mutability freedoms the accumulator+combiner approach has.

      The combination of the used types and identity/reducer functions must make "mathematical" sense:

      • the accumulator function needs to be associative,
      • values generated by the identitySupplier factory must be an identity for the accumulator function: `accumulator(identitySupplier(),x)` must be equal to `x`,

      Functionally, this interface is similar to Java11 Stream's reduce(identity,accumulator) interface.

      Parameters:
      identitySupplier - a factory function that returns a new starting value to reduce results into (e.g. when summing values, one needs to start at zero)
      accumulator - a function that takes a result from the `mapper` function (type <X>) and an accumulation value (also of type <X>, e.g. the result of `identitySupplier()`) and returns the "sum" of the two; contrary to the accumulator of the three-argument `reduce` variant, this function is not allowed to alter (mutate) the state of the accumulation value (e.g. directly adding new values to an existing Set object is not allowed here)
      Returns:
      the result of the map-reduce operation: the final result of the last call to the `accumulator` function, after all `mapper` results have been aggregated
      Throws:
      Exception
    • sum

      @Contract(pure=true) public Number sum() throws Exception
      Sums up the results.

      The current data values need to be numeric (castable to "Number" type), otherwise a runtime exception will be thrown.

      Returns:
      the sum of the current data
      Throws:
      UnsupportedOperationException - if the data cannot be cast to numbers
      Exception
    • sum

      @Contract(pure=true) public <R extends Number> R sum(SerializableFunction<X,R> mapper) throws Exception
      Sums up the results provided by a given `mapper` function.

      This is a shorthand for `.map(mapper).sum()`, with the difference that here the numerical return type of the `mapper` is ensured.

      Type Parameters:
      R - the numeric type that is returned by the `mapper` function
      Parameters:
      mapper - function that returns the numbers to sum up
      Returns:
      the summed up results of the `mapper` function
      Throws:
      Exception
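
      For illustration, a sketch (assuming a MapReducer<OSMEntitySnapshot> named snapshotMapReducer and the Geo.areaOf helper; the call may throw Exception):

        // total area (in m²) of all matching features
        Double totalArea = snapshotMapReducer
            .sum(snapshot -> Geo.areaOf(snapshot.getGeometry()));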
    • count

      @Contract(pure=true) public Integer count() throws Exception
      Counts the number of results.
      Returns:
      the total count of features or modifications, summed up over all timestamps
      Throws:
      Exception
    • uniq

      @Contract(pure=true) public Set<X> uniq() throws Exception
      Gets all unique values of the results.

      For example, this can be used together with the OSMContributionView to get the total amount of unique users editing specific feature types.

      Returns:
      the set of distinct values
      Throws:
      Exception
    • uniq

      @Contract(pure=true) public <R> Set<R> uniq(SerializableFunction<X,R> mapper) throws Exception
      Gets all unique values of the results provided by a given mapper function.

      This is a shorthand for `.map(mapper).uniq()`.

      Type Parameters:
      R - the type that is returned by the `mapper` function
      Parameters:
      mapper - function that returns some values
      Returns:
      a set of distinct values returned by the `mapper` function
      Throws:
      Exception
    • countUniq

      @Contract(pure=true) public Integer countUniq() throws Exception
      Counts all unique values of the results.

      For example, this can be used together with the OSMContributionView to get the number of unique users editing specific feature types.

      Returns:
      the number of distinct values
      Throws:
      Exception
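
      For illustration, a sketch in the context of the OSMContributionView (assuming a MapReducer<OSMContribution> named contributionMapReducer and the getContributorUserId() accessor; the call may throw Exception):

        // number of distinct users who contributed to the matching features
        Integer distinctUsers = contributionMapReducer
            .map(OSMContribution::getContributorUserId)
            .countUniq();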
    • average

      @Contract(pure=true) public Double average() throws Exception
      Calculates the average of the results.

      The current data values need to be numeric (castable to "Number" type), otherwise a runtime exception will be thrown.

      Returns:
      the average of the current data
      Throws:
      UnsupportedOperationException - if the data cannot be cast to numbers
      Exception
    • average

      @Contract(pure=true) public <R extends Number> Double average(SerializableFunction<X,R> mapper) throws Exception
      Calculates the average of the results provided by a given `mapper` function.
      Type Parameters:
      R - the numeric type that is returned by the `mapper` function
      Parameters:
      mapper - function that returns the numbers to average
      Returns:
      the average of the numbers returned by the `mapper` function
      Throws:
      Exception
    • weightedAverage

      @Contract(pure=true) public Double weightedAverage(SerializableFunction<X,WeightedValue> mapper) throws Exception
      Calculates the weighted average of the results provided by the `mapper` function.

      The mapper must return an object of the type `WeightedValue` which contains a numeric value associated with a (floating point) weight.

      Parameters:
      mapper - function that gets called for each entity snapshot or modification, needs to return the value and weight combination of numbers to average
      Returns:
      the weighted average of the numbers returned by the `mapper` function
      Throws:
      Exception
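
      For illustration, a sketch (assuming a MapReducer<OSMEntitySnapshot> named snapshotMapReducer, the Geo.areaOf helper, and a WeightedValue(value, weight) constructor order; the call may throw Exception):

        // vertex count per feature, weighted by the feature's area in m²
        Double avgVertices = snapshotMapReducer.weightedAverage(snapshot -> new WeightedValue(
            snapshot.getGeometry().getNumPoints(),   // value to be averaged
            Geo.areaOf(snapshot.getGeometry())));    // weight of this value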
    • estimatedMedian

      @Contract(pure=true) public Double estimatedMedian() throws Exception
      Returns an estimate of the median of the results.

      Uses the t-digest algorithm to calculate estimates for the quantiles in a map-reduce system: https://raw.githubusercontent.com/tdunning/t-digest/master/docs/t-digest-paper/histo.pdf

      Returns:
      estimated median
      Throws:
      Exception
    • estimatedMedian

      @Contract(pure=true) public <R extends Number> Double estimatedMedian(SerializableFunction<X,R> mapper) throws Exception
      Returns an estimate of the median of the results after applying the given map function.

      Uses the t-digest algorithm to calculate estimates for the quantiles in a map-reduce system: https://raw.githubusercontent.com/tdunning/t-digest/master/docs/t-digest-paper/histo.pdf

      Parameters:
      mapper - function that returns the numbers to estimate the median for
      Returns:
      estimated median
      Throws:
      Exception
    • estimatedQuantile

      @Contract(pure=true) public Double estimatedQuantile(double q) throws Exception
      Returns an estimate of a requested quantile of the results.

      Uses the t-digest algorithm to calculate estimates for the quantiles in a map-reduce system: https://raw.githubusercontent.com/tdunning/t-digest/master/docs/t-digest-paper/histo.pdf

      Parameters:
      q - the desired quantile to calculate (as a number between 0 and 1)
      Returns:
      estimated quantile boundary
      Throws:
      Exception
    • estimatedQuantile

      @Contract(pure=true) public <R extends Number> Double estimatedQuantile(SerializableFunction<X,R> mapper, double q) throws Exception
      Returns an estimate of a requested quantile of the results after applying the given map function.

      Uses the t-digest algorithm to calculate estimates for the quantiles in a map-reduce system: https://raw.githubusercontent.com/tdunning/t-digest/master/docs/t-digest-paper/histo.pdf

      Parameters:
      mapper - function that returns the numbers to generate the quantile for
      q - the desired quantile to calculate (as a number between 0 and 1)
      Returns:
      estimated quantile boundary
      Throws:
      Exception
    • estimatedQuantiles

      @Contract(pure=true) public List<Double> estimatedQuantiles(Iterable<Double> q) throws Exception
      Returns an estimate of the quantiles of the results.

      Uses the t-digest algorithm to calculate estimates for the quantiles in a map-reduce system: https://raw.githubusercontent.com/tdunning/t-digest/master/docs/t-digest-paper/histo.pdf

      Parameters:
      q - the desired quantiles to calculate (as a collection of numbers between 0 and 1)
      Returns:
      estimated quantile boundaries
      Throws:
      Exception
    • estimatedQuantiles

      @Contract(pure=true) public <R extends Number> List<Double> estimatedQuantiles(SerializableFunction<X,R> mapper, Iterable<Double> q) throws Exception
      Returns an estimate of the quantiles of the results after applying the given map function.

      Uses the t-digest algorithm to calculate estimates for the quantiles in a map-reduce system: https://raw.githubusercontent.com/tdunning/t-digest/master/docs/t-digest-paper/histo.pdf

      Parameters:
      mapper - function that returns the numbers to generate the quantiles for
      q - the desired quantiles to calculate (as a collection of numbers between 0 and 1)
      Returns:
      estimated quantile boundaries
      Throws:
      Exception
    • estimatedQuantiles

      @Contract(pure=true) public DoubleUnaryOperator estimatedQuantiles() throws Exception
      Returns a function that computes estimates of arbitrary quantiles of the results.

      Uses the t-digest algorithm to calculate estimates for the quantiles in a map-reduce system: https://raw.githubusercontent.com/tdunning/t-digest/master/docs/t-digest-paper/histo.pdf

      Returns:
      a function that computes estimated quantile boundaries
      Throws:
      Exception
    • estimatedQuantiles

      @Contract(pure=true) public <R extends Number> DoubleUnaryOperator estimatedQuantiles(SerializableFunction<X,R> mapper) throws Exception
      Returns a function that computes estimates of arbitrary quantiles of the results after applying the given map function.

      Uses the t-digest algorithm to calculate estimates for the quantiles in a map-reduce system: https://raw.githubusercontent.com/tdunning/t-digest/master/docs/t-digest-paper/histo.pdf

      Parameters:
      mapper - function that returns the numbers to generate the quantiles for
      Returns:
      a function that computes estimated quantile boundaries
      Throws:
      Exception
    • forEach

      @Deprecated public void forEach(SerializableConsumer<X> action) throws Exception
      Deprecated.
      only for testing purposes, use `.stream().forEach()` instead
      Iterates over each entity snapshot or contribution, and performs a single `action` on each one of them.

      This method can be handy for testing purposes. But note that since the `action` doesn't produce a return value, it must facilitate its own way of producing output.

      If you'd like to use such a "forEach" in a non-test use case, use `.stream().forEach()` instead.

      Parameters:
      action - function that gets called for each transformed data entry
      Throws:
      Exception
    • collect

      @Contract(pure=true) public List<X> collect() throws Exception
      Collects all results into a List.
      Returns:
      a list with all results returned by the `mapper` function
      Throws:
      Exception
    • stream

      @Contract(pure=true) public Stream<X> stream() throws Exception
      Returns all results as a Stream.

      If the used oshdb database backend doesn't implement the stream operation directly, this will fall back to executing `.collect().stream()` instead, which buffers all results in memory first before returning them as a stream.

      Returns:
      a stream with all results returned by the `mapper` function
      Throws:
      Exception
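
      For illustration, a sketch (assuming a MapReducer<OSMEntitySnapshot> named snapshotMapReducer; the stream() call may throw Exception):

        // stream the transformed results instead of reducing them
        snapshotMapReducer
            .map(snapshot -> snapshot.getEntity().getId())
            .stream()
            .forEach(System.out::println);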
    • mapStreamCellsOSMContribution

      protected abstract Stream<X> mapStreamCellsOSMContribution(SerializableFunction<OSMContribution,X> mapper) throws Exception
      Throws:
      Exception
    • flatMapStreamCellsOSMContributionGroupedById

      protected abstract Stream<X> flatMapStreamCellsOSMContributionGroupedById(SerializableFunction<List<OSMContribution>,Iterable<X>> mapper) throws Exception
      Throws:
      Exception
    • mapStreamCellsOSMEntitySnapshot

      protected abstract Stream<X> mapStreamCellsOSMEntitySnapshot(SerializableFunction<OSMEntitySnapshot,X> mapper) throws Exception
      Throws:
      Exception
    • flatMapStreamCellsOSMEntitySnapshotGroupedById

      protected abstract Stream<X> flatMapStreamCellsOSMEntitySnapshotGroupedById(SerializableFunction<List<OSMEntitySnapshot>,Iterable<X>> mapper) throws Exception
      Throws:
      Exception
    • mapReduceCellsOSMContribution

      protected abstract <R, S> S mapReduceCellsOSMContribution(SerializableFunction<OSMContribution,R> mapper, SerializableSupplier<S> identitySupplier, SerializableBiFunction<S,R,S> accumulator, SerializableBinaryOperator<S> combiner) throws Exception
      Generic map-reduce used by the `OSMContributionView`.

      The combination of the used types and identity/reducer functions must make "mathematical" sense:

      • the accumulator and combiner functions need to be associative,
      • values generated by the identitySupplier factory must be an identity for the combiner function: `combiner(identitySupplier(),x)` must be equal to `x`,
      • the combiner function must be compatible with the accumulator function: `combiner(u, accumulator(identitySupplier(), t)) == accumulator.apply(u, t)`

      Functionally, this interface is similar to Java11 Stream's reduce(identity,accumulator,combiner) interface.

      Type Parameters:
      R - the data type returned by the `mapper` function
      S - the data type used to contain the "reduced" (intermediate and final) results
      Parameters:
      mapper - a function that's called for each `OSMContribution`
      identitySupplier - a factory function that returns a new starting value to reduce results into (e.g. when summing values, one needs to start at zero)
      accumulator - a function that takes a result from the `mapper` function (type <R>) and an accumulation value (type <S>, e.g. the result of `identitySupplier()`) and returns the "sum" of the two; contrary to `combiner`, this function is allowed to alter (mutate) the state of the accumulation value (e.g. directly adding new values to an existing Set object)
      combiner - a function that calculates the "sum" of two <S> values; this function must be pure (have no side effects), and is not allowed to alter the state of the two input objects it gets!
      Returns:
      the result of the map-reduce operation, the final result of the last call to the `combiner` function, after all `mapper` results have been aggregated (in the `accumulator` and `combiner` steps)
      Throws:
      Exception
    • flatMapReduceCellsOSMContributionGroupedById

      protected abstract <R, S> S flatMapReduceCellsOSMContributionGroupedById(SerializableFunction<List<OSMContribution>,Iterable<R>> mapper, SerializableSupplier<S> identitySupplier, SerializableBiFunction<S,R,S> accumulator, SerializableBinaryOperator<S> combiner) throws Exception
      Generic "flat" version of the map-reduce used by the `OSMContributionView`, with by-osm-id grouped input to the `mapper` function.

      Contrary to the "normal" map-reduce, the "flat" version adds the possibility to return any number of results in the `mapper` function. Additionally, this interface provides the `mapper` function with a list of all `OSMContribution`s of a particular OSM entity. This is used to do more complex analyses that require the full edit history of the respective OSM entities as input.

      The combination of the used types and identity/reducer functions must make "mathematical" sense:

      • the accumulator and combiner functions need to be associative,
      • values generated by the identitySupplier factory must be an identity for the combiner function: `combiner(identitySupplier(),x)` must be equal to `x`,
      • the combiner function must be compatible with the accumulator function: `combiner(u, accumulator(identitySupplier(), t)) == accumulator.apply(u, t)`

      Functionally, this interface is similar to Java11 Stream's reduce(identity,accumulator,combiner) interface.

      Type Parameters:
      R - the data type returned by the `mapper` function
      S - the data type used to contain the "reduced" (intermediate and final) results
      Parameters:
      mapper - a function that's called for all `OSMContribution`s of a particular OSM entity; returns a list of results (which can have any number of entries).
      identitySupplier - a factory function that returns a new starting value to reduce results into (e.g. when summing values, one needs to start at zero)
      accumulator - a function that takes a result from the `mapper` function (type <R>) and an accumulation value (type <S>, e.g. the result of `identitySupplier()`) and returns the "sum" of the two; contrary to `combiner`, this function is allowed to alter (mutate) the state of the accumulation value (e.g. directly adding new values to an existing Set object)
      combiner - a function that calculates the "sum" of two <S> values; this function must be pure (have no side effects), and is not allowed to alter the state of the two input objects it gets!
      Returns:
      the result of the map-reduce operation, the final result of the last call to the `combiner` function, after all `mapper` results have been aggregated (in the `accumulator` and `combiner` steps)
      Throws:
      Exception
    • mapReduceCellsOSMEntitySnapshot

      protected abstract <R, S> S mapReduceCellsOSMEntitySnapshot(SerializableFunction<OSMEntitySnapshot,R> mapper, SerializableSupplier<S> identitySupplier, SerializableBiFunction<S,R,S> accumulator, SerializableBinaryOperator<S> combiner) throws Exception
      Generic map-reduce used by the `OSMEntitySnapshotView`.

      The combination of the used types and identity/reducer functions must make "mathematical" sense:

      • the accumulator and combiner functions need to be associative,
      • values generated by the identitySupplier factory must be an identity for the combiner function: `combiner(identitySupplier(),x)` must be equal to `x`,
      • the combiner function must be compatible with the accumulator function: `combiner(u, accumulator(identitySupplier(), t)) == accumulator.apply(u, t)`

      Functionally, this interface is similar to Java11 Stream's reduce(identity,accumulator,combiner) interface.

      Type Parameters:
      R - the data type returned by the `mapper` function
      S - the data type used to contain the "reduced" (intermediate and final) results
      Parameters:
      mapper - a function that's called for each `OSMEntitySnapshot`
      identitySupplier - a factory function that returns a new starting value to reduce results into (e.g. when summing values, one needs to start at zero)
      accumulator - a function that takes a result from the `mapper` function (type <R>) and an accumulation value (type <S>, e.g. the result of `identitySupplier()`) and returns the "sum" of the two; contrary to `combiner`, this function is allowed to alter (mutate) the state of the accumulation value (e.g. directly adding new values to an existing Set object)
      combiner - a function that calculates the "sum" of two <S> values; this function must be pure (have no side effects), and is not allowed to alter the state of the two input objects it gets!
      Returns:
      the result of the map-reduce operation, the final result of the last call to the `combiner` function, after all `mapper` results have been aggregated (in the `accumulator` and `combiner` steps)
      Throws:
      Exception
    • flatMapReduceCellsOSMEntitySnapshotGroupedById

      protected abstract <R, S> S flatMapReduceCellsOSMEntitySnapshotGroupedById(SerializableFunction<List<OSMEntitySnapshot>,Iterable<R>> mapper, SerializableSupplier<S> identitySupplier, SerializableBiFunction<S,R,S> accumulator, SerializableBinaryOperator<S> combiner) throws Exception
      Generic "flat" version of the map-reduce used by the `OSMEntitySnapshotView`, with by-osm-id grouped input to the `mapper` function.

      Contrary to the "normal" map-reduce, the "flat" version adds the possibility to return any number of results in the `mapper` function. Additionally, this interface provides the `mapper` function with a list of all `OSMEntitySnapshot`s of a particular OSM entity. This is used to do more complex analyses that require the full list of snapshots of the respective OSM entities as input.

      The combination of the used types and identity/reducer functions must make "mathematical" sense:

      • the accumulator and combiner functions need to be associative,
      • values generated by the identitySupplier factory must be an identity for the combiner function: `combiner(identitySupplier(),x)` must be equal to `x`,
      • the combiner function must be compatible with the accumulator function: `combiner(u, accumulator(identitySupplier(), t)) == accumulator.apply(u, t)`

      Functionally, this interface is similar to Java11 Stream's reduce(identity,accumulator,combiner) interface.

      Type Parameters:
      R - the data type returned by the `mapper` function
      S - the data type used to contain the "reduced" (intermediate and final) results
      Parameters:
      mapper - a function that's called for all `OSMEntitySnapshot`s of a particular OSM entity; returns a list of results (which can have any number of entries)
      identitySupplier - a factory function that returns a new starting value to reduce results into (e.g. when summing values, one needs to start at zero)
      accumulator - a function that takes a result from the `mapper` function (type <R>) and an accumulation value (type <S>, e.g. the result of `identitySupplier()`) and returns the "sum" of the two; contrary to `combiner`, this function is allowed to alter (mutate) the state of the accumulation value (e.g. directly adding new values to an existing Set object)
      combiner - a function that calculates the "sum" of two <S> values; this function must be pure (have no side effects), and is not allowed to alter the state of the two input objects it gets!
      Returns:
      the result of the map-reduce operation, the final result of the last call to the `combiner` function, after all `mapper` results have been aggregated (in the `accumulator` and `combiner` steps)
      Throws:
      Exception
    • isOSMContributionViewQuery

      protected boolean isOSMContributionViewQuery()
    • isOSMEntitySnapshotViewQuery

      protected boolean isOSMEntitySnapshotViewQuery()
    • getTagInterpreter

      protected TagInterpreter getTagInterpreter() throws org.json.simple.parser.ParseException, IOException
      Throws:
      org.json.simple.parser.ParseException
      IOException
    • getPreFilter

      protected OSHEntityFilter getPreFilter()
    • getFilter

      protected OSMEntityFilter getFilter()
    • getCellIdRanges

      protected Iterable<XYGridTree.CellIdRange> getCellIdRanges()
    • getPolyFilter

      protected <P extends org.locationtech.jts.geom.Geometry & org.locationtech.jts.geom.Polygonal> P getPolyFilter()