Object KafkaSparkStructuredStreamingReader

Package it.agilelab.bigdata.wasp.consumers.spark.plugins.kafka

object KafkaSparkStructuredStreamingReader extends SparkStructuredStreamingReader with Logging

Linear Supertypes
Logging, SparkStructuredStreamingReader, AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  6. def createStructuredStream(etl: StructuredStreamingETLModel, streamingReaderModel: StreamingReaderModel)(implicit ss: SparkSession): DataFrame

    Creates a streaming DataFrame from a Kafka streaming source.

    If all the input topics share the same schema, the returned DataFrame will contain a column named "kafkaMetadata" with the message metadata, and the message contents either as a single column named "value" or as multiple columns named after the value fields, depending on the topic data type.

    If the input topics do not share the same schema, the returned DataFrame will contain a column named "kafkaMetadata" with the message metadata, plus one column per topic, named after the topic name as escaped by MultiTopicModel.topicModelNames(). This means that if 5 topic models with different schemas are read, the output DataFrame will contain 6 columns; for each message, only "kafkaMetadata" and the column of the topic the message came from will have a non-null value, as in the following example:

    +--------------------+--------------------+-------------------------+
    |       kafkaMetadata|     test_json_topic|testcheckpoint_avro_topic|
    +--------------------+--------------------+-------------------------+
    |[45, [], test_jso...|[45, 45, [field1_...|                     null|
    |[12, [], testchec...|                null|      [12, 77, [field1_..|
    +--------------------+--------------------+-------------------------+
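
    As a rough sketch (assuming a DataFrame named df obtained from this reader and the topic names from the example above), the records coming from a single topic can be isolated and their struct column exploded like this:

    import org.apache.spark.sql.functions.col

    // keep only the messages that came from the JSON topic and
    // flatten its struct column into top-level fields
    val jsonRecords = df
      .where(col("kafkaMetadata.topic") === "test_json_topic")
      .select(col("kafkaMetadata"), col("test_json_topic.*"))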

    The kafkaMetadata column is a record with the following fields:

    • key: bytes
    • headers: array of {headerKey: string, headerValue: bytes}
    • topic: string
    • partition: int
    • offset: long
    • timestamp: timestamp
    • timestampType: int
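
    In Spark SQL terms, the metadata column roughly corresponds to the following schema (a sketch derived from the field list above):

    import org.apache.spark.sql.types._

    // approximate Spark schema of the kafkaMetadata column
    val kafkaMetadataSchema = StructType(Seq(
      StructField("key", BinaryType),
      StructField("headers", ArrayType(StructType(Seq(
        StructField("headerKey", StringType),
        StructField("headerValue", BinaryType)
      )))),
      StructField("topic", StringType),
      StructField("partition", IntegerType),
      StructField("offset", LongType),
      StructField("timestamp", TimestampType),
      StructField("timestampType", IntegerType)
    ))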


    The behaviour for the message contents column(s) is the following:

    • the avro and json topic data types will output the columns specified by their schemas
    • the plaintext and bytes topic data types output a value column with the contents as string or bytes respectively

    A parsing mode can also be specified for Avro/JSON deserialization; this parameter can be:

    • Strict: the job will fail when a record cannot be parsed
    • Ignore: records that cannot be parsed are filtered out of the resulting DataFrame
    • Handle: instead of exploding the schema of the parsed record, two columns are produced: the first, named raw, contains the raw value (bytes) when parsing fails, while the other(s) contain the parsed value (i.e. a struct or a primitive), which will be null when parsing fails

    In Strict and Ignore mode the resulting DataFrame will have the same schema described above.
    In Handle mode, for a single topic whose type is Avro/JSON, the structure will have:

    • kafkaMetadata -> same as in the other modes
    • raw -> null if parsing succeeded, otherwise the raw byte array of the message
    • value -> a struct column containing the parsed record, or null if parsing failed

    For example:

    +--------------------+--------------------+-------------------------+
    |       kafkaMetadata|                 raw|                    value|
    +--------------------+--------------------+-------------------------+
    |[45, [], test_jso...|          [45, 47..]|                     null|
    |[12, [], testchec...|                null|      [12, 77, [field1_..|
    +--------------------+--------------------+-------------------------+

    Handle mode is meant to separate good data from bad data with a simple where(col("raw").isNull); the good data can then be exploded with select(col("value.*")).
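
    A minimal sketch of this pattern (assuming df is the DataFrame returned in Handle mode for a single Avro/JSON topic):

    import org.apache.spark.sql.functions.col

    // records that failed parsing: raw holds the original bytes
    val badRecords = df.where(col("raw").isNotNull)

    // records parsed successfully: flatten the parsed struct into columns
    val goodRecords = df.where(col("raw").isNull).select(col("value.*"))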

    N.B. in single-topic reading mode, the parsing mode is ignored for the binary/plaintext topic types.

    In the multi-topic scenario, Handle parsing mode behaves like standard multi-topic mode plus the raw column, which has the same usage as in single-topic mode. An example result:

    +--------------------------------+----+------------+--------------+-----------------+--------------+
    |                   kafkaMetadata| raw|  topic_json|  topic_binary|      topic_plain|    topic_avro|
    +--------------------------------+----+------------+--------------+-----------------+--------------+
    |[1, [], topic_json,.............|null| [1, valore]|          null|             null|          null|
    |[1, [], topic_binary,...........|null|        null|   binary_test|             null|          null|
    |[1, [], topic_plain,............|null|        null|          null|   plaintext_test|          null|
    |[1, [], topic_avro,.............|[05]|        null|          null|             null|          null|
    |[1, [], topic_avro,.............|null|        null|          null|             null|   [1, valore]|
    +--------------------------------+----+------------+--------------+-----------------+--------------+

    In case of a parsing error the raw column will be populated, otherwise it will be null. To access the parsed value, select the ${topic_name} column; this field will be null if parsing failed.

    N.B. when a parsing error occurs, every ${topic_name} column will have a null value; you can tell on which topic the parsing error occurred by looking at the `kafkaMetadata.topic` field.

    N.B. for the plaintext and binary types, since it is possible to have `${topic_name}.value` = null, the raw column will be null as well (since it is populated only for parsing errors, which cannot happen for these types).
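
    For reference, a minimal usage sketch (assuming etl and streamingReaderModel are existing StructuredStreamingETLModel and StreamingReaderModel instances, and that a SparkSession is available):

    import org.apache.spark.sql.{DataFrame, SparkSession}

    implicit val spark: SparkSession = SparkSession.builder().getOrCreate()

    // build the streaming DataFrame described above; etl and
    // streamingReaderModel are assumed to be defined elsewhere
    val df: DataFrame = KafkaSparkStructuredStreamingReader
      .createStructuredStream(etl, streamingReaderModel)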

    Definition Classes
    KafkaSparkStructuredStreamingReader → SparkStructuredStreamingReader
  7. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  8. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  9. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  10. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  11. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  12. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  13. val logger: WaspLogger

    Attributes
    protected
    Definition Classes
    Logging
  14. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  15. final def notify(): Unit

    Definition Classes
    AnyRef
  16. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  17. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  18. def toString(): String

    Definition Classes
    AnyRef → Any
  19. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  20. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  21. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
