class KafkaContinuousStream extends ContinuousStream with Logging
A ContinuousStream that reads data from Kafka.
Linear Supertypes
- Logging
- ContinuousStream
- SparkDataStream
- AnyRef
- Any
Instance Constructors
- new KafkaContinuousStream(offsetReader: KafkaOffsetReader, kafkaParams: Map[String, AnyRef], options: CaseInsensitiveStringMap, metadataPath: String, initialOffsets: KafkaOffsetRangeLimit, failOnDataLoss: Boolean)
- offsetReader
A reader used to fetch Kafka offsets. Note that the actual data will be read by per-task consumers generated later.
- kafkaParams
String params for per-task Kafka consumers.
- options
Source options that are not Kafka consumer parameters.
- metadataPath
Path to a directory this reader can use for writing metadata.
- initialOffsets
The Kafka offsets to start reading data at.
- failOnDataLoss
Flag indicating whether reading should fail in data-loss scenarios, where some offsets after the specified initial ones cannot be read properly.
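This class is internal to the Kafka connector; applications do not construct it directly but obtain it implicitly by running a Kafka source with a continuous trigger. A minimal sketch using the public Structured Streaming API (the broker address and topic name below are placeholders):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

object ContinuousKafkaExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("continuous-kafka")
      .master("local[2]")
      .getOrCreate()

    // Reading from the "kafka" format with a continuous trigger causes
    // the source to plan a KafkaContinuousStream under the hood.
    val df = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
      .option("subscribe", "events")                       // placeholder topic
      .option("startingOffsets", "earliest")               // maps to initialOffsets
      .option("failOnDataLoss", "false")                   // maps to failOnDataLoss
      .load()

    val query = df.writeStream
      .format("console")
      .trigger(Trigger.Continuous("1 second")) // continuous processing mode
      .start()

    query.awaitTermination()
  }
}
```

The `startingOffsets` and `failOnDataLoss` options correspond to the `initialOffsets` and `failOnDataLoss` constructor parameters above; the remaining `kafka.`-prefixed options are forwarded to the per-task consumers via `kafkaParams`.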
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @native()
- def commit(end: Offset): Unit
- Definition Classes
- KafkaContinuousStream → SparkDataStream
- def createContinuousReaderFactory(): ContinuousPartitionReaderFactory
- Definition Classes
- KafkaContinuousStream → ContinuousStream
- def deserializeOffset(json: String): Offset
- Definition Classes
- KafkaContinuousStream → SparkDataStream
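deserializeOffset reconstructs the stream's Offset from the JSON form persisted in the checkpoint log. To illustrate the shape involved, the Kafka source's offsets group partition numbers under their topic; the exact layout is an assumption here, and the tiny hand-rolled writer keeps the example dependency-free:

```scala
// Hedged sketch: Kafka source offsets serialize roughly as
// {"events":{"0":42,"1":17}} -- topic, then partition -> offset.
// This writer is illustrative only, not the connector's code.
object OffsetJsonSketch {
  def toJson(offsets: Map[(String, Int), Long]): String = {
    val byTopic = offsets.groupBy { case ((topic, _), _) => topic }
    byTopic.toSeq.sortBy(_._1).map { case (topic, parts) =>
      val body = parts.toSeq.sortBy(_._1._2)
        .map { case ((_, p), off) => s""""$p":$off""" }
        .mkString(",")
      s""""$topic":{$body}"""
    }.mkString("{", ",", "}")
  }
}
```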
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable])
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def initialOffset(): Offset
- Definition Classes
- KafkaContinuousStream → SparkDataStream
- def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
- Attributes
- protected
- Definition Classes
- Logging
- def initializeLogIfNecessary(isInterpreter: Boolean): Unit
- Attributes
- protected
- Definition Classes
- Logging
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- def isTraceEnabled(): Boolean
- Attributes
- protected
- Definition Classes
- Logging
- def log: Logger
- Attributes
- protected
- Definition Classes
- Logging
- def logDebug(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logDebug(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logName: String
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def mergeOffsets(offsets: Array[PartitionOffset]): Offset
- Definition Classes
- KafkaContinuousStream → ContinuousStream
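mergeOffsets collapses the per-partition offsets reported by running tasks into a single global Offset for the epoch. A schematic analogue in plain Scala (the case-class shapes below are assumptions modeled loosely on the connector's internal types, not the real classes):

```scala
// Assumed stand-ins for the connector's internal offset types.
case class TopicPartition(topic: String, partition: Int)
case class KafkaPartitionOffset(tp: TopicPartition, offset: Long)

object MergeOffsetsSketch {
  // Collapse the offset reported for each topic-partition into one
  // global view, as mergeOffsets conceptually does for an epoch.
  def merge(offsets: Array[KafkaPartitionOffset]): Map[TopicPartition, Long] =
    offsets.map(p => p.tp -> p.offset).toMap

  def main(args: Array[String]): Unit = {
    val merged = merge(Array(
      KafkaPartitionOffset(TopicPartition("events", 0), 42L),
      KafkaPartitionOffset(TopicPartition("events", 1), 17L)))
    println(merged(TopicPartition("events", 0))) // prints 42
  }
}
```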
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def needsReconfiguration(): Boolean
- Definition Classes
- KafkaContinuousStream → ContinuousStream
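needsReconfiguration tells the continuous engine whether the partition layout planned at start still matches what the offset reader currently sees; when it returns true, the engine stops and replans the query with fresh partitions. A conceptual analogue (comparing planned vs. current partition sets is a simplification of the real check):

```scala
object ReconfigurationSketch {
  // Conceptual analogue, not the connector's code: a replanning pass
  // is needed once the set of Kafka topic-partitions differs from the
  // set that input partitions were originally planned against.
  def needsReconfiguration(planned: Set[(String, Int)],
                           current: Set[(String, Int)]): Boolean =
    planned != current
}
```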
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- def planInputPartitions(start: Offset): Array[InputPartition]
- Definition Classes
- KafkaContinuousStream → ContinuousStream
- def stop(): Unit
Stop this source and free any resources it has allocated.
- Definition Classes
- KafkaContinuousStream → SparkDataStream
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- KafkaContinuousStream → AnyRef → Any
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()