object SparkNLP
- By Inheritance
- SparkNLP
- AnyRef
- Any
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- val MavenGpuSpark3: String
- val MavenSpark3: String
- val MavenSparkAarch64: String
- val MavenSparkSilicon: String
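These `vals` hold the Maven coordinates of the Spark NLP artifacts for each supported platform. A minimal sketch of using one of them when building a SparkSession manually instead of calling `SparkNLP.start()` (the exact Spark settings shown are illustrative, not a guaranteed match for what `start()` configures):

```scala
// Sketch: configuring a SparkSession by hand with the Spark NLP package.
// The coordinate string comes from SparkNLP.MavenSpark3; swap in
// MavenGpuSpark3, MavenSparkSilicon, or MavenSparkAarch64 for other platforms.
import org.apache.spark.sql.SparkSession
import com.johnsnowlabs.nlp.SparkNLP

val spark = SparkSession
  .builder()
  .appName("Spark NLP")
  .master("local[*]")
  .config("spark.driver.memory", "16G")
  // CPU artifact; use SparkNLP.MavenGpuSpark3 on GPU clusters
  .config("spark.jars.packages", SparkNLP.MavenSpark3)
  .getOrCreate()
```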
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @HotSpotIntrinsicCandidate() @native()
- val currentVersion: String
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @HotSpotIntrinsicCandidate() @native()
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @HotSpotIntrinsicCandidate() @native()
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @HotSpotIntrinsicCandidate() @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @HotSpotIntrinsicCandidate() @native()
- def read(params: Map[String, String]): SparkNLPReader
- def read: SparkNLPReader
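Both overloads return a `SparkNLPReader` for loading raw documents into DataFrames. A hedged sketch, assuming the reader exposes `pdf`/`html` methods and accepts parameters such as `titleFontSize`, as in the Python `sparknlp.read()` API:

```scala
// Sketch: reading documents with SparkNLPReader (method names and the
// titleFontSize parameter are assumptions based on the Python API).
import com.johnsnowlabs.nlp.SparkNLP

val spark = SparkNLP.start()

// Reader with default settings
val pdfDf = SparkNLP.read.pdf("path/to/documents/")

// Reader configured via custom parameters
val htmlDf = SparkNLP.read(Map("titleFontSize" -> "12")).html("https://example.com")
```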
- def start(gpu: Boolean = false, apple_silicon: Boolean = false, aarch64: Boolean = false, memory: String = "16G", cache_folder: String = "", log_folder: String = "", cluster_tmp_dir: String = "", params: Map[String, String] = Map.empty): SparkSession
Start SparkSession with Spark NLP
- gpu
start Spark NLP with GPU
- apple_silicon
start Spark NLP for Apple M1 & M2 systems
- aarch64
start Spark NLP for Linux Aarch64 systems
- memory
set driver memory for SparkSession
- cache_folder
The location to download and extract pretrained Models and Pipelines (by default, it will be in the user's home directory under cache_pretrained)
- log_folder
The location to save logs from annotators during training (by default, it will be in the user's home directory under annotator_logs)
- cluster_tmp_dir
The location to use on a cluster for temporary files, such as unpacking indexes for WordEmbeddings. By default, this is the location of hadoop.tmp.dir set via Hadoop configuration for Apache Spark. NOTE: S3 is not supported; it must be local, HDFS, or DBFS.
- params
Custom parameters to set for the Spark configuration (Default: Map.empty)
- returns
SparkSession
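All parameters of `start()` have defaults, so the common cases are short. A usage sketch (the extra Spark setting passed via `params` is illustrative):

```scala
// Sketch: typical ways to start a Spark NLP session.
import com.johnsnowlabs.nlp.SparkNLP

// Default CPU session with 16G driver memory
val spark = SparkNLP.start()

// GPU session with more driver memory and a custom Spark setting
val gpuSpark = SparkNLP.start(
  gpu = true,
  memory = "32G",
  params = Map("spark.driver.maxResultSize" -> "2000M")
)

// The library version; version() returns the same string as currentVersion
println(SparkNLP.version())
```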
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- AnyRef → Any
- def version(): String
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
Deprecated Value Members
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable]) @Deprecated
- Deprecated
(Since version 9)