org.bdgenomics.adam.api.java

JavaADAMContext

class JavaADAMContext extends Serializable

The JavaADAMContext provides Java-friendly functions on top of ADAMContext, allowing genomic data to be loaded from Java code.

Linear Supertypes
Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new JavaADAMContext(ac: ADAMContext)

    ac

    The ADAMContext to wrap.
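A minimal sketch of constructing a JavaADAMContext from Java. The package location of ADAMContext (here assumed to be org.bdgenomics.adam.rdd) and the local Spark master setting are illustrative assumptions:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.bdgenomics.adam.api.java.JavaADAMContext;
import org.bdgenomics.adam.rdd.ADAMContext;

public class JavaADAMContextExample {
    public static void main(String[] args) {
        // Build a local Spark context; in a cluster deployment the
        // master and other settings would come from spark-submit.
        SparkConf conf = new SparkConf()
            .setAppName("adam-example")
            .setMaster("local[*]");
        JavaSparkContext jsc = new JavaSparkContext(conf);

        // Wrap the underlying SparkContext in an ADAMContext, then wrap
        // that in a JavaADAMContext to get the java-friendly methods.
        ADAMContext ac = new ADAMContext(jsc.sc());
        JavaADAMContext jac = new JavaADAMContext(ac);

        // getSparkContext recovers the JavaSparkContext wrapper.
        JavaSparkContext recovered = jac.getSparkContext();
    }
}
```

The snippets under the load methods below assume a `jac` constructed this way.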

Value Members

  6. val ac: ADAMContext

    The ADAMContext to wrap.

  13. def getSparkContext: JavaSparkContext

    returns

    Returns the Java Spark Context associated with this Java ADAM Context.

  16. def loadAlignments(pathName: String, stringency: ValidationStringency): AlignmentRecordRDD

    Load alignment records into an AlignmentRecordRDD (java-friendly method).

    Loads path names ending in:
      * .bam/.cram/.sam as BAM/CRAM/SAM format,
      * .fa/.fasta as FASTA format,
      * .fq/.fastq as FASTQ format, and
      * .ifq as interleaved FASTQ format.

    If none of these match, fall back to Parquet + Avro.

    For FASTA, FASTQ, and interleaved FASTQ formats, compressed files are supported through compression codecs configured in Hadoop, which by default include .gz and .bz2, but can include more.

    pathName

    The path name to load alignment records from. Globs/directories are supported, although file extension must be present for BAM/CRAM/SAM, FASTA, and FASTQ formats.

    stringency

    The validation stringency to use when validating BAM/CRAM/SAM or FASTQ formats.

    returns

    Returns an AlignmentRecordRDD which wraps the RDD of alignment records, sequence dictionary representing contigs the alignment records may be aligned to, and the record group dictionary for the alignment records if one is available.

    See also

    ADAMContext#loadAlignments

  17. def loadAlignments(pathName: String): AlignmentRecordRDD

    Load alignment records into an AlignmentRecordRDD (java-friendly method).

    Loads path names ending in:
      * .bam/.cram/.sam as BAM/CRAM/SAM format,
      * .fa/.fasta as FASTA format,
      * .fq/.fastq as FASTQ format, and
      * .ifq as interleaved FASTQ format.

    If none of these match, fall back to Parquet + Avro.

    For FASTA, FASTQ, and interleaved FASTQ formats, compressed files are supported through compression codecs configured in Hadoop, which by default include .gz and .bz2, but can include more.

    pathName

    The path name to load alignment records from. Globs/directories are supported, although file extension must be present for BAM/CRAM/SAM, FASTA, and FASTQ formats.

    returns

    Returns an AlignmentRecordRDD which wraps the RDD of alignment records, sequence dictionary representing contigs the alignment records may be aligned to, and the record group dictionary for the alignment records if one is available.

    See also

    ADAMContext#loadAlignments
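As a sketch, loading a BAM file with and without an explicit stringency. The path is hypothetical, `jac` is a JavaADAMContext as constructed above, and ValidationStringency is assumed to be htsjdk's enum:

```java
import htsjdk.samtools.ValidationStringency;
import org.bdgenomics.adam.rdd.read.AlignmentRecordRDD;

// The format is chosen from the extension: .bam here selects BAM.
AlignmentRecordRDD reads = jac.loadAlignments("/data/sample.bam");

// With LENIENT stringency, malformed records are logged rather than
// failing the load outright.
AlignmentRecordRDD lenient =
    jac.loadAlignments("/data/sample.bam", ValidationStringency.LENIENT);
```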

  18. def loadContigFragments(pathName: String): NucleotideContigFragmentRDD

    Load nucleotide contig fragments into a NucleotideContigFragmentRDD (java-friendly method).

    If the path name has a .fa/.fasta extension, load as FASTA format. Else, fall back to Parquet + Avro.

    For FASTA format, compressed files are supported through compression codecs configured in Hadoop, which by default include .gz and .bz2, but can include more.

    pathName

    The path name to load nucleotide contig fragments from. Globs/directories are supported, although file extension must be present for FASTA format.

    returns

    Returns a NucleotideContigFragmentRDD.

    See also

    ADAMContext#loadContigFragments

  19. def loadCoverage(pathName: String, stringency: ValidationStringency): CoverageRDD

    Load features into a FeatureRDD and convert to a CoverageRDD (java-friendly method). Coverage is stored in the score field of Feature.

    Loads path names ending in:
      * .bed as BED6/12 format,
      * .gff3 as GFF3 format,
      * .gtf/.gff as GTF/GFF2 format,
      * .narrow[pP]eak as NarrowPeak format, and
      * .interval_list as IntervalList format.

    If none of these match, fall back to Parquet + Avro.

    For BED6/12, GFF3, GTF/GFF2, NarrowPeak, and IntervalList formats, compressed files are supported through compression codecs configured in Hadoop, which by default include .gz and .bz2, but can include more.

    pathName

    The path name to load features from. Globs/directories are supported, although file extension must be present for BED6/12, GFF3, GTF/GFF2, NarrowPeak, or IntervalList formats.

    stringency

    The validation stringency to use when validating BED6/12, GFF3, GTF/GFF2, NarrowPeak, or IntervalList formats.

    returns

    Returns a FeatureRDD converted to a CoverageRDD.

    See also

    ADAMContext#loadCoverage

  20. def loadCoverage(pathName: String): CoverageRDD

    Load features into a FeatureRDD and convert to a CoverageRDD (java-friendly method). Coverage is stored in the score field of Feature.

    Loads path names ending in:
      * .bed as BED6/12 format,
      * .gff3 as GFF3 format,
      * .gtf/.gff as GTF/GFF2 format,
      * .narrow[pP]eak as NarrowPeak format, and
      * .interval_list as IntervalList format.

    If none of these match, fall back to Parquet + Avro.

    For BED6/12, GFF3, GTF/GFF2, NarrowPeak, and IntervalList formats, compressed files are supported through compression codecs configured in Hadoop, which by default include .gz and .bz2, but can include more.

    pathName

    The path name to load features from. Globs/directories are supported, although file extension must be present for BED6/12, GFF3, GTF/GFF2, NarrowPeak, or IntervalList formats.

    returns

    Returns a FeatureRDD converted to a CoverageRDD.

    See also

    ADAMContext#loadCoverage
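A sketch of loading per-base coverage from a BED file (the path is hypothetical; `jac` is a JavaADAMContext as above):

```java
import org.bdgenomics.adam.rdd.feature.CoverageRDD;

// Features are parsed as BED6/12 (selected by the .bed extension) and
// each feature's score field is interpreted as coverage.
CoverageRDD coverage = jac.loadCoverage("/data/coverage.bed");
```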

  21. def loadFeatures(pathName: String, stringency: ValidationStringency): FeatureRDD

    Load features into a FeatureRDD (java-friendly method).

    Loads path names ending in:
      * .bed as BED6/12 format,
      * .gff3 as GFF3 format,
      * .gtf/.gff as GTF/GFF2 format,
      * .narrow[pP]eak as NarrowPeak format, and
      * .interval_list as IntervalList format.

    If none of these match, fall back to Parquet + Avro.

    For BED6/12, GFF3, GTF/GFF2, NarrowPeak, and IntervalList formats, compressed files are supported through compression codecs configured in Hadoop, which by default include .gz and .bz2, but can include more.

    pathName

    The path name to load features from. Globs/directories are supported, although file extension must be present for BED6/12, GFF3, GTF/GFF2, NarrowPeak, or IntervalList formats.

    stringency

    The validation stringency to use when validating BED6/12, GFF3, GTF/GFF2, NarrowPeak, or IntervalList formats.

    returns

    Returns a FeatureRDD.

    See also

    ADAMContext#loadFeatures

  22. def loadFeatures(pathName: String): FeatureRDD

    Load features into a FeatureRDD (java-friendly method).

    Loads path names ending in:
      * .bed as BED6/12 format,
      * .gff3 as GFF3 format,
      * .gtf/.gff as GTF/GFF2 format,
      * .narrow[pP]eak as NarrowPeak format, and
      * .interval_list as IntervalList format.

    If none of these match, fall back to Parquet + Avro.

    For BED6/12, GFF3, GTF/GFF2, NarrowPeak, and IntervalList formats, compressed files are supported through compression codecs configured in Hadoop, which by default include .gz and .bz2, but can include more.

    pathName

    The path name to load features from. Globs/directories are supported, although file extension must be present for BED6/12, GFF3, GTF/GFF2, NarrowPeak, or IntervalList formats.

    returns

    Returns a FeatureRDD.

    See also

    ADAMContext#loadFeatures
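A sketch of loading features, including from a Hadoop-compressed GTF (paths are hypothetical; `jac` is a JavaADAMContext as above, and ValidationStringency is assumed to be htsjdk's enum):

```java
import htsjdk.samtools.ValidationStringency;
import org.bdgenomics.adam.rdd.feature.FeatureRDD;

// The extension selects the parser: .gff3 -> GFF3, .gtf -> GTF/GFF2, etc.
FeatureRDD genes = jac.loadFeatures("/data/genes.gff3");

// Gzipped text formats are handled via Hadoop compression codecs;
// STRICT stringency fails the load on any malformed line.
FeatureRDD strict =
    jac.loadFeatures("/data/annotation.gtf.gz", ValidationStringency.STRICT);
```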

  23. def loadFragments(pathName: String, stringency: ValidationStringency): FragmentRDD

    Load fragments into a FragmentRDD (java-friendly method).

    Loads path names ending in:
      * .bam/.cram/.sam as BAM/CRAM/SAM format and
      * .ifq as interleaved FASTQ format.

    If none of these match, fall back to Parquet + Avro.

    For interleaved FASTQ format, compressed files are supported through compression codecs configured in Hadoop, which by default include .gz and .bz2, but can include more.

    pathName

    The path name to load fragments from. Globs/directories are supported, although file extension must be present for BAM/CRAM/SAM and FASTQ formats.

    stringency

    The validation stringency to use when validating BAM/CRAM/SAM or FASTQ formats.

    returns

    Returns a FragmentRDD.

    See also

    ADAMContext#loadFragments

  24. def loadFragments(pathName: String): FragmentRDD

    Load fragments into a FragmentRDD (java-friendly method).

    Loads path names ending in:
      * .bam/.cram/.sam as BAM/CRAM/SAM format and
      * .ifq as interleaved FASTQ format.

    If none of these match, fall back to Parquet + Avro.

    For interleaved FASTQ format, compressed files are supported through compression codecs configured in Hadoop, which by default include .gz and .bz2, but can include more.

    pathName

    The path name to load fragments from. Globs/directories are supported, although file extension must be present for BAM/CRAM/SAM and FASTQ formats.

    returns

    Returns a FragmentRDD.

    See also

    ADAMContext#loadFragments

  25. def loadGenotypes(pathName: String, stringency: ValidationStringency): GenotypeRDD

    Load genotypes into a GenotypeRDD (java-friendly method).

    If the path name has a .vcf/.vcf.gz/.vcf.bgzf/.vcf.bgz extension, load as VCF format. Else, fall back to Parquet + Avro.

    pathName

    The path name to load genotypes from. Globs/directories are supported, although file extension must be present for VCF format.

    stringency

    The validation stringency to use when validating VCF format.

    returns

    Returns a GenotypeRDD.

    See also

    ADAMContext#loadGenotypes

  26. def loadGenotypes(pathName: String): GenotypeRDD

    Load genotypes into a GenotypeRDD (java-friendly method).

    If the path name has a .vcf/.vcf.gz/.vcf.bgzf/.vcf.bgz extension, load as VCF format. Else, fall back to Parquet + Avro.

    pathName

    The path name to load genotypes from. Globs/directories are supported, although file extension must be present for VCF format.

    returns

    Returns a GenotypeRDD.

    See also

    ADAMContext#loadGenotypes
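A sketch of loading genotypes from VCF and from a previously saved Parquet + Avro dataset. The paths, including the `.adam` directory name, are hypothetical; `jac` is a JavaADAMContext as above:

```java
import org.bdgenomics.adam.rdd.variant.GenotypeRDD;

// A .vcf.gz extension selects VCF; the gzip codec comes from Hadoop.
GenotypeRDD genotypes = jac.loadGenotypes("/data/sample.vcf.gz");

// Any other path falls back to Parquet + Avro, e.g. a dataset
// previously saved by ADAM.
GenotypeRDD fromParquet = jac.loadGenotypes("/data/genotypes.adam");
```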

  27. def loadReferenceFile(pathName: String): ReferenceFile

    Load reference sequences into a broadcastable ReferenceFile (java-friendly method).

    If the path name has a .2bit extension, loads a 2bit file. Else, uses loadContigFragments to load the reference as an RDD, which is then collected to the driver. Uses a maximum fragment length of 10kbp.

    pathName

    The path name to load reference sequences from. Globs/directories for 2bit format are not supported.

    returns

    Returns a broadcastable ReferenceFile.

    See also

    loadContigFragments

  28. def loadReferenceFile(pathName: String, maximumLength: Long): ReferenceFile

    Load reference sequences into a broadcastable ReferenceFile (java-friendly method).

    If the path name has a .2bit extension, loads a 2bit file. Else, uses loadContigFragments to load the reference as an RDD, which is then collected to the driver.

    pathName

    The path name to load reference sequences from. Globs/directories for 2bit format are not supported.

    maximumLength

    Maximum fragment length. Defaults to 10000L. Values greater than 1e9 should be avoided.

    returns

    Returns a broadcastable ReferenceFile.

    See also

    loadContigFragments
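A sketch contrasting the two overloads. The paths are hypothetical, and the package location of ReferenceFile (here assumed to be org.bdgenomics.adam.util) is an assumption; `jac` is a JavaADAMContext as above:

```java
import org.bdgenomics.adam.util.ReferenceFile;

// A .2bit extension loads a 2bit file directly; note that
// globs/directories are not supported for 2bit format.
ReferenceFile ref = jac.loadReferenceFile("/data/hg38.2bit");

// A FASTA reference goes through loadContigFragments and is collected
// to the driver, here with 1 Mbp fragments instead of the 10 kbp default.
ReferenceFile fasta = jac.loadReferenceFile("/data/hg38.fa", 1000000L);
```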

  29. def loadVariants(pathName: String, stringency: ValidationStringency): VariantRDD

    Load variants into a VariantRDD (java-friendly method).

    If the path name has a .vcf/.vcf.gz/.vcf.bgzf/.vcf.bgz extension, load as VCF format. Else, fall back to Parquet + Avro.

    pathName

    The path name to load variants from. Globs/directories are supported, although file extension must be present for VCF format.

    stringency

    The validation stringency to use when validating VCF format.

    returns

    Returns a VariantRDD.

    See also

    ADAMContext#loadVariants

  30. def loadVariants(pathName: String): VariantRDD

    Load variants into a VariantRDD (java-friendly method).

    If the path name has a .vcf/.vcf.gz/.vcf.bgzf/.vcf.bgz extension, load as VCF format. Else, fall back to Parquet + Avro.

    pathName

    The path name to load variants from. Globs/directories are supported, although file extension must be present for VCF format.

    returns

    Returns a VariantRDD.

    See also

    ADAMContext#loadVariants

