scala.meta.syntactic.tokenizers

LegacyScanner

Related Doc: package tokenizers

class LegacyScanner extends AnyRef

Linear Supertypes
AnyRef, Any

Instance Constructors

  1. new LegacyScanner(origin: Origin, decodeUni: Boolean = true)(implicit dialect: Dialect)
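
A minimal usage sketch: it constructs a scanner and walks every token through foreach. The Origin and Dialect values are left as ??? placeholders, since how they are obtained (and which packages they are imported from) depends on the scala.meta version; only the calls on LegacyScanner itself come from the members documented on this page.

    import scala.meta.syntactic.tokenizers._

    object ScanExample {
      // Placeholders: obtaining an Origin and a Dialect (and their imports)
      // depends on the scala.meta version in use, so they are left abstract.
      implicit val dialect: Dialect = ???
      val origin: Origin = ???

      // decodeUni defaults to true; the implicit dialect is picked up.
      val scanner = new LegacyScanner(origin)

      // foreach initializes the scanner and calls the given function
      // once per scanned LegacyTokenData.
      def run(): Unit = scanner.foreach(td => println(td))
    }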

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. val cbuf: collection.mutable.StringBuilder

    A character buffer for literals

  6. def charLitOr(op: () ⇒ Unit): Unit

    Parse a character literal if the current character is followed by a closing single quote; otherwise apply the given op and return a symbol literal token.

  7. def checkNoLetter(): Unit

  8. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  9. val curr: LegacyTokenData

  10. implicit val dialect: Dialect

  11. def discardDocBuffer(): Unit

    To prevent doc comments attached to expressions from leaking out of scope onto the next documentable entity, they are discarded upon passing a right brace, bracket, or parenthesis.

  12. def emitIdentifierDeprecationWarnings: Boolean

    Determines whether this scanner should emit identifier deprecation warnings, e.g. when seeing 'macro' or 'then', which are planned to become keywords in future versions of Scala.

    Attributes
    protected
  13. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  14. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  15. final def fetchToken(): Unit

    Read the next token, filling the TokenData fields of the scanner.

    Attributes
    protected
  16. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  17. def flushDoc(): Unit

  18. def foreach(f: (LegacyTokenData) ⇒ Unit): Unit

    Initialize scanner; call f on each scanned token data

  19. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  20. def getFraction(): Unit

    Read the fractional part and exponent of a floating point number, if one is present.

    Attributes
    protected
  21. def getLitChar(): Unit

    Copy the current character into cbuf, interpreting any escape sequences, and advance to the next character. (A conceptual sketch of escape handling appears after the member list below.)

    Attributes
    protected
  22. def getNumber(): Unit

    Read a number into strVal and set base.

    Attributes
    protected
  23. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  24. def invalidEscape(): Unit

    Attributes
    protected
  25. def isAtEnd: Boolean

  26. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  27. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  28. val next: LegacyTokenData

  29. def nextToken(): Unit

    Produce next token, filling curr TokenData fields of Scanner.

  30. final def notify(): Unit

    Definition Classes
    AnyRef
  31. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  32. val origin: Origin

  33. val prev: LegacyTokenData

  34. def putChar(c: Char): Unit

    Append a Unicode character to the cbuf buffer.

    Attributes
    protected
  35. def putCommentChar(): Unit

    Attributes
    protected
  36. val reader: CharArrayReader

  37. val reporter: Reporter

  38. def resume(lastCode: LegacyToken): Unit

  39. var sepRegions: List[LegacyToken]

    A stack of tokens which indicates whether line ends can be statement separators; it is also used for keeping track of nesting levels. We keep track of the closing symbol of a region. This can be

      RPAREN    if the region starts with '('
      RBRACKET  if the region starts with '['
      RBRACE    if the region starts with '{'
      ARROW     if the region starts with 'case'
      STRINGLIT if the region is a string interpolation expression starting with '${'
                (the STRINGLIT appears twice in succession on the stack iff the
                 expression is a multiline string literal)

    A conceptual sketch of this stack appears after the member list below.

  40. def skipBlockComment(): Unit

  41. def skipComment(): Boolean

  42. def skipDocComment(): Unit

  43. final def skipNestedComments(): Unit

    Annotations
    @tailrec()
  44. def skipToken(): Offset

    Read the next token and return the last offset.

  45. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  46. def toString(): String

    Definition Classes
    LegacyScanner → AnyRef → Any
  47. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  48. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  49. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
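
The sepRegions bookkeeping described for member 39 can be illustrated with a small, self-contained model. This is a sketch only, not the scanner's actual code: the closer names mirror the LegacyToken constants mentioned in the documentation above, and the newline rule at the end is a simplification.

    // Conceptual model of the sepRegions stack (illustration only):
    // each opening construct pushes the token expected to close it,
    // and that token is popped when the closer is actually seen.
    object SepRegionsModel {
      sealed trait Closer
      case object RPAREN    extends Closer // region started with '('
      case object RBRACKET  extends Closer // region started with '['
      case object RBRACE    extends Closer // region started with '{'
      case object ARROW     extends Closer // region started with 'case'
      case object STRINGLIT extends Closer // region started with '${'

      var sepRegions: List[Closer] = Nil

      def open(ch: Char): Unit = ch match {
        case '(' => sepRegions = RPAREN :: sepRegions
        case '[' => sepRegions = RBRACKET :: sepRegions
        case '{' => sepRegions = RBRACE :: sepRegions
        case _   => ()
      }

      def close(ch: Char): Unit = (ch, sepRegions) match {
        case (')', RPAREN :: rest)   => sepRegions = rest
        case (']', RBRACKET :: rest) => sepRegions = rest
        case ('}', RBRACE :: rest)   => sepRegions = rest
        case _                       => ()
      }

      // In this simplified model a newline may separate statements only at
      // the top level or directly inside braces, not inside parentheses or
      // brackets.
      def newlineIsSeparator: Boolean =
        sepRegions.isEmpty || sepRegions.head == RBRACE
    }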
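
The escape-sequence handling mentioned for getLitChar (member 21) can be sketched in isolation as well. The mapping below covers only the standard single-character Scala escapes (octal and Unicode escapes are omitted) and is an illustration, not the scanner's code; in the real scanner an unrecognized escape leads to invalidEscape().

    // Standalone sketch: given the character that follows a backslash,
    // return the character that would be appended to the literal buffer
    // (cbuf in the scanner), or None for an invalid escape.
    object EscapeModel {
      def interpretEscape(ch: Char): Option[Char] = ch match {
        case 'b'  => Some('\b')
        case 't'  => Some('\t')
        case 'n'  => Some('\n')
        case 'f'  => Some('\f')
        case 'r'  => Some('\r')
        case '"'  => Some('"')
        case '\'' => Some('\'')
        case '\\' => Some('\\')
        case _    => None // the real scanner calls invalidEscape() here
      }
    }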

Inherited from AnyRef

Inherited from Any
