Packages

  • package root

    This is the documentation for Parsley.

    Package structure

    The parsley package contains the Parsley class, as well as the Result, Success, and Failure types. In addition to these, it also contains the following packages and "modules" (a module is defined as being an object which mocks a package); a brief illustrative sketch follows this list:

    • parsley.Parsley contains the bulk of the core "function-style" combinators.
    • parsley.combinator contains many helpful combinators that simplify some common parser patterns.
    • parsley.character contains the combinators needed to read characters and strings, as well as combinators to match specific sub-sets of characters.
    • parsley.debug contains debugging combinators, helpful for identifying faults in parsers.
    • parsley.expr contains the following sub modules:
      • parsley.expr.chain contains combinators used in expression parsing
      • parsley.expr.precedence is a builder for expression parsers built on a precedence table.
      • parsley.expr.infix contains combinators used in expression parsing, but with more permissive types than their equivalents in chain.
      • parsley.expr.mixed contains combinators that can be used for expression parsing, but where different fixities may be mixed on the same level: this is rare in practice.
    • parsley.syntax contains several implicits to add syntactic sugar to the combinators. These are sub-categorised into the following sub modules:
      • parsley.syntax.character contains implicits to allow you to use character and string literals as parsers.
      • parsley.syntax.lift enables postfix application of the lift combinator onto a function (or value).
      • parsley.syntax.zipped enables both a reversed form of lift, where the function appears on the right and is applied to a tuple (useful when type inference has failed), and a .zipped method for building tuples out of several combinators.
      • parsley.syntax.extension contains syntactic sugar combinators exposed as implicit classes.
    • parsley.errors contains modules to deal with error messages, their refinement and generation.
    • parsley.lift contains functions which lift functions that work on regular types into ones that combine the results of parsers returning those same types. These are ubiquitous.
    • parsley.ap contains functions which allow for the application of a parser returning a function to several parsers returning each of the argument types.
    • parsley.state contains combinators that interact with the context-sensitive functionality in the form of state.
    • parsley.token contains the Lexer class that provides a host of helpful lexing combinators when provided with the description of a language.
    • parsley.position contains parsers for extracting position information.
    • parsley.generic contains some basic implementations of the Parser Bridge pattern (see Design Patterns for Parser Combinators in Scala, or the parsley wiki): these can be used before more specialised generic bridge traits can be constructed.
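
    As a small illustration of how a few of these modules fit together, here is a hedged sketch (the parser built here is invented for the example): a parser for a digit pair such as "1,2", combining parsley.character parsers with a lifted tupling function from parsley.lift.

        import parsley.Parsley
        import parsley.character.{char, digit}
        import parsley.lift.lift2

        // combine two character parsers into a tuple using lift2
        val digitPair: Parsley[(Char, Char)] =
          lift2[Char, Char, (Char, Char)]((_, _), digit, char(',') ~> digit)
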
    Definition Classes
    root
  • package parsley
    Definition Classes
    root
  • package errors

    This package contains various functionality relating to the generation and formatting of error messages.

    In particular, it includes a collection of combinators for improving error messages within the parser, including labelling and providing additional information. It also contains combinators that can be used to validate data produced by a parser, ensuring it conforms to expected invariants and producing good quality error messages when it does not. Finally, this package contains ways of changing the formatting of error messages: this can either be changing how the default String-based errors are formatted, or injecting Parsley's errors into a custom error object.
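
    For illustration, a hedged sketch of the labelling combinators described above (the parser and messages are invented for the example):

        import parsley.Parsley
        import parsley.character.{digit, stringOfSome}
        import parsley.errors.combinator._

        // label names the expected item in error messages, and explain
        // attaches an additional reason when the parser fails
        val nat: Parsley[Int] =
          stringOfSome(digit).map(_.toInt)
            .label("natural number")
            .explain("numeric literals must be whole numbers")
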

    Definition Classes
    parsley
  • package tokenextractors

    This package contains implementations of token extractors that can be mixed into ErrorBuilder to decide how to extract unexpected tokens from the residual input left over from a parse error.

    These are common strategies, and something here is likely to be what is needed. They are all careful to handle unprintable characters and whitespace in a sensible way, and account for unicode codepoints that are wider than a single 16-bit character.
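
    For illustration, one common way to pick an extractor is to mix it into the default builder; this is a hedged sketch, assuming DefaultErrorBuilder leaves unexpectedToken to the mixed-in extractor:

        import parsley.errors.{DefaultErrorBuilder, ErrorBuilder}
        import parsley.errors.tokenextractors.SingleChar

        // when supplied (implicitly) to parse, unexpected tokens in the
        // String-formatted error messages are single characters
        implicit val singleCharErrors: ErrorBuilder[String] =
          new DefaultErrorBuilder with SingleChar
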

    Definition Classes
    errors
    Since

    4.0.0

  • LexToken
  • MatchParserDemand
  • SingleChar
  • TillNextWhitespace

trait LexToken extends AnyRef

This extractor mixin provides an implementation for ErrorBuilder.unexpectedToken when mixed into an error builder: it will try to parse the residual input to identify a valid lexical token to report.

When parsing a grammar that has a dedicated lexical distinction, it is nice to be able to report problematic tokens relevant to that grammar as opposed to generic input lifted straight from the input stream. The easiest way of doing this would be to have a pre-lexing pass and parse based on tokens, but this is deliberately not how Parsley is designed. Instead, this extractor can try to parse the remaining input to identify a token on demand.

If the lexicalError flag of the unexpectedToken method is not set, which would indicate a problem within a token reported by a classical lexer and not the parser, the extractor will try to parse each of the provided tokens in turn: whichever of these matches the longest input will be reported as the problematic token (this can be changed by overriding selectToken). For best effect, these tokens should not consume whitespace (which would otherwise be included at the end of the token!): this means that, if using the Lexer class, the functionality in nonlexeme should be used. If none of the given tokens can be parsed, the input up to the next valid parsable token (or the end of input) is returned as a Token.Raw.

Currently, if lexicalError is true, this extractor will just return the next character as the problematic item (this may be changed by overriding the extractItem method).
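
For illustration, a hedged sketch of the mixin described above (the token parser is invented for the example, and it assumes the usual pattern of mixing an extractor into DefaultErrorBuilder):

    import parsley.Parsley
    import parsley.character.{letter, stringOfSome}
    import parsley.errors.{DefaultErrorBuilder, ErrorBuilder}
    import parsley.errors.tokenextractors.LexToken

    // an error builder that tries to parse an identifier-shaped token from
    // the residual input and reports it by name, rather than raw characters
    implicit val tokenErrors: ErrorBuilder[String] =
      new DefaultErrorBuilder with LexToken {
        def tokens: Seq[Parsley[String]] =
          Seq(stringOfSome(letter).map(id => s"identifier $id"))
      }

    // this builder is then picked up implicitly by a parser's parse method
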

Self Type
LexToken with ErrorBuilder[_]
Source
LexToken.scala
Since

4.0.0

Linear Supertypes
AnyRef, Any

Abstract Value Members

  1. abstract def tokens: Seq[Parsley[String]]

    The tokens that should be recognised by this extractor: each parser should return the intended name of the token exactly as it should appear in the Named token.

    This should include a whitespace parser for "unexpected whitespace".

    Since

    4.0.0

    Note

    with the exception of the whitespace parser, these tokens should not consume trailing (and certainly not leading) whitespace: if using the parsley.token.Lexer functionality, the nonlexeme versions of the tokens should be used (see the sketch below).
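
    For illustration, inside a builder mixing in LexToken (as sketched for the trait above), a hedged tokens implementation could look like this; the token names are invented for the example:

        import parsley.Parsley
        import parsley.character.{digit, letter, stringOfSome, whitespace}

        // includes a dedicated whitespace parser; none of these parsers
        // consume trailing whitespace
        def tokens: Seq[Parsley[String]] = Seq(
          stringOfSome(letter).map(id => s"identifier $id"),
          stringOfSome(digit).map(n => s"integer $n"),
          stringOfSome(whitespace).map(_ => "whitespace")
        )
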

Concrete Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @native()
  6. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  7. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  8. def extractItem(cs: Iterable[Char], amountOfInputParserWanted: Int): Token

    If the parser failed during the parsing of a token, this function extracts the problematic item from the remaining input.

    The default behaviour mimics SingleChar.

    Since

    4.0.0
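
    For illustration, a hypothetical override (it would sit inside a builder mixing in LexToken, alongside its tokens) that reports the residual input up to the next whitespace instead of a single character:

        import parsley.errors.Token

        // assumes the residual input does not itself begin with whitespace
        override def extractItem(cs: Iterable[Char], amountOfInputParserWanted: Int): Token =
          Token.Raw(cs.takeWhile(!_.isWhitespace).mkString)
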

  9. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable])
  10. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  11. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  12. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  13. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  14. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  15. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  16. def selectToken(matchedToks: List[(String, Int)]): (String, Int)

    If the extractor is successful in identifying tokens that can be parsed from the residual input, this function will select one of them to report back.

    The default behaviour is to take the longest matched token (i.e. the one with the largest paired position). In case of a tie, the first token is chosen: this means that more specific tokens should be placed earlier in the tokens list.

    matchedToks

    the list of tokens successfully parsed, along with the position at the end of that parse (careful: this position starts back at (1, 1), not where the original parser left off!)

    returns

    the chosen token and position pair

    Since

    4.0.0

    Note

    the matchedToks list is guaranteed to be non-empty
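
    For illustration, a hypothetical override matching the signature shown above, preferring the shortest matched token instead of the longest:

        // minBy keeps the first element on a tie, matching the default tie-breaking
        override def selectToken(matchedToks: List[(String, Int)]): (String, Int) =
          matchedToks.minBy(_._2)
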

  17. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  18. def toString(): String
    Definition Classes
    AnyRef → Any
  19. final def unexpectedToken(cs: Iterable[Char], amountOfInputParserWanted: Int, lexicalError: Boolean): Token

  20. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  21. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  22. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
