package descriptions
This package contains the descriptions of various lexical structures to be fed to the Lexer.
- Source
- token-package.scala
- Since
4.0.0
Type Members
- final case class LexicalDesc(nameDesc: NameDesc, symbolDesc: SymbolDesc, numericDesc: NumericDesc, textDesc: TextDesc, spaceDesc: SpaceDesc) extends Product with Serializable
This class describes the aggregation of a bunch of different sub-configurations for lexing a specific language.
- nameDesc
the description of name-like lexemes
- symbolDesc
the description of specific symbolic lexemes
- numericDesc
the description of numeric literals
- textDesc
the description of text literals
- spaceDesc
the description of whitespace
- Since
4.0.0
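In practice, a description is usually built from the plain defaults, overriding only what the language needs. A minimal sketch, assuming parsley 4.x, where each description exposes a plain default in its companion object and character predicates come from parsley.token.predicate:

```scala
import parsley.token.descriptions.{LexicalDesc, NameDesc, SymbolDesc}
import parsley.token.predicate.Basic

// Override only the name and symbol sub-configurations; the numeric, text,
// and space descriptions keep their plain defaults.
val desc: LexicalDesc = LexicalDesc.plain.copy(
  nameDesc = NameDesc.plain.copy(
    identifierStart  = Basic(_.isLetter),
    identifierLetter = Basic(c => c.isLetterOrDigit || c == '_')
  ),
  symbolDesc = SymbolDesc.plain.copy(
    hardKeywords = Set("if", "then", "else")
  )
)
```

Using copy keeps the configuration robust against future additions to the case classes, which is why building on plain is preferred over invoking the constructors directly.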
- final case class NameDesc(identifierStart: CharPredicate, identifierLetter: CharPredicate, operatorStart: CharPredicate, operatorLetter: CharPredicate) extends Product with Serializable
The class describes how name-like things are described lexically.
- identifierStart
what characters may start an identifier?
- identifierLetter
what characters may continue an identifier?
- operatorStart
what characters may start a user-defined operator?
- operatorLetter
what characters may continue a user-defined operator?
- Since
4.0.0
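For instance, an ML-like language might draw identifiers from letters and digits but operators from a symbol set; a sketch building on the plain default (the exact character sets here are illustrative, not prescriptive):

```scala
import parsley.token.descriptions.NameDesc
import parsley.token.predicate.Basic

// Identifiers: start with a letter, continue with letters, digits, or primes.
// Operators: drawn entirely from a fixed set of symbolic characters.
val names: NameDesc = NameDesc.plain.copy(
  identifierStart  = Basic(_.isLetter),
  identifierLetter = Basic(c => c.isLetterOrDigit || c == '\''),
  operatorStart    = Basic(c => "!#$%&*+<>=?@^|~".contains(c)),
  operatorLetter   = Basic(c => "!#$%&*+<>=?@^|~".contains(c))
)
```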
- final case class SpaceDesc(lineCommentStart: String, lineCommentAllowsEOF: Boolean, multiLineCommentStart: String, multiLineCommentEnd: String, multiLineNestedComments: Boolean, space: CharPredicate, whitespaceIsContextDependent: Boolean) extends Product with Serializable
This class describes how whitespace should be handled lexically.
- lineCommentStart
how do single-line comments start? (empty for no single-line comments)
- lineCommentAllowsEOF
can a single-line comment be terminated by the end-of-file, or must it end with a newline?
- multiLineCommentStart
how do multi-line comments start? (empty for no multi-line comments)
- multiLineCommentEnd
how do multi-line comments end? (empty for no multi-line comments)
- multiLineNestedComments
can multi-line comments be nested within each other?
- space
what characters serve as whitespace within the language?
- whitespaceIsContextDependent
can the definition of whitespace change depending on context? (in Python, say, newlines are valid whitespace within parentheses, but are significant outside of them)
- Since
4.0.0
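As a concrete sketch, C-style comments could be described like this, again assuming the plain default as a base:

```scala
import parsley.token.descriptions.SpaceDesc

val spaces: SpaceDesc = SpaceDesc.plain.copy(
  lineCommentStart      = "//",
  lineCommentAllowsEOF  = true,    // a "//" comment may be ended by end-of-file
  multiLineCommentStart = "/*",
  multiLineCommentEnd   = "*/",
  multiLineNestedComments = false  // "/* /* */" closes at the first "*/"
)
```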
- final case class SymbolDesc(hardKeywords: Set[String], hardOperators: Set[String], caseSensitive: Boolean) extends Product with Serializable
This class describes how symbols (textual literals in a BNF) should be processed lexically.
- hardKeywords
what keywords are always treated as keywords within the language.
- hardOperators
what operators are always treated as reserved operators within the language.
- caseSensitive
are the keywords case sensitive? When false, IF == if.
- Since
4.0.0
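A sketch of the reserved words of a small imperative language, built on SymbolDesc.plain:

```scala
import parsley.token.descriptions.SymbolDesc

val symbols: SymbolDesc = SymbolDesc.plain.copy(
  hardKeywords  = Set("while", "if", "else"),
  hardOperators = Set("=", "+", "*"),
  caseSensitive = false  // with this, "WHILE" is also treated as reserved
)
```

Hard keywords and operators are always reserved: an identifier or user-defined operator parser will reject them outright.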
Value Members
- object LexicalDesc extends Serializable
This object contains any preconfigured lexical definitions.
- Since
4.0.0
- object NameDesc extends Serializable
This object contains any preconfigured name definitions.
- Since
4.0.0
- object SpaceDesc extends Serializable
This object contains any default configurations describing whitespace.
- Since
4.0.0
- object SymbolDesc extends Serializable
This object contains any preconfigured symbol descriptions.
- Since
4.0.0
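These descriptions come together when the finished LexicalDesc is handed to a Lexer. A hedged sketch, assuming the parsley 4.x Lexer API with its lexeme and fully members:

```scala
import parsley.token.Lexer
import parsley.token.descriptions.{LexicalDesc, NameDesc}
import parsley.token.predicate.Basic

val lexer = new Lexer(LexicalDesc.plain.copy(
  nameDesc = NameDesc.plain.copy(
    identifierStart  = Basic(_.isLetter),
    identifierLetter = Basic(_.isLetterOrDigit)
  )
))

// lexeme parsers consume trailing whitespace; fully also requires end-of-input
val ident = lexer.lexeme.names.identifier
val prog  = lexer.fully(ident)
```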
This is the documentation for Parsley.
Package structure
The parsley package contains the Parsley class, as well as the Result, Success, and Failure types. In addition to these, it also contains the following packages and "modules" (a module is defined as being an object which mocks a package):
- parsley.Parsley contains the bulk of the core "function-style" combinators.
- parsley.combinator contains many helpful combinators that simplify some common parser patterns.
- parsley.character contains the combinators needed to read characters and strings, as well as combinators to match specific sub-sets of characters.
- parsley.debug contains debugging combinators, helpful for identifying faults in parsers.
- parsley.expr contains the following sub modules:
  - parsley.expr.chain contains combinators used in expression parsing.
  - parsley.expr.precedence is a builder for expression parsers built on a precedence table.
  - parsley.expr.infix contains combinators used in expression parsing, but with more permissive types than their equivalents in chain.
  - parsley.expr.mixed contains combinators that can be used for expression parsing, but where different fixities may be mixed on the same level: this is rare in practice.
- parsley.syntax contains several implicits to add syntactic sugar to the combinators. These are sub-categorised into the following sub modules:
  - parsley.syntax.character contains implicits to allow you to use character and string literals as parsers.
  - parsley.syntax.lift enables postfix application of the lift combinator onto a function (or value).
  - parsley.syntax.zipped enables both a reversed form of lift, where the function appears on the right and is applied to a tuple (useful when type inference has failed), as well as a .zipped method for building tuples out of several combinators.
  - parsley.syntax.extension contains syntactic sugar combinators exposed as implicit classes.
- parsley.errors contains modules to deal with error messages, their refinement and generation.
  - parsley.errors.combinator provides combinators that can be used to either produce more detailed errors or refine existing ones.
  - parsley.errors.tokenextractors provides mixins for common token extraction strategies during error message generation: these can be used to avoid implementing unexpectedToken in the ErrorBuilder.
- parsley.lift contains functions which lift functions that work on regular types to those which combine the results of parsers returning those same types. These are ubiquitous.
- parsley.ap contains functions which allow for the application of a parser returning a function to several parsers returning each of the argument types.
- parsley.state contains combinators that interact with the context-sensitive functionality in the form of state.
- parsley.token contains the Lexer class that provides a host of helpful lexing combinators when provided with the description of a language.
- parsley.position contains parsers for extracting position information.
- parsley.generic contains some basic implementations of the Parser Bridge pattern (see Design Patterns for Parser Combinators in Scala, or the parsley wiki): these can be used before more specialised generic bridge traits can be constructed.
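As a small taste of the core combinators listed above, here is a minimal parser built from parsley.character (the name paren is just illustrative):

```scala
import parsley.Parsley
import parsley.character.{char, digit}

// Parse a single digit wrapped in parentheses, e.g. "(7)", discarding
// the brackets with *> and <* and converting the digit to its Int value.
val paren: Parsley[Int] = char('(') *> digit.map(_.asDigit) <* char(')')
```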