Uses of Interface org.antlr.v4.runtime.Token

Packages that use Token

Package | Description
---|---
org.antlr.v4.runtime |
org.antlr.v4.runtime.tree |
org.antlr.v4.runtime.tree.pattern |
org.antlr.v4.runtime.tree.xpath |
Classes in org.antlr.v4.runtime with type parameters of type Token

Modifier and Type | Class | Description
---|---|---
interface | TokenFactory<Symbol extends Token> | The default mechanism for creating tokens.
class | UnbufferedTokenStream<T extends Token> |
Subinterfaces of Token in org.antlr.v4.runtime

Modifier and Type | Interface | Description
---|---|---
interface | WritableToken |
Classes in org.antlr.v4.runtime that implement Token

Modifier and Type | Class | Description
---|---|---
class | CommonToken |
Fields in org.antlr.v4.runtime declared as Token

Modifier and Type | Field | Description
---|---|---
Token | Lexer._token | The goal of all lexer rules/methods is to create a token object.
protected Token | ListTokenSource.eofToken | This field caches the EOF token for the token source.
protected Token | UnbufferedTokenStream.lastToken | This is the LT(-1) token for the current position.
protected Token | UnbufferedTokenStream.lastTokenBufferStart |
Token | ParserRuleContext.start | For debugging/tracing purposes, we want to track all of the nodes in the ATN traversed by the parser for a particular rule.
Token | ParserRuleContext.stop | For debugging/tracing purposes, we want to track all of the nodes in the ATN traversed by the parser for a particular rule.
protected Token[] | UnbufferedTokenStream.tokens | A moving window buffer of the data being scanned.
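
The public ParserRuleContext.start and ParserRuleContext.stop fields (also exposed through getStart() and getStop()) bracket the tokens a rule invocation consumed. A minimal sketch of reading them; the context and token stream are assumed to come from a generated parser that is not shown here:

```java
import org.antlr.v4.runtime.ParserRuleContext;
import org.antlr.v4.runtime.Token;
import org.antlr.v4.runtime.TokenStream;

final class RuleSpanPrinter {
    // Prints the source line and the exact text covered by one rule invocation.
    // 'ctx' and 'tokens' are assumed to come from a generated parser/lexer pair.
    static void printSpan(ParserRuleContext ctx, TokenStream tokens) {
        Token first = ctx.start;   // same token ctx.getStart() returns
        Token last = ctx.stop;     // same token ctx.getStop() returns; may be null if the rule failed early
        if (first == null || last == null) {
            return;
        }
        System.out.printf("line %d: %s%n", first.getLine(), tokens.getText(first, last));
    }
}
```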
Fields in org.antlr.v4.runtime with type parameters of type Token

Modifier and Type | Field | Description
---|---|---
protected List<? extends Token> | ListTokenSource.tokens | The wrapped collection of Token objects to return.
protected List<Token> | BufferedTokenStream.tokens | A collection of all tokens fetched from the token source.
Methods in org.antlr.v4.runtime that return Token

Modifier and Type | Method | Description
---|---|---
Token | Parser.consume() | Consume and return the current symbol.
Token | Lexer.emit() | The standard method called to automatically emit a token at the outermost lexical rule.
Token | Lexer.emitEOF() |
Token | UnbufferedTokenStream.get(int i) |
Token | TokenStream.get(int index) | Gets the Token at the specified index in the stream.
Token | BufferedTokenStream.get(int i) |
Token | Parser.getCurrentToken() | Match needs to return the current input symbol, which gets put into the label for the associated token ref; e.g., x=ID.
protected Token | DefaultErrorStrategy.getMissingSymbol(Parser recognizer) | Conjure up a missing token during error recovery.
Token | RecognitionException.getOffendingToken() |
Token | ParserRuleContext.getStart() | Get the initial token in this context.
Token | NoViableAltException.getStartToken() |
Token | ParserRuleContext.getStop() | Get the final token in this context.
Token | Lexer.getToken() | Override if emitting multiple tokens.
protected Token | CommonTokenStream.LB(int k) |
protected Token | BufferedTokenStream.LB(int k) |
Token | UnbufferedTokenStream.LT(int i) |
Token | TokenStream.LT(int k) |
Token | CommonTokenStream.LT(int k) |
Token | BufferedTokenStream.LT(int k) |
Token | Parser.match(int ttype) | Match current input symbol against ttype.
Token | Parser.matchWildcard() | Match current input symbol as a wildcard.
Token | TokenSource.nextToken() | Return a Token object from your input stream (usually a CharStream).
Token | ListTokenSource.nextToken() | Return a Token object from your input stream (usually a CharStream).
Token | Lexer.nextToken() | Return a token from this source; i.e., match a token on the char stream.
protected Token | ParserInterpreter.recoverInline() |
Token | DefaultErrorStrategy.recoverInline(Parser recognizer) | This method is called when an unexpected symbol is encountered during an inline match operation, such as Parser.match(int).
Token | BailErrorStrategy.recoverInline(Parser recognizer) | Make sure we don't attempt to recover inline; if the parser successfully recovers, it won't throw an exception.
Token | ANTLRErrorStrategy.recoverInline(Parser recognizer) | This method is called when an unexpected symbol is encountered during an inline match operation, such as Parser.match(int).
protected Token | DefaultErrorStrategy.singleTokenDeletion(Parser recognizer) | This method implements the single-token deletion inline error recovery strategy.
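
TokenSource.nextToken() is the pull interface that Lexer and ListTokenSource implement. The loop below drains a lexer by hand; it uses XPathLexer only because that lexer ships with the runtime, so the sketch runs without a generated grammar:

```java
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.Token;
import org.antlr.v4.runtime.tree.xpath.XPathLexer;

public class NextTokenDemo {
    public static void main(String[] args) {
        XPathLexer lexer = new XPathLexer(CharStreams.fromString("//expr/ID"));
        Token t = lexer.nextToken();
        while (t.getType() != Token.EOF) {   // the source signals end of input with an EOF token
            System.out.printf("type=%d text=%s%n", t.getType(), t.getText());
            t = lexer.nextToken();
        }
    }
}
```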
Methods in org.antlr.v4.runtime that return types with arguments of type Token

Modifier and Type | Method | Description
---|---|---
protected List<Token> | BufferedTokenStream.filterForChannel(int from, int to, int channel) |
List<Token> | BufferedTokenStream.get(int start, int stop) | Get all tokens from start..stop inclusively.
List<? extends Token> | Lexer.getAllTokens() | Return a list of all Token objects in input char stream.
List<Token> | BufferedTokenStream.getHiddenTokensToLeft(int tokenIndex) | Collect all hidden tokens (any off-default channel) to the left of the current token up until we see a token on DEFAULT_TOKEN_CHANNEL.
List<Token> | BufferedTokenStream.getHiddenTokensToLeft(int tokenIndex, int channel) | Collect all tokens on specified channel to the left of the current token up until we see a token on DEFAULT_TOKEN_CHANNEL.
List<Token> | BufferedTokenStream.getHiddenTokensToRight(int tokenIndex) | Collect all hidden tokens (any off-default channel) to the right of the current token up until we see a token on DEFAULT_TOKEN_CHANNEL or EOF.
List<Token> | BufferedTokenStream.getHiddenTokensToRight(int tokenIndex, int channel) | Collect all tokens on specified channel to the right of the current token up until we see a token on DEFAULT_TOKEN_CHANNEL or EOF.
TokenFactory<? extends Token> | Lexer.getTokenFactory() |
List<Token> | BufferedTokenStream.getTokens() |
List<Token> | BufferedTokenStream.getTokens(int start, int stop) |
List<Token> | BufferedTokenStream.getTokens(int start, int stop, int ttype) |
List<Token> | BufferedTokenStream.getTokens(int start, int stop, Set<Integer> types) | Given a start and stop index, return a List of all tokens in the token type BitSet.
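
The getHiddenTokensToLeft/getHiddenTokensToRight family is only useful when the lexer routes whitespace or comments to a non-default channel (for example a grammar rule ending in `-> channel(HIDDEN)`). A small sketch under that assumption; the stream itself is assumed to come from such a generated lexer:

```java
import java.util.List;
import org.antlr.v4.runtime.BufferedTokenStream;
import org.antlr.v4.runtime.Token;

final class CommentFinder {
    // Prints the hidden-channel tokens (comments, whitespace, ...) that sit
    // immediately to the left of the token at 'tokenIndex'.
    static void printHiddenBefore(BufferedTokenStream tokens, int tokenIndex) {
        List<Token> hidden = tokens.getHiddenTokensToLeft(tokenIndex);
        if (hidden == null) {   // the stream returns null, not an empty list, when nothing is hidden there
            return;
        }
        for (Token t : hidden) {
            System.out.println("hidden: " + t.getText());
        }
    }
}
```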
Methods in org.antlr.v4.runtime with parameters of type Token

Modifier and Type | Method | Description
---|---|---
protected void | UnbufferedTokenStream.add(Token t) |
TerminalNode | ParserRuleContext.addChild(Token matchedToken) | Deprecated.
ErrorNode | ParserRuleContext.addErrorNode(Token badToken) | Deprecated.
ErrorNode | Parser.createErrorNode(ParserRuleContext parent, Token t) | How to create an error node, given a token, associated with a parent.
TerminalNode | Parser.createTerminalNode(ParserRuleContext parent, Token t) | How to create a token leaf node associated with a parent.
void | TokenStreamRewriter.delete(String programName, Token from, Token to) |
void | TokenStreamRewriter.delete(Token indexT) |
void | TokenStreamRewriter.delete(Token from, Token to) |
void | Lexer.emit(Token token) | By default does not support multiple emits per nextToken invocation for efficiency reasons.
protected String | DefaultErrorStrategy.getSymbolText(Token symbol) |
protected int | DefaultErrorStrategy.getSymbolType(Token symbol) |
String | UnbufferedTokenStream.getText(Token start, Token stop) |
String | TokenStream.getText(Token start, Token stop) | Return the text of all tokens in this stream between start and stop (inclusive).
String | BufferedTokenStream.getText(Token start, Token stop) |
String | Recognizer.getTokenErrorDisplay(Token t) | Deprecated. This method is not called by the ANTLR 4 Runtime. Specific implementations of ANTLRErrorStrategy may provide a similar feature when necessary. For example, see DefaultErrorStrategy.getTokenErrorDisplay(org.antlr.v4.runtime.Token).
protected String | DefaultErrorStrategy.getTokenErrorDisplay(Token t) | How should a token be displayed in an error message? The default is to display just the text, but during development you might want to have a lot of information spit out.
void | TokenStreamRewriter.insertAfter(String programName, Token t, Object text) |
void | TokenStreamRewriter.insertAfter(Token t, Object text) |
void | TokenStreamRewriter.insertBefore(String programName, Token t, Object text) |
void | TokenStreamRewriter.insertBefore(Token t, Object text) |
void | Parser.notifyErrorListeners(Token offendingToken, String msg, RecognitionException e) |
void | TokenStreamRewriter.replace(String programName, Token from, Token to, Object text) |
void | TokenStreamRewriter.replace(Token indexT, Object text) |
void | TokenStreamRewriter.replace(Token from, Token to, Object text) |
protected void | RecognitionException.setOffendingToken(Token offendingToken) |
void | Lexer.setToken(Token _token) |
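
The TokenStreamRewriter overloads above take Token arguments directly and record edits lazily against a buffered stream. The sketch below uses XPathLexer only so that it runs on the bare runtime; any TokenStream produced by a generated lexer works the same way:

```java
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.Token;
import org.antlr.v4.runtime.TokenStreamRewriter;
import org.antlr.v4.runtime.tree.xpath.XPathLexer;

public class RewriterDemo {
    public static void main(String[] args) {
        XPathLexer lexer = new XPathLexer(CharStreams.fromString("//expr/ID"));
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        tokens.fill();                                  // buffer every token, including EOF

        TokenStreamRewriter rewriter = new TokenStreamRewriter(tokens);
        Token first = tokens.get(0);
        Token last = tokens.get(tokens.size() - 2);     // last real token; index size()-1 is EOF
        rewriter.insertBefore(first, "(");              // edits are recorded lazily...
        rewriter.insertAfter(last, ")");
        System.out.println(rewriter.getText());         // ...and applied here: (//expr/ID)
    }
}
```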
Constructors in org.antlr.v4.runtime with parameters of type Token

Constructor | Description
---|---
CommonToken(Token oldToken) | Constructs a new CommonToken as a copy of another Token.
NoViableAltException(Parser recognizer, TokenStream input, Token startToken, Token offendingToken, ATNConfigSet deadEndConfigs, ParserRuleContext ctx) |
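
CommonToken(Token oldToken) is the usual way to derive a modified token from an existing one, for example inside a custom token factory or error strategy. A self-contained sketch; the token type 1 is an arbitrary stand-in because no grammar is involved:

```java
import org.antlr.v4.runtime.CommonToken;

public class CopyTokenDemo {
    public static void main(String[] args) {
        CommonToken original = new CommonToken(1, "ident"); // arbitrary type, explicit text
        CommonToken copy = new CommonToken(original);       // copies type, text, line, channel, indexes
        copy.setText("renamed");                            // the original token is left untouched
        System.out.println(original.getText() + " -> " + copy.getText()); // ident -> renamed
    }
}
```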
Constructor parameters in org.antlr.v4.runtime with type arguments of type Token

Constructor | Description
---|---
ListTokenSource(List<? extends Token> tokens) | Constructs a new ListTokenSource instance from the specified collection of Token objects.
ListTokenSource(List<? extends Token> tokens, String sourceName) | Constructs a new ListTokenSource instance from the specified collection of Token objects and source name.
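
ListTokenSource lets a fixed, in-memory list of tokens stand in for a lexer; the runtime itself uses it when re-parsing an already tokenized phrase. A runnable sketch with hand-made tokens; the types 1 and 2 are arbitrary, where a real application would use the constants from a generated lexer:

```java
import java.util.Arrays;
import java.util.List;
import org.antlr.v4.runtime.CommonToken;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.ListTokenSource;
import org.antlr.v4.runtime.Token;

public class ListTokenSourceDemo {
    public static void main(String[] args) {
        List<Token> tokens = Arrays.asList(
                new CommonToken(1, "hello"),   // arbitrary token types; a generated lexer
                new CommonToken(2, "world"));  // would supply named constants instead
        ListTokenSource source = new ListTokenSource(tokens, "in-memory");
        CommonTokenStream stream = new CommonTokenStream(source);
        stream.fill();                         // an EOF token is synthesized after the list runs out
        System.out.println(stream.getTokens().size() + " tokens buffered (including EOF)");
    }
}
```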
Fields in org.antlr.v4.runtime.tree declared as Token

Modifier and Type | Field | Description
---|---|---
Token | TerminalNodeImpl.symbol |
Methods in org.antlr.v4.runtime.tree that return Token

Modifier and Type | Method | Description
---|---|---
Token | TerminalNodeImpl.getPayload() |
Token | TerminalNodeImpl.getSymbol() |
Token | TerminalNode.getSymbol() |
Constructors in org.antlr.v4.runtime.tree with parameters of type Token

Constructor | Description
---|---
ErrorNodeImpl(Token token) |
TerminalNodeImpl(Token symbol) |
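
TerminalNode.getSymbol() is how tree-walking code gets back from a leaf node to the Token the parser matched. A sketch of a listener that reports every leaf; the tree it walks is assumed to come from a generated parser that is not shown here:

```java
import org.antlr.v4.runtime.ParserRuleContext;
import org.antlr.v4.runtime.Token;
import org.antlr.v4.runtime.tree.ErrorNode;
import org.antlr.v4.runtime.tree.ParseTreeListener;
import org.antlr.v4.runtime.tree.TerminalNode;

final class LeafTokenListener implements ParseTreeListener {
    @Override
    public void visitTerminal(TerminalNode node) {
        Token t = node.getSymbol();   // the Token wrapped by this leaf
        System.out.printf("%d:%d %s%n", t.getLine(), t.getCharPositionInLine(), t.getText());
    }
    @Override public void visitErrorNode(ErrorNode node) { }
    @Override public void enterEveryRule(ParserRuleContext ctx) { }
    @Override public void exitEveryRule(ParserRuleContext ctx) { }
}
// Usage against some parse tree 'tree' produced elsewhere:
// ParseTreeWalker.DEFAULT.walk(new LeafTokenListener(), tree);
```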
Classes in org.antlr.v4.runtime.tree.pattern that implement Token

Modifier and Type | Class | Description
---|---|---
class | RuleTagToken | A Token object representing an entire subtree matched by a parser rule; e.g., <expr>.
class | TokenTagToken | A Token object representing a token of a particular type; e.g., <ID>.
Methods in org.antlr.v4.runtime.tree.pattern that return types with arguments of type Token

Modifier and Type | Method | Description
---|---|---
List<? extends Token> | ParseTreePatternMatcher.tokenize(String pattern) |
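
RuleTagToken and TokenTagToken are what ParseTreePatternMatcher.tokenize(String) produces for tags such as <expr> and <ID> inside a pattern string; normally they are created indirectly by compiling a pattern. A sketch of the compile-and-match flow, where the lexer, parser, rule index, and tree are assumed to come from some generated grammar:

```java
import org.antlr.v4.runtime.Lexer;
import org.antlr.v4.runtime.Parser;
import org.antlr.v4.runtime.tree.ParseTree;
import org.antlr.v4.runtime.tree.pattern.ParseTreeMatch;
import org.antlr.v4.runtime.tree.pattern.ParseTreePattern;
import org.antlr.v4.runtime.tree.pattern.ParseTreePatternMatcher;

final class PatternMatchSketch {
    // Returns true when 'tree' has the shape described by 'pattern', e.g. "<ID> = <expr>;"
    // parsed as rule 'ruleIndex'. Tags like <ID> and <expr> become TokenTagToken and
    // RuleTagToken instances inside the compiled pattern.
    static boolean matches(Lexer lexer, Parser parser, int ruleIndex, String pattern, ParseTree tree) {
        ParseTreePatternMatcher matcher = new ParseTreePatternMatcher(lexer, parser);
        ParseTreePattern compiled = matcher.compile(pattern, ruleIndex);
        ParseTreeMatch match = compiled.match(tree);
        return match.succeeded();
    }
}
```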
Methods in org.antlr.v4.runtime.tree.xpath that return Token

Modifier and Type | Method | Description
---|---|---
Token | XPathLexer.nextToken() |
Methods in org.antlr.v4.runtime.tree.xpath with parameters of type Token

Modifier and Type | Method | Description
---|---|---
protected XPathElement | XPath.getXPathElement(Token wordToken, boolean anywhere) | Convert word like * or ID or expr to a path element.
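
XPathLexer.nextToken() and XPath.getXPathElement(...) are internals of the XPath.findAll(...) entry point, which is how these classes are normally used. A sketch that collects the tokens matched under a path; the parser, the tree, and the existence of an expr rule and an ID token are assumptions about some generated grammar:

```java
import java.util.Collection;
import org.antlr.v4.runtime.Parser;
import org.antlr.v4.runtime.Token;
import org.antlr.v4.runtime.tree.ParseTree;
import org.antlr.v4.runtime.tree.TerminalNode;
import org.antlr.v4.runtime.tree.xpath.XPath;

final class XPathSketch {
    // Prints every ID token that appears directly under an 'expr' rule node.
    // Both names must exist in the grammar that 'parser' was generated from.
    static void printIdsUnderExpr(Parser parser, ParseTree tree) {
        Collection<ParseTree> hits = XPath.findAll(tree, "//expr/ID", parser);
        for (ParseTree hit : hits) {
            Token t = ((TerminalNode) hit).getSymbol();
            System.out.println(t.getText() + " @ line " + t.getLine());
        }
    }
}
```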
Copyright © 1992–2020 ANTLR. All rights reserved.