Write a Jison parser from scratch (5/10): a brief introduction to how the Jison parser grammar works
2022-07-04 09:28:00 · Hu Zhenghui
- Write a Jison parser from scratch (1/10): Jison, not Json
- Write a Jison parser from scratch (2/10): the right way to learn a parser generator
- Write a Jison parser from scratch (3/10): a good beginning is half the battle (Aristotle, Politics)
- Write a Jison parser from scratch (4/10): Jison parser generator syntax format in detail
- Write a Jison parser from scratch (5/10): a brief introduction to how the Jison parser grammar works
Table of contents
- Write a Jison parser from scratch (5/10): a brief introduction to how the Jison parser grammar works
  - The Jison parser generator code explained in this example
  - How Jison lexical analysis works
  - How Jison semantic analysis works
  - How actions work in Jison semantic analysis
  - The Jison parser code generated in this example
The Jison parser generator code explained in this example
In the previous article we explained the syntax format of the Jison parser generator:
%lex
%%
[^\n\r]+    { return 'LINE'; }
[\n\r]+     { return 'EOL'; }
<<EOF>>     { return 'EOF'; }
/lex
%%
p
    : ll EOF
        { console.log($1); }
    ;
ll
    : ll l
        { $$ = $1 + $2; }
    | l
        { $$ = $1; }
    ;
l
    : LINE EOL
        { $$ = $1 + $2; }
    ;
As mentioned earlier, Jison does not run this code directly; it compiles it into a parser. The generated parser code is listed at the end of this article.
Looking at the parser code directly, you will find it far more complex than the parser generator code, so when learning how the Jison parser works we analyze the Jison parser generator code above. Because the parser code that Jison generates follows the parser generator code strictly, in practice there is rarely any need to read the generated parser code.
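The compile step itself is not shown here, but it is small. Below is a minimal sketch, assuming the grammar above is saved as step_1_line.jison (the step_1_line variable visible in the generated code suggests that name, so treat it as an educated guess) and the jison package from npm is installed; it uses jison's Parser class and its generate() method, which returns the parser source as a string:

```javascript
// A sketch of compiling the grammar above into a parser with the jison npm package.
var fs = require('fs');
var jison = require('jison');

var grammar = fs.readFileSync('step_1_line.jison', 'utf8');  // the parser generator code above
var parserSource = new jison.Parser(grammar).generate();     // generate() returns the parser source text
fs.writeFileSync('step_1_line.js', parserSource);            // roughly the code listed at the end of this article
```

The jison command-line tool can do the same thing (jison step_1_line.jison).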
How Jison lexical analysis works
As shown above, the Jison parser generator code consists of two parts, lexical analysis and semantic analysis. The generated parser works the same way: it performs lexical analysis first and then semantic analysis. Let's look at the lexical analysis part first.
In this example, the lexical analysis code is the part from %lex to /lex:
%lex
%%
[^\n\r]+    { return 'LINE'; }
[\n\r]+     { return 'EOL'; }
<<EOF>>     { return 'EOF'; }
Because this example parses the input line by line and is quite simple, the first few lines of the target data are excerpted here to walk through the process. The excerpted data is:
com.apple.xpc.launchd.domain.system = {
type = system
handle = 0
active count = 648
The first character is c, which matches [^\n\r]+ and does not match [\n\r]+, so the first token is LINE. Token matching is greedy, so this LINE token matches everything up to the end of the line: com.apple.xpc.launchd.domain.system = { . At this point the unparsed content is:

type = system
handle = 0
active count = 648
Note that the first line above is empty. This is not a typo: the LINE regular expression [^\n\r]+ does not match the newline \r at the end of the first line, so that \r is left over, and the first line of the seemingly unmatched content shows up as an empty line.
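To see the greedy match concretely: in the generated parser (listed at the end of this article) the LINE rule appears as the anchored regular expression /^(?:[^\n\r]+)/, which matches as much as it can but stops before the newline. A small illustrative sketch in plain JavaScript, using a shortened two-line sample as input:

```javascript
// Reproduce the first LINE match by hand with the regex jison-lex generates for [^\n\r]+.
var input = 'com.apple.xpc.launchd.domain.system = {\r type = system\r';
var lineRule = /^(?:[^\n\r]+)/;

var m = input.match(lineRule);
console.log(JSON.stringify(m[0]));                      // "com.apple.xpc.launchd.domain.system = {"
console.log(JSON.stringify(input.slice(m[0].length)));  // "\r type = system\r" -- still starts with the newline
```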
The first character of the remaining content is now \r. Comparing it against the lexical analysis code shows that it matches [\n\r]+, so the second token is EOL. The tokens matched so far are:
LINE EOL
The strings these two tokens matched are:
"com.apple.xpc.launchd.domain.system = {" "\r"
Quotation marks and escape sequences are used here so that the line break is visible. The remaining unmatched content is:
type = system
handle = 0
active count = 648
Since the newline \r has now been matched, the first line of the unmatched content is no longer empty; it is the second line of the original data. The rest of the matching proceeds in the same way, and after all four lines are matched the tokens are:
LINE EOL
LINE EOL
LINE EOL
LINE EOL
The spaces and line breaks here are only for readability; the matched tokens contain no spaces or newlines. The strings these tokens matched are:
"com.apple.xpc.launchd.domain.system = {" "\r"
" type = system" "\r"
" handle = 0" "\r"
" active count = 648" "\r"
Again, quotation marks and escape sequences are used so that the line breaks are visible. From this correspondence you can see that no unmatched content is left, but lexical analysis is not finished yet, because there is one more rule:
<<EOF>>     { return 'EOF'; }
The end-of-input position matches this rule and produces the EOF token. The rule also shows that the matching syntax of lexical analysis is based on regular expressions plus corresponding extensions; <<EOF>> here is one of those extensions. The result after lexical analysis completes is:
LINE EOL
LINE EOL
LINE EOL
LINE EOL
EOF
Once more, the spaces and line breaks are only for readability; the matched tokens contain no spaces or newlines. Likewise, the matching result for the complete data is:
LINE EOL
LINE EOL
......
LINE EOL
LINE EOL
EOF
where the LINE EOL token pairs in the middle repeat.
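You can watch this token stream yourself: the generated parser (listed at the end of this article) exposes its lexer, and its terminals_ table maps token numbers back to the names LINE, EOL and EOF. The <<EOF>> extension, incidentally, is compiled down to the anchored regular expression /^(?:$)/, which only matches once the remaining input is empty. A minimal sketch, assuming the generated code was saved as step_1_line.js next to this script:

```javascript
// Feed two sample lines to the generated lexer and print each token name with its matched text.
var parser = require('./step_1_line').parser;
var lexer = parser.lexer;

lexer.setInput('com.apple.xpc.launchd.domain.system = {\r type = system\r');

var token, name;
do {
  token = lexer.lex();                       // a token number, e.g. 7 for LINE, 8 for EOL, 5 for EOF
  name = parser.terminals_[token] || token;  // map the number back to its name
  console.log(name, JSON.stringify(lexer.yytext));
} while (name !== 'EOF');
// Expected output:
//   LINE "com.apple.xpc.launchd.domain.system = {"
//   EOL "\r"
//   LINE " type = system"
//   EOL "\r"
//   EOF ""
```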
How Jison semantic analysis works
After lexical analysis, Jison moves on to semantic analysis. In this example the semantic analysis code is the part after /lex:
/lex
%%
p
    : ll EOF
        { console.log($1); }
    ;
ll
    : ll l
        { $$ = $1 + $2; }
    | l
        { $$ = $1; }
    ;
l
    : LINE EOL
        { $$ = $1 + $2; }
    ;
Because this example parses line by line and is simple, we continue with the lexical analysis result from above:
LINE EOL
LINE EOL
LINE EOL
LINE EOL
EOF
The first two tokens are LINE and EOL; comparing them against the semantic analysis code shows that they match:
l
    : LINE EOL
    ;
The result after this match is:
l
    LINE EOL
LINE EOL
LINE EOL
LINE EOL
EOF
For readability, this article uses indentation to show the matching relationships in semantic analysis.
Having matched an l, comparing against the semantic analysis code shows that it matches the second alternative of the ll rule:
ll
    : ll l
    | l
    ;
The result after this match is:
ll
    l
        LINE EOL
LINE EOL
LINE EOL
LINE EOL
EOF
At this point the first item in the sequence is ll, and semantic analysis has no rule that matches a lone ll; the first two items are ll and LINE, which no rule matches; and the first three items are ll, LINE and EOL, which no rule matches either.
Although semantic analysis does not directly define a rule that matches the first three items ll, LINE and EOL, they can be combined in the following way:
ll
    ll
    l
        LINE EOL
That is, the LINE and EOL on the second line are first combined into an l, which is then combined with the ll on the first line, matching ll l, into a new ll. The result after this match is:
ll
    ll
        l
            LINE EOL
    l
        LINE EOL
The unmatched tokens are:
LINE EOL
LINE EOL
EOF
Because lexical analysis matches tokens one by one, it is easy to assume that semantic analysis likewise matches the tokens against the rules one group at a time. For example, one might think this example is first matched into:
l
l
l
l
EOF
and then matched further. The problem with this understanding is that many combinations then become possible; for example, matching further into:
ll
    l
ll
    l
ll
    l
ll
    l
EOF
leaves nothing that any rule can match. If instead you match from the beginning, the result is:
ll
    ll
        l
            LINE EOL
    l
        LINE EOL
LINE EOL
LINE EOL
EOF
Then another ll and l are combined into a new ll:
ll
    ll
        ll
            l
                LINE EOL
        l
            LINE EOL
    l
        LINE EOL
LINE EOL
EOF
Continue combining ll and l:
ll
    ll
        ll
            ll
                l
                    LINE EOL
            l
                LINE EOL
        l
            LINE EOL
    l
        LINE EOL
EOF
Finally, the ll and the EOF are combined into p by the rule:
p
    : ll EOF
    ;
The result is
p
    ll
        ll
            ll
                ll
                    l
                        LINE EOL
                l
                    LINE EOL
            l
                LINE EOL
        l
            LINE EOL
    EOF
Comparing this with the earlier attempts shows the effect of matching tokens from the beginning.
The chain of ll here is exactly the recursive rule explained in the previous article.
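Another way to see why matching has to proceed from the beginning: the left-recursive rule ll : ll l | l combines results exactly like a left fold over the sequence of l results, absorbing one l at a time from left to right. A small illustrative sketch in plain JavaScript (not Jison), with placeholder strings for the l results:

```javascript
// The left-recursive rule behaves like Array.prototype.reduce without an initial value:
// `ll : l` seeds the accumulator with the first l, `ll : ll l` absorbs each following l.
var ls = ['l1', 'l2', 'l3', 'l4'];   // the four l results, in input order

var ll = ls.reduce(function (acc, l) {
  return acc + l;                    // combine the ll built so far with the next l
});
console.log(ll);                     // "l1l2l3l4"
```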
How actions work in Jison semantic analysis
The explanation of semantic analysis above ignored the actions. In Jison, actions are attached to semantic analysis, so once you understand how semantic analysis works, the actions follow naturally.
To tell them apart, the LINE tokens in the lexical analysis result are numbered:
LINE1 EOL
LINE2 EOL
LINE3 EOL
LINE4 EOL
EOF
The matching result of semantic analysis is numbered in the same way:
p
    ll4
        ll3
            ll2
                ll1
                    l1
                        LINE1 EOL
                l2
                    LINE2 EOL
            l3
                LINE3 EOL
        l4
            LINE4 EOL
    EOF
The action of the grammar rule l is:
l
    : LINE EOL
        { $$ = $1 + $2; }
    ;
From this we get:
l1 = LINE1 + EOL
l2 = LINE2 + EOL
l3 = LINE3 + EOL
l4 = LINE4 + EOL
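In the generated parser (listed at the end of this article) these actions run against a value stack called vstack: the values for $1 and $2 sit on top of the stack, the action computes $$, and the result replaces them. A minimal sketch of one reduction of l : LINE EOL, with placeholder strings standing in for the matched text:

```javascript
// One reduction step of `l : LINE EOL { $$ = $1 + $2; }`, spelled out by hand.
var vstack = [null, 'LINE1', 'EOL'];        // value stack just before the reduction
var len = 2;                                // the right-hand side LINE EOL has two symbols

var $1 = vstack[vstack.length - len];       // "LINE1"
var $2 = vstack[vstack.length - 1];         // "EOL"
var $$ = $1 + $2;                           // the action body
vstack = vstack.slice(0, -len).concat($$);  // pop LINE and EOL, push the value of l1
console.log(vstack);                        // [ null, 'LINE1EOL' ]
```

The reductions for ll and p below work the same way, each with its own action body.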
The grammar rule ll has two alternatives, each with its own action:
ll
    : ll l
        { $$ = $1 + $2; }
    | l
        { $$ = $1; }
    ;
Whichever alternative a match uses, the corresponding action is executed. For example, ll1 matches the alternative l and executes { $$ = $1; }, that is:
ll1 = l1
    = LINE1 + EOL
and ll2 matches the alternative that combines ll and l, executing { $$ = $1 + $2; }, that is:
ll2 = ll1 + l2
    = LINE1 + EOL + LINE2 + EOL
And so on
ll3 = ll2 + l3
    = LINE1 + EOL + LINE2 + EOL + LINE3 + EOL
ll4 = ll3 + l4
    = LINE1 + EOL + LINE2 + EOL + LINE3 + EOL + LINE4 + EOL
Finally, p is matched:
p
    : ll EOF
        { console.log($1); }
    ;
The action { console.log($1); } no longer assigns a value; it is equivalent to:
console.log(ll4);
which expands to:
console.log(LINE1 + EOL + LINE2 + EOL + LINE3 + EOL + LINE4 + EOL);
The output is exactly the text that lexical analysis started from; in other words, this example splits the input into lines and then reassembles them into the original content.
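Putting it all together, running the generated parser over a file reproduces exactly this behavior. A minimal sketch, assuming the generated code was saved as step_1_line.js and the input is in a file named launchd.txt (both names are only examples):

```javascript
// Read the input file, run the generated parser, and let the p rule's action print the result.
var fs = require('fs');
var step1 = require('./step_1_line');      // the generated module exports parse()

var source = fs.readFileSync('launchd.txt', 'utf8');
step1.parse(source);                       // p : ll EOF { console.log($1); } prints the reassembled text
```

The generated file can also be run directly, for example node step_1_line.js launchd.txt, because the CommonJS wrapper at the end of the generated code defines an exports.main that reads the file named on the command line and parses it.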
The Jison parser code generated in this example
/* parser generated by jison 0.4.18 */
/*
Returns a Parser object of the following structure:
Parser: {
yy: {}
}
Parser.prototype: {
yy: {},
trace: function(),
symbols_: {associative list: name ==> number},
terminals_: {associative list: number ==> name},
productions_: [...],
performAction: function anonymous(yytext, yyleng, yylineno, yy, yystate, $$, _$),
table: [...],
defaultActions: {...},
parseError: function(str, hash),
parse: function(input),
lexer: {
EOF: 1,
parseError: function(str, hash),
setInput: function(input),
input: function(),
unput: function(str),
more: function(),
less: function(n),
pastInput: function(),
upcomingInput: function(),
showPosition: function(),
test_match: function(regex_match_array, rule_index),
next: function(),
lex: function(),
begin: function(condition),
popState: function(),
_currentRules: function(),
topState: function(),
pushState: function(condition),
options: {
ranges: boolean (optional: true ==> token location info will include a .range[] member)
flex: boolean (optional: true ==> flex-like lexing behaviour where the rules are tested exhaustively to find the longest match)
backtrack_lexer: boolean (optional: true ==> lexer regexes are tested in order and for each matching regex the action code is invoked; the lexer terminates the scan when a token is returned by the action code)
},
performAction: function(yy, yy_, $avoiding_name_collisions, YY_START),
rules: [...],
conditions: {associative list: name ==> set},
}
}
token location info (@$, _$, etc.): {
first_line: n,
last_line: n,
first_column: n,
last_column: n,
range: [start_number, end_number] (where the numbers are indexes into the input string, regular zero-based)
}
the parseError function receives a 'hash' object with these members for lexer and parser errors: {
text: (matched text)
token: (the produced terminal token, if any)
line: (yylineno)
}
while parser (grammar) errors will also provide these members, i.e. parser errors deliver a superset of attributes: {
loc: (yylloc)
expected: (string describing the set of expected tokens)
recoverable: (boolean: TRUE when the parser has a error recovery rule available for this particular error)
}
*/
var step_1_line = (function(){
var o=function(k,v,o,l){for(o=o||{},l=k.length;l--;o[k[l]]=v);return o},$V0=[1,4],$V1=[5,7];
var parser = {trace: function trace () { },
yy: {},
symbols_: {"error":2,"p":3,"ll":4,"EOF":5,"l":6,"LINE":7,"EOL":8,"$accept":0,"$end":1},
terminals_: {2:"error",5:"EOF",7:"LINE",8:"EOL"},
productions_: [0,[3,2],[4,2],[4,1],[6,2]],
performAction: function anonymous(yytext, yyleng, yylineno, yy, yystate /* action[1] */, $$ /* vstack */, _$ /* lstack */) {
/* this == yyval */
var $0 = $$.length - 1;
switch (yystate) {
case 1:
console.log($$[$0-1]);
break;
case 2: case 4:
this.$ = $$[$0-1] + $$[$0];
break;
case 3:
this.$ = $$[$0];
break;
}
},
table: [{3:1,4:2,6:3,7:$V0},{1:[3]},{5:[1,5],6:6,7:$V0},o($V1,[2,3]),{8:[1,7]},{1:[2,1]},o($V1,[2,2]),o($V1,[2,4])],
defaultActions: {5:[2,1]},
parseError: function parseError (str, hash) {
if (hash.recoverable) {
this.trace(str);
} else {
var error = new Error(str);
error.hash = hash;
throw error;
}
},
parse: function parse(input) {
var self = this, stack = [0], tstack = [], vstack = [null], lstack = [], table = this.table, yytext = '', yylineno = 0, yyleng = 0, recovering = 0, TERROR = 2, EOF = 1;
var args = lstack.slice.call(arguments, 1);
var lexer = Object.create(this.lexer);
var sharedState = { yy: {} };
for (var k in this.yy) {
if (Object.prototype.hasOwnProperty.call(this.yy, k)) {
sharedState.yy[k] = this.yy[k];
}
}
lexer.setInput(input, sharedState.yy);
sharedState.yy.lexer = lexer;
sharedState.yy.parser = this;
if (typeof lexer.yylloc == 'undefined') {
lexer.yylloc = {};
}
var yyloc = lexer.yylloc;
lstack.push(yyloc);
var ranges = lexer.options && lexer.options.ranges;
if (typeof sharedState.yy.parseError === 'function') {
this.parseError = sharedState.yy.parseError;
} else {
this.parseError = Object.getPrototypeOf(this).parseError;
}
function popStack(n) {
stack.length = stack.length - 2 * n;
vstack.length = vstack.length - n;
lstack.length = lstack.length - n;
}
_token_stack:
var lex = function () {
var token;
token = lexer.lex() || EOF;
if (typeof token !== 'number') {
token = self.symbols_[token] || token;
}
return token;
};
var symbol, preErrorSymbol, state, action, a, r, yyval = {}, p, len, newState, expected;
while (true) {
state = stack[stack.length - 1];
if (this.defaultActions[state]) {
action = this.defaultActions[state];
} else {
if (symbol === null || typeof symbol == 'undefined') {
symbol = lex();
}
action = table[state] && table[state][symbol];
}
if (typeof action === 'undefined' || !action.length || !action[0]) {
var errStr = '';
expected = [];
for (p in table[state]) {
if (this.terminals_[p] && p > TERROR) {
expected.push('\'' + this.terminals_[p] + '\'');
}
}
if (lexer.showPosition) {
errStr = 'Parse error on line ' + (yylineno + 1) + ':\n' + lexer.showPosition() + '\nExpecting ' + expected.join(', ') + ', got \'' + (this.terminals_[symbol] || symbol) + '\'';
} else {
errStr = 'Parse error on line ' + (yylineno + 1) + ': Unexpected ' + (symbol == EOF ? 'end of input' : '\'' + (this.terminals_[symbol] || symbol) + '\'');
}
this.parseError(errStr, {
text: lexer.match,
token: this.terminals_[symbol] || symbol,
line: lexer.yylineno,
loc: yyloc,
expected: expected
});
}
if (action[0] instanceof Array && action.length > 1) {
throw new Error('Parse Error: multiple actions possible at state: ' + state + ', token: ' + symbol);
}
switch (action[0]) {
case 1:
stack.push(symbol);
vstack.push(lexer.yytext);
lstack.push(lexer.yylloc);
stack.push(action[1]);
symbol = null;
if (!preErrorSymbol) {
yyleng = lexer.yyleng;
yytext = lexer.yytext;
yylineno = lexer.yylineno;
yyloc = lexer.yylloc;
if (recovering > 0) {
recovering--;
}
} else {
symbol = preErrorSymbol;
preErrorSymbol = null;
}
break;
case 2:
len = this.productions_[action[1]][1];
yyval.$ = vstack[vstack.length - len];
yyval._$ = {
first_line: lstack[lstack.length - (len || 1)].first_line,
last_line: lstack[lstack.length - 1].last_line,
first_column: lstack[lstack.length - (len || 1)].first_column,
last_column: lstack[lstack.length - 1].last_column
};
if (ranges) {
yyval._$.range = [
lstack[lstack.length - (len || 1)].range[0],
lstack[lstack.length - 1].range[1]
];
}
r = this.performAction.apply(yyval, [
yytext,
yyleng,
yylineno,
sharedState.yy,
action[1],
vstack,
lstack
].concat(args));
if (typeof r !== 'undefined') {
return r;
}
if (len) {
stack = stack.slice(0, -1 * len * 2);
vstack = vstack.slice(0, -1 * len);
lstack = lstack.slice(0, -1 * len);
}
stack.push(this.productions_[action[1]][0]);
vstack.push(yyval.$);
lstack.push(yyval._$);
newState = table[stack[stack.length - 2]][stack[stack.length - 1]];
stack.push(newState);
break;
case 3:
return true;
}
}
return true;
}};
/* generated by jison-lex 0.3.4 */
var lexer = (function(){
var lexer = ({
EOF:1,
parseError:function parseError(str, hash) {
if (this.yy.parser) {
this.yy.parser.parseError(str, hash);
} else {
throw new Error(str);
}
},
// resets the lexer, sets new input
setInput:function (input, yy) {
this.yy = yy || this.yy || {};
this._input = input;
this._more = this._backtrack = this.done = false;
this.yylineno = this.yyleng = 0;
this.yytext = this.matched = this.match = '';
this.conditionStack = ['INITIAL'];
this.yylloc = {
first_line: 1,
first_column: 0,
last_line: 1,
last_column: 0
};
if (this.options.ranges) {
this.yylloc.range = [0,0];
}
this.offset = 0;
return this;
},
// consumes and returns one char from the input
input:function () {
var ch = this._input[0];
this.yytext += ch;
this.yyleng++;
this.offset++;
this.match += ch;
this.matched += ch;
var lines = ch.match(/(?:\r\n?|\n).*/g);
if (lines) {
this.yylineno++;
this.yylloc.last_line++;
} else {
this.yylloc.last_column++;
}
if (this.options.ranges) {
this.yylloc.range[1]++;
}
this._input = this._input.slice(1);
return ch;
},
// unshifts one char (or a string) into the input
unput:function (ch) {
var len = ch.length;
var lines = ch.split(/(?:\r\n?|\n)/g);
this._input = ch + this._input;
this.yytext = this.yytext.substr(0, this.yytext.length - len);
//this.yyleng -= len;
this.offset -= len;
var oldLines = this.match.split(/(?:\r\n?|\n)/g);
this.match = this.match.substr(0, this.match.length - 1);
this.matched = this.matched.substr(0, this.matched.length - 1);
if (lines.length - 1) {
this.yylineno -= lines.length - 1;
}
var r = this.yylloc.range;
this.yylloc = {
first_line: this.yylloc.first_line,
last_line: this.yylineno + 1,
first_column: this.yylloc.first_column,
last_column: lines ?
(lines.length === oldLines.length ? this.yylloc.first_column : 0)
+ oldLines[oldLines.length - lines.length].length - lines[0].length :
this.yylloc.first_column - len
};
if (this.options.ranges) {
this.yylloc.range = [r[0], r[0] + this.yyleng - len];
}
this.yyleng = this.yytext.length;
return this;
},
// When called from action, caches matched text and appends it on next action
more:function () {
this._more = true;
return this;
},
// When called from action, signals the lexer that this rule fails to match the input, so the next matching rule (regex) should be tested instead.
reject:function () {
if (this.options.backtrack_lexer) {
this._backtrack = true;
} else {
return this.parseError('Lexical error on line ' + (this.yylineno + 1) + '. You can only invoke reject() in the lexer when the lexer is of the backtracking persuasion (options.backtrack_lexer = true).\n' + this.showPosition(), {
text: "",
token: null,
line: this.yylineno
});
}
return this;
},
// retain first n characters of the match
less:function (n) {
this.unput(this.match.slice(n));
},
// displays already matched input, i.e. for error messages
pastInput:function () {
var past = this.matched.substr(0, this.matched.length - this.match.length);
return (past.length > 20 ? '...':'') + past.substr(-20).replace(/\n/g, "");
},
// displays upcoming input, i.e. for error messages
upcomingInput:function () {
var next = this.match;
if (next.length < 20) {
next += this._input.substr(0, 20-next.length);
}
return (next.substr(0,20) + (next.length > 20 ? '...' : '')).replace(/\n/g, "");
},
// displays the character position where the lexing error occurred, i.e. for error messages
showPosition:function () {
var pre = this.pastInput();
var c = new Array(pre.length + 1).join("-");
return pre + this.upcomingInput() + "\n" + c + "^";
},
// test the lexed token: return FALSE when not a match, otherwise return token
test_match:function(match, indexed_rule) {
var token,
lines,
backup;
if (this.options.backtrack_lexer) {
// save context
backup = {
yylineno: this.yylineno,
yylloc: {
first_line: this.yylloc.first_line,
last_line: this.last_line,
first_column: this.yylloc.first_column,
last_column: this.yylloc.last_column
},
yytext: this.yytext,
match: this.match,
matches: this.matches,
matched: this.matched,
yyleng: this.yyleng,
offset: this.offset,
_more: this._more,
_input: this._input,
yy: this.yy,
conditionStack: this.conditionStack.slice(0),
done: this.done
};
if (this.options.ranges) {
backup.yylloc.range = this.yylloc.range.slice(0);
}
}
lines = match[0].match(/(?:\r\n?|\n).*/g);
if (lines) {
this.yylineno += lines.length;
}
this.yylloc = {
first_line: this.yylloc.last_line,
last_line: this.yylineno + 1,
first_column: this.yylloc.last_column,
last_column: lines ?
lines[lines.length - 1].length - lines[lines.length - 1].match(/\r?\n?/)[0].length :
this.yylloc.last_column + match[0].length
};
this.yytext += match[0];
this.match += match[0];
this.matches = match;
this.yyleng = this.yytext.length;
if (this.options.ranges) {
this.yylloc.range = [this.offset, this.offset += this.yyleng];
}
this._more = false;
this._backtrack = false;
this._input = this._input.slice(match[0].length);
this.matched += match[0];
token = this.performAction.call(this, this.yy, this, indexed_rule, this.conditionStack[this.conditionStack.length - 1]);
if (this.done && this._input) {
this.done = false;
}
if (token) {
return token;
} else if (this._backtrack) {
// recover context
for (var k in backup) {
this[k] = backup[k];
}
return false; // rule action called reject() implying the next rule should be tested instead.
}
return false;
},
// return next match in input
next:function () {
if (this.done) {
return this.EOF;
}
if (!this._input) {
this.done = true;
}
var token,
match,
tempMatch,
index;
if (!this._more) {
this.yytext = '';
this.match = '';
}
var rules = this._currentRules();
for (var i = 0; i < rules.length; i++) {
tempMatch = this._input.match(this.rules[rules[i]]);
if (tempMatch && (!match || tempMatch[0].length > match[0].length)) {
match = tempMatch;
index = i;
if (this.options.backtrack_lexer) {
token = this.test_match(tempMatch, rules[i]);
if (token !== false) {
return token;
} else if (this._backtrack) {
match = false;
continue; // rule action called reject() implying a rule MISmatch.
} else {
// else: this is a lexer rule which consumes input without producing a token (e.g. whitespace)
return false;
}
} else if (!this.options.flex) {
break;
}
}
}
if (match) {
token = this.test_match(match, rules[index]);
if (token !== false) {
return token;
}
// else: this is a lexer rule which consumes input without producing a token (e.g. whitespace)
return false;
}
if (this._input === "") {
return this.EOF;
} else {
return this.parseError('Lexical error on line ' + (this.yylineno + 1) + '. Unrecognized text.\n' + this.showPosition(), {
text: "",
token: null,
line: this.yylineno
});
}
},
// return next match that has a token
lex:function lex () {
var r = this.next();
if (r) {
return r;
} else {
return this.lex();
}
},
// activates a new lexer condition state (pushes the new lexer condition state onto the condition stack)
begin:function begin (condition) {
this.conditionStack.push(condition);
},
// pop the previously active lexer condition state off the condition stack
popState:function popState () {
var n = this.conditionStack.length - 1;
if (n > 0) {
return this.conditionStack.pop();
} else {
return this.conditionStack[0];
}
},
// produce the lexer rule set which is active for the currently active lexer condition state
_currentRules:function _currentRules () {
if (this.conditionStack.length && this.conditionStack[this.conditionStack.length - 1]) {
return this.conditions[this.conditionStack[this.conditionStack.length - 1]].rules;
} else {
return this.conditions["INITIAL"].rules;
}
},
// return the currently active lexer condition state; when an index argument is provided it produces the N-th previous condition state, if available
topState:function topState (n) {
n = this.conditionStack.length - 1 - Math.abs(n || 0);
if (n >= 0) {
return this.conditionStack[n];
} else {
return "INITIAL";
}
},
// alias for begin(condition)
pushState:function pushState (condition) {
this.begin(condition);
},
// return the number of states currently on the stack
stateStackSize:function stateStackSize() {
return this.conditionStack.length;
},
options: {},
performAction: function anonymous(yy,yy_,$avoiding_name_collisions,YY_START) {
var YYSTATE=YY_START;
switch($avoiding_name_collisions) {
case 0: return 7;
break;
case 1: return 8;
break;
case 2: return 5;
break;
}
},
rules: [/^(?:[^\n\r]+)/,/^(?:{eola})/,/^(?:$)/],
conditions: {"INITIAL":{"rules":[0,1,2],"inclusive":true}}
});
return lexer;
})();
parser.lexer = lexer;
function Parser () {
this.yy = {};
}
Parser.prototype = parser;parser.Parser = Parser;
return new Parser;
})();
if (typeof require !== 'undefined' && typeof exports !== 'undefined') {
exports.parser = step_1_line;
exports.Parser = step_1_line.Parser;
exports.parse = function () { return step_1_line.parse.apply(step_1_line, arguments); };
exports.main = function commonjsMain (args) {
if (!args[1]) {
console.log('Usage: '+args[0]+' FILE');
process.exit(1);
}
var source = require('fs').readFileSync(require('path').normalize(args[1]), "utf8");
return exports.parser.parse(source);
};
if (typeof module !== 'undefined' && require.main === module) {
exports.main(process.argv.slice(1));
}
}