
Design and Implementation of Compiler

Published by Willington Island, 2021-08-15 04:05:39

Description: This well-organized text presents the design techniques of a compiler in a simple and straightforward manner. It describes the complete development of the various phases of a compiler, with their implementation in C, in order to give an understanding of their application. Primarily designed as a text for undergraduate students of Computer Science and Information Technology and postgraduate students of MCA.

Key Features:

Chapter 1 covers all formal languages with their properties.
Expanded treatment of parsing offers an enhanced perspective of parsers, with more examples in each section to strengthen students' grasp of the content discussed.
Code optimization covers various special techniques in detail.
A compiler project is included to provide a realistic simulation of a compiler.
Additional examples in C provide better illustration of the relevant sections.


130 Design and Implementation of Compiler

      temp += term();
      break;
    case '-':
      match('-');
      temp -= term();
      break;
  }
  return temp;
}

int term(void)
{ int temp = factor();
  while (token == '*')
  { match('*');
    temp *= factor();
  }
  return temp;
}

int factor(void)
{ int temp;
  if (token == '(')
  { match('(');
    temp = exp();
    match(')');
  }
  else if (isdigit(token))
  { ungetc(token, stdin);
    scanf("%d", &temp);
    token = getchar();
  }
  else error();
  return temp;
}

int main(void)
{ int result;
  token = getchar();
  result = exp();
  if (token == '\n')
    printf("result = %d\n", result);
  else error();
  return 0;
}

Syntax Analysis and Directed Translation 131

4.7.2.2 LL parser

An LL parser is a top-down parser for a subset of the context-free grammars. It parses the input from left to right and constructs a leftmost derivation of the sentence. The class of grammars parsable in this way is known as the LL grammars. An LL parser is called an LL(k) parser if it uses k tokens of look-ahead when parsing a sentence. If such a parser exists for a certain grammar, and it can parse sentences of this grammar without backtracking, the grammar is called an LL(k) grammar. Of these grammars, the LL(1) grammars, although fairly restrictive, are very popular, because the corresponding LL parsers need to look only at the next token to make their parsing decisions.

A predictive parser is a recursive descent parser with no backup or backtracking. Predictive parsing is possible only for the class of LL(k) grammars: the context-free grammars for which there exists some positive integer k that allows a recursive descent parser to decide which production to use by examining only the next k tokens of input. (The LL(k) grammars therefore exclude all ambiguous grammars, as well as all grammars that contain left recursion. Any context-free grammar can be transformed into an equivalent grammar that has no left recursion, but removal of left recursion does not always yield an LL(k) grammar.) A predictive parser runs in linear time. The parser has an input buffer, a stack on which it keeps symbols from the grammar, and a parsing table which tells it what grammar rule to use given the symbol on top of its stack and the next input token.
To explain its workings we will use the following small grammar:

(1) S → F
(2) S → ( S + F )
(3) F → 1

Figure 4.6: An LL parser, with its input buffer ("1 + 1 $"), stack, parsing table and output stream.

The parsing table for this grammar looks as follows:

        (    )    1    +    $
  S     2    –    1    –    –
  F     –    –    3    –    –

(Note that there is a special terminal, represented here as $, that is used to indicate the end of the input stream and the bottom of the stack.) Depending on the top-most symbol on the stack and the current symbol in the input stream, the parser applies the rule stated in the matching row and column of the parsing table (e.g., if there is an 'S' on the top of the parser stack and a '1' in the front-most position of the input stream, the parser executes rule number 1, i.e., it replaces the 'S' on its stack by 'F').

When the parser starts, it always begins with [ S, $ ] on its stack, where S is the start symbol of the grammar. The parser will attempt to rewrite the contents of this stack to what it sees on the input stream; however, it keeps on the stack only what still needs to be rewritten. For example, let's assume that the input is "( 1 + 1 )". When the parser reads the first "(" it knows that it has to rewrite S to "( S + F )" and writes the number of this rule to the output. The stack then becomes [ (, S, +, F, ), $ ]. In the next step it removes the '(' from its input stream and from its stack, leaving [ S, +, F, ), $ ]. Now the parser sees a '1' on its input stream, so it knows that it has to apply rule (1) and then rule (3) from the grammar and write their numbers to the output stream. This results in the stacks [ F, +, F, ), $ ] and [ 1, +, F, ), $ ]. In the next two steps the parser reads the '1' and '+' from the input stream and removes them from the stack, resulting in [ F, ), $ ]. In the next three steps the 'F' on the stack is replaced by '1', the number 3 is written to the output stream, and then the '1' and ')' are removed from the stack and the input stream. The parser thus ends with '$' both on its stack and on its input stream. In this case it reports that it has accepted the input string, and on the output stream it has written the list of numbers [2, 1, 3, 3], which is indeed a leftmost derivation of the input string (the derivation goes: S → ( S + F ) → ( F + F ) → ( 1 + F ) → ( 1 + 1 )).

As can be seen from the example, the parser performs three types of steps depending on whether the top of the stack is a nonterminal, a terminal or the special symbol $:

• If the top is a nonterminal then it looks up in the parsing table, on the basis of this nonterminal and the symbol on the input stream, which rule of the grammar it should use to replace it on the stack.
The number of the rule is written to the output stream. If the parsing table indicates that there is no such rule then it reports an error and stops.

• If the top is a terminal then it compares it to the symbol on the input stream; if they are equal, both are removed. If they are not equal the parser reports an error and stops.

• If the top is $ and there is also a $ on the input stream then the parser reports that it has successfully parsed the input; otherwise it reports an error. In both cases the parser will stop.

These steps are repeated until the parser stops; it will then either have completely parsed the input and written a leftmost derivation to the output stream, or it will have reported an error.

Constructing an LL(1) parsing table

In order to fill the parsing table, we have to establish which grammar rule the parser should choose if it sees a nonterminal A on the top of its stack and a symbol a on its input stream. It is easy to see that such a rule should be of the form A → w and that the language corresponding to w should have at least one string starting with a. For this purpose we define the FIRST set and FOLLOW set of w.

FIRST(w) is defined as the set of terminals that can be found at the start of any string derivable from w, plus ε if the empty string can also be derived from w. Given a grammar with the rules A1 → w1, ..., An → wn, we can compute FIRST(wi) and FIRST(Ai) for every rule as follows:

1. initialize every FIRST(wi) and FIRST(Ai) with the empty set { };
2. determine FIRST(wi) for every rule Ai → wi:
   • if wi begins with a terminal a, then FIRST(wi) = { a };
   • if wi is ε, then FIRST(wi) = { ε };
   • if wi = z1 z2 ... zk begins with a nonterminal z1, then add FIRST(z1) − {ε} to FIRST(wi); as long as z1, ..., zi all derive ε, also add FIRST(zi+1) − {ε}; and if every zj derives ε, add ε as well;
3. add FIRST(wi) to FIRST(Ai) for every rule Ai → wi;
4. repeat steps 2 and 3 until all FIRST sets stay the same.

Algorithm FIRST(A)

for all nonterminals A do FIRST(A) = { }
while there are changes to any FIRST(A) do
  for each production choice A → X1 X2 ... Xn do
    k = 1; continue = true
    while continue = true and k <= n do
      add FIRST(Xk) − {ε} to FIRST(A)
      if ε is not in FIRST(Xk) then continue = false
      k = k + 1
    if continue = true then add ε to FIRST(A)

Example 4.22: a simple grammar:

1. S → Aa | b
2. A → bdZ | eZ
3. Z → cZ | adZ | ε

• FIRST(S) = FIRST(A) ∪ FIRST(b) = {b, e} (by rules 1 and 2)
• FIRST(A) = FIRST(b) ∪ FIRST(e) = {b, e}
• FIRST(Z) = {a, c, ε} (by rule 3)

Example 4.23: Find the FIRST and FOLLOW sets for the following grammar:

S → ACB | CbB | Ba
A → da | BC
B → g | ε
C → h | ε

First sets

FIRST(S) = FIRST(ACB) ∪ FIRST(CbB) ∪ FIRST(Ba)   (1)
FIRST(A) = FIRST(da) ∪ FIRST(BC)   (2)
FIRST(B) = FIRST(g) ∪ FIRST(ε)   (3)
FIRST(C) = FIRST(h) ∪ FIRST(ε)   (4)

So FIRST(B) = {g, ε}   (5) (from 3)
FIRST(C) = {h, ε}   (6) (from 4)
FIRST(BC) = [FIRST(B) − {ε}] ∪ FIRST(C) = [{g, ε} − {ε}] ∪ {h, ε} = {g} ∪ {h, ε} = {g, h, ε}   (7)
So FIRST(A) = {d} ∪ FIRST(BC) = {d, g, h, ε}   (8) (from 2 and 7)
FIRST(ACB) = [FIRST(A) − {ε}] ∪ [FIRST(C) − {ε}] ∪ FIRST(B) = {d, g, h} ∪ {h} ∪ {g, ε} = {d, g, h, ε}

FIRST(CbB) = [FIRST(C) − {ε}] ∪ FIRST(b) = {h} ∪ {b} = {h, b}   (9)
(since b is a terminal, the symbols after it do not contribute)
FIRST(Ba) = [FIRST(B) − {ε}] ∪ FIRST(a) = {g, a}   (10)

So FIRST(S) = {a, b, d, g, h, ε} (from 1, 9 and 10, together with FIRST(ACB))

FOLLOW(w): written as FOLLOW(w) here, defined as the set of terminals a such that there is a string of symbols αwaβ that can be derived from the start symbol. The FOLLOW sets for the nonterminals in a grammar can be computed as follows:

1. initialize FOLLOW of the start symbol with {$}, and every other FOLLOW(Ai) with the empty set;
2. if there is a rule of the form Aj → w Ai w′, then
   • if the terminal a is in FIRST(w′), then add a to FOLLOW(Ai);
   • if ε is in FIRST(w′), or the rule has the form Aj → w Ai, then add FOLLOW(Aj) to FOLLOW(Ai);
3. repeat step 2 until all FOLLOW sets stay the same.

Algorithm FOLLOW(A)

FOLLOW(start_symbol) = {$}
for all nonterminals A ≠ start_symbol do FOLLOW(A) = { }
while there are changes to any FOLLOW set do
  for each production A → X1 X2 ... Xn do
    for each Xi that is a nonterminal do
      add FIRST(Xi+1 Xi+2 ... Xn) − {ε} to FOLLOW(Xi)
      if ε is in FIRST(Xi+1 Xi+2 ... Xn) then
        add FOLLOW(A) to FOLLOW(Xi)

Example 4.24: another simple grammar:

• S → Aa | b
• A → bdZ | eZ
• Z → cZ | adZ | ε

FOLLOW(S) = {$}
FOLLOW(A) = {a}
FOLLOW(Z) = FOLLOW(A) = {a}

The use of FIRST() and FOLLOW(): suppose we want to expand S in this grammar:

• S → A ... | B ...
• A → a ...
• B → b ... | a ...

• If the next input character is b, should we rewrite S with A ... or with B ...?
  – Since FIRST(B) = {a, b} and FIRST(A) = {a}, we know to rewrite S with B;
  – FIRST and FOLLOW give us information about the next characters expected in the grammar.
• If the next input character is a, how do we rewrite S?
  – a is in both FIRST(A) and FIRST(B);
  – the grammar is not suitable for predictive parsing.

Constructing an LL(1) parsing table

Now we can define exactly which rules will be contained where in the parsing table. If P[A, a] denotes the entry in the table for nonterminal A and terminal a, then

1. for each production A → w do step 2;
2. P[A, a] contains the rule A → w if and only if
   • a is in FIRST(w), or
   • ε is in FIRST(w) and a is in FOLLOW(A).

If the table contains at most one rule in every one of its cells, then the parser will always know which rule it has to use and can therefore parse strings without backtracking. It is precisely in this case that the grammar is called an LL(1) grammar.

Example 4.25:

(i) (1) S → F (2) S → ( S + F ) (3) F → 1

FIRST sets of S and F:
• FIRST(S) = FIRST(F) ∪ { ( } = {1, (}
• FIRST(F) = {1}

FOLLOW sets of S and F:
• S occurs in rule (2) followed by +, and S is the start symbol, so FOLLOW(S) = {+, $};
• F occurs at the end of rules (1) and (2), so FOLLOW(F) contains FOLLOW(S) as well as the ) that follows F in rule (2): FOLLOW(F) = {+, ), $}.

LL(1) table:

Nonterminals    Input symbols
                (            1        )    +    $
S               S → (S+F)    S → F    –    –    –
F               –            F → 1    –    –    –

(ii) Consider the following grammar and give the predictive parser:

S → stmt S′
S′ → ; S | ε
stmt → s

Now the FIRST sets for the given grammar are:
FIRST(S) = {s}
FIRST(S′) = {; , ε}
FIRST(stmt) = {s}

& the FOLLOW sets:
FOLLOW(S) = {$}
FOLLOW(S′) = {$}
FOLLOW(stmt) = {; , $}

Now construction of the parsing table takes place as follows.

Parsing table:

        s               ;            $
S       S → stmt S′     –            –
S′      –               S′ → ; S     S′ → ε
stmt    stmt → s        –            –

Example 4.26:

1. S → P $
2. P → { D ; C }
3. D → d, D | d
4. C → c, C | c

• The above grammar corresponds loosely to the structure of programs. We need to left factor the grammar first:

1. S → P $
2. P → { D ; C }
3. D → d D2
4. D2 → , D | ε
5. C → c C2
6. C2 → , C | ε

• Now we find the FIRST & FOLLOW sets:

        First      Follow
S       {          $
P       {          $
D       d          ;
D2      , ε        ;
C       c          }
C2      , ε        }

• Now, constructing the LL(1) parser table:

        {            }          ;          ,           c          d          $
S       S → P$       –          –          –           –          –          –
P       P → {D;C}    –          –          –           –          –          –
D       –            –          –          –           –          D → dD2    –
D2      –            –          D2 → ε     D2 → , D    –          –          –
C       –            –          –          –           C → cC2    –          –
C2      –            C2 → ε     –          C2 → , C    –          –          –

Example 4.27: Find the FIRST and FOLLOW sets of the following grammar and then parse it via an LL(1) parser.

exp → term exp1
exp1 → op1 term exp1 | ε
term → factor term1
term1 → op2 factor term1 | ε
factor → ( exp ) | num
op1 → + | –
op2 → *

We find the FIRST & FOLLOW sets as:

First Sets                      Follow Sets
FIRST(exp) = {(, num}           FOLLOW(exp) = {), $}
FIRST(exp1) = {+, –, ε}         FOLLOW(exp1) = {), $}
FIRST(term) = {(, num}          FOLLOW(term) = {), +, –, $}
FIRST(term1) = {*, ε}           FOLLOW(term1) = {), +, –, $}
FIRST(factor) = {(, num}        FOLLOW(factor) = {), +, –, *, $}
FIRST(op1) = {+, –}             FOLLOW(op1) = {(, num}
FIRST(op2) = {*}                FOLLOW(op2) = {(, num}

Now, we design the LL(1) parse table:

          (                      num                    )            +                       –                       *                           $
exp       exp → term exp1        exp → term exp1        –            –                       –                       –                           –
exp1      –                      –                      exp1 → ε     exp1 → op1 term exp1    exp1 → op1 term exp1    –                           exp1 → ε
term      term → factor term1    term → factor term1    –            –                       –                       –                           –
term1     –                      –                      term1 → ε    term1 → ε               term1 → ε               term1 → op2 factor term1    term1 → ε
factor    factor → ( exp )       factor → num           –            –                       –                       –                           –
op1       –                      –                      –            op1 → +                 op1 → –                 –                           –
op2       –                      –                      –            –                       –                       op2 → *                     –

Example 4.28: Find the FOLLOW sets for the following grammar if necessary, otherwise parse it using LL(1) directly.

S → A
A → aB
B → bBC | f
C → g

In the above grammar there are no ε-productions, so there is no need to calculate FOLLOW sets. We can parse it directly with the help of

the FIRST sets. So,

FIRST(S) = {a}
FIRST(A) = {a}
FIRST(B) = {b, f}
FIRST(C) = {g}

And the parse table:

      a         b          f        g        $
S     S → A     –          –        –        –
A     A → aB    –          –        –        –
B     –         B → bBC    B → f    –        –
C     –         –          –        C → g    –

Example 4.29: Construct the LL(1) parsing table for the following grammar:

S → aBDh
B → cC
C → bC | ε
D → EF
E → g | ε
F → f | ε

We find the FIRST & FOLLOW sets as:

First Sets                 Follow Sets
FIRST(S) = {a}             FOLLOW(S) = {$}
FIRST(B) = {c}             FOLLOW(B) = {g, f, h}
FIRST(C) = {b, ε}          FOLLOW(C) = {g, f, h}
FIRST(D) = {g, f, ε}       FOLLOW(D) = {h}
FIRST(E) = {g, ε}          FOLLOW(E) = {f, h}
FIRST(F) = {f, ε}          FOLLOW(F) = {h}

And now, the parse table:

      a            b         c         g         f         h         $
S     S → aBDh     –         –         –         –         –         –
B     –            –         B → cC    –         –         –         –
C     –            C → bC    –         C → ε     C → ε     C → ε     –
D     –            –         –         D → EF    D → EF    D → EF    –
E     –            –         –         E → g     E → ε     E → ε     –
F     –            –         –         –         F → f     F → ε     –

Conflicts in LL(1) Grammars

• No ambiguous grammar can be LL(1).
• No left recursive grammar can be LL(1).
• A grammar is LL(1) if, whenever A → α | β are two distinct productions of the grammar, the following conditions hold:
  • for no terminal a do both α and β derive strings beginning with a;
  • at most one of α and β can derive the empty string;
  • if β →* ε, then α does not derive any string beginning with a terminal in FOLLOW(A).
In general, no two grammar rules will occupy the same cell of the parsing table.

Example 4.30: a non-LL(1) grammar with a conflict is:

1. S → iEtSS1 | a
2. S1 → eS | ε
3. E → b

FIRST(S) = {i, a}
FIRST(S1) = {e, ε}
FIRST(E) = {b}
FOLLOW(S) = {e, $}
FOLLOW(S1) = {e, $}
FOLLOW(E) = {t}

Nonterminals    Input symbols
                a        b        e                  i             t    $
S               S → a    –        –                  S → iEtSS1    –    –
S1              –        –        S1 → eS | S1 → ε   –             –    S1 → ε
E               –        E → b    –                  –             –    –

The cell [S1, e] contains two productions, so the grammar is not LL(1).

Example 4.31: consider the following (ambiguous) grammar:

S → I | other
I → if (Exp) S E
E → else S | ε
Exp → 0 | 1

Now the FIRST sets:
FIRST(S) = {if, other}
FIRST(I) = {if}
FIRST(E) = {else, ε}
FIRST(Exp) = {0, 1}

& the FOLLOW sets:
1. FOLLOW(S) = FOLLOW(I) = {$} initially (S is the start symbol, and I ends the rule S → I);
2. FOLLOW(Exp) = { ) } (Exp is followed by ")" in the rule for I);
3. S is followed by E in I → if (Exp) S E, so FIRST(E) − {ε} = {else} is added to FOLLOW(S); and since E can derive ε and also ends that rule, FOLLOW(E) = FOLLOW(S) = FOLLOW(I).

So:

FOLLOW(S) = {else, $}
FOLLOW(I) = {else, $}
FOLLOW(E) = {else, $}
FOLLOW(Exp) = { ) }

& now constructing the parsing table:

       if                   other        else                 0          1          $
S      S → I                S → other    –                    –          –          –
I      I → if (Exp) S E     –            –                    –          –          –
E      –                    –            E → else S | E → ε   –          –          E → ε
Exp    –                    –            –                    Exp → 0    Exp → 1    –

The cell [E, else] contains two productions; this conflict reflects the dangling-else ambiguity.

LL(k) parser generators

Modern parser generators that generate LL parsers with multi-token lookahead include:

• ANTLR
• Coco/R
• JavaCC
• PCCTS (now ANTLR)
• SLK: strong LL(k) parsing
• Spirit Parser Framework: a flexible LL(∞) parser generation framework in which the grammars themselves are written in pure C++
• JetPAG: Jet Parser Auto-Generator, an optimizing LL(k) parser generator
• Parsec: a monadic parser combinator library for Haskell, which can parse LL(∞), context-sensitive grammars, but performs best when the grammar is LL(1)
• OCaml Genlex module
• JFLAP: an educational tool for learning LL(1) parsing

An example in C language is given in Appendix A (A.5).

Restrictions on grammars so as to allow LL(1) parsing

The top-down approach to parsing looks so promising that we should consider what restrictions have to be placed on a grammar so as to allow us to use the LL(1) approach. Once these have been established we shall pause to consider the effects they might have on the design or specification of "real" languages. A little reflection on the examples will show that the problems arise when we have alternative productions for the next (left-most) non-terminal in a sentential form, and should lead to the insight that the initial symbols that can be derived from the alternative right sides of the production for a given non-terminal must be distinct.

The effect of the LL(1) conditions on language design

We should note that we cannot hope to transform every non-LL(1) grammar into an equivalent LL(1) grammar. To take an extreme example, an ambiguous grammar must have two parse trees for at least one input sentence.
If we really want to allow this we shall not be able to use a parsing method that is capable of finding only one parse tree, as deterministic parsers must do. We can argue that an ambiguous grammar is of little interest, but the reader should not go away with the impression that it is just a matter of trial and error before an equivalent LL(1) grammar is found for an arbitrary grammar. Often a combination of substitution and refactorization will resolve problems.

4.7.2.3 Packrat parser

A packrat parser is a form of top-down parser similar to a recursive descent parser in construction, except that during the parsing process it memoizes the intermediate results of all invocations of the mutually recursive parsing functions. Because of this memoization, a packrat parser has the ability to parse many context-free grammars and any parsing expression grammar (including some that do not represent context-free languages) in linear time.

4.7.2.4 Tail recursive parser

Tail recursive parsers are derived from the more common recursive descent parsers. They are commonly used to parse left recursive grammars, they use a smaller amount of stack space than regular recursive descent parsers, and they are also easy to write. Typical recursive descent parsers make parsing left recursive grammars impossible (because of an infinite loop problem); tail recursive parsers use a node reparenting technique that makes this possible. A simple tail recursive parser can be written much like a recursive descent parser. The typical algorithm for parsing a grammar like this, building an abstract syntax tree, is:

1. Parse the next level of the grammar and get its output tree; designate it the first tree, F.
2. While there is a terminating token, T, that can be put as the parent of this node:
   • Allocate a new node, N.
   • Set N's current operator to the current input token.
   • Advance the input one token.
   • Set N's left subtree to F.
   • Parse another level down again and store this as the next tree, X.
   • Set N's right subtree to X.
   • Set F to N.
3. Return F.

An example in C language is given in Appendix A (A.6).

4.7.3 Bottom-up Parsing

Bottom-up parsing is a strategy for analyzing unknown data relationships that attempts to identify the most fundamental units first, and then to infer higher-order structures from them. It occurs in the analysis of both natural languages and computer languages.
It is common for bottom-up parsers to take the form of general parsing engines that can either parse, or generate a parser for, a specific programming language given a specification of its grammar. Bottom-up parsing is a parsing method that works by identifying terminal symbols first, and combining them successively to produce nonterminals. The productions of the parser can be used to build a parse tree of a program written in human-readable source code that can be compiled to assembly language or pseudocode. At each step this parser performs one of three actions: 'shift', 'reduce' or 'accept'.

• Shift means moving a symbol from the input to the stack.
• Reduce means replacing a set of symbols on the stack by a more general symbol (the left side of a production).
• Accept means announcing that the entire input has been successfully parsed.

Types of Bottom-up Parsers

• LR parser
  • LR(0): no look-ahead symbol.
  • SLR(1): simple, with one look-ahead symbol.
  • LALR(1): look-ahead bottom-up; not as powerful as full LR(1) but simpler to implement.
  • LR(1): most general, but most complex to implement.
  • LR(n), where n is an integer >= 0, indicates an LR parser with n look-ahead symbols; while languages can be designed that require more than 1 look-ahead, practical languages try to avoid this, because increasing n can theoretically require exponentially more code and data space (in practice, this may not be as bad).
• Precedence parser
  • Simple precedence parser
  • Operator-precedence parser
  • Extended precedence parser

Example of shift-reduce parsing

Take the grammar:

S → AB
A → a
B → b

and the input: ab. Then the bottom-up parse proceeds as follows:

Stack        Input      Action
(empty)      ab         shift a
a            b          reduce by A → a
A            b          shift b
A b          (empty)    reduce by B → b
A B          (empty)

Finally, A B on the stack is reduced to S (by S → AB), leaving S alone on the stack, and the parser accepts.

Example 4.32:

1. S → aABe
2. A → Abc | b
3. B → d

Reduce abbcde to S in four steps:

abbcde → aAbcde → aAde → aABe → S

– Notice that the d should not be reduced to B in step 2. Read in reverse, this is the rightmost derivation:

S ⇒rm aABe ⇒rm aAde ⇒rm aAbcde ⇒rm abbcde

– How do we get the right reduction steps?

Handles

S → aABe
A → Abc | b
B → d

• Informally, a handle of a sentential form is a substring that "can" be reduced.
  – Abc is a handle of the right sentential form aAbcde, because
    • A → Abc, and
    • after Abc is replaced by A, the resulting string aAde is still a right sentential form.
  – Is d a handle of aAbcde?
  – No, because aAbcBe is not a right sentential form.
• Formally, a handle of a right sentential form γ is a production A → β together with a position in γ where the string β may be found and replaced by A.
  – If S ⇒*rm αAw ⇒rm αβw, then A → β in the position after α is a handle of αβw.
  – When the production A → β and the position are clear, we simply say "the substring β is a handle".

4.7.3.1 LR parser

An LR parser is a method of parsing for context-free grammars. LR parsers read their input from left to right and produce a rightmost derivation in reverse. An LR parser is said to perform bottom-up parsing because it attempts to deduce the top-level grammar productions by building up from the leaves. The term "LR(k) parser" is also used; here the k refers to the number of unconsumed "look-ahead" input symbols that are used in making parsing decisions. Usually k is 1 and is often omitted. A context-free grammar is called LR(k) if there exists an LR(k) parser for it. In typical use, when we refer to an LR parser we mean a particular parser capable of







































































