Thomas A. Sudkamp, Languages and Machines
Using the summation notation, we can rewrite the preceding expression. One cannot memorize the sum of all possible combinations of natural numbers, but we can use recursion to establish a method by which the sum of any two numbers can be mechanically calculated. A set is defined implicitly by specifying conditions that describe the elements of the set.
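The recursive calculation of sums can be sketched as follows; this is a minimal illustration, and the function name `add` and the successor-style step are our framing rather than the text's:

```python
def add(m, n):
    """Compute m + n recursively: the basis handles n = 0, and the
    recursive step reduces the problem to a sum with a smaller n."""
    if n == 0:                   # basis: m + 0 = m
        return m
    return add(m, n - 1) + 1     # recursive step: m + n = (m + (n - 1)) + 1
```

The definition never consults a memorized table: every sum is reduced, step by step, to the basis case.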
An arc from x to y in a directed graph is depicted by an arrow from x to y. Proof: Assume that the intersection of [a] and [b] is nonempty.
Similarly, strings of length one in W are also in P. L3, obtained by applying the Kleene star operation to L2, contains all strings with length divisible by four.
The null string, with length zero, is in L3. Since the null string is not allowed in the language, each string must contain at least one a or one b or one c.
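The claim about L3 can be spot-checked mechanically. Assuming L2 is the set of strings over {a, b} of length four (the exercise's definition of L2 is not restated here), membership in L3 = L2* can be tested by recursively stripping prefixes:

```python
from itertools import product

# Assumed definition for illustration: L2 = all strings over {a, b} of length four.
L2 = {''.join(p) for p in product('ab', repeat=4)}

def in_star(s, base):
    """True if s is a concatenation of zero or more strings from base."""
    if s == '':
        return True          # the null string is in any Kleene star
    return any(s.startswith(w) and in_star(s[len(w):], base) for w in base)

print(in_star('abababab', L2))   # length 8, divisible by four
print(in_star('ababab', L2))     # length 6, not divisible by four
```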
However, it need not contain one of every symbol. Clearly every string in this language has substrings ab and ba. Every string described by the preceding expression has length four or more. Have we missed some strings with the desired property? Consider aba, bab, and aaaba. However, every string in this expression ends with b while strings in the language may end in a or aa as well as b. This expression is obtained by combining an expression for each of the component subsets with the union operation.
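The point about missed strings can be seen concretely. Assuming the language in question is the set of strings over {a, b} containing both ab and ba as substrings (our reading of the exercise), the obvious union of two concatenation-based patterns fails exactly on overlapping occurrences:

```python
import re

# Naive attempt: ab somewhere before ba, or ba somewhere before ab.
naive = re.compile(r'(a|b)*ab(a|b)*ba(a|b)*|(a|b)*ba(a|b)*ab(a|b)*')

def contains_both(s):
    """Direct membership test: both substrings occur, overlaps allowed."""
    return 'ab' in s and 'ba' in s

for s in ['abba', 'aba', 'bab', 'aaaba']:
    print(s, bool(naive.fullmatch(s)), contains_both(s))
# aba, bab, and aaaba contain both substrings (overlapping),
# yet the naive expression rejects them.
```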
Exercises 37 and 38 illustrate the significant difference between union and intersection in describing patterns with regular expressions. There is nothing intuitive about a regular expression for this language. The exercise is given, along with the hint, to indicate the need for an algorithmic approach for producing regular expressions. A technique to accomplish this will follow from the ability to reduce finite state machines to regular expressions developed in Chapter 6.
There is no trick to obtaining this answer; you simply must count the possible derivations represented by the tree. The basis of the recursive definition consists of the null string. The first S rule produces this string. The recursive step consists of inserting an a and a b in a previously generated string or of concatenating two previously generated strings.
These operations are captured by the second set of S rules. If the derivation begins with an S rule that begins with an a, the A rules ensure that an a occurs in the middle of the string. Similarly, the S and B rules combine to produce strings with a b in the first and middle positions. While the grammar described above generates the desired language, it is not regular. A regular grammar can be obtained by explicitly replacing S1 and S2 in the S rules with the right-hand sides of the S1 and S2 rules.
We will refer to this condition as the prefix property. The proof is by induction on the length of the derivations of G. The prefix property is seen to hold for these strings by inspection. Inductive hypothesis: Assume that every sentential form that can be obtained by a derivation of length n or less satisfies the prefix property.
By the inductive hypothesis, uSv satisfies the prefix property. Thus w also satisfies the prefix property. To construct an unambiguous grammar that generates L(G), it is necessary to determine precisely which strings are in L(G). Each of these strings can be generated by two distinct leftmost derivations.
To produce an unambiguous grammar, it is necessary to ensure that these strings are generated by only one of A and B. Consequently, there is only one leftmost derivation of a^i b^j and the grammar is unambiguous. Thus every rule of G1 is either in G2 or derivable in G2. Now if w is derivable in G1, it is also derivable in G2 by replacing the application of a rule of G1 with the sequence of rules of G2 that produces the same transformation.
Since every regular grammar is also right-linear, all regular languages are generated by right-linear grammars. We now must show that every language generated by a right-linear grammar is regular. Let G be a right-linear grammar. A regular grammar G0 that generates L(G) is constructed from the rules of G. Clearly, the grammar G0 constructed in this manner generates L(G). An equivalent essentially noncontracting grammar GL with a nonrecursive start symbol is constructed following the steps given in the proof of Theorem 4.
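The construction of a regular grammar from a right-linear one can be sketched as follows. A rule A → a1…ak B is represented here as a triple (left-hand side, terminal string, trailing variable or None); the chain-of-fresh-variables idea is the standard one, though this encoding is ours:

```python
def right_linear_to_regular(rules):
    """Split each right-linear rule A -> a1 a2 ... ak B (k > 1) into a
    chain of regular rules with one terminal each, introducing fresh
    variables X1, X2, ... to link the chain."""
    out, fresh = [], 0
    for lhs, terminals, var in rules:
        if len(terminals) <= 1:
            out.append((lhs, terminals, var))   # already in regular form
            continue
        current = lhs
        for t in terminals[:-1]:
            fresh += 1
            new = f'X{fresh}'
            out.append((current, t, new))       # current -> t new
            current = new
        out.append((current, terminals[-1], var))
    return out

# A -> abB becomes A -> aX1, X1 -> bB
print(right_linear_to_regular([('A', 'ab', 'B')]))
```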
Following the technique of Example 4. For each variable, Algorithm 4. An equivalent grammar GU without useless symbols is constructed in two steps. The first step involves the construction of a grammar GT that contains only variables that derive terminal strings. Algorithm 4. The transformation of a grammar G from Chomsky normal form to Greibach normal form is accomplished in two phases. In the first phase, the variables of the grammar are numbered and an intermediate grammar is constructed in which the first symbol of the right-hand side of every rule is either a terminal or a variable with a higher number than the number of the variable on the left-hand side of the rule.
This is done by removing direct left recursion and by applying the rule replacement schema of Lemma 4. The variables S, A, B, and C are numbered 1, 2, 3, and 4 respectively. The S rules are already in the proper form. Working backwards from the C rules and applying Lemma 4.
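Removing direct left recursion, as this first phase requires, follows the familiar schema A → Aα | β becomes A → β | βA', with A' → α | αA'. A sketch (right-hand sides as symbol lists; the primed-variable naming is ours):

```python
def remove_direct_left_recursion(var, rhss):
    """rhss: the right-hand sides for `var`, each a list of symbols.
    Returns a dict of new rules with direct left recursion removed."""
    recursive = [rhs[1:] for rhs in rhss if rhs and rhs[0] == var]   # A -> A alpha
    others    = [rhs for rhs in rhss if not rhs or rhs[0] != var]    # A -> beta
    if not recursive:
        return {var: rhss}
    new = var + "'"
    return {
        var: others + [beta + [new] for beta in others],          # A  -> beta | beta A'
        new: recursive + [alpha + [new] for alpha in recursive],  # A' -> alpha | alpha A'
    }

# A -> Aa | b  becomes  A -> b | bA',  A' -> a | aA'
print(remove_direct_left_recursion('A', [['A', 'a'], ['b']]))
```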
We begin by showing how to reduce the length of strings on the right-hand side of the rules of a grammar in Greibach normal form. If a rule already has the desired form, no change is necessary. There are three cases to consider in creating the new rules.
The process must be repeated until rules have been created for each variable created by the rule transformations.
The construction in case 3 requires the original rule to have at least three variables. This length reduction process can be repeated until each rule has a right-hand side of length at most 3. A detailed proof that this construction produces an equivalent grammar can be found in Harrison. Therefore, the strings abaa, bababa, and bbbaa are in L(M).
The proof is by induction on the length of the input string. Any deviation from the prescribed order causes the computation to enter q3 and reject the string. States q1 or q3 are entered upon processing a single a.
The state q3 represents the combination of the two conditions required for acceptance. Upon reading a second a, the computation enters the nonaccepting state q4 and rejects the string. The states of the machine are used to count the number of symbols processed since an a has been read or since the beginning of the string, whichever is smaller.
If an input string has a substring of length four without an a, the computation enters q4 and rejects the string. All other strings are accepted. The DFA to accept this language differs from the machine in Exercise 20 by requiring every fourth symbol to be an a. The first a may occur at position 1, 2, 3, or 4. If the string varies from this pattern, the computation enters state q8 and rejects the input.
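The counting-states idea can be made concrete with a small simulator. The alphabet {a, b} and the state names are assumptions here; the machine below rejects any string containing four consecutive symbols without an a, in the spirit of the description above:

```python
def run_dfa(transitions, start, accepting, s):
    """Run a DFA given as a nested transition dict and report acceptance."""
    state = start
    for ch in s:
        state = transitions[state][ch]
    return state in accepting

# States q0..q3 count the symbols read since the last a; q4 is a trap state.
delta = {
    'q0': {'a': 'q0', 'b': 'q1'},
    'q1': {'a': 'q0', 'b': 'q2'},
    'q2': {'a': 'q0', 'b': 'q3'},
    'q3': {'a': 'q0', 'b': 'q4'},
    'q4': {'a': 'q4', 'b': 'q4'},   # four symbols without an a: reject
}
accepting = {'q0', 'q1', 'q2', 'q3'}

print(run_dfa(delta, 'q0', accepting, 'bbab'))   # accepted
print(run_dfa(delta, 'q0', accepting, 'bbbb'))   # rejected in the trap state
```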
After arriving at q2 there are two ways to leave and return: taking the cycle q0, q1, q2 or the cycle q1, q2. If q1 is entered upon processing the third-to-last symbol, the computation accepts the input. A computation that enters q1 in any other manner terminates unsuccessfully. The transition table of M is given in the solution to Exercise. The nodes of the equivalent NFA are constructed in step 2 of Algorithm 5.
The generation of Q0 is traced in the table below. The set Y consists of the states that may be entered upon processing an a from a state in X. However, this equality follows from the result in Exercise.

Chapter 6 Properties of Regular Languages

2. The node deletion algorithm must be employed individually for each accepting state. The language of M is the union of the strings accepted by q0 and q2.
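The step that builds the set Y from a set X can be sketched with the standard subset construction. Lambda-closure is omitted for brevity, and the NFA encoding (a dict from state-symbol pairs to sets of states) is ours:

```python
from collections import deque

def subset_construction(nfa_delta, start, alphabet):
    """Build a DFA transition table from an NFA without lambda moves.
    nfa_delta maps (state, symbol) -> set of states; each DFA state is a
    frozenset of NFA states."""
    start_set = frozenset([start])
    dfa, queue = {}, deque([start_set])
    while queue:
        X = queue.popleft()
        if X in dfa:
            continue
        dfa[X] = {}
        for sym in alphabet:
            # Y: every state reachable from some state in X on sym
            Y = frozenset(q for p in X for q in nfa_delta.get((p, sym), ()))
            dfa[X][sym] = Y
            queue.append(Y)
    return dfa

# NFA: 0 -a-> {0, 1}, 1 -b-> {1}
table = subset_construction({(0, 'a'): {0, 1}, (1, 'b'): {1}}, 0, 'ab')
print(len(table))   # 4 DFA states, including the empty trap state
```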
Without loss of generality, we may assume that G does not contain useless symbols. The algorithm to remove useless symbols, presented in Section 4. Thus the equivalent grammar obtained by this transformation is also regular.
Intuitively (but not precisely), the state qi represents the number of transitions from the start state when processing a string, and the set X consists of states that require the same number of transitions to reach an accepting state. The start state of the machine is the ordered pair [q0, F].
If it is not already in Q0, the ordered pair [δ(qi, a), Y] is added to the states of M0. The process is repeated until a transition is defined for each state-symbol pair. Returning to the intuition, this occurs when the number of transitions from the start state is the same as the number of transitions needed to reach an accepting state.
That is, when half of a string in L has been processed. Then ui vi is the palindrome a^i bba^i. We conclude, by Corollary 6., that the language is not regular. There are three cases to consider. Case 2: v has one b.
In this case, v contains a substring ba^t b. Pumping v produces two substrings of the form ba^t b in uv^2w. By Theorem 6. We begin by showing that these set operations are preserved by homomorphisms. Let x be an element of h(XY). To establish the opposite inclusion, let x be an element in h(X)h(Y).
The other two set equalities can be established by similar arguments. We will use the recursive definition of regular sets to show that h(X) is regular whenever X is. The proof is by induction on the number of applications of the recursive step in Definition 2.
Inductive hypothesis: Now assume that the homomorphic image of every regular set definable using n or fewer applications of the recursive step is regular. By the inductive hypothesis, h(Y) and h(Z) are regular. Let G and G0 be corresponding regular and left-regular grammars. The proof is by induction on the length of the derivations. By the preceding lemma, the corresponding regular grammar G generates L^R. Thus every language generated by a left-regular grammar is regular.
If L is a regular language, then so is L^R. This implies that there is a regular grammar G that generates L^R. The equivalence classes of the DFA in Example 5. By the argument in Theorem 6. Processing an a pushes A onto the stack.
Strings of the form a^i are accepted in state q1. The transitions in q1 empty the stack after the input has been read. A computation with input a^i b^j enters state q2 upon processing the first b. Thus strings with either of these forms are rejected. A computation begins by pushing a C onto the stack, which serves as a bottom-marker throughout the computation. When an a is read with an A or C on the top of the stack, an A is pushed onto the stack.
This is accomplished by the transition to q2. If a B is on the top of the stack, the stack is popped removing one b. Processing a b with an A on the stack pops the A. The lone accepting state of the automaton is q5.
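The stack discipline described above can be illustrated with a simplified deterministic PDA. This sketch accepts {a^n b^n : n ≥ 0} using a bottom-marker C; it is our illustration of the push-pop technique, not the exercise's exact machine:

```python
def run_pda(s):
    """Simplified PDA: push an A for each leading a, pop one A for each b,
    and accept when only the bottom-marker C remains on the stack."""
    stack, state = ['C'], 'q0'
    for ch in s:
        if state == 'q0' and ch == 'a':
            stack.append('A')              # an a with A or C on top pushes an A
        elif ch == 'b' and stack[-1] == 'A':
            state = 'q1'                   # first b leaves the a-reading state
            stack.pop()                    # a b with an A on top pops the A
        else:
            return False                   # any other configuration rejects
    return stack == ['C']                  # accept: every A was matched

print(run_pda('aabb'), run_pda('aab'), run_pda('ba'))
```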
The states q2 and q3 are the accepting states of M. The null string is accepted in q3. A computation of M0 with input w begins by putting C on the stack and entering q0. The stack symbol C acts as a marker designating the bottom of the stack.
From q0, M0 continues with a computation identical to that of M with input w. We must also guarantee that any string w accepted by M0 by final state is accepted by M by final state and empty stack. We will outline the construction of a context-free grammar that generates L(M). The steps follow precisely those given in Theorem 7. These rules transform a transition of M that does not remove an element from the stack into one that initially pops the stack and later replaces the same symbol on the top of the stack.
The alphabet of G is the input alphabet of M0. The variable ⟨qi, A, qj⟩ represents a computation that begins in state qi, ends in qj, and removes the symbol A from the stack. The rules of G are constructed as follows: 1.
A derivation begins with a rule of type 1 whose right-hand side represents a computation that begins in state q0, ends in a final state, and terminates with an empty stack; in other words, a successful computation in M0. Rules of types 2 and 3 trace the action of the machine. Rules of type 4 are used to terminate derivations. The proof that the rules generate L(M) follows the same strategy as Theorem 7. Lemmas 7. To show that the assumption that L is context-free produces a contradiction, we examine all possible decompositions of z that satisfy the conditions of the pumping lemma.
By (ii), one or both of v and x must be nonnull. That is, v occurs in z in a position of the form. In this case, v contains a substring ba^n b. Pumping v produces a string with two substrings of the form ba^n b. No string with this property is in L. Case 3: v has one b. Then v can be written a^i ba^j and occurs in z as. Pumping v produces the substring. Regardless of its makeup, pumping any nonnull substring v of z produces a string that is not in the language L. A similar argument shows that pumping x produces a string not in L whenever x is nonnull.
Since one of v or x is nonnull, there is no decomposition of z that satisfies the requirements of the pumping lemma, and we conclude that the language is not context-free. Because of the restriction on its length, the substring vwx must have the form a^i, b^i, c^i, a^i b^j, or b^i c^j.
Pumping z produces the string uv^2wx^2y. This operation increases the number of at least one, possibly two, but not all three types of terminals in z. Let z be any string of length 2 or more in L. That is, z must contain the substring ab or the substring ba.
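The earlier case analysis for a language like {a^n b^n c^n} can be checked exhaustively for a concrete string. Taking z = a^4 b^4 c^4 with an assumed pumping length k = 4 for illustration, every admissible decomposition pumps out of the language:

```python
def in_L(s):
    """Membership in {a^n b^n c^n : n >= 0}."""
    n = len(s) // 3
    return len(s) % 3 == 0 and s == 'a' * n + 'b' * n + 'c' * n

def all_pumps_leave_L(z, k):
    """Check every decomposition z = uvwxy with |vwx| <= k and |vx| >= 1:
    for the pumping-lemma contradiction, uv^2wx^2y must fall outside L."""
    n = len(z)
    for i in range(n + 1):                         # vwx starts at i
        for j in range(i + 1, min(i + k, n) + 1):  # vwx = z[i:j], |vwx| <= k
            for p in range(i, j + 1):              # v = z[i:p]
                for q in range(p, j + 1):          # w = z[p:q], x = z[q:j]
                    if p == i and q == j:
                        continue                   # |vx| = 0: not admissible
                    pumped = z[:i] + z[i:p] * 2 + z[p:q] + z[q:j] * 2 + z[j:]
                    if in_L(pumped):
                        return False
    return True

print(all_pumps_leave_L('aaaabbbbcccc', 4))
```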
Let L be a linear language. Let r be the number of variables of G and t be the maximum number of terminal symbols on the right-hand side of any rule. Since the sentential form uvAxy is generated by at most r rule applications, the string uvxy must have length less than k, as desired. The variables, alphabet, and start symbol of G0 are the same as those of G.
Induction on the length of derivations is used to show that a string u is a sentential form of G if, and only if, uR is a sentential form of G0. We consider sentential forms of G generated by leftmost derivations and of G0 by rightmost derivations. Basis: The basis consists of sentential forms produced by derivations of length one.
Inductive hypothesis: Assume that u is a sentential form of G derivable by n or fewer rule applications if, and only if, uR is a sentential form of G0 derivable by n or fewer rule applications.
By the inductive hypothesis, y^R Ax^R is a sentential form of G0 derivable by n rule applications. In a like manner, we can prove that the reversal of every sentential form derivable in G0 is derivable in G. The grammar G0 generates era(L). The proof follows from a straightforward examination of the derivations of G and G0.
Conversely, let u be any string in L(G0). The rules of G0 are obtained from those of G by substitution. See the solution to Exercise 27 for an explanation of the notation. To show that h(L) is context-free, it suffices to show that it is generated by the grammar G0.
The image of this mapping, which is context-free by part (a), is era(L). In Example 7.

Chapter 8 Turing Machines

1. If the input is the null string, the computation halts in state qf with the tape head in position zero as desired.
Otherwise, an a is moved to the right by transitions to states q2 and q3. Similarly, states q4 and q3 shift a b. This process is repeated until the entire string has been shifted. The strategy employed by the machine is:
i. Mark tape position 0 with a marker symbol.
ii. If the input string is not empty, change the first symbol to X or Y to record whether it was an a or b, respectively.
iii. Move to the end of the string and use the strategy of the machine in part (a) to move the unmarked input one square to the right.
Otherwise, the process is repeated on the unmarked string. Shifting a string one square to the right is accomplished by states q3 to q7. The arc from q4 to q8 is taken when the shift has been completed. If another shift is needed, the entire process is repeated by entering q1. The tape head is then returned to position zero and a search is initiated for a corresponding b.
If a b is encountered in state q3, an X is written and the tape head is repositioned to repeat the cycle q1, q2, q3, q4. If no matching b is found, the computation halts in state q3, rejecting the input. We refer to the method of acceptance defined in the problem as acceptance by entering. We show that the languages accepted by entering are precisely those accepted by final state, that is, the recursively enumerable languages.
Computations of M0 are identical to those of M until M0 enters an accepting state. When this occurs, M0 halts accepting the input. A computation of M that enters an accepting state halts in that state in M0.
A computation of M may enter and leave accepting states prior to termination; the intermediate states have no bearing on the acceptance of the input. We construct a machine M0 that accepts L(M) by entering. A computation of M that accepts an input string halts in an accepting state qi. The corresponding computation of M0 reaches qi and then enters qf, the sole accepting state of M0. Clearly, every recursively enumerable language is accepted by a Turing machine in which stationary transitions are permitted.
A standard Turing machine can be considered to be such a machine whose transition function does not include any stationary transitions. We construct a standard Turing machine M0 that accepts L(M). A transition of M that designates a movement of the tape head generates an identical transition of M0. The first transition prints a y, moves right, and enters a new state. Regardless of the symbol being scanned, the subsequent transition moves left to the original position and enters qj.
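The two-transition replacement for a stationary move can be written out directly. The transition-table encoding and the tape alphabet {a, b, B} (B for blank) are assumptions for illustration:

```python
def remove_stationary(delta, tape_alphabet=('a', 'b', 'B')):
    """delta: (state, symbol) -> (new_state, write, direction), with
    direction in {'L', 'R', 'S'}.  Each stationary move is replaced by a
    right move into a fresh state that, regardless of the symbol scanned,
    immediately moves back left into the intended state."""
    out, fresh = {}, 0
    for (q, s), (p, y, d) in delta.items():
        if d != 'S':
            out[(q, s)] = (p, y, d)          # moving transitions copy over
        else:
            mid = f'_stay{fresh}'
            fresh += 1
            out[(q, s)] = (mid, y, 'R')      # print y and move right
            for t in tape_alphabet:          # then move back, leaving t unchanged
                out[(mid, t)] = (p, t, 'L')
    return out

print(remove_stationary({('q0', 'a'): ('q1', 'b', 'S')}))
```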
We will build a context-sensitive Turing machine M0 that accepts L(M). Intuitively, a computation of M0 ignores the second input symbol in the transition.
The computation of the standard machine should read the x in state qi, determine the symbol on the right, and rewrite the original position accordingly. If the context-sensitive transition is not applicable, the sequence of standard transitions will check the symbol to the right and halt in state qi. Part (b) provides the opposite inclusion; given a context-sensitive machine, we can build a standard machine that accepts the same language.
Thus the families of languages accepted by these two types of machines are identical. The input is copied onto tape 1 in state q1. State q2 returns the head reading tape 1 to the initial position. With tape head 1 moving left-to-right and head 2 moving right-to-left, the strings on the two tapes are compared.
If both heads simultaneously read a blank, the computation terminates in q4. The maximal number of transitions of a computation with an input string of length n occurs when the string is accepted. Tape head 1 reads right-to-left, left-to-right, and then right-to-left through the input. The computation of a string that is not accepted halts when the first mismatch of symbols on tape 1 and tape 2 is discovered. States p0 to p7 nondeterministically select a substring of length three or more from the input.
The accepting states of the composite machine are the accepting states of M. This problem illustrates the two important features in the design of Turing machines. The first is the ability to use existing machines as components in more complex computations.
The machine constructed in Exercise 5(c) provides the evaluation of a single substring needed in this computation. The flexibility and computational power that can be achieved by combining submachines will be thoroughly examined in Chapter 9. The second feature is the ability of a nondeterministic design to remove the need for the consideration of all possible substrings. If not, another substring is generated and examined.
This process must be repeated until an acceptable substring is found or all substrings have been generated and evaluated. A nondeterministic transition to state q2 represents a guess that the current position of the head on tape 1 is the beginning of the second half of the string. The head on tape 2 is repositioned at the leftmost square in state q2. The remainder of the string on tape 1 is then compared with the initial segment that was copied onto tape 2.
If a blank is read simultaneously on these tapes, the string is accepted. Finding any mismatch between the tapes in this comparison causes the computation to halt without accepting the input. The maximal number of transitions occurs when all but the final symbol of the input string is copied onto tape 2. In this case, the head on tape 2 moves to tape position n, back to position 0, and begins the comparison.
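Deterministically, the nondeterministic guess amounts to trying every midpoint; each choice of m below corresponds to one branch of the computation (for the language {ww}, our reading of the exercise):

```python
def accepts_ww(s):
    """Accept strings of the form ww by guessing the midpoint: each value
    of m plays the role of one nondeterministic branch, and the string is
    accepted if some branch makes the two halves match."""
    return any(s[:m] == s[m:] for m in range(len(s) + 1))

print(accepts_ww('abab'), accepts_ww('aba'))
```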
Let M be a nondeterministic Turing machine that halts for all inputs. The technique introduced in the construction of an equivalent deterministic machine M0, given in Section 9., applies here. M0 systematically simulates all computations of M, beginning with computations of length 0, length 1, length 2, and so on. The simulation of a computation of M is guided by a sequence of numbers that are generated on tape 3.
When one of the simulated computations accepts the input string, M0 halts and accepts. If the input is not in L(M), M0 indefinitely continues the cycle of generating a sequence of numbers on tape 3 and simulating the computation of M associated with that sequence.
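Generating the guidance sequences on tape 3 in order of length can be sketched with a simple enumerator; the bound on the number of choices per step is a parameter here:

```python
from itertools import count, islice, product

def guidance_sequences(max_choices):
    """Yield every finite sequence over {0, ..., max_choices - 1}, ordered
    by length: this is the order in which the deterministic machine tries
    the branches of the nondeterministic computation."""
    for n in count(0):
        yield from product(range(max_choices), repeat=n)

print(list(islice(guidance_sequences(2), 7)))
# [(), (0,), (1,), (0, 0), (0, 1), (1, 0), (1, 1)]
```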
Unfortunately, the preceding approach cannot be used to show that L is recursive, since the computation of M0 will never terminate for a string that is not in the language. To show that L(M) is recursive, we must construct a deterministic machine that halts for all inputs.
This can be accomplished by adding a fourth tape to M0 that records the length of the longest simulated computation that has not halted prematurely. A computation terminates prematurely if the simulation terminates before executing all of the transitions indicated by the sequence on tape 3.