CS310 : Automata Theory 2020
Lecture 27: Turing Machines and Computability
Instructor: S. Akshay
IITB, India
Spring 2020
This lecture
Turing Machines and Computability
In this lecture we will cover:
- Variants of Turing Machines
- Encoding descriptions of Turing Machines as strings
- Turing recognizability and decidability
Reading Material
- Sections 8.4, 8.5, Introduction to Automata Theory, Languages and Computation, Hopcroft, Motwani and Ullman
- Sections 3.2, 4.1, Introduction to the Theory of Computation, Michael Sipser
Recall: Definition of Turing Machines
Definition
A Turing Machine is a 7-tuple (Q, Σ, Γ, δ, q0, qacc, qrej) where
1. Q is a finite set of states,
2. Σ is a finite input alphabet,
3. Γ is a finite tape alphabet, where the blank symbol ⊔ ∈ Γ and Σ ⊆ Γ,
4. q0 is the start state,
5. qacc is the accept state,
6. qrej is the reject state,
7. δ : Q × Γ → Q × Γ × {L, R} is the transition function.
Recall: configurations
A configuration is a snapshot of the system during computation.
Described by: current state, tape contents and current head location.
Notation: C = u q v
where uv is the tape content, the current state is q, and the head is at the start of v.
We define "C1 yields C2" if the TM can move from C1 to C2 in one step:
- left move: u a qi b v yields u qj a c v if δ(qi, b) = (qj, c, L)
- right move: u a qi b v yields u a c qj v if δ(qi, b) = (qj, c, R)
- right end: we identify u a qi with u a qi ⊔, as the tape has blanks to the right.
- left end (for a single-sided infinite tape): qi b v yields (1) qj c v if the transition moves left, or (2) c qj v if it moves right.
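The "yields" relation above is concrete enough to code directly. The sketch below is illustrative, not from the lecture: it represents a configuration u q v as a triple of strings with the head scanning the first symbol of v, uses '_' for the blank symbol, and assumes a single-sided infinite tape; the names step and BLANK are invented.

```python
BLANK = '_'

def step(config, delta):
    """One application of "yields": config is (u, q, v), head scans v[0]."""
    u, q, v = config
    # Right end: an empty v means the head sits on a blank cell.
    b, rest = (v[0], v[1:]) if v else (BLANK, '')
    qj, c, move = delta[(q, b)]
    if move == 'R':
        return (u + c, qj, rest)
    if u == '':                # left end of the one-way tape: head stays put
        return ('', qj, c + rest)
    return (u[:-1], qj, u[-1] + c + rest)
```

For example, with δ(q0, a) = (q1, b, R) the configuration q0 a yields b q1 (head on the blank past the written b), matching the right-move case above.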
Recall: computation of a Turing Machine
- We define start (q0 w), accepting (∗ qacc ∗), rejecting (∗ qrej ∗) and halting configurations.
- A Turing Machine computation on a given input may not halt!
- A TM M accepts input word w if there exists a sequence of configurations C1, C2, ..., Ck (called a run) such that
  - C1 is the start configuration,
  - each Ci yields Ci+1,
  - Ck is an accepting configuration.
- The language of TM M, denoted L(M), is the set of strings accepted by it.
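The acceptance condition above can be sketched as a simulation loop. Since a computation may not halt, the sketch imposes a step bound and reports "no verdict" when the bound is exhausted; the configuration encoding, the '_' blank, and the two example machines (delta, loop) are all illustrative assumptions, not from the lecture.

```python
BLANK = '_'

def run(delta, q0, qacc, qrej, w, max_steps=1000):
    """Simulate from the start configuration (q0, w), up to max_steps moves."""
    u, q, v = '', q0, w
    for _ in range(max_steps):
        if q == qacc:
            return 'accept'
        if q == qrej:
            return 'reject'
        b, rest = (v[0], v[1:]) if v else (BLANK, '')
        q, c, move = delta[(q, b)]
        if move == 'R':
            u, v = u + c, rest
        elif u:                        # left move, not at the left end
            u, v = u[:-1], u[-1] + c + rest
        else:                          # left move at the left end: stay put
            v = c + rest
    return 'no verdict'

# Hypothetical machine accepting words over {0,1} that start with 1.
delta = {('q0', '1'): ('qa', '1', 'R'),
         ('q0', '0'): ('qr', '0', 'R'),
         ('q0', BLANK): ('qr', BLANK, 'R')}

# A machine that never halts: it walks right over blanks forever.
loop = {('p', BLANK): ('p', BLANK, 'R')}
```

The loop machine illustrates the warning above: no finite run ever reaches qacc or qrej, so the simulator can only give up.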
Recall: Turing recognizable and decidable languages
Turing recognizable or recursively enumerable (r.e.)
A language is Turing recognizable or r.e. if there is a Turing machine accepting it.
Turing decidable or recursive
A language is decidable (or recursive) if there is a Turing machine accepting it which has the additional property that it halts on all possible inputs.
Every decidable language is Turing recognizable, but is the converse true?
Variants of a Turing Machine
- Multi-tape TMs
- Non-deterministic TMs
- Multi-head TMs
- Single-sided vs double-sided infinite tape TMs
- ...
What are the relative expressive powers? Do we get something strictly more powerful than standard TMs?
Multi-tape to single-tape TM
Definition
A multi-tape TM is a TM with several tapes, each having its own head for reading and writing. The input is on the first tape and the others are blank.
Formally, δ : Q × Γ^k → Q × Γ^k × {L, R}^k, where k is the number of tapes, e.g.,
δ(qi, a1, ..., ak) = (qj, b1, ..., bk, L, R, ..., L)
Theorem
Every multi-tape TM has an "equivalent" single-tape TM.
Proof idea:
- Keep a special marker # to separate the contents of the k tapes.
- Keep a marked copy of each alphabet symbol, so the positions of the different heads can be recorded on the single tape.
- When you encounter # during the simulation, shift cells to make space.
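The single-tape layout in the proof idea can be made concrete. The sketch below is an assumption-laden illustration: encode_tapes is an invented name, '#' separates the tape contents as in the proof idea, and a trailing '*' stands in for the usual "dotted" marked symbol that records a head position.

```python
def encode_tapes(tapes, heads):
    """Lay out k tapes on one tape: tapes is a list of k strings,
    heads[i] is the head position on tape i; '*' marks scanned cells."""
    pieces = []
    for tape, h in zip(tapes, heads):
        cells = [c + ('*' if i == h else '') for i, c in enumerate(tape)]
        pieces.append(''.join(cells))
    return '#' + '#'.join(pieces) + '#'
```

One simulated step of the multi-tape machine then amounts to a sweep over this string: read off the k marked symbols, apply δ, rewrite the marked cells, and shift cells rightward whenever a head moves onto a '#'.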
Non-deterministic TMs
Non-deterministic TMs
At any point in the computation, the TM may proceed according to several possibilities. Thus the transition function has the form:
δ : Q × Γ → 2^(Q × Γ × {L,R})
Theorem
Every non-deterministic TM is equivalent to a deterministic TM.
Proof idea:
1. View the NTM N's computation as a tree.
2. Explore the tree using BFS and, for each node (i.e., configuration) encountered, check if it is accepting.
For more details: see Theorem 3.16 (Section 3.2) in Sipser's book or Theorem 8.4.4 in Hopcroft-Motwani-Ullman.
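The BFS in step 2 can be sketched generically: it only needs a successor function giving the (possibly several) configurations one step away. This is a sketch, not the book's construction; succ, is_accepting, and the depth bound (which stands in for the fact that a deterministic simulator explores ever-deeper levels) are assumptions.

```python
from collections import deque

def bfs_accepts(start, succ, is_accepting, max_depth=1000):
    """Breadth-first search of the computation tree: succ(config) yields
    the one-step successors; accept if any reachable config is accepting."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        config, depth = frontier.popleft()
        if is_accepting(config):
            return True
        if depth < max_depth:
            for nxt in succ(config):
                if nxt not in seen:       # avoid revisiting configurations
                    seen.add(nxt)
                    frontier.append(nxt and (nxt, depth + 1) and (nxt, depth + 1))
    return False
```

BFS (rather than DFS) matters here: a depth-first simulator could disappear down one infinite branch and miss an accepting configuration sitting at finite depth on another branch.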
Decidable languages
- A TM accepts language L if it has an accepting run on each word in L (and on no word outside L).
- A TM decides language L if it accepts L and halts on all inputs.
Decidable and Turing recognizable languages
- A language L is decidable (recursive) if there exists a Turing machine M which decides L (i.e., M halts on all inputs and M accepts L).
- A language L is Turing recognizable (recursively enumerable) if there exists a Turing machine M which accepts L.
Algorithms and Decidability
Algorithms ⇐⇒ Decidable (i.e., some TM decides it)
- A decision problem P is said to be decidable (i.e., have an algorithm) if the language L of all yes instances of P is decidable.
- A decision problem P is said to be semi-decidable (i.e., have a semi-algorithm) if the language L of all yes instances of P is r.e.
- A decision problem P is said to be undecidable if the language L of all yes instances of P is not decidable.
Examples of Decidable languages and problems
- (Acceptance problem for DFA) Given a DFA, does it accept a given word?
  - LADFA = { <B, w> | B is a DFA that accepts input word w }
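A decider for LADFA simply simulates B on w: one transition-table lookup per input symbol, so it always halts. The sketch below assumes one concrete encoding of <B, w> (a transition dict, a start state, a set of accepting states); the even-number-of-1s DFA is an invented example.

```python
def dfa_accepts(delta, start, accepting, w):
    """Decide whether the DFA (delta, start, accepting) accepts w.
    Always halts: exactly one lookup per symbol of w."""
    q = start
    for a in w:
        q = delta[(q, a)]
    return q in accepting

# Hypothetical DFA over {0,1} accepting words with an even number of 1s.
delta = {('e', '0'): 'e', ('e', '1'): 'o',
         ('o', '0'): 'o', ('o', '1'): 'e'}
```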
But how do we encode <B>? And, in general, a TM?
Encoding Turing machines as strings
Every Turing machine can be encoded as a string in {0,1}∗
- Just encode the description of the machine (in binary)!
- I.e., turn the 7-tuple (states, alphabet, transitions, etc.) into a long binary string.
- See Section 3.3 of Sipser's book and Section 9.1.2 of Hopcroft, Motwani, Ullman's book.
Every string in {0,1}∗ is a TM
Of course, some strings don't represent a valid encoding; we map those to a TM that does nothing.
Notation
- M → <M>, a string representation of M.
- α → Mα, a TM representation of the string α.
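One standard scheme, in the spirit of the encoding in Hopcroft-Motwani-Ullman, numbers the states, tape symbols, and directions, writes a transition δ(qi, Xj) = (qk, Xl, Dm) as 0^i 1 0^j 1 0^k 1 0^l 1 0^m, and separates consecutive transitions with '11'. The sketch below is illustrative; the exact numbering convention is an assumption.

```python
def encode_tm(transitions):
    """transitions: list of 5-tuples (i, j, k, l, m) of positive integers,
    one per transition. Returns the binary encoding described above."""
    def enc_one(t):
        return '1'.join('0' * n for n in t)   # 0^i 1 0^j 1 0^k 1 0^l 1 0^m
    return '11'.join(enc_one(t) for t in transitions)
```

Since every component is a nonempty block of 0s, '11' never occurs inside a single transition's code, so the encoding can be decoded unambiguously.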
How many Turing machines are there?
Consider the set S of all Turing Machines. Is this set countable?
Recall: countable sets
- A set is countable if there is an injective map from the set to N.
- The set {0,1}∗ is countable.
A corollary
- From the previous slide, each TM corresponds to a string in {0,1}∗.
- The number of strings is countable.
- Hence S is countable, i.e., there are countably many Turing machines.
It also follows that the set of Turing recognizable languages is countable!
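The countability of {0,1}∗ can be exhibited directly: list the strings in length-then-lexicographic order, which pairs each string with a distinct natural number (its position in the list). A small sketch, with an invented name:

```python
from itertools import count, product

def binary_strings():
    """Yield '', '0', '1', '00', '01', '10', '11', '000', ... —
    an enumeration of {0,1}* in length-then-lexicographic order."""
    for n in count(0):                       # lengths 0, 1, 2, ...
        for bits in product('01', repeat=n):  # lexicographic within a length
            yield ''.join(bits)
```

Composing this enumeration with the TM encoding of the previous slide gives the injective map S → N behind the corollary.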
Examples of Decidable languages and problems
- (Acceptance problem for DFA) Given a DFA, does it accept a given word?
  - LADFA = { <B, w> | B is a DFA that accepts input word w }
- (Emptiness problem for DFA) Given a DFA, does it accept any word?
  - L∅DFA = { <B> | B is a DFA, L(B) = ∅ }
- (Equivalence problem for DFA) Given two DFAs, do they accept the same language?
  - LEQDFA = { <A, B> | A, B are DFAs, L(A) = L(B) }
- What about NFAs and regular expressions?
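The emptiness problem, for instance, reduces to graph reachability: L(B) is nonempty iff some accepting state is reachable from the start state, which a finite search settles. A sketch under an assumed encoding (transition dict keyed by (state, symbol)); the two-state DFA is an invented example.

```python
from collections import deque

def dfa_language_empty(delta, start, accepting):
    """Decide L(B) = ∅ by BFS: empty iff no accepting state is reachable."""
    reach, frontier = {start}, deque([start])
    while frontier:
        q = frontier.popleft()
        for (p, _sym), r in delta.items():
            if p == q and r not in reach:
                reach.add(r)
                frontier.append(r)
    return not (reach & accepting)

# Hypothetical DFA over {0,1}: state 'e'/'o' tracks parity of 1s seen.
delta = {('e', '0'): 'e', ('e', '1'): 'o',
         ('o', '0'): 'o', ('o', '1'): 'e'}
```

Equivalence then follows the same pattern: build the product automaton for the symmetric difference L(A) Δ L(B) and test it for emptiness.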
Relationship among languages
Regular ⊊ Decidable ⊆ Turing recognizable ⊆ All languages,
where whether the last two inclusions are strict is still to be answered.
Correspondingly, for the machine models: DFA/NFA < Algorithms (halting TMs) ≤ Semi-algorithms (TMs).