A Python library for simulating finite automata, pushdown automata, and Turing machines

Overview

Automata

Copyright 2016-2021 Caleb Evans
Released under the MIT license


Automata is a Python 3 library which implements the structures and algorithms for finite automata, pushdown automata, and Turing machines. The library requires Python 3.6 or newer.

Huge thanks to @YtvwlD, @dengl11, @Tagl, and @lewiuberg for their invaluable code contributions to this project!

Migrating to v5

If you wish to migrate to Automata v5 from an older version, please follow the migration guide.

Installing

You can install the latest version of Automata via pip:

pip install automata-lib

API

class Automaton(metaclass=ABCMeta)

The Automaton class is an abstract base class from which all automata (including Turing machines) inherit. As such, it cannot be instantiated on its own; you must use one of the defined subclasses instead (or you may create your own subclass if you're feeling adventurous). The Automaton class can be found under automata/base/automaton.py.

If you wish to subclass Automaton, you can import it like so:

from automata.base.automaton import Automaton

The following methods are common to all Automaton subtypes:

Automaton.read_input(self, input_str)

Reads an input string into the automaton, returning the automaton's final configuration (according to its subtype). If the input is rejected, the method raises a RejectionException.

Automaton.read_input_stepwise(self, input_str)

Reads an input string like read_input(), except instead of returning the final configuration, the method returns a generator. The values yielded by this generator depend on the automaton's subtype.

If the string is rejected by the automaton, the method still raises a RejectionException.
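The generator pattern behind read_input_stepwise can be sketched in plain Python. This is a hypothetical, simplified model of a dict-based DFA reader, not the library's actual implementation; ValueError stands in for RejectionException:

```python
def read_input_stepwise(transitions, initial_state, final_states, input_str):
    # Yield the start state, then each state reached per input symbol
    state = initial_state
    yield state
    for symbol in input_str:
        state = transitions[state][symbol]
        yield state
    if state not in final_states:
        raise ValueError('input rejected')  # stands in for RejectionException

# Toy DFA over {'0', '1'} accepting strings that end in '1'
transitions = {'q0': {'0': 'q0', '1': 'q1'},
               'q1': {'0': 'q0', '1': 'q1'}}
steps = list(read_input_stepwise(transitions, 'q0', {'q1'}, '01'))
```

Because it is a generator, the states are produced lazily as the input is consumed, and the rejection check only runs once the whole string has been read.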

Automaton.accepts_input(self, input_str)

Reads an input string like read_input(), except it returns a boolean instead of returning the automaton's final configuration (or raising an exception). That is, the method always returns True if the input is accepted, and it always returns False if the input is rejected.

Automaton.validate(self)

Checks whether the automaton is actually a valid automaton (according to its subtype). It returns True if the automaton is valid; otherwise, it raises the appropriate exception (e.g. if a state transition is missing for a particular symbol).

This method is automatically called when the automaton is initialized, so it's only really useful if an automaton object is modified after instantiation.

Automaton.copy(self)

Returns a deep copy of the automaton according to its subtype.

class FA(Automaton, metaclass=ABCMeta)

The FA class is an abstract base class from which all finite automata inherit. The FA class can be found under automata/fa/fa.py.

If you wish to subclass FA, you can import it like so:

from automata.fa.fa import FA

class DFA(FA)

The DFA class is a subclass of FA and represents a deterministic finite automaton. It can be found under automata/fa/dfa.py.

Every DFA has the following (required) properties:

  1. states: a set of the DFA's valid states, each of which must be represented as a string

  2. input_symbols: a set of the DFA's valid input symbols, each of which must also be represented as a string

  3. transitions: a dict consisting of the transitions for each state. Each key is a state name and each value is a dict which maps a symbol (the key) to a state (the value).

  4. initial_state: the name of the initial state for this DFA

  5. final_states: a set of final states for this DFA

  6. allow_partial: by default, each DFA state must have a transition for every input symbol; if allow_partial is True, this requirement is lifted, and a DFA state may have transitions for only some of the input symbols

from automata.fa.dfa import DFA
# DFA which matches all binary strings ending in an odd number of '1's
dfa = DFA(
    states={'q0', 'q1', 'q2'},
    input_symbols={'0', '1'},
    transitions={
        'q0': {'0': 'q0', '1': 'q1'},
        'q1': {'0': 'q0', '1': 'q2'},
        'q2': {'0': 'q2', '1': 'q1'}
    },
    initial_state='q0',
    final_states={'q1'}
)

DFA.read_input(self, input_str)

Returns the final state the DFA stopped on, if the input is accepted.

dfa.read_input('01')  # returns 'q1'
dfa.read_input('011')  # raises RejectionException

DFA.read_input_stepwise(self, input_str)

Yields each state reached as the DFA reads characters from the input string, if the input is accepted.

dfa.read_input_stepwise('0111')
# yields:
# 'q0'
# 'q0'
# 'q1'
# 'q2'
# 'q1'

DFA.accepts_input(self, input_str)

if dfa.accepts_input(my_input_str):
    print('accepted')
else:
    print('rejected')

DFA.validate(self)

dfa.validate()  # returns True

DFA.copy(self)

dfa.copy()  # returns deep copy of dfa

DFA.minify(self, retain_names=False)

Creates a minimal DFA which accepts the same inputs as the old one. Unreachable states are removed and equivalent states are merged. States are renamed by default.

minimal_dfa = dfa.minify()
minimal_dfa_with_old_names = dfa.minify(retain_names=True)
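Under the hood, minimization merges states that no input string can distinguish. The partition-refinement idea (Moore's algorithm) can be sketched on a plain transition dict; the helper name and dict-based representation here are illustrative, not the library's internals:

```python
def minify(states, symbols, transitions, final_states):
    # Start from the accepting / non-accepting split, then repeatedly
    # separate states whose transitions lead to different blocks
    partition = {s: (s in final_states) for s in states}
    while True:
        signature = {
            s: (partition[s],
                tuple(partition[transitions[s][a]] for a in sorted(symbols)))
            for s in states
        }
        blocks = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        refined = {s: blocks[signature[s]] for s in states}
        if len(set(refined.values())) == len(set(partition.values())):
            return refined  # maps each state to its equivalence class
        partition = refined

# The example DFA above: 'q0' and 'q2' turn out to be equivalent
transitions = {'q0': {'0': 'q0', '1': 'q1'},
               'q1': {'0': 'q0', '1': 'q2'},
               'q2': {'0': 'q2', '1': 'q1'}}
classes = minify({'q0', 'q1', 'q2'}, {'0', '1'}, transitions, {'q1'})
```

Since refinement only ever splits blocks, the loop terminates as soon as a pass produces no new blocks.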

DFA.complement(self)

Creates a DFA which accepts an input if and only if the old one does not.

complement_dfa = ~dfa

DFA.union(self, other, minify=True)

Given two DFAs which accept the languages A and B respectively, creates a DFA which accepts the union of A and B. Minifies by default.

minimal_union_dfa = dfa | other_dfa
union_dfa = dfa.union(other_dfa, minify=False)

DFA.intersection(self, other, minify=True)

Given two DFAs which accept the languages A and B respectively, creates a DFA which accepts the intersection of A and B. Minifies by default.

minimal_intersection_dfa = dfa & other_dfa
intersection_dfa = dfa.intersection(other_dfa, minify=False)

DFA.difference(self, other, minify=True)

Given two DFAs which accept the languages A and B respectively, creates a DFA which accepts the set difference of A and B, often denoted A \ B or A - B. Minifies by default.

minimal_difference_dfa = dfa - other_dfa
difference_dfa = dfa.difference(other_dfa, minify=False)

DFA.symmetric_difference(self, other, minify=True)

Given two DFAs which accept the languages A and B respectively, creates a DFA which accepts the symmetric difference of A and B. Minifies by default.

minimal_symmetric_difference_dfa = dfa ^ other_dfa
symmetric_difference_dfa = dfa.symmetric_difference(other_dfa, minify=False)
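All four boolean operations (union, intersection, difference, symmetric difference) can be understood as one product construction that runs both DFAs in lockstep and differs only in which pair states are accepting. Here is a self-contained sketch, assuming complete DFAs over the same alphabet; it is illustrative, not the library's code:

```python
def product_dfa(d1, d2, accept):
    # Each DFA is a (transitions, initial_state, final_states) triple.
    # accept(a, b) decides acceptance of a pair state: use `a or b` for
    # union, `a and b` for intersection, `a and not b` for difference,
    # and `a != b` for symmetric difference.
    (t1, i1, f1), (t2, i2, f2) = d1, d2
    transitions, finals = {}, set()
    stack, seen = [(i1, i2)], {(i1, i2)}
    while stack:
        s1, s2 = stack.pop()
        row = {}
        for sym in t1[s1]:
            nxt = (t1[s1][sym], t2[s2][sym])
            row[sym] = nxt
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
        transitions[(s1, s2)] = row
        if accept(s1 in f1, s2 in f2):
            finals.add((s1, s2))
    return transitions, (i1, i2), finals

def run(dfa, s):
    transitions, state, finals = dfa
    for ch in s:
        state = transitions[state][ch]
    return state in finals

ends_in_1 = ({'q0': {'0': 'q0', '1': 'q1'},
              'q1': {'0': 'q0', '1': 'q1'}}, 'q0', {'q1'})
even_len = ({'e': {'0': 'o', '1': 'o'},
             'o': {'0': 'e', '1': 'e'}}, 'e', {'e'})
both = product_dfa(ends_in_1, even_len, lambda a, b: a and b)
either = product_dfa(ends_in_1, even_len, lambda a, b: a or b)
```

Only the reachable pair states are constructed, which is also why the library can afford to minify the result afterwards.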

DFA.issubset(self, other_dfa)

Given two DFAs which accept the languages A and B respectively, returns True if A is a subset of B, False otherwise.

dfa <= other_dfa
dfa.issubset(other_dfa)

DFA.issuperset(self, other_dfa)

Given two DFAs which accept the languages A and B respectively, returns True if A is a superset of B, False otherwise.

dfa >= other_dfa
dfa.issuperset(other_dfa)

DFA.isdisjoint(self, other_dfa)

Given two DFAs which accept the languages A and B respectively, returns True if A and B are disjoint, False otherwise.

dfa.isdisjoint(other_dfa)

DFA.isempty(self)

Returns True if the DFA does not accept any inputs, False otherwise.

dfa.isempty()

DFA.isfinite(self)

Returns True if the DFA accepts a finite language, False otherwise.

dfa.isfinite()

DFA.from_nfa(cls, nfa)

Creates a DFA that is equivalent to the given NFA.

from automata.fa.dfa import DFA
from automata.fa.nfa import NFA
dfa = DFA.from_nfa(nfa)  # returns an equivalent DFA
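The conversion uses the classic subset construction: each DFA state is the set of NFA states the machine could currently be in. Here is a minimal sketch on plain dicts (epsilon transitions are omitted for brevity, and the nfa_to_dfa helper name is hypothetical, not the library's internals):

```python
def nfa_to_dfa(nfa_transitions, initial, finals):
    # Each DFA state is a frozenset of NFA states ("subset construction")
    start = frozenset({initial})
    dfa_trans, dfa_finals = {}, set()
    stack, seen = [start], {start}
    while stack:
        current = stack.pop()
        row = {}
        symbols = {sym for q in current for sym in nfa_transitions.get(q, {})}
        for sym in symbols:
            nxt = frozenset(q2 for q in current
                            for q2 in nfa_transitions.get(q, {}).get(sym, set()))
            row[sym] = nxt
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
        dfa_trans[current] = row
        # A subset is accepting if it contains any NFA final state
        if current & finals:
            dfa_finals.add(current)
    return dfa_trans, start, dfa_finals

# NFA accepting one or more 'a's followed by a single 'b'
nfa_transitions = {'q0': {'a': {'q0', 'q1'}}, 'q1': {'b': {'q2'}}}
dfa_trans, start, dfa_finals = nfa_to_dfa(nfa_transitions, 'q0', {'q2'})
```

Only subsets reachable from the start state are generated, so the result is usually far smaller than the worst-case 2^n states.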

DFA.show_diagram(self, path=None)

dfa.show_diagram(path='./dfa1.png')

class NFA(FA)

The NFA class is a subclass of FA and represents a nondeterministic finite automaton. It can be found under automata/fa/nfa.py.

Every NFA has the same five DFA properties: states, input_symbols, transitions, initial_state, and final_states. However, the structure of the transitions object has been modified slightly to accommodate the fact that a single state can have more than one transition for the same symbol. Therefore, instead of mapping a symbol to one end state in each sub-dict, each symbol is mapped to a set of end states.

from automata.fa.nfa import NFA
# NFA which matches strings beginning with 'a', ending with 'a', and containing
# no consecutive 'b's
nfa = NFA(
    states={'q0', 'q1', 'q2'},
    input_symbols={'a', 'b'},
    transitions={
        'q0': {'a': {'q1'}},
        # Use '' as the key name for empty string (lambda/epsilon) transitions
        'q1': {'a': {'q1'}, '': {'q2'}},
        'q2': {'b': {'q0'}}
    },
    initial_state='q0',
    final_states={'q1'}
)
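Epsilon ('') transitions are resolved through lambda closures: the set of states reachable from a given state without consuming any input. A small BFS sketch over the transition dict above (illustrative only; the library computes and caches these closures internally):

```python
from collections import deque

def lambda_closure(transitions, state):
    # Follow '' (epsilon) transitions transitively, starting from `state`
    closure = {state}
    queue = deque([state])
    while queue:
        q = queue.popleft()
        for nxt in transitions.get(q, {}).get('', set()):
            if nxt not in closure:
                closure.add(nxt)
                queue.append(nxt)
    return closure

# Transition dict from the NFA example above
transitions = {
    'q0': {'a': {'q1'}},
    'q1': {'a': {'q1'}, '': {'q2'}},
    'q2': {'b': {'q0'}},
}
```

This is why reading 'a' from q0 lands the NFA in {'q1', 'q2'}: the move to q1 is expanded by q1's lambda closure.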

NFA.read_input(self, input_str)

Returns a set of final states the NFA stopped on, if the input is accepted.

nfa.read_input('aba')  # returns {'q1', 'q2'}
nfa.read_input('abba')  # raises RejectionException

NFA.read_input_stepwise(self, input_str)

Yields each set of states reached as the NFA reads characters from the input string, if the input is accepted.

nfa.read_input_stepwise('aba')
# yields:
# {'q0'}
# {'q1', 'q2'}
# {'q0'}
# {'q1', 'q2'}

NFA.accepts_input(self, input_str)

if nfa.accepts_input(my_input_str):
    print('accepted')
else:
    print('rejected')

NFA.validate(self)

nfa.validate()  # returns True

NFA.copy(self)

nfa.copy()  # returns deep copy of nfa

NFA.reverse(self)

nfa.reverse()
reversed(nfa)

NFA.concatenate(self, other)

nfa1 + nfa2
nfa1.concatenate(nfa2)

NFA.kleene_star(self)

nfa1.kleene_star()

NFA.from_dfa(cls, dfa)

Creates an NFA that is equivalent to the given DFA.

from automata.fa.nfa import NFA
from automata.fa.dfa import DFA
nfa = NFA.from_dfa(dfa)  # returns an equivalent NFA

class PDA(Automaton, metaclass=ABCMeta)

The PDA class is an abstract base class from which all pushdown automata inherit. It can be found under automata/pda/pda.py.

class DPDA(PDA)

The DPDA class is a subclass of PDA and represents a deterministic pushdown automaton. It can be found under automata/pda/dpda.py.

Every DPDA has the following (required) properties:

  1. states: a set of the DPDA's valid states, each of which must be represented as a string

  2. input_symbols: a set of the DPDA's valid input symbols, each of which must also be represented as a string

  3. stack_symbols: a set of the DPDA's valid stack symbols

  4. transitions: a dict consisting of the transitions for each state; see the example below for the exact syntax

  5. initial_state: the name of the initial state for this DPDA

  6. initial_stack_symbol: the name of the initial symbol on the stack for this DPDA

  7. final_states: a set of final states for this DPDA

  8. acceptance_mode: a string defining whether this DPDA accepts by 'final_state', 'empty_stack', or 'both'; the default is 'both'

from automata.pda.dpda import DPDA
# DPDA which matches zero or more 'a's, followed by the same
# number of 'b's (accepting by final state)
dpda = DPDA(
    states={'q0', 'q1', 'q2', 'q3'},
    input_symbols={'a', 'b'},
    stack_symbols={'0', '1'},
    transitions={
        'q0': {
            'a': {'0': ('q1', ('1', '0'))}  # transition pushes '1' to stack
        },
        'q1': {
            'a': {'1': ('q1', ('1', '1'))},
            'b': {'1': ('q2', '')}  # transition pops from stack
        },
        'q2': {
            'b': {'1': ('q2', '')},
            '': {'0': ('q3', ('0',))}  # transition does not change stack
        }
    },
    initial_state='q0',
    initial_stack_symbol='0',
    final_states={'q3'},
    acceptance_mode='final_state'
)
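A single DPDA move can be pictured as: look up (state, input symbol, top of stack), switch to the next state, and replace the top of the stack with the transition's replacement, where '' pops the top and a tuple's first element becomes the new top. A hypothetical sketch, with the stack modeled as a tuple (bottom first, top last):

```python
def apply_transition(state, stack, transitions, symbol):
    # Pop the top symbol, then push the replacement so that the first
    # element of the replacement ends up on top of the stack
    top = stack[-1]
    next_state, replacement = transitions[state][symbol][top]
    return next_state, stack[:-1] + tuple(reversed(replacement))

# A subset of the DPDA transitions from the example above
dpda_transitions = {
    'q0': {'a': {'0': ('q1', ('1', '0'))}},   # pushes '1'
    'q1': {'b': {'1': ('q2', '')}},           # pops from stack
}
state, stack = apply_transition('q0', ('0',), dpda_transitions, 'a')
```

Applying the 'b' transition afterwards pops the '1' back off, returning the stack to ('0',).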

DPDA.read_input(self, input_str)

Returns a PDAConfiguration object representing the DPDA's final configuration. This is basically a tuple containing the final state the DPDA stopped on, the remaining input (an empty string), and a PDAStack object representing the DPDA's stack, if the input is accepted.

dpda.read_input('ab')  # returns PDAConfiguration('q3', '', PDAStack(('0',)))
dpda.read_input('aab')  # raises RejectionException

DPDA.read_input_stepwise(self, input_str)

Yields PDAConfiguration objects. These are basically tuples containing the current state, the remaining input, and the current stack as a PDAStack object, if the input is accepted.

dpda.read_input_stepwise('ab')
# yields:
# PDAConfiguration('q0', 'ab', PDAStack(('0',)))
# PDAConfiguration('q1', 'b', PDAStack(('0', '1')))
# PDAConfiguration('q3', '', PDAStack(('0',)))

DPDA.accepts_input(self, input_str)

if dpda.accepts_input(my_input_str):
    print('accepted')
else:
    print('rejected')

DPDA.validate(self)

dpda.validate()  # returns True

DPDA.copy(self)

dpda.copy()  # returns deep copy of dpda

class NPDA(PDA)

The NPDA class is a subclass of PDA and represents a nondeterministic pushdown automaton. It can be found under automata/pda/npda.py.

Every NPDA has the following (required) properties:

  1. states: a set of the NPDA's valid states, each of which must be represented as a string

  2. input_symbols: a set of the NPDA's valid input symbols, each of which must also be represented as a string

  3. stack_symbols: a set of the NPDA's valid stack symbols

  4. transitions: a dict consisting of the transitions for each state; see the example below for the exact syntax

  5. initial_state: the name of the initial state for this NPDA

  6. initial_stack_symbol: the name of the initial symbol on the stack for this NPDA

  7. final_states: a set of final states for this NPDA

  8. acceptance_mode: a string defining whether this NPDA accepts by 'final_state', 'empty_stack', or 'both'; the default is 'both'

from automata.pda.npda import NPDA
# NPDA which matches palindromes consisting of 'a's and 'b's
# (accepting by final state)
# q0 reads the first half of the word, q1 the other half, q2 accepts.
# But we have to guess when to switch.
npda = NPDA(
    states={'q0', 'q1', 'q2'},
    input_symbols={'a', 'b'},
    stack_symbols={'A', 'B', '#'},
    transitions={
        'q0': {
            '': {
                '#': {('q2', '#')},
            },
            'a': {
                '#': {('q0', ('A', '#'))},
                'A': {
                    ('q0', ('A', 'A')),
                    ('q1', ''),
                },
                'B': {('q0', ('A', 'B'))},
            },
            'b': {
                '#': {('q0', ('B', '#'))},
                'A': {('q0', ('B', 'A'))},
                'B': {
                    ('q0', ('B', 'B')),
                    ('q1', ''),
                },
            },
        },
        'q1': {
            '': {'#': {('q2', '#')}},
            'a': {'A': {('q1', '')}},
            'b': {'B': {('q1', '')}},
        },
    },
    initial_state='q0',
    initial_stack_symbol='#',
    final_states={'q2'},
    acceptance_mode='final_state'
)

NPDA.read_input(self, input_str)

Returns a set of PDAConfigurations representing all of the NPDA's configurations. Each of these is basically a tuple containing the final state the NPDA stopped on, the remaining input (an empty string) as well as a PDAStack object representing the NPDA's stack (if the input is accepted).

npda.read_input('aaaa')  # returns {PDAConfiguration('q2', '', PDAStack(('#',)))}
npda.read_input('ab')  # raises RejectionException

NPDA.read_input_stepwise(self, input_str)

Yields sets of PDAConfiguration objects. Each of these is basically a tuple containing the current state, the remaining input and the current stack as a PDAStack object, if the input is accepted.

npda.read_input_stepwise('aa')
# yields:
# {PDAConfiguration('q0', 'aa', PDAStack(('#',)))}
# {PDAConfiguration('q0', 'a', PDAStack(('#', 'A'))), PDAConfiguration('q2', 'aa', PDAStack(('#',)))}
# {PDAConfiguration('q0', '', PDAStack(('#', 'A', 'A'))), PDAConfiguration('q1', '', PDAStack(('#',)))}
# {PDAConfiguration('q2', '', PDAStack(('#',)))}

NPDA.accepts_input(self, input_str)

if npda.accepts_input(my_input_str):
    print('accepted')
else:
    print('rejected')

NPDA.validate(self)

npda.validate()  # returns True

NPDA.copy(self)

npda.copy()  # returns deep copy of npda

class TM(Automaton, metaclass=ABCMeta)

The TM class is an abstract base class from which all Turing machines inherit. It can be found under automata/tm/tm.py.

class DTM(TM)

The DTM class is a subclass of TM and represents a deterministic Turing machine. It can be found under automata/tm/dtm.py.

Every DTM has the following (required) properties:

  1. states: a set of the DTM's valid states, each of which must be represented as a string

  2. input_symbols: a set of the DTM's valid input symbols represented as strings

  3. tape_symbols: a set of the DTM's valid tape symbols represented as strings

  4. transitions: a dict consisting of the transitions for each state; each key is a state name and each value is a dict which maps a symbol (the key) to a state (the value)

  5. initial_state: the name of the initial state for this DTM

  6. blank_symbol: a symbol from tape_symbols to be used as the blank symbol for this DTM

  7. final_states: a set of final states for this DTM

from automata.tm.dtm import DTM
# DTM which matches all strings consisting of some number of '0's
# followed by the same number of '1's
dtm = DTM(
    states={'q0', 'q1', 'q2', 'q3', 'q4'},
    input_symbols={'0', '1'},
    tape_symbols={'0', '1', 'x', 'y', '.'},
    transitions={
        'q0': {
            '0': ('q1', 'x', 'R'),
            'y': ('q3', 'y', 'R')
        },
        'q1': {
            '0': ('q1', '0', 'R'),
            '1': ('q2', 'y', 'L'),
            'y': ('q1', 'y', 'R')
        },
        'q2': {
            '0': ('q2', '0', 'L'),
            'x': ('q0', 'x', 'R'),
            'y': ('q2', 'y', 'L')
        },
        'q3': {
            'y': ('q3', 'y', 'R'),
            '.': ('q4', '.', 'R')
        }
    },
    initial_state='q0',
    blank_symbol='.',
    final_states={'q4'}
)

The direction N (for no movement) is also supported.

DTM.read_input(self, input_str)

Returns a TMConfiguration. This is basically a tuple containing the final state the machine stopped on, as well as a TMTape object representing the DTM's internal tape (if the input is accepted).

dtm.read_input('01')  # returns TMConfiguration('q4', TMTape('xy..', 3))

Calling config.print() will produce a more readable output:

dtm.read_input('01').print()
# q4: xy..
#        ^
dtm.read_input('011')  # raises RejectionException

DTM.read_input_stepwise(self, input_str)

Yields TMConfiguration objects. These are basically tuples containing the current state and the current tape as a TMTape object.

dtm.read_input_stepwise('01')
# yields:
# TMConfiguration('q0', TMTape('01', 0))
# TMConfiguration('q1', TMTape('x1', 1))
# TMConfiguration('q2', TMTape('xy', 0))
# TMConfiguration('q0', TMTape('xy', 1))
# TMConfiguration('q3', TMTape('xy.', 2))
# TMConfiguration('q4', TMTape('xy..', 3))
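Each step of the machine follows the (new state, write symbol, direction) triples shown in the transitions dict above. Here is a simplified, hypothetical stepper over a list-based tape; it grows the tape to the right on demand and omits left-edge handling for brevity:

```python
def tm_step(state, tape, head, transitions, blank='.'):
    # Look up the move for the symbol under the head, write, then move
    new_state, write, direction = transitions[state][tape[head]]
    tape = list(tape)
    tape[head] = write
    head += {'L': -1, 'R': 1, 'N': 0}[direction]
    if head == len(tape):  # extend the tape with a blank on demand
        tape.append(blank)
    return new_state, tape, head

# The DTM transitions from the example above, run to completion on '01'
transitions = {
    'q0': {'0': ('q1', 'x', 'R'), 'y': ('q3', 'y', 'R')},
    'q1': {'0': ('q1', '0', 'R'), '1': ('q2', 'y', 'L'), 'y': ('q1', 'y', 'R')},
    'q2': {'0': ('q2', '0', 'L'), 'x': ('q0', 'x', 'R'), 'y': ('q2', 'y', 'L')},
    'q3': {'y': ('q3', 'y', 'R'), '.': ('q4', '.', 'R')},
}
state, tape, head = 'q0', list('01'), 0
while state != 'q4':
    state, tape, head = tm_step(state, tape, head, transitions)
```

Running this loop reproduces the final configuration shown above: state q4 with tape 'xy..' and the head at position 3.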

DTM.accepts_input(self, input_str)

if dtm.accepts_input(my_input_str):
    print('accepted')
else:
    print('rejected')

DTM.validate(self)

dtm.validate()  # returns True

DTM.copy(self)

dtm.copy()  # returns deep copy of dtm

class NTM(TM)

The NTM class is a subclass of TM and represents a nondeterministic Turing machine. It can be found under automata/tm/ntm.py.

Every NTM has the following (required) properties:

  1. states: a set of the NTM's valid states, each of which must be represented as a string

  2. input_symbols: a set of the NTM's valid input symbols represented as strings

  3. tape_symbols: a set of the NTM's valid tape symbols represented as strings

  4. transitions: a dict consisting of the transitions for each state; each key is a state name and each value is a dict which maps a symbol (the key) to a set of states (the values)

  5. initial_state: the name of the initial state for this NTM

  6. blank_symbol: a symbol from tape_symbols to be used as the blank symbol for this NTM

  7. final_states: a set of final states for this NTM

from automata.tm.ntm import NTM
# NTM which matches all strings consisting of some number of '0's
# followed by the same number of '1's
# Note that the nondeterminism is not really used here.
ntm = NTM(
    states={'q0', 'q1', 'q2', 'q3', 'q4'},
    input_symbols={'0', '1'},
    tape_symbols={'0', '1', 'x', 'y', '.'},
    transitions={
        'q0': {
            '0': {('q1', 'x', 'R')},
            'y': {('q3', 'y', 'R')},
        },
        'q1': {
            '0': {('q1', '0', 'R')},
            '1': {('q2', 'y', 'L')},
            'y': {('q1', 'y', 'R')},
        },
        'q2': {
            '0': {('q2', '0', 'L')},
            'x': {('q0', 'x', 'R')},
            'y': {('q2', 'y', 'L')},
        },
        'q3': {
            'y': {('q3', 'y', 'R')},
            '.': {('q4', '.', 'R')},
        }
    },
    initial_state='q0',
    blank_symbol='.',
    final_states={'q4'}
)

The direction 'N' (for no movement) is also supported.

NTM.read_input(self, input_str)

Returns a set of TMConfigurations. These are basically tuples containing the final state the machine stopped on, as well as a TMTape object representing the NTM's internal tape (if the input is accepted).

ntm.read_input('01')  # returns {TMConfiguration('q4', TMTape('xy..', 3))}

Calling config.print() will produce a more readable output:

ntm.read_input('01').pop().print()
# q4: xy..
#        ^
ntm.read_input('011')  # raises RejectionException

NTM.read_input_stepwise(self, input_str)

Yields sets of TMConfiguration objects. Those are basically tuples containing the current state and the current tape as a TMTape object.

ntm.read_input_stepwise('01')
# yields:
# {TMConfiguration('q0', TMTape('01', 0))}
# {TMConfiguration('q1', TMTape('x1', 1))}
# {TMConfiguration('q2', TMTape('xy', 0))}
# {TMConfiguration('q0', TMTape('xy', 1))}
# {TMConfiguration('q3', TMTape('xy.', 2))}
# {TMConfiguration('q4', TMTape('xy..', 3))}

NTM.accepts_input(self, input_str)

if ntm.accepts_input(my_input_str):
    print('accepted')
else:
    print('rejected')

NTM.validate(self)

ntm.validate()  # returns True

NTM.copy(self)

ntm.copy()  # returns deep copy of ntm

class MNTM(TM)

The MNTM class is a subclass of TM and represents a multitape nondeterministic Turing machine. It can be found under automata/tm/mntm.py.

Every MNTM has the following (required) properties:

  1. states: a set of the MNTM's valid states, each of which must be represented as a string

  2. input_symbols: a set of the MNTM's valid input symbols represented as strings

  3. tape_symbols: a set of the MNTM's valid tape symbols represented as strings

  4. n_tapes: an int which dictates the number of tapes of this MNTM

  5. transitions: a dict consisting of the transitions for each state; each key is a state name and each value is a dict which maps a symbol (the key) to a set of states (the values)

  6. initial_state: the name of the initial state for this MNTM

  7. blank_symbol: a symbol from tape_symbols to be used as the blank symbol for this MNTM

  8. final_states: a set of final states for this MNTM

from automata.tm.mntm import MNTM
# MNTM which accepts all strings in {0, 1}* and writes all
# 1's from the first tape (input) to the second tape.
mntm = MNTM(
    states={'q0', 'q1'},
    input_symbols={'0', '1'},
    tape_symbols={'0', '1', '#'},
    n_tapes=2,
    transitions={
        'q0': {
            ('1', '#'): [('q0', (('1', 'R'), ('1', 'R')))],
            ('0', '#'): [('q0', (('0', 'R'), ('#', 'N')))],
            ('#', '#'): [('q1', (('#', 'N'), ('#', 'N')))],
        }
    },
    initial_state='q0',
    blank_symbol='#',
    final_states={'q1'},
)

The direction 'N' (for no movement) is also supported.

MNTM.read_input(self, input_str)

Returns a set of MTMConfigurations. These are basically tuples containing the final state the machine stopped on, as well as a list of TMTape objects representing the MNTM's internal tape (if the input is accepted).

mntm.read_input('01')  # returns {MTMConfiguration('q1', [TMTape('01#', 2), TMTape('1#', 1)])}

Calling config.print() will produce a more readable output:

mntm.read_input('01').pop().print()
# q1:
#> Tape 1: 01#
#            ^
#> Tape 2: 1#
#           ^
mntm.read_input('2')  # raises RejectionException

MNTM.read_input_stepwise(self, input_str)

Yields sets of MTMConfiguration objects. Those are basically tuples containing the current state and the list of TMTape objects.

mntm.read_input_stepwise('0111')
# yields:
# {MTMConfiguration('q0', (TMTape('0111', 0), TMTape('#', 0)))}
# {MTMConfiguration('q0', (TMTape('0111', 1), TMTape('#', 0)))}
# {MTMConfiguration('q0', (TMTape('0111', 2), TMTape('1#', 1)))}
# {MTMConfiguration('q0', (TMTape('0111', 3), TMTape('11#', 2)))}
# {MTMConfiguration('q0', (TMTape('0111#', 4), TMTape('111#', 3)))}
# {MTMConfiguration('q1', (TMTape('0111#', 4), TMTape('111#', 3)))}

MNTM.read_input_as_ntm(self, input_str)

Simulates the MNTM as a single-tape NTM, using an extended tape that contains all of the MNTM's tapes separated by a tape_separator_symbol ('_'). Each portion of the extended tape between separators acts as a 'virtual tape' with its own 'virtual head': the symbol under a virtual head is marked with a special suffix ('^') to denote where that tape's actual head would be if the machine were run as an MNTM rather than an NTM. This is the classic algorithm for performing a single-tape simulation of a multitape Turing machine; for more information, see Sipser's Introduction to the Theory of Computation (3rd edition), Section 3.2.

mntm.read_input_as_ntm('0111')
# yields:
# {TMConfiguration('q0', TMTape('0^111_#^_', 0))}
# {TMConfiguration('q0', TMTape('01^11_#^_', 8))}
# {TMConfiguration('q0', TMTape('011^1_1#^_', 9))}
# {TMConfiguration('q0', TMTape('0111^_11#^_', 10))}
# {TMConfiguration('q0', TMTape('0111#^_111#^_', 12))}
# {TMConfiguration('q1', TMTape('0111#^_111#^_', 12))}
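The encoding itself is straightforward to sketch: concatenate the tapes with the separator symbol and append '^' to the symbol under each virtual head. This hypothetical helper (not the library's code) reproduces the first extended tape shown above:

```python
def extend_tapes(tapes, heads, separator='_'):
    # Mark the symbol under each virtual head with a trailing '^',
    # then join the virtual tapes with the separator symbol
    pieces = []
    for tape, head in zip(tapes, heads):
        pieces.append(''.join(
            sym + ('^' if i == head else '') for i, sym in enumerate(tape)))
    return separator.join(pieces) + separator

# Tape 1 is '0111', tape 2 is '#', both heads at cell 0
extended = extend_tapes(['0111', '#'], [0, 0])
```

The simulating NTM then sweeps this single tape, updating the '^' markers to mimic the head movements of the original machine.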

MNTM.accepts_input(self, input_str)

if mntm.accepts_input(my_input_str):
    print('accepted')
else:
    print('rejected')

MNTM.validate(self)

mntm.validate()  # returns True

MNTM.copy(self)

mntm.copy()  # returns deep copy of mntm

Base exception classes

The library also includes a number of exception classes to ensure that errors never pass silently (unless explicitly silenced). See automata/base/exceptions.py for these class definitions.

To reference these exceptions (so as to catch them in a try..except block or whatnot), simply import automata.base.exceptions however you'd like:

import automata.base.exceptions as exceptions

class AutomatonException

A base class from which all other automata exceptions inherit (including finite automata and Turing machines).

class InvalidStateError

Raised if a specified state does not exist within the automaton's states set.

class InvalidSymbolError

Raised if a specified symbol does not exist within the automaton's symbols set.

class MissingStateError

Raised if a specified transition definition is missing a defined start state. This error can also be raised if the initial state does not have any transitions defined.

class MissingSymbolError

Raised if a given symbol is missing where it would otherwise be required for this type of automaton (e.g. the automaton is missing a transition for one of the listed symbols).

class InitialStateError

Raised if the initial state fails to meet some required condition for this type of automaton (e.g. if the initial state is also a final state, which is prohibited for Turing machines).

class FinalStateError

Raised if a final state fails to meet some required condition for this type of automaton (e.g. the final state has transitions to other states, which is prohibited for Turing machines).

class RejectionException

Raised if the automaton did not accept the input string after validating (e.g. the automaton stopped on a non-final state after validating input).

Turing machine exception classes

The automata.tm package also includes a module for exceptions specific to Turing machines. You can reference these exception classes like so:

import automata.tm.exceptions as tm_exceptions

class TMException

A base class from which all other Turing machine exceptions inherit.

class InvalidDirectionError

Raised if a direction specified in this machine's transition map is not a valid direction (valid directions include 'L', 'R', and 'N').

class InconsistentTapesException(TMException)

Raised if the number of tapes defined for the MNTM is not consistent with its transitions.

class MalformedExtendedTape(TMException)

Raised if the extended tape used to simulate an MNTM as an NTM is not valid.

Comments
  • Make Automaton instances immutable

    @eliotwrobson I want to run something by you that I've been mulling over for the past week.

    Part 1: Precomputing lambda closures was a good idea

    When you integrated networkx into the library back in ed1cdffcccb90a03f1892147f42ea1a9f62414f6, you added logic to precompute+cache the lambda closures for an NFA (via self._lambda_closure_dict). Previously, these lambda closures were (arguably excessively) computed on the fly when reading input via my NFA._get_lambda_closure method.

    Given the performance/efficiency consequences of my previous approach, I very much appreciate your contribution here (enough so that I am planning to expose this dictionary as part of the public API, as you may have noticed in my recent commits to develop).

    Part 2: Internal Automaton state

    However, your changes do introduce an interesting dilemma. You see, before Automata v6 was released with your precomputed lambda closure logic, NFA instances could be mutated without consequence just like every other Automaton instance in the library.

    But now that lambda_closures (a derived attribute, in database terms) is in the picture, the NFA instance could end up in a situation where the user updates states, transitions, and/or input_symbols, leaving lambda_closures stale.

    As I see it, there are three solutions to this stale lambda_closures situation if we want NFAs to remain fully mutable:

    1. We watch for changes to states, input_symbols, and transitions (which is a nested structure) and automatically recompute lambda_closures when any of the aforementioned attributes change; this is the ideal solution, but not sure how feasible it is to implement

    2. We require the user to explicitly recompute the lambda closures (via some public method) when they choose to mutate the NFA. This is not my favorite approach, as the precomputing of lambda closures is really an implementation detail the user should not be concerned with. But it would balance implementation simplicity with performance.

    3. We revert to caching nothing and computing the lambda closures on the fly when reading input; naturally, this would re-introduce a significant performance consequence, especially if you want to read many input strings with the same NFA instance

    Of course, if we make NFAs fully immutable, then this problem goes away, and we don't need to worry about recomputing lambda_closures post-instantiation.

    Part 3: Mutable or immutable?

    So far, my blurb above is focused primarily on NFAs, but I think it opens up a larger question about whether any automaton should be a mutable structure, or whether it should be immutable. My ultimate aim for this library is to have a consistent and friendly API that makes working with these data structures pleasant and practical. And right now, NFAs are inconsistent with the rest of the Automaton sub-types because NFAs have some additional internal state (i.e. lambda_closures) which must be managed.

    Questions for you

    Therefore, I will sum up my musings with a few questions regarding your own use of this library:

    1. How frequently are you reading input with the same automaton?
    2. How frequently (if ever) do you mutate an automaton after its instantiation?
    3. What are your thoughts about making all automata instances immutable?
    4. Anything else you want to add?

    @abhinavsinha-adrino @Tagl If you guys happen to have a moment, I would love any feedback you have as well. This library isn't any good if it doesn't serve real-world use cases for working with these machines.

    Much thanks in advance, Caleb

    enhancement question 
    opened by caleb531 49
  • Eliminate Lambda/epsilon

    Eliminate Lambda/epsilon

    I have made a method to transform an ε-NFA to an NFA, or eliminating lambda/epsilon if you will. Since this is a step in the process of transforming an ε-NFA to DFA, I thought it could be nice to include it in this project?

                 0   1   λ
    →*q0         ∅  q1  q2
    q1    {*q0,q2}  q2   ∅
    q2           ∅   ∅   ∅
    

    nfa

    transforms to:

                 0   1
    →*q0         ∅  q1
    q1    {*q0,q2}  q2
    q2           ∅   ∅
    

    eliminated_nfa
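    For reference, the elimination step itself is compact; here is a self-contained sketch in plain Python (an illustration, not this PR's actual implementation) applied to the tables above:

```python
def lambda_closure(state, transitions):
    # States reachable from `state` via lambda ('') transitions alone
    stack, closure = [state], {state}
    while stack:
        for target in transitions.get(stack.pop(), {}).get('', set()):
            if target not in closure:
                closure.add(target)
                stack.append(target)
    return closure

def eliminate_lambda(states, input_symbols, transitions, final_states):
    new_transitions = {}
    new_final_states = set(final_states)
    for q in states:
        closure = lambda_closure(q, transitions)
        # A state becomes final if its closure contains a final state
        if closure & final_states:
            new_final_states.add(q)
        new_transitions[q] = {}
        for a in input_symbols:
            targets = set()
            for p in closure:
                for r in transitions.get(p, {}).get(a, set()):
                    targets |= lambda_closure(r, transitions)
            if targets:
                new_transitions[q][a] = targets
    return new_transitions, new_final_states

# The epsilon-NFA from the first table above ('' plays the role of lambda)
transitions = {
    'q0': {'1': {'q1'}, '': {'q2'}},
    'q1': {'0': {'q0', 'q2'}, '1': {'q2'}},
    'q2': {},
}
new_t, new_f = eliminate_lambda({'q0', 'q1', 'q2'}, {'0', '1'},
                                transitions, {'q0'})
# new_t matches the second table: q0 --1--> {q1}; q1 --0--> {q0,q2}, --1--> {q2}
```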

    enhancement 
    opened by lewiuberg 25
  • Regular Expression conversions in both directions + NFA.eliminate_lambda + some other functions and changes

    Regular Expression conversions in both directions + NFA.eliminate_lambda + some other functions and changes

    Thanks for this nice library.

    So basically, I have worked on converting an NFA/DFA to regular expression and also backward, regular expression to NFA conversion. All other changes were done to help this task, test it, and remove existing errors.

    1. New class added: GNFA, a Generalised Nondeterministic Finite Automaton (as automata/fa/gnfa.py). I have added a description of GNFA in README.md explaining its characteristics.
    • GNFA.from_nfa(cls, nfa)
    • GNFA.from_dfa(cls, dfa)
    • GNFA.to_regex(self) #13
    • GNFA.show_diagram(self, path=None, show_None=True)

    All other functions were defined as helper functions or for initializing the class. Check README.md for all additions.
    2. Additions/changes to NFA (automata/fa/nfa.py)
    • NFA.from_regex(cls, regex) #13
    • NFA.eliminate_lambda(self) #31
    • NFA.kleene_star(self) Debugged and changed it according to #45
    • NFA.union(self, other)
    • NFA.show_diagram(self, path=None)
    • NFA.option(self) for the ? operator, but it can't read '' because of the way the empty symbol is defined here. Nevertheless, this can help in building other NFAs (as it helped in the regex-to-NFA conversion)
    3. Regular expressions (automata/base/regex.py): just a set of methods for dealing with regular expressions. I have explained the syntax for regular expressions in README.md
    • automata.base.regex.validate(regex)
    • automata.base.regex.isequal(re1, re2)
    • automata.base.regex.issubset(re1, re2)
    • automata.base.regex.issuperset(re1, re2)
    opened by abhinavsinha-adrino 20
  • Regex Parser Refactor

    Regex Parser Refactor

    Rewrites the parser for converting regexes to NFAs. It still contains type hints from the base project, and comments/docstrings are not updated, but it passes the test cases where actual parsing happens. @caleb531 Figured it was better to have something up before taking the time to polish all of that, to get comments on the design as early as possible.

    opened by eliotwrobson 19
  • Feature Request: NFA <-> Regular Expression

    Feature Request: NFA <-> Regular Expression

    Hello,

    Thank you for your work and very nice library.

    If ever you get the time, would it be possible to implement the conversion NFA <-> Regular expression ?

    The direction Regex -> NFA is easy; the other one is more annoying. A good reference is "Introduction to the Theory of Computation" by Sipser, chapter 1, definition 1.64, for the concept of a "generalised nondeterministic finite automaton", which is useful in the conversion NFA -> Regex.

    Thank you very much again, Tristan

    enhancement 
    opened by tcosmo 19
  • Make validation optional

    Make validation optional

    I was trying to optimize some code of mine using the library (billions of DFA operations), so I ran a profiler and saw a lot of time was spent doing unions, specifically in PartitionRefinement.refine and cross product graph creation; what surprised me was the __init__ function, where half of the time was due to validation. I think validation should be optional, so the user could pass in validate=False as a keyword argument when they are 100% sure that the construction is valid.

    Thoughts on this?

    enhancement 
    opened by Tagl 15
  • NFA.to_regex() not implemented

    NFA.to_regex() not implemented

    @caleb531 I think by mistake I wrote NFA.to_regex() but this function is not there, instead NFA.from_regex(regex) is there. For converting NFA to regex, one has to convert NFA to GNFA and then GNFA to regex.

    Very sorry for this.

    bug 
    opened by abhinavsinha-adrino 15
  • Feature request: DFA Union, Intersection, Complement, etc.

    Feature request: DFA Union, Intersection, Complement, etc.

    I would like to have operations like these implemented within the library. The most common operations used are probably:

    • Union
    • Intersection
    • Complement
    • Concatenation
    • Reverse
    • Kleene star

    Another useful operation would be checking for equivalence of two DFAs. If an equivalence check is added should the eq method be overloaded with the equivalence check? I'd like to open a discussion on what additional operations could be added to the library. I have implementations for all the listed operations in mind and I have already written code for the first three so I'll gladly add these operations myself.
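    All of the binary operations listed reduce to the standard product construction, with only the acceptance condition varying; a minimal sketch in plain Python, representing complete DFAs as dicts (an illustration, not the library's implementation):

```python
from itertools import product

def product_dfa(dfa1, dfa2, accept):
    """Cross product of two complete DFAs over the same alphabet.
    accept(f1, f2) decides acceptance: `and` -> intersection, `or` -> union."""
    symbols = dfa1['input_symbols']
    states = set(product(dfa1['states'], dfa2['states']))
    transitions = {
        (s1, s2): {
            a: (dfa1['transitions'][s1][a], dfa2['transitions'][s2][a])
            for a in symbols
        }
        for (s1, s2) in states
    }
    return {
        'states': states,
        'input_symbols': symbols,
        'transitions': transitions,
        'initial_state': (dfa1['initial_state'], dfa2['initial_state']),
        'final_states': {
            (s1, s2) for (s1, s2) in states
            if accept(s1 in dfa1['final_states'], s2 in dfa2['final_states'])
        },
    }

def accepts(dfa, word):
    # Run the DFA on a word and check the final state
    state = dfa['initial_state']
    for symbol in word:
        state = dfa['transitions'][state][symbol]
    return state in dfa['final_states']

# Example: "even number of a's" intersected with "ends in b"
even_a = {
    'states': {0, 1}, 'input_symbols': {'a', 'b'},
    'transitions': {0: {'a': 1, 'b': 0}, 1: {'a': 0, 'b': 1}},
    'initial_state': 0, 'final_states': {0},
}
ends_b = {
    'states': {'n', 'y'}, 'input_symbols': {'a', 'b'},
    'transitions': {'n': {'a': 'n', 'b': 'y'}, 'y': {'a': 'n', 'b': 'y'}},
    'initial_state': 'n', 'final_states': {'y'},
}
intersection = product_dfa(even_a, ends_b, lambda f1, f2: f1 and f2)
```

    Note how the product's states are naturally tuples of the operands' states, which is also why a library implementation may produce tuple state names for these operations.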

    enhancement 
    opened by Tagl 15
  • Conditions of PDA Acceptance

    Conditions of PDA Acceptance

    DPDAs can accept when the stack is empty or a final state has been reached (assuming that all input has been read).

    The related code is DPDA._check_for_input_rejection which is basically this:

    if current_state not in self.final_states and stack:
        raise exceptions.RejectionException()
    

    This leads to the problem that when the DPDA accepts via final state, it will also accept if the stack is empty after reading a given input, even when no final state has been reached.

    I would prefer specifying whether a DPDA accepts either via empty stack or via final state.

    (This can probably be deduced by looking at whether self.final_states is empty, but I'm not so sure about that.)
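    One way to make the condition explicit is an acceptance-mode flag, which is roughly what the acceptance_mode parameter later added in v3.1.0 does; a hedged sketch of the check:

```python
def is_accepted(current_state, stack, final_states, acceptance_mode='both'):
    """Decide DPDA acceptance once all input has been read.
    acceptance_mode is 'final_state', 'empty_stack', or 'both'."""
    by_final_state = current_state in final_states
    by_empty_stack = not stack
    if acceptance_mode == 'final_state':
        return by_final_state
    if acceptance_mode == 'empty_stack':
        return by_empty_stack
    return by_final_state or by_empty_stack  # 'both'
```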

    bug enhancement 
    opened by YtvwlD 15
  • Enumeration and iteration for DFA

    Enumeration and iteration for DFA

    Code for #79. I had to update the copy function in Automaton to skip copying the caches; I don't really like the fix I made. The rest is just as discussed, plus added min and max word length functions, which are calculated with BFS and longest path in a DAG, respectively.
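    The BFS half of that is small enough to sketch: the minimum word length is the length of the shortest path from the initial state to any final state (a plain-Python illustration, not this PR's code):

```python
from collections import deque

def minimum_word_length(transitions, initial_state, final_states):
    """Length of the shortest accepted word, via BFS over DFA states.
    Returns None if the language is empty."""
    queue = deque([(initial_state, 0)])
    visited = {initial_state}
    while queue:
        state, depth = queue.popleft()
        if state in final_states:
            return depth
        for target in transitions[state].values():
            if target not in visited:
                visited.add(target)
                queue.append((target, depth + 1))
    return None

# DFA matching binary strings ending in an odd number of '1's
transitions = {
    'q0': {'0': 'q0', '1': 'q1'},
    'q1': {'0': 'q0', '1': 'q2'},
    'q2': {'0': 'q2', '1': 'q1'},
}
```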

    opened by Tagl 13
  • Type Annotations

    Type Annotations

    Hello,

    Thanks for your work on this library! I've been using it as part of the backend for a course I'm TAing. For our purposes, we're working on adding type hints (which can be checked with mypy) to the library. We think this makes the code more readable, and easier to tell what certain functions do. Would there be interest in including these changes in this project? I'd be happy to put up a pull request once we're done.

    Best, Eliot

    enhancement 
    opened by eliotwrobson 13
  • Help on user input

    Help on user input

    Hi, I'm trying to build a program that lets you enter your own NFA and transform it to a DFA. I would like to receive some advice.

    So I would like to know what would be the best option for TRANSITIONS user input (in the case where the symbol is repeated).

    I have something like this, but it doesn't work because of how dicts work. In a while loop:

        transitions_aux = {
            input('State?: '): {
                input('value: '): set(input('transitionState: ').split()),
            },
        }
        transitions.update(transitions_aux)

    Should the user just be asked for a string instead?

    Sorry for bad english.

    question 
    opened by alatorre-sebastian 1
  • Allow non-alphanumeric literal characters in regular expressions

    Allow non-alphanumeric literal characters in regular expressions

    @eliotwrobson Per your brief comment from #112:

    Currently, only alphanumeric characters are supported in the regex parsing here (which I'm now realizing is kindof a breaking change from v6, whoops). But I think some way of adding escape characters could be useful to people. I think it's just a matter of reconfiguring the lexer to treat characters coming after a backslash as a literal. But I think it should definitely be in a separate PR.

    I think it would be helpful to allow non-alphanumeric characters in a regex, such that you could create a regex for an email address, @username, etc. This enhancement would imply that you can also escape symbols to be literal characters.

    enhancement 
    opened by caleb531 1
  • Support for quantifiers and the shuffle operator in regular expressions

    Support for quantifiers and the shuffle operator in regular expressions

    Is there a specific reason why quantifiers for regular expressions are not supported? (so things like a{2,10}) If there is no specific reason, would you be open to a PR that implements it?

    Also, what do you think of adding a special regex shuffle operator? This would be very handy to express languages like "all permutations of a,b,c". My proposal would be to use the ~ character and implement a separate parser for that
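    For reference, the shuffle of two words is the set of all their interleavings; a small recursive sketch of the operation the proposed ~ would denote at the word level:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def shuffle(u, v):
    """Set of all interleavings of the words u and v."""
    if not u:
        return {v}
    if not v:
        return {u}
    # Each interleaving starts with the first letter of u or of v
    return ({u[0] + w for w in shuffle(u[1:], v)} |
            {v[0] + w for w in shuffle(u, v[1:])})
```

    Lifted to languages (and iterated over single letters), this is what would express "all permutations of a, b, c".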

    enhancement 
    opened by EduardoGoulart1 16
  • Adding directory for usage examples?

    Adding directory for usage examples?

    So the API documentation reorg (#98 and #99) is complete—great! But as I look through the API documentation, I realize that there is a lot of text to sift through between each block of code.

    @Tagl @eliotwrobson What do you think about adding an examples/ directory with Python files that demonstrate the code needed to perform the most common DFA/etc. operations. Since the API is so large at this point, we would probably need to be selective about what use cases we plan to demonstrate.

    One option is that we could make each example built around a real-world use case. For example:

    # examples/reading_input_from_cli.py
    from automata.fa.dfa import DFA
    
    # DFA which matches all binary strings ending in an odd number of '1's
    my_dfa = DFA(
        states={'q0', 'q1', 'q2'},
        input_symbols={'0', '1'},
        transitions={
            'q0': {'0': 'q0', '1': 'q1'},
            'q1': {'0': 'q0', '1': 'q2'},
            'q2': {'0': 'q2', '1': 'q1'}
        },
        initial_state='q0',
        final_states={'q1'}
    )
    
    try:
        while True:
            if my_dfa.accepts_input(input('Please enter your input: ')):
                print('Accepted')
            else:
                print('Rejected')
    except KeyboardInterrupt:
        print('')
    

    Another option is that we could make each example a sort of rapid-fire walkthrough of the different things you can do. For example:

    # examples/dfa_reading_input.py
    from automata.fa.dfa import DFA
    
    # DFA which matches all binary strings ending in an odd number of '1's
    my_dfa = DFA(
        states={'q0', 'q1', 'q2'},
        input_symbols={'0', '1'},
        transitions={
            'q0': {'0': 'q0', '1': 'q1'},
            'q1': {'0': 'q0', '1': 'q2'},
            'q2': {'0': 'q2', '1': 'q1'}
        },
        initial_state='q0',
        final_states={'q1'}
    )
    
    my_dfa.read_input('001') # returns a final state (e.g. 'q1')
    
    my_dfa.read_input('001!') # raises RejectionException
    
    # loop over each state of the DFA as you read input
    for state in my_dfa.read_input_stepwise('00111'):
        print(state)
    
    # etc.
    

    We may also need to be careful of adding too many examples, such that it becomes an additional maintenance burden on top of the existing API documentation. But it might be helpful for beginners. I'm a bit ambivalent—what do you think?

    housekeeping 
    opened by caleb531 1
  • More functionality for DPDA

    More functionality for DPDA

    My next focus was gonna be a little bit on DPDAs. I've started implementing enumeration and iteration. Are there any specific operations that you're particularly interested in prioritizing? Some of them may not have any known efficient algorithm (regularity of language) or may even be undecidable (although that's more common for the nondeterministic version). @caleb531 @eliotwrobson

    Some ideas:

    • Enumeration
    • Iteration
    • Union with regular language
    • Intersection with regular language
    • Complement
    • Equivalence
    • Emptiness
    • Regularity (exponential complexity)
    enhancement 
    opened by Tagl 5
Releases (v7.0.1)
  • v7.0.1(Nov 13, 2022)

    • Minor style/performance improvements to GNFA class
    • Corrections to documentation
      • DFA.cardinality() actually raises an InfiniteLanguageException for infinite languages, rather than returning float('inf')
      • DFA.maximum_word_length() actually returns None for infinite languages, rather than returning float('inf')
      • Added missing documentation for EmptyLanguageException
  • v7.0.0(Nov 11, 2022)

    Automata v7 is a packed release, introducing immutable automaton instances, new DFA/NFA operations, and performance optimizations across the board to make automata easier to construct, validate, and work with. The documentation has also been reorganized to make browsing much easier.

    Huge thanks to @eliotwrobson and @Tagl for their massive code contributions to this release, as well as their feedback on v7's development!

    New Features

    Immutability

    All Automaton instances are now fully immutable to protect against common pitfalls, such as mutating an automaton to an invalid state after it's already been validated.

    This means that if you wish to make a change to an automaton instance, you must retrieve its attributes as a dictionary (using the new input_parameters property), make your desired change, then pass those parameters to the relevant constructor. For example:

    from automata.fa.dfa import DFA
    
    dfa1 = DFA(
        states={'q0', 'q1', 'q2'},
        input_symbols={'0', '1'},
        transitions={
            'q0': {'0': 'q0', '1': 'q1'},
            'q1': {'0': 'q0', '1': 'q2'},
            'q2': {'0': 'q2', '1': 'q1'}
        },
        initial_state='q0',
        final_states={'q1'}
    )
    # You can still copy an automaton just fine
    dfa2 = dfa1.copy()
    # If you want to make a change, you must create a new instance; please note
    # that dfa2.input_parameters is always a deep copy of the input parameters for
    # dfa2 (in other words, mutating dfa2.input_parameters will not actually mutate
    # dfa2)
    params = dfa2.input_parameters
    params['final_states'] = {'q2'}
    dfa3 = DFA(**params)
    

    DFA and NFA Equivalence via ==

    You can now use == to check if two DFA accept the same language. You can also use == to check if two NFA instances accept the same language.

    dfa1 == dfa2
    
    nfa1 == nfa2
    

    New DFA Methods

    DFAs now include more methods for working with accepted languages. For example, the library can now build you a DFA from a language represented by any sequence of strings. Similarly, you can now iterate over a DFA, which will yield a string accepted by that DFA at each iteration.

    len(my_dfa)  # get the cardinality of a DFA
    for word in my_dfa:  # loop over every string accepted by the DFA
        print(word)
    

    Please see the DFA documentation for the full details on these new methods (outlined below).

    Click to expand new DFA methods

    DFA.from_finite_language(cls, input_symbols, language)

    Constructs the minimal DFA corresponding to the given finite language and input symbols.

    DFA.from_finite_language(
        language={'aa', 'aaa', 'aaba', 'aabbb', 'abaa', 'ababb', 'abbab',
                  'baa', 'babb', 'bbaa', 'bbabb', 'bbbab'},
        input_symbols={'a', 'b'})
    

    DFA.complement(self, retain_names=False, minify=True)

    Returns a new DFA which accepts all input rejected by the DFA on which complement() is called.

    minimal_complement_dfa = ~dfa
    complement_dfa = dfa.complement(minify=False)
    

    DFA.predecessor(self, input_str, strict=True, key=None)

    Returns the first string accepted by the DFA that comes before the input string in lexicographical order.

    prev_word = dfa.predecessor('0110')
    same_word = dfa.predecessor(prev_word, strict=False)
    

    DFA.predecessors(self, input_str, strict=True, key=None)

    Generates all strings that come before the input string in lexicographical order.

    # Generates all strings in a language in lexicographical order
    for word in dfa.predecessors(None):
        print(word)
    

    DFA.successor(self, input_str, strict=True, key=None)

    Returns the first string accepted by the DFA that comes after the input string in lexicographical order.

    next_word = dfa.successor('0110')
    same_word = dfa.successor(next_word, strict=False)
    

    DFA.successors(self, input_str, strict=True, key=None, reverse=False)

    Generates all strings that come after the input string in lexicographical order

    # Generates all strings in a language in lexicographical order
    for word in dfa.successors(None):
        print(word)
    

    DFA.minimum_word_length(self)

    Returns the length of the shortest word in the language represented by the DFA.

    dfa.minimum_word_length()
    

    DFA.maximum_word_length(self)

    Returns the length of the longest word in the language represented by the DFA. In the case of infinite languages, float('inf') is returned.

    dfa.maximum_word_length()
    

    DFA.count_words_of_length(self, k)

    Counts words of length k accepted by the DFA.

    dfa.count_words_of_length(3)
    

    DFA.words_of_length(self, k)

    Generates words of length k accepted by the DFA.

    for word in dfa.words_of_length(3):
        print(word)
    

    You can also iterate through all words accepted by the DFA. Be aware that the generator may be infinite.

    for word in dfa:
        if len(word) > 10:
            break
        print(word)
    

    DFA.cardinality(self)

    Returns the cardinality of the language represented by the DFA. Note that len(dfa) raises a ValueError for infinite languages, whereas DFA.cardinality will return float('inf').

    dfa.cardinality()
    len(dfa)
    

    DFA.from_prefix(cls, input_symbols, prefix, contains=True)

    Directly computes the minimal DFA recognizing strings with the given prefix.

    contains_prefix_nano = DFA.from_prefix({'a', 'n', 'o', 'b'}, 'nano')
    avoids_prefix_nano = DFA.from_prefix({'a', 'n', 'o', 'b'}, 'nano', contains=False)
    

    DFA.from_suffix(cls, input_symbols, suffix, contains=True)

    Directly computes the minimal DFA recognizing strings with the given suffix.

    contains_suffix_nano = DFA.from_suffix({'a', 'n', 'o', 'b'}, 'nano')
    avoids_suffix_nano = DFA.from_suffix({'a', 'n', 'o', 'b'}, 'nano', contains=False)
    

    DFA.from_substring(cls, input_symbols, substring, contains=True, must_be_suffix=False)

    Directly computes the minimal DFA recognizing strings containing the given substring.

    contains_substring_nano = DFA.from_substring({'a', 'n', 'o', 'b'}, 'nano')
    avoids_substring_nano = DFA.from_substring({'a', 'n', 'o', 'b'}, 'nano', contains=False)
    

    DFA.from_subsequence(cls, input_symbols, subsequence, contains=True)

    Directly computes the minimal DFA recognizing strings containing the given subsequence.

    contains_subsequence_dcba = DFA.from_subsequence({'a', 'b', 'c', 'd'}, 'dcba')
    avoids_subsequence_dcba = DFA.from_subsequence({'a', 'b', 'c', 'd'}, 'dcba', contains=False)
    

    DFA.of_length(cls, input_symbols, min_length=0, max_length=None, symbols_to_count=None)

    Directly computes the minimal DFA which accepts all words whose length is between min_length and max_length, inclusive. To allow infinitely long words, the value None can be passed in for max_length. If symbols_to_count is None (default behavior), then counts all symbols. Otherwise, only counts symbols present in the set symbols_to_count and ignores other symbols.

    DFA.count_mod(cls, input_symbols, k, remainders=None, symbols_to_count=None)

    Directly computes a DFA that counts given symbols and accepts all strings where the remainder of division by k is in the set of remainders given. The default value of remainders is {0} and all symbols are counted by default.

    even_length_strings = DFA.count_mod({'0', '1'}, 2)
    odd_number_of_ones = DFA.count_mod({'0', '1'}, 2, remainders={1}, symbols_to_count={'1'})
    

    DFA.nth_from_start(cls, input_symbols, symbol, n)

    Directly computes the minimal DFA which accepts all words whose n-th character from the start is symbol, where n is a positive integer.

    dfa = DFA.nth_from_start({'0', '1'}, '1', 4)
    

    DFA.nth_from_end(cls, input_symbols, symbol, n)

    Directly computes the minimal DFA which accepts all words whose n-th character from the end is symbol, where n is a positive integer.

    dfa = DFA.nth_from_end({'0', '1'}, '1', 4)
    


    DFA.universal_language(cls, input_symbols)

    Returns a new DFA which accepts all strings formed from the given input symbols.

    DFA.universal_language(input_symbols={'a', 'b'})
    

    DFA.empty_language(cls, input_symbols)

    Returns a new DFA which rejects all strings formed from the given input symbols.

    DFA.empty_language(input_symbols={'a', 'b'})
    

    New NFA Methods

    Please see the NFA documentation for the full details on these new methods.

    Click to expand new NFA methods

    NFA.left_quotient(self, other)

    Returns the left quotient of one NFA with respect to another.

    new_nfa = nfa1.left_quotient(nfa2)
    

    NFA.right_quotient(self, other)

    Returns the right quotient of one NFA with respect to another.

    new_nfa = nfa1.right_quotient(nfa2)
    

    NFA.shuffle_product(self, other)

    Returns the shuffle product of two NFAs. This produces an NFA that accepts all interleavings of strings in the input NFAs.

    new_nfa = nfa1.shuffle_product(nfa2)
    

    NFA.edit_distance(cls, input_symbols, reference_str, max_edit_distance, insertion=True, deletion=True, substitution=True)

    Constructs the NFA for the given reference_str for the given Levenshtein distance.

    This NFA recognizes strings within the given Levenshtein distance (commonly called edit distance) of the reference_str. Parameters control which error types the NFA will recognize (insertions, deletions, or substitutions).

    If insertion and deletion are False and substitution is True, then this is the same as Hamming distance. If insertion and deletion are True and substitution is False, then this is the same as LCS distance.

    levenshtein_nfa = NFA.edit_distance({'0', '1'}, '0101', 2)
    hamming_nfa = NFA.edit_distance({'0', '1'}, '0101', 2, insertion=False, deletion=False)
    LCS_nfa = NFA.edit_distance({'0', '1'}, '0101', 2, substitution=False)
    

    Tuples for state names (instead of strings)

    • DFA operations that generate new states (like conversion to NFA, cross product, etc.) now create these new states as tuples rather than strings
      • This allows for greater flexibility when your automaton has state names of mixed data types (e.g. both str and int within the same states set)

    Additional Regular Expression Characters

    The following characters are now recognized within NFA-based regular expressions:

    • ? (optional)
    • . (wildcard)
    • + (one-or-more)
    • & (intersection)

    Whitespace characters such as spaces, tabs, newlines, etc. are now also ignored in all regular expressions parsed by the library.

    Better performance

    • Made substantial improvements to the performance of the regular expression engine (automata.base.regex, now automata.regex.regex) when handling substantially longer regexes (thanks to @eliotwrobson in #62!)
    • Made performance improvements (10x in testing) to the DFA.minify() method (also thanks to @eliotwrobson in #66)

    Optional Validation

    For performance, you can now globally disable the validation that is automatically performed when instantiating an Automaton instance. This can be done via the new global configuration functionality.

    import automata.base.config as global_config
    
    global_config.should_validate_automata = False
    
    # The rest of your code...
    

    Bug Fixes

    • Fixed a bug where state sets of mixed types (e.g. sets containing both str and int states) were not handled correctly (#60)
    • The NFA methods that are unsupported by the GNFA class (such as read_input_stepwise) now throw a NotImplementedError, since the GNFA class is designed specifically for regular expression conversion anyway
      • If you need input-reading functionality and NFA operations, simply use an NFA directly
    • The GNFA class now properly raises a NotImplementedError or returns NotImplemented for the relevant unsupported NFA operations; previously, the behavior was unpredictable for those operations when applied to a GNFA

    Breaking Changes

    • All Automaton instances are now immutable, meaning several things:
      • You can no longer set or delete any attributes on an instance
      • The set and dict parameters that you pass when instantiating an Automaton type are converted to immutable structures as well. So if you pass states={'q0', 'q1'} to the DFA constructor, later retrieving dfa.states will be of type frozenset, not set
    • In accordance with the immutability change, the frozendict package is a new required dependency of the library
    • Moved the automata.base.regex module to automata.regex.regex alongside the other regular expression-related modules
    • The default value of the retain_names parameter for DFA.minify() has been corrected from True to False; the README has always stated that the default value should be False, however the default value in the code was actually True; therefore, the code has been updated to match the README (#59)
      • Since this code correction may break existing developer code, this is labeled as a backwards-incompatible change rather than just a mere bugfix

    Reorganized Documentation

    The API documentation has been moved out of the README and into a dedicated docs/ folder in the GitHub repository. With this move, the documentation has also been broken up into separate files for easier navigation and discoverability.

    https://github.com/caleb531/automata/tree/main/docs

  • v6.0.2(Aug 27, 2022)

    • Fixed a critical bug introduced in v6.0.1, which attempted to implement the missing NFA.to_regex() method. However, the addition of the method caused a circular import error, so it had to be removed for the sake of the library's stability. So sorry about that.
  • v6.0.1(Aug 27, 2022)

    • Added missing implementation for NFA.to_regex(), which was absent from the v6.0.0 release. Sorry about that!
    • Corrections to documentation in README
  • v6.0.0(Aug 26, 2022)

    Automata v6 is a major new release with new features and a few breaking changes that keep the library moving forward. Big thanks to @abhinavsinha-adrino and @eliotwrobson for most of these improvements.

    New Features

    • A new GNFA class to represent generalized non-deterministic finite automata (thanks @abhinavsinha-adrino!)
      • Please see the README for documentation on this new class
    • A new automata.base.regex module with utilities for working with automata-style regular expressions; a new NFA.to_regex method has also been added
      • Please see the README for documentation on this new module
    • A new NFA.eliminate_lambda method to return an NFA with lambda/epsilon transitions removed (#31)
    • Significant performance optimizations to various DFA methods (thanks @eliotwrobson!):
      • DFA.from_nfa
      • DFA.isempty
      • DFA.isfinite
      • DFA.__eq__

    Breaking Changes

    • Added networkx as a required dependency, providing substantial performance improvements for certain DFA/NFA methods, and also streamlining the code to improve maintainability
    • Dropped Python 3.6 support, since it has been end-of-life since December 2021. Python 3.7 is currently the minimum version of Python required by this library.

    Bug Fixes

    • Fixed a bug with NFA.kleene_star() where outgoing transitions for every final state were required (but shouldn't be)
    • Fixed a bug where the DFA union and intersection operations were not returning the correct type if you had subclassed from DFA
  • v5.0.0(Aug 16, 2021)

    Automata v5 is a major new release with new goodies and also some backwards-incompatible changes:

    New Features

    New DFA/NFA Operations

    • Added a plethora of new DFA and NFA operations (big thanks to @Tagl for contributing!); please see the README for documentation on all new methods/operators:
      • DFA Operations
        • Empty check (dfa1.isempty())
        • Finite check (dfa1.isfinite())
        • Disjoint check (dfa1.isdisjoint(dfa2))
        • Subset check (dfa1 <= dfa2)
        • Superset check (dfa1 >= dfa2)
        • Strict subset check (dfa1 < dfa2)
        • Strict superset check (dfa1 > dfa2)
        • Equivalence check (dfa1 == dfa2)
        • Complement (~dfa1)
        • Union (dfa1 | dfa2)
        • Intersection (dfa1 & dfa2)
        • Set difference (dfa1 - dfa2)
        • Symmetric difference (dfa1 ^ dfa2)
      • NFA Operations
        • Reversal (reversed(nfa1))
        • Concatenation (nfa1 + nfa2)
        • Kleene Star (nfa1.kleene_star())
    • Added new visualization capability for DFAs using pydot (thanks to @lewiuberg)
      • Please note that pydot is now a dependency of this library
      • To use, call the DFA show_diagram() method; see the README for usage
      • On macOS, run brew install gprof2dot to allow pydot to function fully

    Breaking Changes

    • Dropped Python 3.5 support, since it has been end-of-life since September 2020. Python 3.6 is now the minimum supported version of Python.
    • Added pydot as a required dependency (per the new visualization feature above)
    • No backwards-incompatible API changes otherwise

    Flag to Enable Partial DFAs

    • Added an allow_partial parameter for DFAs that permits a state to not have transitions for every input symbol; see the README for usage (#36)

    Other improvements

    • Improved the performance of the DFA.minify() method
    • Improved handling of automata with non-string state names (i.e. an InvalidStateError is no longer raised)
  • v4.0.0+post.1(Sep 20, 2020)

  • v3.1.0(Feb 9, 2019)

    New Features

    • Added a new (optional) acceptance_mode parameter for DPDAs and NPDAs. Accepted values are 'final_state', 'empty_stack', or 'both'; the default value is 'both'
  • v3.0.0(Jan 21, 2019)

    New Features

    • Added a new NPDA class for nondeterministic pushdown automata

      • A new PDAConfiguration class has also been added, which is immutable and hashable; it is essentially a tuple of state, remaining input, and stack
    • Added a new NTM class for nondeterministic Turing machines

      • A new TMConfiguration class has also been added, which is immutable and hashable; it is essentially a tuple of state and tape
    • Added N direction for DTM and NTM to indicate no tape movement

    Breaking changes

    • PDAStack is now immutable and hashable; it still represents the current stack of a PDA
    • TMTape is now immutable and hashable; it still represents the tape of a TM and the current cursor position
    • The copy methods on TMTape and PDAStack have been removed, since they are now immutable types
  • v2.1.0 (May 29, 2018)

  • v2.0.1 (May 5, 2018)

    • Refactored DFA.from_nfa() to improve its efficiency and handle more complex cases (thanks @YtvwlD)
    • Added __init__.py to every module to improve compatibility with interpreters like bpython (also @YtvwlD)
  • v2.0.0 (Apr 30, 2018)

    • Automata v2 introduces a number of backwards-incompatible changes to clean up and improve the API
    • Added a new input method, accepts_input(), which returns a boolean
      • For more details, see the README
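    A short sketch of the boolean interface (the two-state DFA here is illustrative; it accepts binary strings ending in `'1'`). Unlike `read_input()`, `accepts_input()` never raises a `RejectionException`:

    ```python
    from automata.fa.dfa import DFA

    # DFA accepting binary strings that end in '1'
    dfa = DFA(
        states={'q0', 'q1'},
        input_symbols={'0', '1'},
        transitions={
            'q0': {'0': 'q0', '1': 'q1'},
            'q1': {'0': 'q0', '1': 'q1'}
        },
        initial_state='q0',
        final_states={'q1'}
    )

    print(dfa.accepts_input('01'))   # ends in '1': accepted
    print(dfa.accepts_input('10'))   # ends in '0': rejected
    ```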
  • v1.0.0-rev.3 (Jul 9, 2016)

  • v1.0.0-rev.2 (Jul 7, 2016)

  • v1.0.0-rev.1 (Jul 7, 2016)

  • v1.0.0 (Jul 7, 2016)
