An Active Automata Learning Library Written in Python

Overview

AALpy

An Active Automata Learning Library



AALpy is a light-weight active automata learning library written in pure Python. You can start learning automata in just a few lines of code.

Whether you work with regular languages or you would like to learn models of (black-box) reactive systems, AALpy supports a wide range of modeling formalisms, including deterministic, non-deterministic, and stochastic automata.

Automata Type | Supported Formalisms | Features
--- | --- | ---
Deterministic | Deterministic Finite Automata, Mealy Machines, Moore Machines | Counterexample Processing, Seamless Caching, 11 Equivalence Oracles
Non-Deterministic | Observable Non-Deterministic FSM, Abstracted Non-Deterministic FSM | Size Reduction Through Abstraction
Stochastic | Markov Decision Processes, Stochastic Mealy Machines, Markov Chains | Counterexample Processing, Row/Cell Compatibility Metrics, Model Checking with PRISM, Alergia Passive Learning

AALpy enables efficient learning by providing a large set of equivalence oracles, implementing various conformance testing strategies. Learning is mostly based on Angluin's L* algorithm, for which AALpy supports a selection of optimizations, including efficient counterexample processing and caching.

AALpy also has an efficient implementation of the ALERGIA algorithm, suited for passive learning of Markov Chains and Markov Decision Processes. With ALERGIA, one can also passively learn deterministic Moore machines.
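
As a quick illustration, here is a minimal sketch of passive learning with ALERGIA. The traces and the eps value below are made up for illustration; the exact data layout expected by run_Alergia (especially for MDPs and SMMs) is described in the Wiki.

from aalpy.learning_algs import run_Alergia

# each sample is one observed trace of the system; for a Markov Chain this is
# simply a sequence of outputs (illustrative data, not from a real system)
data = [
    ['burger', 'soda', 'soda'],
    ['burger', 'salad', 'soda'],
    ['burger', 'soda', 'burger'],
]

# eps controls the statistical compatibility check used when merging states
learned_mc = run_Alergia(data, automaton_type='mc', eps=0.05)
print(learned_mc)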

Installation

Use the package manager pip to install AALpy.

pip install aalpy

The minimum required version of Python is 3.6.
Ensure that you have Graphviz installed and added to your path if you want to visualize models.

For manual installation, clone the repository and install the following dependency.

pip install pydot
# and to install the library
python setup.py install

Documentation and Wiki

If you are interested in automata learning or would like to understand the automata learning process in more detail, please check out our Wiki. On the Wiki, you will find more detailed examples of how to use AALpy.

For the official documentation of all classes and methods, check out:

Interactive examples can be found in the notebooks folder. If you would like to interact with or modify those examples in the browser, click on the following badge (navigate to the notebooks folder and select a notebook).

Binder

Examples.py contains many examples and it is a great starting point.

Usage

All automata learning procedures follow this high-level approach:

For more detailed examples, check out:

Examples.py contains examples covering almost all of AALpy's functionality and is a great starting point and reference. The Wiki has a step-by-step guide to using AALpy and can help you understand AALpy and automata learning in general.

The following snippet demonstrates a short example in which an automaton is either loaded or randomly generated and then learned.

from aalpy.utils import load_automaton_from_file, save_automaton_to_file, visualize_automaton, generate_random_dfa
from aalpy.SULs import DfaSUL
from aalpy.oracles import RandomWalkEqOracle
from aalpy.learning_algs import run_Lstar

# load an automaton
# automaton = load_automaton_from_file('path_to_the_file.dot', automaton_type='dfa')

# or randomly generate one
random_dfa = generate_random_dfa(alphabet=[1,2,3,4,5],num_states=20, num_accepting_states=8)
big_random_dfa = generate_random_dfa(alphabet=[1,2,3,4,5],num_states=2000, num_accepting_states=500)

# get input alphabet of the automaton
alphabet = random_dfa.get_input_alphabet()

# the loaded or randomly generated automaton is treated as a BLACK BOX that is queried;
# the learning algorithm has no knowledge of its structure
# create a SUL instance for the automaton/system under learning
sul = DfaSUL(random_dfa)

# define the equivalence oracle
eq_oracle = RandomWalkEqOracle(alphabet, sul, num_steps=5000, reset_prob=0.09)

# start learning
learned_dfa = run_Lstar(alphabet, sul, eq_oracle, automaton_type='dfa')

# save automaton to file and visualize it
save_automaton_to_file(learned_dfa, path='Learned_Automaton', file_type='dot')

# visualize automaton
visualize_automaton(learned_dfa)
# or just print its DOT representation
print(learned_dfa)
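
Building on the example above, the optimizations mentioned earlier can typically be selected directly through keyword arguments of run_Lstar. The keyword names below (cex_processing, cache_and_non_det_check, return_data) are given as a hedged sketch; check the documentation of your AALpy version.

# sketch: reuse alphabet, sul and eq_oracle from the example above
# 'rs' selects Rivest-Schapire counterexample processing; with return_data=True
# the call also returns a dictionary of learning statistics
learned_dfa, info = run_Lstar(alphabet, sul, eq_oracle, automaton_type='dfa',
                              cex_processing='rs', cache_and_non_det_check=True,
                              return_data=True)
print(info)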

To make experiments reproducible, define a random seed at the beginning of your program.

from random import seed
seed(2) # all experiments will be reproducible

Selected Applications

AALpy has been used to:

Cite AALpy and Research Contact

If you use AALpy in your research, please cite:

@inproceedings{aalpy,
	title = {{AALpy}: An Active Automata Learning Library},
	author = {Edi Mu\v{s}kardin and Bernhard K. Aichernig and Ingo Pill and Andrea Pferscher and Martin Tappler},
	booktitle = {Automated Technology for Verification and Analysis - 19th International
	Symposium, {ATVA} 2021, Gold Coast, Australia, October 18-22, 2021, Proceedings},
	series    = {Lecture Notes in Computer Science},  
	publisher = {Springer},
	year      = {2021},
}

If you have research suggestions or need specific help with your research, feel free to start a discussion or contact [email protected]. We are happy to help and to consult with you on applying automata learning in various domains.

Contributing

Pull requests are welcome. For significant changes, please open an issue first to discuss what you would like to change. In case of any questions or possible bugs, please open an issue.

Comments
  • Compute characterization set returns empty set if automata is not minimal


    The function compute_characterization_set on DeterministicAutomaton returns an empty list if the provided automaton is not minimal. This can lead to confusion, as an empty characterization set might actually exist, e.g., for an automaton with a single state, which cannot be directly distinguished from a "not minimal" automaton.

    I would recommend returning None (or raising an exception) in the case that the automaton is not minimal, instead of returning the empty set.
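
    A hypothetical caller-side workaround illustrating this recommendation (the wrapper itself is made up; only compute_characterization_set and the states attribute are existing AALpy names):

    # hypothetical wrapper: treat an empty result from a multi-state automaton
    # as an error instead of silently returning []
    def characterization_set_or_raise(automaton):
        w_set = automaton.compute_characterization_set()
        if not w_set and len(automaton.states) > 1:
            raise ValueError('Characterization set could not be computed; '
                             'the automaton is probably not minimal.')
        return w_set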

    opened by icezyclon 5
  • RPNI data labelling format


    Hi, I started using the RPNI algorithm in the package recently. From what I understood from the literature, in standard RPNI the entire words are labeled as accepting/rejecting, not every prefix of them. I wonder if there is a way to format the data in the same way. For example, the data record:

    [('b', False), ('b', False), ('a', True)]
    

    will be represented as something like:

    [('b', None), ('b', None), ('a', True)]
    

    Am I missing something? Is this option available in some way?
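
    Purely as an illustration of the desired format (plain Python, no AALpy API involved):

    # illustration only: keep the accept/reject label of the complete word and
    # mark all intermediate prefixes as unlabelled (None)
    trace = [('b', False), ('b', False), ('a', True)]
    word_labelled = [(inp, None) for inp, _ in trace[:-1]] + [trace[-1]]
    print(word_labelled)  # [('b', None), ('b', None), ('a', True)]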

    opened by tomyaacov 5
  • W-Method Missing Test-Cases


    Address comments raised in: https://github.com/DES-Lab/AALpy/pull/26

    Note to self:

    • override compute_char_set in Dfa and Moore, to solve the point about char_set_init
    opened by emuskardin 5
  • Could the merging heuristics be abandoned in RPNI passive deterministic automata learning?


    Hi, I started using RPNI algorithm in the package recently. I wonder if the merging heuristics in RPNI passive deterministic automata learning could be abandoned or customized?

    opened by wqqqy 3
  • Table shrinking bug in non-deterministic learning


    ONFSM learning can still run into infinite loops, which should not happen with table shrinking! The idea behind table shrinking is that the table is constantly updated with respect to all seen observations, which avoids infinite closing loops.

    Smallest reproducible example

    from Examples import onfsm_mealy_paper_example
    from random import seed
    seed(14)
    onfsm_mealy_paper_example()
    

    What happens is an infinite closing loop. The most likely cause is that the cells are not properly updated based on previously seen data.

    Originally posted by @emuskardin in https://github.com/DES-Lab/AALpy/pull/29#issuecomment-1097197292

    opened by emuskardin 3
  • Bug in WMethod Oracle Implementation


    Problem: The WMethodOracle is not implemented correctly. It produces too few test sequences and does not find a counterexample even though its maximum state variable is high enough. Please see and try the tests included in this PR.

    Context: The WMethodOracle uses combinations from itertools for extending the middle part of its testing set. However, these sequences are much too short to actually cover the entire state space given by the maximum state variable. I believe the author thought the second argument of combinations is the length of the resulting sequences, when in reality it is the number of elements chosen from the alphabet without repetition, so it yields subsets rather than all sequences of that length. See https://docs.python.org/3/library/itertools.html#itertools.combinations
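
    A small standalone sketch (plain itertools, independent of AALpy) contrasting the two calls:

    from itertools import combinations, product

    alphabet = ['a', 'b']

    # combinations yields r-element subsets of the alphabet, without repetition
    print(list(combinations(alphabet, 2)))  # [('a', 'b')]

    # all input sequences up to length 2, as needed for the W-method middle
    # part, are produced by product with repeat
    middle = [seq for k in range(3) for seq in product(alphabet, repeat=k)]
    print(middle)  # [(), ('a',), ('b',), ('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', 'b')]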

    Problem 2: On an unrelated note, I saw that the computation of the characterization set does not work for Moore machines in general. Calling compute_characterization_set on the Moore machine generated by generate_hypothesis in the newly added test results in the algorithm claiming that the machine is non-canonical, even though it isn't.

    opened by icezyclon 3
  • Do not mask underlying OSError in save_automaton_to_file


    Problem: The underlying OSError in https://github.com/DES-Lab/AALpy/blob/065e9ac5c388760770861bf5e95da37848aae337/aalpy/utils/FileHandler.py#L139 is masked by a different error that does not (necessarily) match the actual problem and is actively misleading.

    Explanation: The new error message says that the file could not be written and gives "(Permission denied)" as the reason. This is not the only case where an OSError can be thrown, and the message may mask important information from the underlying exception. The message is printed to stdout and the original exception is lost.

    Example:

    1. Pydot writes to a temporary file. If this operation is not allowed or does not succeed, the error message should reflect that and not point towards the path of the actual file (as it does now).
    2. If Pydot does not find or cannot use Graphviz, it raises an OSError instead of a FileNotFoundError, which is caught and overwritten by the given message (it turned out the PATH Python was using was not configured correctly, but PowerShell could still find it).

    Both of these errors happened to me, and it took quite some time to figure out what was going on, as not one but two different errors were masked.

    Solution: You should never just mask previous error messages. More information is always good.

    1. The simplest solution is to print the original error message (and potentially the traceback) to stderr. This has the disadvantage of cluttering the output and may not contain all the information a user would want, but it would be enough to put someone on the right track for fixing the underlying problem. Note: because writing happens in a separate thread, output may interleave with other output from the main thread, which makes it even less clear what happened.

    2. Raise an exception (do not let the error pass silently). This has the advantage of making it clear that an error happened and what happened, but it may not always be the preferred behavior, e.g., if the rest of an algorithm is stopped due to an uncaught exception. You could either

    • not catch the original error at all (always raise)
    • catch only the specific error of not being able to write to the file and re-raise others (may change if Pydot changes)
    • always re-raise the exception but add additional information

    2.5) You could add a flag controlling whether the exception should pass silently or not (which should be the default? hard to say)

    To 2):

    Python supports raising exceptions from a given context like so:

    try:
        raise OSError("This is the original message")
    except OSError as e:
        raise OSError("Could not read from file...") from e
    

    which will read as:

    Traceback (most recent call last):
      File "catch.py", line 5, in <module>
        raise OSError("This is the original message")
    OSError: This is the original message
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "catch.py", line 7, in <module>
        raise OSError("Could not read from file...") from e
    OSError: Could not read from file...
    

    or (what you more likely want) to extend the existing exception like so:

    try:
        raise OSError("This is the original message")
    except OSError as e:
        e.args = e.args + ("Could not read from file...", )
        raise
    

    which will read as:

    Traceback (most recent call last):
      File "catch.py", line 2, in <module>
        raise OSError("This is the original message")
    OSError: ('This is the original message', 'Could not read from file...')
    

    For a longer discussion about the topic and also Python 2 conform solutions see: https://stackoverflow.com/questions/9157210/how-do-i-raise-the-same-exception-with-a-custom-message-in-python

    opened by icezyclon 3
  • Arbitrary order of tests in WMethodEqOracle


    Problem: Experiments using the same initial conditions may not be reproducible due to a differing order of test_set in WMethodEqOracle.

    Explanation: In find_cex of the WMethodEqOracle class, the test_set is generated by iterating over sets. The order of these sets, especially transition_cover, is arbitrary and may not be the same every time, and in turn neither is the iteration sequence (it depends on implementation details). This in turn influences the order of elements in test_set, and finally, when shuffle or sort(key=len, ...) is called, the order may differ, which changes the resulting counterexample and the trace of the experiment.

    Solution: I propose sorting the test_set first according to the elements of the alphabet and only then shuffling or sorting respectively. This works because Python's sort is stable, i.e., it keeps the relative order of elements of equal length in the second sort. This also allows seeding shuffle to always produce the same order (which is not the case at the moment).

    
            test_set = []
            for seq in product(transition_cover, middle, hypothesis.characterization_set):
                inp_seq = tuple([i for sub in seq for i in sub])
                if inp_seq not in self.cache:
                    test_set.append(inp_seq)
    
            test_set.sort()  # sort sequences first, then shuffle or sort by len will be deterministic
            if self.shuffle:
                shuffle(test_set)
            else:
                test_set.sort(key=len, reverse=True)
    

    Time complexity: shuffle is O(n), sort is O(n * log(n)).

    An additional sort adds O(n * log(n)) to the given operation: sort + shuffle: O(n * log(n)) + O(n) = O(n * log(n)); 2 * sort: 2 * O(n * log(n)) = O(n * log(n)).

    The time complexity of the shuffle case is thus increased by a factor of log(n), while the complexity in the case of shuffle = False does not change.

    Downside: This solution requires that letters in the alphabet are comparable via < and <=. The first sort requires that letters can be compared to each other in order to order sequences relative to each other. This could be an edge case if letters in the alphabet are custom classes (instead of characters or numbers), which would then require a custom comparison operator; if such an operator does not exist, the sort raises a TypeError. This requirement does not exist at the moment, because the sort by length does not require direct comparison of letters.
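
    A standalone sketch (independent of AALpy) of one way to sidestep this requirement: sort with a string-based key so the letters themselves never need to support < or <=.

    # letters without __lt__ would normally make a plain sort() raise a TypeError
    class Letter:
        def __init__(self, name):
            self.name = name
        def __repr__(self):
            return self.name

    test_set = [(Letter('b'),), (Letter('a'), Letter('c')), (Letter('a'),)]

    # sorting by a repr-based key is deterministic and needs no comparison
    # operators on the letters
    test_set.sort(key=lambda seq: tuple(map(repr, seq)))
    print(test_set)  # [(a,), (a, c), (b,)]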

    opened by icezyclon 3
  • mdp learning using AALpy


    https://github.com/DES-Lab/AALpy/blob/84b2835925ea8429c8e3e3aef0dfa0a78121813b/Examples.py#L171

    Hello:)

    I'm trying to do the same for an example I created. I built this MDP and tried to learn it the same way: https://github.com/roiDaniela/AALpy/blob/examples_and_tests/myExample.py

    but the predicted probabilities were different than expected.

    Maybe my configuration is not appropriate?

    https://github.com/roiDaniela/AALpy/blob/examples_and_tests/graphs/learned.pdf

    https://github.com/roiDaniela/AALpy/blob/examples_and_tests/graphs/original.pdf

    thanks

    opened by roiDaniela 3
  • Issue with Examples.py


    Problem:

    In Examples.py, line 189, I got TypeError: cannot unpack non-iterable Mdp object while doing: mdp, input_alphabet = generate_random_mdp(num_states, input_len, num_outputs)

    Solution:

    In aalpy/utils/AutomatonGenerators.py, line 269, replace: return Mdp(states[0], states) by return Mdp(states[0], states), inputs

    :)

    opened by Rapfff 2
  • Defect in Computation of Characterization Set


    There is a defect affecting the computation of characterization sets for DFAs and Moore machines. The partitioning of states performed by the private method _split_blocks in class Automaton computes incomplete sequences to differentiate states. The computed output sequences lack the output (the acceptance flag for DFA states) of the states that shall be distinguished. As a result, states with different labels that are otherwise indistinguishable are considered equivalent, which causes the characterization set computation to throw an exception noting that the automaton is non-canonical.

    The defect does not affect the computation for Mealy machines where outputs label the transitions.

    A possible fix would be to handle empty sequences as a special case, i.e., check the outputs of two states to determine whether the empty sequence distinguishes them. Alternatively, the outputs of the states to be differentiated could be prepended to the computed output sequences.

    opened by mtappler 2
  • Active MDP learning can result in a dead state


    from aalpy.SULs import MdpSUL
    from aalpy.automata import Mdp, MdpState
    from aalpy.learning_algs import run_stochastic_Lstar
    from aalpy.oracles import RandomWordEqOracle
    
    states = []
    for i in range(13):
        # curr_output = state_outputs.pop(0) if state_outputs else random.choice(outputs)
        if i == 3 or i == 6 or i == 9 or i == 12:
            states.append(MdpState(f'q{i}', output=True))
        else:
            states.append(MdpState(f'q{i}', output=False))
    
    # 0
    states[0].transitions['a'].append((states[1], 0.25))
    states[0].transitions['a'].append((states[0], 0.75))
    
    states[0].transitions['b'].append((states[4], 0.25))
    states[0].transitions['b'].append((states[0], 0.75))
    
    states[0].transitions['c'].append((states[7], 0.25))
    states[0].transitions['c'].append((states[0], 0.75))
    
    states[0].transitions['d'].append((states[10], 0.25))
    states[0].transitions['d'].append((states[0], 0.75))
    
    # 1
    states[1].transitions['a'].append((states[2], 0.25))
    states[1].transitions['a'].append((states[1], 0.75))
    
    states[1].transitions['b'].append((states[1], 0.2))
    states[1].transitions['b'].append((states[1], 0.8))
    
    states[1].transitions['c'].append((states[1], 0.2))
    states[1].transitions['c'].append((states[1], 0.8))
    
    states[1].transitions['d'].append((states[1], 0.2))
    states[1].transitions['d'].append((states[1], 0.8))
    
    # 2
    states[2].transitions['a'].append((states[3], 0.25))
    states[2].transitions['a'].append((states[2], 0.75))
    
    states[2].transitions['b'].append((states[2], 0.2))
    states[2].transitions['b'].append((states[2], 0.8))
    
    states[2].transitions['c'].append((states[2], 0.2))
    states[2].transitions['c'].append((states[2], 0.8))
    
    states[2].transitions['d'].append((states[2], 0.2))
    states[2].transitions['d'].append((states[2], 0.8))
    
    # 3
    states[3].transitions['a'].append((states[3], 0.25))
    states[3].transitions['a'].append((states[3], 0.75))
    
    states[3].transitions['b'].append((states[3], 0.2))
    states[3].transitions['b'].append((states[3], 0.8))
    
    states[3].transitions['c'].append((states[3], 0.2))
    states[3].transitions['c'].append((states[3], 0.8))
    
    states[3].transitions['d'].append((states[3], 0.2))
    states[3].transitions['d'].append((states[3], 0.8))
    
    # 4
    states[4].transitions['a'].append((states[4], 0.2))
    states[4].transitions['a'].append((states[4], 0.8))
    
    states[4].transitions['b'].append((states[5], 0.25))
    states[4].transitions['b'].append((states[4], 0.75))
    
    states[4].transitions['c'].append((states[4], 0.2))
    states[4].transitions['c'].append((states[4], 0.8))
    
    states[4].transitions['d'].append((states[4], 0.2))
    states[4].transitions['d'].append((states[4], 0.8))
    
    # 5
    states[5].transitions['a'].append((states[5], 0.2))
    states[5].transitions['a'].append((states[5], 0.8))
    
    states[5].transitions['b'].append((states[6], 0.25))
    states[5].transitions['b'].append((states[5], 0.75))
    
    states[5].transitions['c'].append((states[5], 0.2))
    states[5].transitions['c'].append((states[5], 0.8))
    
    states[5].transitions['d'].append((states[5], 0.2))
    states[5].transitions['d'].append((states[5], 0.8))
    
    # 6
    states[6].transitions['a'].append((states[6], 0.2))
    states[6].transitions['a'].append((states[6], 0.8))
    
    states[6].transitions['b'].append((states[6], 0.25))
    states[6].transitions['b'].append((states[6], 0.75))
    
    states[6].transitions['c'].append((states[6], 0.2))
    states[6].transitions['c'].append((states[6], 0.8))
    
    states[6].transitions['d'].append((states[6], 0.2))
    states[6].transitions['d'].append((states[6], 0.8))
    
    # 7
    states[7].transitions['a'].append((states[7], 0.2))
    states[7].transitions['a'].append((states[7], 0.8))
    
    states[7].transitions['b'].append((states[8], 0.25))
    states[7].transitions['b'].append((states[7], 0.75))
    
    states[7].transitions['c'].append((states[7], 0.2))
    states[7].transitions['c'].append((states[7], 0.8))
    
    states[7].transitions['d'].append((states[7], 0.2))
    states[7].transitions['d'].append((states[7], 0.8))
    
    # 8
    states[8].transitions['a'].append((states[8], 0.2))
    states[8].transitions['a'].append((states[8], 0.8))
    
    states[8].transitions['b'].append((states[9], 0.25))
    states[8].transitions['b'].append((states[8], 0.75))
    
    states[8].transitions['c'].append((states[8], 0.2))
    states[8].transitions['c'].append((states[8], 0.8))
    
    states[8].transitions['d'].append((states[8], 0.2))
    states[8].transitions['d'].append((states[8], 0.8))
    
    # 9
    states[9].transitions['a'].append((states[9], 0.2))
    states[9].transitions['a'].append((states[9], 0.8))
    
    states[9].transitions['b'].append((states[9], 0.25))
    states[9].transitions['b'].append((states[9], 0.75))
    
    states[9].transitions['c'].append((states[9], 0.2))
    states[9].transitions['c'].append((states[9], 0.8))
    
    states[9].transitions['d'].append((states[9], 0.2))
    states[9].transitions['d'].append((states[9], 0.8))
    
    # 10
    states[10].transitions['a'].append((states[10], 0.2))
    states[10].transitions['a'].append((states[10], 0.8))
    
    states[10].transitions['b'].append((states[11], 0.25))
    states[10].transitions['b'].append((states[10], 0.75))
    
    states[10].transitions['c'].append((states[10], 0.2))
    states[10].transitions['c'].append((states[10], 0.8))
    
    states[10].transitions['d'].append((states[10], 0.2))
    states[10].transitions['d'].append((states[10], 0.8))
    
    # 11
    states[11].transitions['a'].append((states[11], 0.2))
    states[11].transitions['a'].append((states[11], 0.8))
    
    states[11].transitions['b'].append((states[11], 0.2))
    states[11].transitions['b'].append((states[11], 0.8))
    
    states[11].transitions['c'].append((states[12], 0.25))
    states[11].transitions['c'].append((states[11], 0.75))
    
    states[11].transitions['d'].append((states[11], 0.2))
    states[11].transitions['d'].append((states[11], 0.8))
    
    # 12
    states[12].transitions['a'].append((states[12], 0.2))
    states[12].transitions['a'].append((states[12], 0.8))
    
    states[12].transitions['b'].append((states[12], 0.2))
    states[12].transitions['b'].append((states[12], 0.8))
    
    states[12].transitions['c'].append((states[12], 0.25))
    states[12].transitions['c'].append((states[12], 0.75))
    
    states[12].transitions['d'].append((states[12], 0.2))
    states[12].transitions['d'].append((states[12], 0.8))
    
    mdp = Mdp(states[0], states)  # , list(range(len_input))
    
    al = mdp.get_input_alphabet()
    sul = MdpSUL(mdp)
    
    eq_oracle = RandomWordEqOracle(al, sul, num_walks=1000, min_walk_len=3, max_walk_len=6)
    
    learned_model = run_stochastic_Lstar(al, sul, eq_oracle, automaton_type='mdp', min_rounds=60, max_rounds=100, cex_processing=None)
    
    learned_model.visualize()
    
    opened by emuskardin 0
  • Adding pre-commit configuration


    Hi there, after issue #22 I thought an easy way of preventing such errors from being pushed to PyPI would be to automatically run the unit tests on every commit.

    This can be done with CI/CD, using a GitHub pipeline that runs after every commit in the cloud.

    Alternatively, the tests can be run locally before every commit. A great tool for doing this easily is pre-commit. It allows you to install pre-commit hooks into your local git infrastructure that run any number of commands before every commit. If any of them fail, the commit is aborted and the issue can be resolved locally.

    This PR adds a basic pre-commit configuration including checking:

    • if .py files are syntactically valid python
    • if any files have names that would conflict on a case-insensitive filesystem
    • if any files contain strings from merge conflicts
    • if any files have CRLF line endings instead of LF; these will be automatically changed to LF
    • if .py files in tests/ start with test*.py
    • Finally, the unittests will be executed using python -m unittest discover. To make them discoverable, I added a dunder-init file in tests.

    However, at the moment the unit tests use relative file paths that break when they are run from the project root folder. Additionally, they are pretty slow. If the unit tests are to be run on every commit, they would have to be faster than they currently are, or alternatively be curated to include only fast ones. Please check this out before accepting this PR!

    Fast setup for pre-commit: install it with pip install pre-commit, then run pre-commit install to install the hooks into the local git repository. Run the hooks on the entire project with pre-commit run -a.

    By default, the hooks run only on changed files instead of the entire project. You can manually run the hooks on only the changed files using pre-commit run.

    opened by icezyclon 0
Releases(v.1.3.0)
  • v.1.3.0(Nov 29, 2022)

    Major note: our implementation of KV with 'rs' counterexample processing on average requires much less system interaction than L*
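
    For illustration, a minimal sketch of switching from L* to KV, assuming run_KV mirrors run_Lstar's interface (check the documentation of your AALpy version):

    from aalpy.utils import generate_random_dfa
    from aalpy.SULs import DfaSUL
    from aalpy.oracles import RandomWalkEqOracle
    from aalpy.learning_algs import run_KV

    dfa = generate_random_dfa(alphabet=[1, 2, 3], num_states=10, num_accepting_states=4)
    alphabet = dfa.get_input_alphabet()
    sul = DfaSUL(dfa)
    eq_oracle = RandomWalkEqOracle(alphabet, sul, num_steps=5000, reset_prob=0.09)

    # KV with Rivest-Schapire ('rs') counterexample processing
    learned_model = run_KV(alphabet, sul, eq_oracle, automaton_type='dfa', cex_processing='rs')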

    Major changes

    • Added KV
    • Optimized and rewrote non-deterministic learning

    Minor additions

    • minimize method for deterministic automata
    • small bug fixes
  • v.1.2.9(Oct 12, 2022)

  • v.1.2.7(May 16, 2022)

    Algorithm updates

    • Added RPNI, a passive deterministic automata learning algorithm for DFAs, Moore machines, and Mealy machines
    • Non-deterministic learning no longer relies on the all-weather assumption (table shrinking and dynamic observation table updates)
    

    Feature updates

    • The following functions were added to all model types: model.save(), model.visualize(), model.make_input_complete()
    • Refactored the file handler
    
  • v.1.1.13(Mar 22, 2022)

    Added passive learning of Stochastic Mealy Machines (SMMs)

    An experimental setting that adapts Alergia for the learning of SMMs. Active SMM learning is for the most part more sample-efficient than active MDP learning, but in the passive setting we cannot compare sample efficiency, only the quality of the learned model. From initial experiments, passive SMM learning is for the most part as precise as passive MDP learning, but in some cases it is even less precise. However, if the system used to generate the data has many input/output pairs originating from the same state, or can be efficiently encoded as an SMM, passive SMM learning seems to be more precise. Note that these conclusions are based on only a few experiments.

    Other Changes

    • minor usability tweaks
    • Alergia implicit delete of data structures
    • optimization of FPTA creation
  • v.1.1.9(Jan 24, 2022)

  • v.1.1.0(Sep 13, 2021)

    Alergia is implemented and added to AALpy

    • Efficient passive learning of Markov Chains and Markov Decision Processes
    • Simple to use: just pass the data to run_Alergia
    • Active version of Alergia is also included
  • v.1.0.5(Aug 23, 2021)

    • Added a new equivalence oracle (Combinatorial Test Set Coverage)
    • Fixed minor bugs
    • Made stochastic learning more extensible by introducing a custom compatibility check class
  • v.1.0.3(May 11, 2021)

  • 1.0.0(Apr 21, 2021)

    Time to start learning automata.

    New releases with minor changes and functionality additions will come out frequently. The next major release, containing new learning algorithms and many other features, will come out later this year.

    The Wiki is still a work in progress, but for the average user the current state of the Wiki and Examples.py should be enough to get comfortable with AALpy.

Owner
TU Graz - SAL Dependable Embedded Systems Lab (DES Lab)
In the DES Lab we conduct fundamental research in order to ensure the dependability of computer-based systems.