Gabbi

Declarative HTTP Testing for Python and anything else

Release Notes
Gabbi is a tool for running HTTP tests where requests and responses are represented in a declarative YAML-based form. The simplest test looks like this:

tests:
- name: A test
  GET: /api/resources/id

See the docs for more details on the many features and formats for setting request headers and bodies and evaluating responses.
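As a slightly fuller illustration (the endpoint, header, and values here are hypothetical; the field names are gabbi's documented ones), a test can set request headers and a body and then evaluate the response:

```yaml
tests:
- name: create a resource
  POST: /api/resources
  request_headers:
    content-type: application/json
  data:
    name: example
  status: 201
  response_json_paths:
    $.name: example
```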

Gabbi is tested with Python 3.6, 3.7, 3.8, 3.9, and PyPy3.

Tests can be run with unittest-style test runners, with pytest, or from the command line with the gabbi-run script.

There is a gabbi-demo repository which provides a tutorial via its commit history. The demo builds a simple API, using gabbi to facilitate test-driven development.

Purpose

Gabbi works to bridge the gap between human-readable YAML files that represent HTTP requests and expected responses, and the obscured realm of Python-based, object-oriented unit tests in the style of the unittest module and its derivatives.

Each YAML file represents an ordered list of HTTP requests along with the expected responses. This allows a single file to represent a process in the API being tested. For example:

  • Create a resource.
  • Retrieve a resource.
  • Delete a resource.
  • Retrieve a resource again to confirm it is gone.

At the same time, it is still possible to ask gabbi to run just one request. If that request is part of a sequence, the tests prior to it in the YAML file will be run first, in order. In any single process, any test will only be run once. Concurrency is handled such that one file runs in one process.

These features mean that it is possible to create tests that are useful for both humans (as tools for improving and developing APIs) and automated CI systems.

Testing and Developing Gabbi

To get started, after cloning the repository, you should install the development dependencies:

$ pip install -r requirements-dev.txt

If you prefer to keep things isolated you can create a virtual environment:

$ virtualenv gabbi-venv
$ . gabbi-venv/bin/activate
$ pip install -r requirements-dev.txt

Gabbi is set up to be developed and tested using tox (installed via requirements-dev.txt). To run the built-in tests (the YAML files are in the directories gabbi/tests/gabbits_* and loaded by the file gabbi/test_*.py), you call tox:

tox -epep8,py37

If you have the dependencies installed (or a warmed up virtualenv) you can run the tests by hand and exit on the first failure:

python -m subunit.run discover -f gabbi | subunit2pyunit

Testing can be limited to individual modules by specifying them after the tox invocation:

tox -epep8,py37 -- test_driver test_handlers

If you wish to avoid running tests that connect to internet hosts, set GABBI_SKIP_NETWORK to True.

Comments
  • Coerce JSON types into correct values for later $RESPONSE replacements

    Resolves #147: $RESPONSE replacements that contained integer or decimal values would wind up as quoted strings after substitution, e.g. {"id": 825} would later become {"id": "825"}.

    This passes tox -epep8, but running tox -epy27 unfortunately has the first few tests fail. This patch works for my use case, however; I could not POST data to particular endpoints of an API because the number values were wrapped in quotes (and the API was doing type checks on the values).

    This may not be an ideal (or elegant) solution, but I've also tried to keep performance in mind for large JSON response bodies. I expect values that should stay strings to be the common case, so I've added some additional checking to see whether it's really worth trying to parse particular values as numbers (line no. 359) and whether a value even looks like a number in the first place (line no. 376). That said, if this performance hit is not an issue, the code is certainly a lot more readable without the checks.

    I also chose to do two try/excepts instead of simply using float: first we try parsing as int, then as float, because I prefer the resulting JSON to be correct, i.e. I would rather an id field that was initially an int not be cast into a float. For example, consider a response of {"id": 825} with a single try/except that used float. The value would parse, but the resulting JSON (from json.dumps) would be {"id": 825.0}. Pragmatically this may not matter, as most endpoints will probably accept a decimal value with an appended .0 as a valid integer, but the semantics would be a surprise to other users of the lib, and it's still possible that certain APIs might have an issue with it.
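    The int-then-float ordering described above can be sketched like this (a simplified stand-alone version for illustration, not the patch itself):

```python
def coerce_number(value):
    """Coerce a substituted string back to a number where possible.
    int is tried before float so "825" round-trips as 825, not 825.0;
    anything unparseable stays a string."""
    for cast in (int, float):
        try:
            return cast(value)
        except ValueError:
            pass
    return value
```

    So coerce_number("825") gives the int 825, coerce_number("1.5") gives the float 1.5, and coerce_number("abc") stays "abc".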

    And thanks for all the effort you've put into the lib!

    opened by justanotherdot 20
  • Unable to make a relative Content Handler import from the command-line

    On the command line, importing a custom Response Handler using a relative path requires manipulating the PYTHONPATH environment variable to add . to the list of paths.

    Should Gabbi allow relative imports to work out-of-the-box?

    e.g.

    gabbi-run -r foo.bar:ExampleHandler < example.yaml
    

    ... fails with ModuleNotFoundError: No module named 'foo'.

    Updating PYTHONPATH...

    PYTHONPATH=${PYTHONPATH}:. gabbi-run -r foo.bar:ExampleHandler < example.yaml
    

    ... works.
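    One way the out-of-the-box behavior could look (a sketch, not gabbi's code; `load_handler` and the module/class names are made up for illustration) is to prepend the current directory to sys.path before resolving the module:Class spec:

```python
import importlib
import os
import sys


def load_handler(spec):
    """Resolve a "module:ClassName" spec, mimicking PYTHONPATH=. by
    putting the current working directory on sys.path first."""
    sys.path.insert(0, os.getcwd())
    module_name, class_name = spec.split(':', 1)
    module = importlib.import_module(module_name)
    return getattr(module, class_name)
```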

    opened by scottwallacesh 17
  • Allow loading Python objects from YAML

    It can be interesting to write custom objects to compare values.

    For example, I need to ensure an output is equal to .NAN.

    Because .NAN == .NAN always returns false, we currently can't compare it with assert_equals().

    With the unsafe yaml loader we can register a custom constructor to check for NaN, for example:

    import numpy
    import yaml

    class IsNAN(object):
        @classmethod
        def constructor(cls, loader, node):
            return cls()

        def __eq__(self, other):
            return numpy.isnan(other)

    yaml.add_constructor(u'!ISNAN', IsNAN.constructor)
    
    opened by sileht 17
  • extra verbosity to include request/response bodies

    Currently it can be somewhat tricky to debug unexpected outcomes, as verbose: true only prints headers.

    In my case, I wanted to verify that a CSRF token was included in a form submission. The simplest way to check the request body was to start netcat and change my test's URL to http://localhost:9999.

    It would be useful if gabbi provided a way to inspect the entire data being sent over the wire.

    opened by FND 15
  • Ability to run gabbi test cases individually

    The ability to run an individual gabbi test without any of the tests preceding it in the yaml file could be useful. I created a project where I drive gabbi test cases using Robot Framework (https://github.com/dkt26111/robotframework-gabbilibrary). In order for that to work I explicitly set the prior field of the gabbi test case being run to None.

    opened by dkt26111 14
  • Verbose misses response body

    I have got the following test spec:

    tests:
    -   name: auth
        verbose: all
        url: /api/sessions
        method: POST
        data: "asdsad"
        status: 200
    

    which intentionally has data that is not valid JSON. The response results in a 400 instead of a 200 code. Verbose is set to all, but it still does not print the response body, although it detects non-empty content:

    ... #### auth ####
    > POST http://localhost:7000/api/sessions
    > user-agent: gabbi/1.40.0 (Python urllib3)
    
    < 400 Bad Request
    < content-length: 48
    < date: Wed, 19 Aug 2020 06:11:24 GMT
    
    ✗ gabbi-runner.input_auth
    
    FAIL: gabbi-runner.input_auth
            Traceback (most recent call last):
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 94, in wrapper
                func(self)
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 143, in test_request
                self._run_test()
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 550, in _run_test
                self._assert_response()
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 188, in _assert_response
                self._test_status(self.test_data['status'], self.response['status'])
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 591, in _test_status
                self.assert_in_or_print_output(observed_status, statii)
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 654, in assert_in_or_print_output
                self.assertIn(expected, iterable)
              File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 417, in assertIn
                self.assertThat(haystack, Contains(needle), message)
              File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 498, in assertThat
                raise mismatch_error
            testtools.matchers._impl.MismatchError: '400' not in ['200']
    ----------------------------------------------------------------------
    

    The expected behavior:

    • the response body is printed to stdout alongside the response headers.
    opened by avkonst 11
  • Q: Persist (across > 1 tests) value to variable based on JSON response?

    Example algorithm to be expressed in YAML:

    • Create an object given an object name.
    • Store in "$FOO" (or similar) the UUID given to object per JSON response.
    • Do unrelated tests.
    • Perform a GET using "$FOO" (the UUID)

    Thanks as always!

    enhancement 
    opened by josdotso 11
  • Variable is not replaced with the previous result in a request body

    Seen from https://gabbi.readthedocs.io/en/latest/format.html#any-previous-test

    There are 2 requests:

    1. post a task, which will return a taskId
    2. query the task with the taskId
    • previous test returns: {"dataSet": {"header": {"serverIp": "xxx.xxx.xxx.xxx", "version": "1.0", "errorKeys": [{"error_key" : "2-0-0"}], "errorInfo": "", "returnCode": 0}, "data": {"taskId": "3008929"}}}

    • yaml defines data: taskId: $HISTORY['start live migrate a vm'].$RESPONSE['$.dataSet.data.taskId']

    • actual result: "data": { "taskId": "$.dataSet.data.taskId" }

    opened by taget 9
  • jsonhandler: allow reading yaml data from disk

    This commit aims to change the jsonhandler so that it can read data from disk when it is a YAML file.

    Note:

    • Simply replacing the loads call with yaml.safe_load is not enough, due to the NaN checker requiring an unsafe load [1].

    closes #253

    [1] https://github.com/cdent/gabbi/commit/98adca65e05b7de4f1ab2bf90ab0521af5030f35

    opened by trevormccasland 9
  • pytest not working correctly!

    Hi, I have been trying gabbi to write some simple tests and had success using gabbi-run, but I need a Jenkins report, so I tried the py.test version, with the loader code looking like this:

    import os
    
    from gabbi import driver
    
    # By convention the YAML files are put in a directory named
    # "gabbits" that is in the same directory as the Python test file. 
    TESTS_DIR = 'gabbits'
    
    def test_gabbits():
        test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
        test_generator = driver.py_test_generator(
            test_dir, host="http://www.nfdsfdsfdsf.se", port=80)
    
        for test in test_generator:
            yield test
    

    The yaml-file looks very simple:

    tests:
      - name: Do get to a faulty site
        url: /sdsdsad
        method: GET
        status: 200
    

    The problem is that the test passes even though the URL does not exist, so the test should fail with connection refused. I have also tried with a site returning 404, but the test still passes. Am I doing something wrong here?

    opened by keyhan 9
  • Add yaml-based tests for host header and sni checking

    The addition of server_hostname to the httpclient PoolManager, without sufficient testing, has revealed some issues:

    • The minimum urllib3 required is too low. server_hostname was introduced in 1.24.x
    • However, there is a bug [1] in PoolManager when mixing schemes in the same pool manager. This is being fixed so the minimum urllib3 will need to be higher still.

    Tests are added here, and the minimum value for urllib3 will be set when a release is made.

    Some of the tests are "live" meaning they require network, and can be skipped via the live test fixture if the GABBI_SKIP_NETWORK env variable is set to "true".

    [1] https://github.com/urllib3/urllib3/issues/2534

    Fixes #307 Fixes #309

    opened by cdent 8
  • gabbi doesn't support client cert yet

    It would help if gabbi supported: gabbi-run ... --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/client.crt --key /etc/kubernetes/pki/client.key ...

    opened by wu-wenxiang 4
  • Socket leak with large YAML test files

    I have a YAML file with nearly 2000 tests in it. When it is invoked from the command line, I run out of open file handles due to the large number of sockets left open:

    ERROR: gabbi-runner.input_/foo/bar/__test_l__
    	[Errno 24] Too many open files
    

    By default a Linux user has 1024 file handles:

    $ ulimit -n
    1024
    

    Inspecting the open file handles:

    $ ls -l /proc/$(pgrep gabbi-run)/fd | awk '{print $NF}' | cut -f1 -d: | sort | uniq -c
          1 0
          2 /path/to/a/file.txt
          1 /path/to/another/file.yaml
       1021 socket
    
    opened by scottwallacesh 3
  • Consider per-suite pre & post executables

    Like fixtures, but a call to an external executable, for when gabbi-run is being used.

    This could be explicit, by putting something in the YAML file, or implicit based on the name of the YAML file. That is:

    • if the gabbit is foo.yaml
    • and foo-start and foo-end exist in the same dir and are executable

    Either way, when the start is called, gabbi should save, as a list, the line-separated stdout (if any) it produced, and provide that as args (or stdin?) to foo-end.

    This would allow passing things like the pids of started processes.
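    Under those assumptions (foo-start/foo-end naming, stdout passed as args), the hook flow might be sketched as:

```python
import subprocess


def run_with_hooks(start_cmd, end_cmd, run_suite):
    """Proposed per-suite hooks: run the start executable, keep its
    line-separated stdout as a list, run the suite, then pass those
    lines as extra arguments to the end executable."""
    start = subprocess.run(start_cmd, capture_output=True, text=True, check=True)
    lines = start.stdout.splitlines()
    try:
        run_suite()
    finally:
        # Whatever the start hook printed (e.g. pids) reaches the end hook.
        subprocess.run(list(end_cmd) + lines, check=True)
    return lines
```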

    /cc @FND for sanity check

    enhancement 
    opened by cdent 7
  • some fixtures that "capture" no longer work with the removal of testtools

    In https://github.com/cdent/gabbi/pull/279 testtools was removed.

    Fixtures in the openstack community that do output capturing rely on some "end of test" handling in testtools to dump the accumulated data. You can see this by adding a LOG.critical("hi") somewhere in (e.g.) the placement code and causing a test to fail. Dropping to gabbi <2 makes it work again.

    We're definitely not going to add testtools back in, but the test case subclass in gabbi itself may be able to handle the data gathering that's required. Some investigation is required.

    /cc @lbragstad for awareness

    opened by cdent 0
  • Faster tests development with gold files

    There is a cool method to speed up development of tests. It would be great if gabbi supported it too.

    Here is the idea:

    1. a test defines that a response should be compared with a gold file (the gold file reference can be configured per test)

    2. gabbi runs tests with a new flag 'generate-gold-files', which forces gabbi to capture response bodies and headers and (re-)write gold files containing the captured response data

    3. developer reviews the gold files (usually happens one by one as tests are added one by one during development)

    4. gabbi runs tests as usual:

      a) if a test has a reference to a gold file, gabbi captures the actual response output and compares it with the gold file
      b) if the actual output matches the gold file content, verification is considered passed
      c) otherwise the test fails

    This would allow me to reduce the size of my test files by at least half.
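    The compare-or-regenerate step (4) could be sketched like this (a generic sketch of the proposal; gabbi has no such flag today):

```python
from pathlib import Path


def check_gold(actual, gold_path, generate=False):
    """Gold-file check: in generate mode, (re-)write the reference
    file and pass; otherwise pass only if actual matches the file."""
    gold = Path(gold_path)
    if generate:
        gold.write_text(actual)
        return True
    return gold.exists() and gold.read_text() == actual
```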

    opened by avkonst 3
  • test files with - in the name can lead to failing tests when looking for content-type

    Bear with me, this is hard to explain

    Python v 3.6.9

    gabbi: 1.49.0

    A test file named device-types.yaml with a test of:

    tests:                                                                          
    - name: get only 405                                                            
      POST: /device-types                                                           
      status: 405    
    

    errors with the following when run in a unittest-style harness:

        b'Traceback (most recent call last):'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/handlers/core.py", line 68, in action'
        b'    response_value = str(response[header])'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/urllib3/_collections.py", line 156, in __getitem__'
        b'    val = self._container[key.lower()]'
        b"KeyError: 'content-type'"
        b''
        b'During handling of the above exception, another exception occurred:'
        b''
        b'Traceback (most recent call last):'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/suitemaker.py", line 96, in do_test'
        b'    return test_method(*args, **kwargs)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 95, in wrapper'
        b'    func(self)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 149, in test_request'
        b'    self._run_test()'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 556, in _run_test'
        b'    self._assert_response()'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 196, in _assert_response'
        b'    handler(self)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/handlers/base.py", line 54, in __call__'
        b'    self.action(test, item, value=value)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/handlers/core.py", line 72, in action'
        b'    header, response.keys()))'
        b"AssertionError: 'content-type' header not present in response: KeysView(HTTPHeaderDict({'Vary': 'Origin', 'Date': 'Tue, 24 Mar 2020 14:17:33 GMT', 'Content-Length': '0', 'status': '405', 'reason': 'Method Not Allowed'}))"
        b''
    

    However, rename the file to foo.yaml and the test works, or run the device-types.yaml file with gabbi-run and the tests work. Presumably it is something to do with test naming.

    So the short-term workaround is to rename the file, but this needs to be fixed, because using - in filenames is idiomatic for gabbi.

    opened by cdent 1
Releases
  • 2.3.0 (Sep 3, 2021)

    • For the $ENVIRON and $RESPONSE substitutions it is now possible to cast the value to a type of int, float, str, or bool.
    • The JSONHandler is now more strict about how it detects that a body content is JSON, avoiding some errors where the content-type header suggests JSON but the content cannot be decoded as such.
    • Better error message when content cannot be decoded.
    • Addition of the disable_response_handler test setting for those cases when the test author has no control over the content-type header and it is wrong.