Sixpack is a language-agnostic A/B testing framework

Overview

Sixpack

Sixpack is a framework to enable A/B testing across multiple programming languages. It does this by exposing a simple API for client libraries. Client libraries can be written in virtually any language.

Sixpack has two main parts. The first, Sixpack-server, is responsible for responding to web requests. The second, Sixpack-web, is a web dashboard for tracking and acting on your A/B tests. Sixpack-web is optional.

Requirements

  • Redis >= 2.6
  • Python >= 2.7 (3.0 untested, pull requests welcome)

Getting Started

To get going, create (or don't, but you really should) a new virtualenv for your Sixpack installation. Follow that up with pip install:

$ pip install sixpack

Note: If you get an error like src/hiredis.h:4:20: fatal error: Python.h: No such file or directory, you need to install the Python development headers (apt-get install python-dev on Ubuntu).

Next, create a Sixpack configuration. A configuration must be created for Sixpack to run. Here's the default:

redis_port: 6379                            # Redis port
redis_host: localhost                       # Redis host
redis_prefix: sixpack                       # all Redis keys will be prefixed with this
redis_db: 15                                # DB number in redis

metrics: false                              # send metrics to StatsD (response times, # of calls, etc)?
statsd_url: 'udp://localhost:8125/sixpack'  # StatsD url to connect to (used only when metrics: true)

# The regex to match for robots
robot_regex: $^|trivial|facebook|MetaURI|butterfly|google|amazon|goldfire|sleuth|xenu|msnbot|SiteUptime|Slurp|WordPress|ZIBB|ZyBorg|pingdom|bot|yahoo|slurp|java|fetch|spider|url|crawl|oneriot|abby|commentreader|twiceler
ignored_ip_addresses: []                    # List of IP addresses to ignore

asset_path: gen                             # Path for compressed assets to live. This path is RELATIVE to sixpack/static
secret_key: '<your secret key here>'        # Random key (any string is valid, required for sixpack-web to run)

You can store this file anywhere (we recommend /etc/sixpack/config.yml). As long as Redis is running, you can now start the Sixpack server like this:

$ SIXPACK_CONFIG=<path to config.yml> sixpack

Sixpack-server listens on port 5000 by default; the port can be changed with the SIXPACK_PORT environment variable. For use in a production environment, please see the "Production Notes" section below.

Alternatively, as of version 1.1, all Sixpack configuration can be set by environment variables. The following environment variables are available:

  • SIXPACK_CONFIG_ENABLED
  • SIXPACK_CONFIG_REDIS_PORT
  • SIXPACK_CONFIG_REDIS_HOST
  • SIXPACK_CONFIG_REDIS_PASSWORD
  • SIXPACK_CONFIG_REDIS_PREFIX
  • SIXPACK_CONFIG_REDIS_DB
  • SIXPACK_CONFIG_ROBOT_REGEX
  • SIXPACK_CONFIG_IGNORE_IPS - comma separated
  • SIXPACK_CONFIG_ASSET_PATH
  • SIXPACK_CONFIG_SECRET
  • SIXPACK_CORS_ORIGIN
  • SIXPACK_CORS_HEADERS
  • SIXPACK_CORS_CREDENTIALS
  • SIXPACK_CORS_METHODS
  • SIXPACK_CORS_EXPOSE_HEADERS
  • SIXPACK_METRICS
  • STATSD_URL

Using the API

All interaction with Sixpack is done via HTTP GET requests. Sixpack allows for cross-language testing by accepting a unique client_id (which the client is responsible for generating) that links a participation to a conversion. All requests to Sixpack require a client_id.

The Sixpack API can be used from front-end JavaScript via CORS-enabled requests. The Sixpack API server will accept CORS requests from any domain.
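
For example, a minimal participation request from Python might look like the sketch below. It uses the third-party requests library; the experiment name and alternatives are only illustrative, and the client_id is a UUID that the client generates and persists itself (for example in a cookie).

import uuid

import requests

SIXPACK_URL = "http://localhost:5000"

# The client is responsible for generating and storing its own unique id.
client_id = str(uuid.uuid4())

response = requests.get(SIXPACK_URL + "/participate", params={
    "experiment": "button_color",
    "alternatives": ["red", "blue"],  # encoded as ?alternatives=red&alternatives=blue
    "client_id": client_id,
})
print(response.json())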

Participating in an Experiment

You can participate in an experiment with a GET request to the participate endpoint:

$ curl "http://localhost:5000/participate?experiment=button_color&alternatives=red&alternatives=blue&client_id=12345678-1234-5678-1234-567812345678"

If the test does not exist, it will be created automatically. You do not need to create the test in Sixpack-web.

Experiment names are not validated, so it is possible to explode the Redis keyspace. If you need to validate that the experiments being created are only those you wish to whitelist, consider fronting Sixpack with either Nginx+Lua/Openresty or Varnish, and performing your whitelisting logic there.
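
If a full proxy layer is more than you need, the same whitelisting idea can also be sketched as a thin WSGI wrapper around the Sixpack application itself. This is only an illustration, assuming the sixpack.server:start WSGI entry point shown in the gunicorn example under "Production Notes"; the allowed experiment names are placeholders.

try:
    from urllib.parse import parse_qs  # Python 3
except ImportError:
    from urlparse import parse_qs      # Python 2

from sixpack.server import start as sixpack_app

ALLOWED_EXPERIMENTS = {"button_color", "blue-or-red-header"}  # your whitelist

def whitelisted_sixpack(environ, start_response):
    # Reject participate calls for experiments that are not explicitly whitelisted.
    if environ.get("PATH_INFO") == "/participate":
        query = parse_qs(environ.get("QUERY_STRING", ""))
        experiment = (query.get("experiment") or [""])[0]
        if experiment not in ALLOWED_EXPERIMENTS:
            start_response("400 Bad Request", [("Content-Type", "text/plain")])
            return [b"unknown experiment"]
    return sixpack_app(environ, start_response)

You would then point gunicorn at this wrapper instead of sixpack.server:start.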

Arguments

experiment (required) is the name of the test. A valid experiment name is a lowercase alphanumeric string that may also contain _ and -.

alternatives (required) are the potential responses from Sixpack. One of them will be the bucket that the client_id is assigned to.

client_id (required) is the unique id for the user participating in the test.

user_agent (optional) is the user agent of the user making the request. Used for bot detection.

ip_address (optional) is the IP address of the user making the request. Used for bot detection.

force (optional) forces a specific alternative to be returned. For example:

$ curl "http://localhost:5000/participate?experiment=button_color&alternatives=red&alternatives=blue&force=red&client_id=12345678-1234-5678-1234-567812345678"

In this example, red will always be returned. This is used for testing only, and no participation will be recorded.

record_force (optional) when used together with force, the forced participation will be recorded.

traffic_fraction (optional) Sixpack allows for limiting experiments to a subset of traffic. You can pass the fraction of traffic you'd like to expose the test to as a decimal number here (?traffic_fraction=0.10 for 10%).
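
Putting the optional arguments together, a fuller participate call might look like the following sketch (again using requests; the user agent, IP address, and traffic fraction values are placeholders):

import requests

params = {
    "experiment": "button_color",
    "alternatives": ["red", "blue"],
    "client_id": "12345678-1234-5678-1234-567812345678",
    "user_agent": "Mozilla/5.0 (compatible; ExampleBrowser)",  # forwarded from the end user
    "ip_address": "203.0.113.7",                               # forwarded for bot detection
    "traffic_fraction": "0.10",                                # expose only 10% of traffic
}
response = requests.get("http://localhost:5000/participate", params=params)
print(response.json())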

Response

A typical Sixpack participation response will look something like this:

{
    "status": "ok",
    "alternative": {
        "name": "red"
    },
    "experiment": {
        "name": "button_color"
    },
    "client_id": "12345678-1234-5678-1234-567812345678"
}

The most interesting part of this is alternative. This is a representation of the alternative that was chosen for the test and assigned to a client_id. All subsequent requests for this experiment/client_id combination will return the same alternative.
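
In client code this usually reduces to a small helper that returns just the chosen alternative's name. A rough sketch (the helper name and defaults are our own, not part of Sixpack):

import requests

def participate(experiment, alternatives, client_id, base_url="http://localhost:5000"):
    # Return the name of the alternative assigned to this client_id.
    response = requests.get(base_url + "/participate", params={
        "experiment": experiment,
        "alternatives": alternatives,
        "client_id": client_id,
    })
    response.raise_for_status()
    return response.json()["alternative"]["name"]

# Repeated calls with the same client_id keep returning the same bucket.
color = participate("button_color", ["red", "blue"], "12345678-1234-5678-1234-567812345678")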

Converting a user

You can convert a user with a GET request to the convert endpoint:

$ curl "http://localhost:5000/convert?experiment=button_color&client_id=12345678-1234-5678-1234-567812345678"

Conversion Arguments

  • experiment (required) the name of the experiment you would like to convert on.
  • client_id (required) the client you would like to convert.
  • kpi (optional) Sixpack supports recording multiple KPIs. If you would like to track conversion against a specific KPI, you can do that here. If the KPI does not exist, it will be created automatically (see the sketch below).
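
A conversion against a KPI could then look something like this sketch (the KPI name is illustrative):

import requests

response = requests.get("http://localhost:5000/convert", params={
    "experiment": "button_color",
    "client_id": "12345678-1234-5678-1234-567812345678",
    "kpi": "add_to_cart",  # optional; created automatically if it does not exist yet
})
print(response.json())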

Notes

You'll notice that the convert endpoint does not take an alternative query parameter. This is because Sixpack handles that internally with the client_id.

We've included a 'health-check' endpoint, available at /_status. This is helpful for monitoring and alerting if the Sixpack service becomes unavailable. The health check will respond with either a 200 (success) or a 500 (failure) status code.
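
A monitoring check against this endpoint can be as simple as the following sketch (the function name and timeout are our own choices):

import requests

def sixpack_is_healthy(base_url="http://localhost:5000", timeout=2):
    # True if the Sixpack server's /_status endpoint answers with HTTP 200.
    try:
        return requests.get(base_url + "/_status", timeout=timeout).status_code == 200
    except requests.RequestException:
        return False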

Clients

We've already provided clients in four languages. We'd love to add clients in additional languages. If you feel inclined to create one, please first read the CLIENTSPEC. After writing your client, please update this file and open a pull request so we know about it.

Algorithm

As of version 2.0 of Sixpack, we use a deterministic algorithm to choose which alternative a client will receive. The algorithm was ported from Facebook's PlanOut project, where more information can be found.
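
The sketch below is not Sixpack's actual PlanOut-derived implementation; it is only a toy illustration of the underlying idea that hashing the experiment/client pair yields a stable choice, so no per-client state is needed just to remember which alternative was served.

import hashlib

def deterministic_choice(experiment, client_id, alternatives):
    # Hash the experiment/client pair and map it onto the list of alternatives.
    digest = hashlib.sha1(("%s.%s" % (experiment, client_id)).encode("utf-8")).hexdigest()
    return alternatives[int(digest, 16) % len(alternatives)]

# The same inputs always map to the same alternative.
print(deterministic_choice("button_color", "12345678-1234-5678-1234-567812345678", ["red", "blue"]))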

Dashboard

Sixpack comes with a built-in dashboard. You can start the dashboard with:

$ SIXPACK_CONFIG=<path to config.yml> sixpack-web

The Sixpack dashboard allows you to visualize how each experiment's alternatives are doing compared to the rest, select alternatives as winners, and update experiment descriptions to something more human-readable.

Sixpack-web runs on port 5001 by default; the port can be changed with the SIXPACK_WEB_PORT environment variable. Sixpack-web will not work properly until you set the secret_key variable in the configuration file.

API

The Sixpack web dashboard has a small read-only API built in. To get a list of all experiment information, you can make a request like:

$ curl http://localhost:5001/experiments.json

To get the information for a single experiment, you can make a request like:

$ curl http://localhost:5001/experiments/blue-or-red-header.json
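
From Python, the same read-only endpoints can be queried with a couple of requests calls (a sketch; the experiment name matches the curl example above):

import requests

DASHBOARD_URL = "http://localhost:5001"

# All experiments known to sixpack-web.
experiments = requests.get(DASHBOARD_URL + "/experiments.json").json()

# Details for a single experiment.
detail = requests.get(DASHBOARD_URL + "/experiments/blue-or-red-header.json").json()
print(experiments, detail)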

Production Notes

We recommend running Sixpack on gunicorn in production. You will need to install gunicorn in your virtual environment before running the following.

To run the sixpack server using gunicorn/gevent (a separate installation) you can run the following:

gunicorn --access-logfile - -w 8 --worker-class=gevent sixpack.server:start

To run the sixpack web dashboard using gunicorn/gevent (a separate installation) you can run the following:

gunicorn --access-logfile - -w 2 --worker-class=gevent sixpack.web:start

Note: After selecting an experiment winner, it is best to remove the Sixpack experiment code from your codebase to avoid unnecessary http requests.

CORS

Cross-origin resource sharing can be adjusted with the following config attributes:

cors_origin: '*'
cors_headers: ...
cors_credentials: true
cors_methods: GET
cors_expose_headers: ...

Contributing

  1. Fork it

  2. Start Sixpack in development mode with:

    $ PYTHONPATH=. SIXPACK_CONFIG=<path to config.yml> bin/sixpack
    

    and:

    $ PYTHONPATH=. SIXPACK_CONFIG=<path to config.yml> bin/sixpack-web
    

    We've also included a small script that will seed Sixpack with lots of random data for testing and development on sixpack-web. You can seed Sixpack with the following command:

    $ PYTHONPATH=. SIXPACK_CONFIG=<path to config.yml> sixpack/test/seed
    

    This command will make a few dozen requests to the participate and convert endpoints. Feel free to run it multiple times to get additional data.

    Note: By default the server runs in production mode. If you'd like to turn on Flask and Werkzeug debug modes set the SIXPACK_DEBUG environment variable to true.

  3. Create your feature branch (git checkout -b my-new-feature)

  4. Write tests

  5. Run tests with nosetests

  6. Commit your changes (git commit -am 'Added some feature')

  7. Push to the branch (git push origin my-new-feature)

  8. Create a new pull request

Please avoid changing version numbers; we'll take care of that for you.

Using Sixpack in production?

If you're a company using Sixpack in production, kindly let us know! We're going to add a 'using Sixpack' section to the project landing page, and we'd like to include you. Drop Jack a line at jack [at] seatgeek [dot] com with your company name.

License

Sixpack is released under the BSD 2-Clause License.

Comments
  • Can't open sixpack-web on http://localhost:5001/

    Summary: I have just finished installing sixpack, and can't open sixpack-web, opening http://localhost:5001/ always returns error, sixpack server is fine, following are the errors:

     * Running on http://0.0.0.0:5001/
    10.0.2.2 - - [19/May/2014 17:12:34] "GET / HTTP/1.1" 500 -
    Error on request:
    Traceback (most recent call last):
      File "/usr/local/lib/python2.7/dist-packages/werkzeug/serving.py", line 177, in run_wsgi
        execute(self.server.app)
      File "/usr/local/lib/python2.7/dist-packages/werkzeug/serving.py", line 165, in execute
        application_iter = app(environ, start_response)
      File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1836, in __call__
        return self.wsgi_app(environ, start_response)
      File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1820, in wsgi_app
        response = self.make_response(self.handle_exception(e))
      File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1410, in handle_exception
        return handler(e)
      File "/usr/local/lib/python2.7/dist-packages/sixpack/web.py", line 160, in internal_server_error
        return render_template('errors/500.html'), 500
      File "/usr/local/lib/python2.7/dist-packages/flask/templating.py", line 128, in render_template
        context, ctx.app)
      File "/usr/local/lib/python2.7/dist-packages/flask/templating.py", line 110, in _render
        rv = template.render(context)
      File "/usr/local/lib/python2.7/dist-packages/jinja2/environment.py", line 969, in render
        return self.environment.handle_exception(exc_info, True)
      File "/usr/local/lib/python2.7/dist-packages/jinja2/environment.py", line 742, in handle_exception
        reraise(exc_type, exc_value, tb)
      File "/usr/local/lib/python2.7/dist-packages/sixpack/templates/errors/500.html", line 1, in top-level template code
        {% extends "layout.html" %}
      File "/usr/local/lib/python2.7/dist-packages/sixpack/templates/layout.html", line 12, in top-level template code
        {% assets "css_all" %}
      File "/usr/local/lib/python2.7/dist-packages/webassets/ext/jinja2.py", line 181, in _render_assets
        urls = bundle.urls(env=env)
      File "/usr/local/lib/python2.7/dist-packages/webassets/bundle.py", line 681, in urls
        urls.extend(bundle._urls(env, extra_filters, *args, **kwargs))
      File "/usr/local/lib/python2.7/dist-packages/webassets/bundle.py", line 643, in _urls
        *args, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/webassets/bundle.py", line 498, in _build
        force, disable_cache=disable_cache, extra_filters=extra_filters)
      File "/usr/local/lib/python2.7/dist-packages/webassets/bundle.py", line 453, in _merge_and_apply
        return filtertool.apply(final, selected_filters, 'output')
      File "/usr/local/lib/python2.7/dist-packages/webassets/merge.py", line 269, in apply
        return self._wrap_cache(key, func)
      File "/usr/local/lib/python2.7/dist-packages/webassets/merge.py", line 216, in _wrap_cache
        content = func().getvalue()
      File "/usr/local/lib/python2.7/dist-packages/webassets/merge.py", line 249, in func
        getattr(filter, type)(data, out, **kwargs_final)
      File "/usr/local/lib/python2.7/dist-packages/webassets/filter/yui.py", line 50, in output
        ['--charset=utf-8', '--type=%s' % self.mode], out, _in)
      File "/usr/local/lib/python2.7/dist-packages/webassets/filter/__init__.py", line 527, in subprocess
        [self.java_bin, '-jar', self.jar] + args, out, data)
      File "/usr/local/lib/python2.7/dist-packages/webassets/filter/__init__.py", line 481, in subprocess
        stderr=subprocess.PIPE)
      File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
        errread, errwrite)
      File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
        raise child_exception
    OSError: [Errno 2] No such file or directory
    

    Steps to Reproduce: SIXPACK_CONFIG=/etc/sixpack/config.yml sixpack-web

    Operating System: Ubuntu Server 14.04

    Installation Procedures:

    sudo apt-get install redis-server python-dev
    wget -O - https://bootstrap.pypa.io/get-pip.py | sudo python
    sudo pip install sixpack
    

    Additional info:

    • I have tried to reproduce the problem with virtualenv, and it returns the same error.
    • http://localhost:5001/_status returns 200; everything else returns 500.
    opened by hendrauzia 14
  • deleted keys from redis, broke dashboard

    Hello! I've been meaning to followup and let you know that we've been running many experiments with Sixpack and things have been going pretty well.

    We recently came up against some memory issues so we manually deleted keys related to archived experiments in redis. In hindsight, I should've used the web delete api, but now we're in a bit of a pickle.

    Our dashboards are borked, I can view individual experiment details but the list views are returning 500s. I assume we broke something with the Experiment.all() calls trying to get a list of all experiments when we deleted the keys from the db.

    Any ideas or advice on how to fix the dashboard views? Greatly appreciated, thanks.

    type: support 
    opened by mdressman 12
  • Add instrumentation hooks

    It should be possible to instrument the application with your favorite metrics suite - StatsD comes to mind, as most things speak its language.

    This would be useful for the following metrics:

    • Seeing the number of requests you are making to sixpack
    • Viewing response times from the sixpack service
    • Tracking redis performance
    • Potentially tracking redis space usage

    Would be useful in diagnosing trouble spots or performance issues.

    type: enhancement 
    opened by josegonzalez 11
  • Add the ability to prefetch alternatives without participating

    When using sixpack with a native client (e.g. iOS) it's very useful to be able to pre-fetch sixpack alternatives at launch without participating, so that screens can be displayed immediately without waiting for network requests and responses.

    Participate calls can be made asynchronously upon actual viewing of the screen.

    type: enhancement 
    opened by staminajim 10
  • Add ability to send metrics to statsd

    This adds new option: metrics, which enables sending response times and number of calls for sixpack-server endpoints to statsd.

    It currently sends:

    • Number of calls to each endpoint
    • Response time of each endpoint
    • Number of responses by response code

    We're using this at Urban Dictionary quite successfully and wanted to contribute back to the community if it seems appropriate and useful.

    I'm happy to hear any feedback. Thanks!

    opened by jetmind 7
  • How is Sixpack handling Redis Storage?

    Is it ever invalidating data or does it expect Redis is an ever-growing storage? Could the maintainers shed some light on this please? Useful to know when Sixpack would be running for months to come. Thanks

    type: question 
    opened by nickveenhof 7
  • `/convert` all experiments for a single KPI?

    We are looking at using sixpack for a/b testing some design changes on our online store. I'd imagine that the flow for a simple button color test would be: /participate?experiment=button_color&alternatives=black&alternatives=blue&client_id=123

    And if the user clicked on the button, we would convert it: /convert?experiment=button_color&client_id=123&kpi=add_to_cart

    If we also wanted to test whether the button color has an effect on revenue, we could do: /convert?experiment=button_color&client_id=123&kpi=checkout

    when a user goes through the checkout flow. The problem is storing all of the currently enrolled experiments for a given client. Seems like this information should be available internally to sixpack.

    Is there a standard way to handle this kind of behavior without needing to externally store the enrolled experiments for a client?

    opened by Jud 7
  • Linked client_id and/or Change client_id

    We have quite a few anonymous users who may or may not convert to users (which means the random id for client_id is ideal). However, we also want some way to determine which user is which for certain complicated a/b tests (which might require user feedback to determine). Was thinking to add a /link endpoint that could take a client_id and link_id and could be set on login.

    But just thinking about it again, maybe it would just be better to update the cookie with the link_id on login and modify the client_id in the backend, so an endpoint of /change with an old_client_id and new_client_id.

    Any thoughts on what would be preferable?

    opened by urg 7
  • Error on /participate

    Note: This was installed via pip.

    Getting a 500 when hitting the /participate endpoint.

    2014-03-20 16:55:55 [13748] [ERROR] Error handling request
    Traceback (most recent call last):
      File "/usr/local/lib/python2.7/dist-packages/gunicorn-18.0-py2.7.egg/gunicorn/workers/async.py", line 45, in handle
        self.handle_request(listener, req, client, addr)
      File "/usr/local/lib/python2.7/dist-packages/gunicorn-18.0-py2.7.egg/gunicorn/workers/ggevent.py", line 151, in handle_request
        super(GeventWorker, self).handle_request(*args)
      File "/usr/local/lib/python2.7/dist-packages/gunicorn-18.0-py2.7.egg/gunicorn/workers/async.py", line 93, in handle_request
        respiter = self.wsgi(environ, resp.start_response)
      File "/usr/local/lib/python2.7/dist-packages/sixpack/server.py", line 206, in start
        return app(environ, start_response)
      File "/usr/local/lib/python2.7/dist-packages/sixpack/server.py", line 41, in __call__
        return self.wsgi_app(environ, start_response)
      File "/usr/local/lib/python2.7/dist-packages/sixpack/server.py", line 45, in wsgi_app
        response = self.dispatch_request(request)
      File "/usr/local/lib/python2.7/dist-packages/sixpack/server.py", line 52, in dispatch_request
        return getattr(self, 'on_' + endpoint)(request, **values)
      File "<string>", line 2, in on_participate
      File "/usr/local/lib/python2.7/dist-packages/sixpack/utils.py", line 12, in service_unavailable_on_connection_error
        return f(*args, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/sixpack/server.py", line 157, in on_participate
        alternative = experiment.get_alternative(client, dt=dt).name
      File "/usr/local/lib/python2.7/dist-packages/sixpack/models.py", line 301, in get_alternative
        chosen_alternative, participate = self.choose_alternative(client=client)
      File "/usr/local/lib/python2.7/dist-packages/sixpack/models.py", line 330, in choose_alternative
        if rnd >= self.traffic_fraction:
      File "/usr/local/lib/python2.7/dist-packages/sixpack/models.py", line 276, in traffic_fraction
        self._traffic_fraction = float(self.redis.hget(self.key(), 'traffic_fraction'))
    ValueError: could not convert string to float: False
    
    type: bug 
    opened by dhrrgn 7
  • Non-destructive bucketing endpoint

    We do a number of experiments where we only bucket a portion of the population - so at the point of bucketing we apply some business logic, and only call participate on the users we want in the experiment.

    Downstream from there - and on a different system - we'd like to just call participate again on all users to get the bucket they're in, but that would also bucket the users who we chose not to earlier, so we have to keep that logic in multiple places. It'd be nice if in that second spot we could call get_bucket, which would return the bucket a user is in but not bucket them if they haven't yet been bucketed.

    type: enhancement 
    opened by dlanger 7
  • Add support to non-ascii characters on experiment description

    When using non-ascii characters in the experiment description, sixpack-web just doesn't display the entire experiment; the workaround is to delete the description with HDEL and then use only ascii characters in the description. But I would like to use non-ascii characters in the descriptions of my experiments, so here is a one-line patch.

    opened by omenar 7
  • Some advice about license compliance

    Hello, such a nice repository benefits me a lot, and it's so kind of you to make it open source!

    Question: There are some possible legal issues with the license of your repository when you combine numerous third-party packages. For instance, random and datetime, which you import, are licensed under PSF-2.0 and ZPL-2.1, respectively. However, the BSD-2-Clause license of your repository is less strict than the above package licenses, which violates license compatibility in your repository and may bring legal and financial risks.

    Advice: You can select another proper license for your repository, or write a custom license with a license exception if some license terms can't be reconciled consistently.

    Best wishes!

    opened by Ashley123456789 0
  • docs: Fix a few typos

    There are small typos in:

    • README.rst
    • sixpack/models.py

    Fixes:

    • Should read specific rather than specfic.
    • Should read sentinel rather than sentinal.

    Semi-automated pull request generated by https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md

    opened by timgates42 0
  • Bump flask-seasurf from 0.1.13 to 0.3.1

    Bumps flask-seasurf from 0.1.13 to 0.3.1.

    Commits
    • 57152d4 bump version 0.3.1
    • 6a898ca fix simple typos (#103)
    • ac71b13 Update the list of configuration variables (#104)
    • 4dd3ee0 bump version 0.3.0
    • 13ed947 Add a GitHub action to push releases to PyPI (#98)
    • 4dedf27 Merge pull request #96 from alanhamlett/master
    • 2d9f2ed Prevent unhandled exception from invalid referer hosts
    • 683fc4a Merge pull request #92 from boatx/getcookie-dry
    • cbb446d Merge pull request #93 from alanhamlett/master
    • c2c9862 Drop Python 2.6 and 3.3 support
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot-preview[bot] 0
  • Bump decorator from 3.3.2 to 5.0.9

    Bumps decorator from 3.3.2 to 5.0.9.

    Release notes

    Sourced from decorator's releases.

    4.4.2

    No release notes provided.

    Decorator 4.3.2

    Accepted a patch from Sylvain Marie (https://github.com/smarie): now the decorator module can decorate generator functions by preserving their being generator functions. Set python_requires='>=2.6, !=3.0.*, !=3.1.*' in setup.py, as suggested by https://github.com/hugovk.

    Decorator 4.3.1

    Avoided some deprecation warnings appearing when running the tests with Python 3.7.

    4.3.0

    Better decorator factories

    4.2.1

    Fixed regression breaking IPython

    Changelog

    Sourced from decorator's changelog.

    5.0.9 (2021-05-16)

    Fixed a test breaking PyPy. Restored support for Sphinx.

    5.0.8 (2021-05-15)

    Made the decorator module more robust when decorating builtin functions lacking dunder attributes, like dict.__setitem__.

    5.0.7 (2021-04-14)

    The decorator module was not passing correctly the defaults inside the *args tuple, thanks to Dan Shult for the fix. Also fixed some mispellings in the documentation and integrated codespell in the CI, thanks to Christian Clauss.

    5.0.6 (2021-04-08)

    The decorator module was not copying the module attribute anymore. Thanks to Nikolay Markov for the notice.

    5.0.5 (2021-04-04)

    Dropped support for Python < 3.5 with a substantial simplification of the code base (now building a decorator does not require calling "exec"). Added a way to mimic functools.wraps-generated decorators. Ported the Continuous Integration from Travis to GitHub.

    4.4.2 (2020-02-29)

    Sylvan Mosberger (https://github.com/Infinisil) contributed a patch to some doctests that were breaking on NixOS. John Vandenberg (https://github.com/jayvdb) made a case for removing the usage of __file__, that was breaking PyOxidizer. Miro Hrončok (https://github.com/hroncok) contributed some fixes for the future Python 3.9. Hugo van Kemenade (https://github.com/hugovk) contributed some fixes for the future Python 3.10.

    4.4.1 (2019-10-27)

    Changed the description to "Decorators for Humans" as requested by several users. Fixed a .rst bug in the description as seen in PyPI.

    4.4.0 (2019-03-16)

    Fixed a regression with decorator factories breaking the case with no arguments by going back to the syntax used in version 4.2. Accepted a small fix from Eric Larson (https://github.com/larsoner) affecting isgeneratorfunction for old Python versions.

    ... (truncated)

    dependencies 
    opened by dependabot-preview[bot] 0
Releases (2.5.0)
  • 2.1.0 (Mar 15, 2016)

    • Fix restructured text issues in readme. [Jose Diaz-Gonzalez]

    • Add release script. [Jose Diaz-Gonzalez]

    • Add gunicorn and gevent. [Jose Diaz-Gonzalez]

      These don't need to be pegged to a specific version, and are confirmed working with gunicorn 17.5 through 19.4.1.

    • Cast the environment variable to an integer. [Dan Alloway]

    • Various improvements to README.rst. [John Bacon]

      Consistency improvements throughout the README.

  • 2.0.4 (Mar 15, 2016)

    • Add a config value to disable csrf. [Thomas Meire]

    • Allow traffic fraction to change in mid-flight. [nickveenhof]

    • Fix readme heading for 2.0.1. [Jose Diaz-Gonzalez]

    • Fix early bailout in existing_alternative for excluded clients. [Steve Webster]

      Also added an additional assert to the excluded client test that verifies excluded clients have no existing alternative even after a call to Experiment.get_alternative.

    • [TRAFFIC] Fix over-recording. [zackkitzmiller]

    • Remove round from choose alternative. [chaaaarlie]

      Rounding the random number generated at choose_alternative is excluding users who happen to get a random number greater or equal to 0.990000.

    • Added unit tests. [Philipp Jardas]

      Redis database is now flushed after every test.

    • Do not check traffic fraction for update on every participation. [Philipp Jardas]

      If a participation is requested without a traffic fraction argument, the traffic fraction is no longer assumed to be 1. This caused requests to always fail for experiments with a traffic fraction lower than 1 without explicit argument.

      Further, the server no longer defaults the request parameter "traffic_fraction" to 1 but simply leaves it at None. It's up to the model to default this value to 1 only when creating a new experiment.

    • Catch ValueError during g_stat calculation. [Jose Diaz-Gonzalez]

      There can be cases where the conversions for a given alternative are zero, resulting in a math domain error when taking the log of the value.

    • Discard conversions from excluded clients when traffic_fraction < 1. [Thomas Meire]

      When traffic_fraction is < 1, some clients get the control alternative. The participations of these excluded clients are not recorded to redis. When there is a conversion request for an excluded client, the conversion is not discarded and recorded to redis. When there are a couple of these conversions by excluded clients, the number of completed conversions becomes bigger than the number of participants, which should never be possible. The computation of the confidence_interval relies on this assumption and fails when the completed_count becomes bigger than participant_count.

      The solution is to discard the conversions of excluded clients as well.

    • Fixing participating typo. [nickveenhof]

    • Bump fakeredis version to v0.4.0 for bitcount implementation. [Thomas Meire]

    • Display the number of clients that were excluded from the experiment. [Thomas Meire]

    • Add sixpack-java to list of clients. [Stephen D'Amico]

  • 2.0.3 (Aug 3, 2015)

  • 2.0.2 (Aug 3, 2015)

  • 2.0.1 (Aug 3, 2015)

    • [FEATURE] New 'failing experiments' section in the dashboard
    • [ENHANCEMENT] Better error handling
    • [BUG] Remove infinite load for failing tests
  • 2.0.0 (Aug 3, 2015)

    • [FEATURE] Include new deterministic alternative choice algorithm
    • [BUG] Several bug fixes
    • [REMOVE] Whiplash/Multi-armed bandit has been removed
  • 1.1.2 (Aug 3, 2015)

  • 1.1.1 (Aug 3, 2015)
