AWS SDK for Python

Overview

Boto3 - The AWS SDK for Python


Boto3 is the Amazon Web Services (AWS) Software Development Kit (SDK) for Python, which allows Python developers to write software that makes use of services like Amazon S3 and Amazon EC2. You can find the latest, most up-to-date documentation at our doc site, including a list of services that are supported.

Getting Started

Assuming that you have Python and virtualenv installed, set up your environment and install boto3 from source like this:

$ git clone https://github.com/boto/boto3.git
$ cd boto3
$ virtualenv venv
...
$ . venv/bin/activate
$ python -m pip install -r requirements.txt
$ python -m pip install -e .

Alternatively, you can install the latest released version of the library using pip:

$ python -m pip install boto3

Using Boto3

After installing boto3, set up credentials (in e.g. ~/.aws/credentials):

[default]
aws_access_key_id = YOUR_KEY
aws_secret_access_key = YOUR_SECRET

Then, set up a default region (in e.g. ~/.aws/config):

[default]
region=us-east-1

Other credential configuration methods can be found here

Then, from a Python interpreter:

>>> import boto3
>>> s3 = boto3.resource('s3')
>>> for bucket in s3.buckets.all():
...     print(bucket.name)

Running Tests

You can run tests in all supported Python versions using tox. By default, it will run all of the unit and functional tests, but you can also specify your own nosetests options. Note that this requires that you have all supported versions of Python installed, otherwise you must pass -e or run the nosetests command directly:

$ tox
$ tox -- unit/test_session.py
$ tox -e py26,py33 -- integration/

You can also run individual tests with your default Python version:

$ nosetests tests/unit

Getting Help

We use GitHub issues for tracking bugs and feature requests and have limited bandwidth to address them. Please use these community resources for getting help:

Contributing

We value feedback and contributions from our community. Whether it's a bug report, new feature, correction, or additional documentation, we welcome your issues and pull requests. Please read through this CONTRIBUTING document before submitting any issues or pull requests to ensure we have all the necessary information to effectively respond to your contribution.

Maintenance and Support for SDK Major Versions

Boto3 was made generally available on 06/22/2015 and is currently in the full support phase of the availability life cycle.

For information about maintenance and support for SDK major versions and their underlying dependencies, see the following in the AWS SDKs and Tools Shared Configuration and Credentials Reference Guide:

More Resources

Comments
  • ResourceWarning: unclosed ssl.SSLSocket

    ResourceWarning: unclosed ssl.SSLSocket

    For some reason I'm getting a ResourceWarning about an unclosed socket, even when I'm specifically closing the socket myself. See the test case below:

    python3 -munittest discover
    
    import sys
    import boto3
    import unittest
    
    BUCKET = ''
    KEY = ''
    
    
    def give_it_to_me():
        client = boto3.client('s3')
        obj = client.get_object(Bucket=BUCKET, Key=KEY)
        try:
            yield from iter(lambda: obj['Body'].read(1024), b'')
        finally:
            print('Im closing it!', file=sys.stderr, flush=True)
            obj['Body'].close()
    
    
    class TestSomeShit(unittest.TestCase):
        def test_it(self):
            res = give_it_to_me()
            for chunk in res:
                pass
            print('Done', file=sys.stderr, flush=True)
    

    Fill in any BUCKET and KEY to see the problem. Attaching my output below:

    Im closing it!
    test.py:22: ResourceWarning: unclosed <ssl.SSLSocket fd=7, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('...', 55498), raddr=('...', 443)>
      for chunk in res:
    Done
    .
    ----------------------------------------------------------------------
    Ran 1 test in 0.696s
    
    OK
    
    feature-request needs-review 
    opened by LinusU 53
  • Connecting to SQS in docker after assume role/kubernetes IAM role not working

    Connecting to SQS in docker after assume role/kubernetes IAM role not working

    Please fill out the sections below to help us address your issue.

    What issue did you see? (logs-from-kubernetes.txt attached.) When inside Docker, I can't access the role assumed on my computer / the IAM role on Kubernetes. From my computer it works fine; it finds the credential and config files. Creating an S3 client works fine as well; this happens only with the SQS client.

    Steps to reproduce Simple Python (3.7.4) code with boto3 (1.14.2), just creating a client for SQS:

    if __name__ == '__main__':
        boto3.set_stream_logger('')
        sqs = boto3.client('sqs')

    Debug logs (full stack trace obtained by adding boto3.set_stream_logger('') to the code): here is the local Docker output; the Kubernetes logs file is attached.

    2020-07-02 07:05:24,593 botocore.hooks [DEBUG] Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane
    2020-07-02 07:05:24,597 botocore.hooks [DEBUG] Changing event name from before-call.apigateway to before-call.api-gateway
    2020-07-02 07:05:24,598 botocore.hooks [DEBUG] Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict
    2020-07-02 07:05:24,602 botocore.hooks [DEBUG] Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration
    2020-07-02 07:05:24,602 botocore.hooks [DEBUG] Changing event name from before-parameter-build.route53 to before-parameter-build.route-53
    2020-07-02 07:05:24,604 botocore.hooks [DEBUG] Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search
    2020-07-02 07:05:24,605 botocore.hooks [DEBUG] Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section
    2020-07-02 07:05:24,612 botocore.hooks [DEBUG] Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask
    2020-07-02 07:05:24,613 botocore.hooks [DEBUG] Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section
    2020-07-02 07:05:24,613 botocore.hooks [DEBUG] Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search
    2020-07-02 07:05:24,613 botocore.hooks [DEBUG] Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section
    2020-07-02 07:05:24,632 botocore.credentials [DEBUG] Looking for credentials via: env
    2020-07-02 07:05:24,632 botocore.credentials [DEBUG] Looking for credentials via: assume-role
    2020-07-02 07:05:24,632 botocore.credentials [DEBUG] Looking for credentials via: assume-role-with-web-identity
    2020-07-02 07:05:24,632 botocore.credentials [DEBUG] Looking for credentials via: sso
    2020-07-02 07:05:24,633 botocore.credentials [DEBUG] Looking for credentials via: shared-credentials-file
    2020-07-02 07:05:24,633 botocore.credentials [DEBUG] Looking for credentials via: custom-process
    2020-07-02 07:05:24,633 botocore.credentials [DEBUG] Looking for credentials via: config-file
    2020-07-02 07:05:24,633 botocore.credentials [DEBUG] Looking for credentials via: ec2-credentials-file
    2020-07-02 07:05:24,633 botocore.credentials [DEBUG] Looking for credentials via: boto-config
    2020-07-02 07:05:24,634 botocore.credentials [DEBUG] Looking for credentials via: container-role
    2020-07-02 07:05:24,634 botocore.credentials [DEBUG] Looking for credentials via: iam-role
    2020-07-02 07:05:24,635 urllib3.connectionpool [DEBUG] Starting new HTTP connection (1): 169.254.169.254:80
    2020-07-02 07:05:25,646 urllib3.connectionpool [DEBUG] Starting new HTTP connection (2): 169.254.169.254:80
    2020-07-02 07:05:26,660 botocore.utils [DEBUG] Caught retryable HTTP exception while making metadata service request to http://169.254.169.254/latest/meta-data/iam/security-credentials/: Read timeout on endpoint URL: "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 426, in _make_request
        six.raise_from(e, None)
      File "<string>", line 3, in raise_from
      File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 421, in _make_request
        httplib_response = conn.getresponse()
      File "/usr/local/lib/python3.7/http/client.py", line 1336, in getresponse
        response.begin()
      File "/usr/local/lib/python3.7/http/client.py", line 306, in begin
        version, status, reason = self._read_status()
      File "/usr/local/lib/python3.7/http/client.py", line 267, in _read_status
        line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
      File "/usr/local/lib/python3.7/socket.py", line 589, in readinto
        return self._sock.recv_into(b)
    socket.timeout: timed out
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/site-packages/botocore/httpsession.py", line 263, in send
        chunked=self._chunked(request.headers),
      File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 725, in urlopen
        method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
      File "/usr/local/lib/python3.7/site-packages/urllib3/util/retry.py", line 379, in increment
        raise six.reraise(type(error), error, _stacktrace)
      File "/usr/local/lib/python3.7/site-packages/urllib3/packages/six.py", line 735, in reraise
        raise value
      File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 677, in urlopen
        chunked=chunked,
      File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 428, in _make_request
        self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
      File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 336, in _raise_timeout
        self, url, "Read timed out. (read timeout=%s)" % timeout_value
    urllib3.exceptions.ReadTimeoutError: AWSHTTPConnectionPool(host='169.254.169.254', port=80): Read timed out. (read timeout=1)
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/site-packages/botocore/utils.py", line 342, in _get_request
        response = self._session.send(request.prepare())
      File "/usr/local/lib/python3.7/site-packages/botocore/httpsession.py", line 289, in send
        raise ReadTimeoutError(endpoint_url=request.url, error=e)
    botocore.exceptions.ReadTimeoutError: Read timeout on endpoint URL: "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
    2020-07-02 07:05:26,669 botocore.utils [DEBUG] Max number of attempts exceeded (1) when attempting to retrieve data from metadata service.
    2020-07-02 07:05:26,671 botocore.loaders [DEBUG] Loading JSON file: /usr/local/lib/python3.7/site-packages/botocore/data/endpoints.json
    2020-07-02 07:05:26,681 botocore.hooks [DEBUG] Event choose-service-name: calling handler <function handle_service_name_alias at 0x7f503ec53b00>
    2020-07-02 07:05:26,696 botocore.loaders [DEBUG] Loading JSON file: /usr/local/lib/python3.7/site-packages/botocore/data/sqs/2012-11-05/service-2.json
    2020-07-02 07:05:26,701 botocore.hooks [DEBUG] Event creating-client-class.sqs: calling handler <function add_generate_presigned_url at 0x7f503eca0f80>
    Traceback (most recent call last):
      File "EnrichmentWorkerService.py", line 88, in <module>
        sqs = boto3.client('sqs')
      File "/usr/local/lib/python3.7/site-packages/boto3/__init__.py", line 91, in client
        return _get_default_session().client(*args, **kwargs)
      File "/usr/local/lib/python3.7/site-packages/boto3/session.py", line 263, in client
        aws_session_token=aws_session_token, config=config)
      File "/usr/local/lib/python3.7/site-packages/botocore/session.py", line 835, in create_client
        client_config=config, api_version=api_version)
      File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 85, in create_client
        verify, credentials, scoped_config, client_config, endpoint_bridge)
      File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 287, in _get_client_args
        verify, credentials, scoped_config, client_config, endpoint_bridge)
      File "/usr/local/lib/python3.7/site-packages/botocore/args.py", line 73, in get_client_args
        endpoint_url, is_secure, scoped_config)
      File "/usr/local/lib/python3.7/site-packages/botocore/args.py", line 153, in compute_client_args
        s3_config=s3_config,
      File "/usr/local/lib/python3.7/site-packages/botocore/args.py", line 218, in _compute_endpoint_config
        return self._resolve_endpoint(**resolve_endpoint_kwargs)
      File "/usr/local/lib/python3.7/site-packages/botocore/args.py", line 301, in _resolve_endpoint
        service_name, region_name, endpoint_url, is_secure)
      File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 361, in resolve
        service_name, region_name)
      File "/usr/local/lib/python3.7/site-packages/botocore/regions.py", line 134, in construct_endpoint
        partition, service_name, region_name)
      File "/usr/local/lib/python3.7/site-packages/botocore/regions.py", line 148, in _endpoint_for_partition
        raise NoRegionError()
    botocore.exceptions.NoRegionError: You must specify a region.
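
    Since the credential chain ends in NoRegionError here, a minimal hedged sketch of the usual fix is to give the client an explicit region (the region value and the environment-variable alternative are assumptions, not taken from this report):

    import boto3

    # Passing region_name directly avoids relying on ~/.aws/config or the
    # EC2 metadata endpoint, which is unreachable from this container.
    sqs = boto3.client('sqs', region_name='us-east-1')

    # Equivalently, exporting AWS_DEFAULT_REGION in the container environment
    # lets boto3.client('sqs') resolve the region on its own.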
    
    
    guidance 
    opened by eldarnegrinperion 49
  • ImportError: cannot import name 'docevents' release 1.15.0

    ImportError: cannot import name 'docevents' release 1.15.0

    Describe the bug: The aws help command is not functioning with release https://github.com/boto/boto3/releases/tag/1.15.0. I am using the miniconda3 Python runtime environment with the Python implementation of the AWS CLI. The error originally occurred on our CI pipeline.

    Steps to reproduce

    • pip install "boto3==1.15.0"
    • aws help

      Traceback (most recent call last):
        File "C:\Users\X\Miniconda3\Scripts\aws.cmd", line 50, in <module>
          import awscli.clidriver
        File "C:\Users\X\Miniconda3\lib\site-packages\awscli\clidriver.py", line 36, in <module>
          from awscli.help import ProviderHelpCommand
        File "C:\Users\X\Miniconda3\lib\site-packages\awscli\help.py", line 23, in <module>
          from botocore.docs.bcdoc import docevents
      ImportError: cannot import name 'docevents'

    Expected behavior: aws help is displayed, as it is when boto3 1.14.63 is installed.

    opened by bruce-lindsay 47
  • Support AWS Athena waiter feature

    Support AWS Athena waiter feature

    Hi,

    If you go to https://boto3.readthedocs.io/en/latest/reference/services/athena.html#Athena.Client.get_waiter it looks like that feature is not implemented.

    I have a lambda function which executes Athena queries. I use a function called start_query_execution() in boto3 and I need to write a loop to check whether the execution is finished or not, so I think it would be awesome to have the waiter feature implemented for Athena.
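
    In the meantime, a minimal hedged sketch of the polling loop described above (the query ID, poll interval, and terminal states listed here are assumptions):

    import time
    import boto3

    athena = boto3.client('athena')

    def wait_for_query(query_execution_id, delay=5):
        # Poll get_query_execution until the query reaches a terminal state.
        while True:
            response = athena.get_query_execution(QueryExecutionId=query_execution_id)
            state = response['QueryExecution']['Status']['State']
            if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
                return state
            time.sleep(delay)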

    Thanks

    feature-request waiters 
    opened by xysr89 41
  • Add explanation on how to catch boto3 exceptions

    Add explanation on how to catch boto3 exceptions

    The problem I have with the boto3 documentation can be found here: https://stackoverflow.com/questions/46174385/properly-catch-boto3-errors

    Am I doing this right? Or what is best practice when dealing with boto3 exceptions? Can this be added to the wiki?
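
    For reference, a hedged sketch of the pattern commonly used to catch service errors raised by boto3 clients (the bucket name is a placeholder):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client('s3')
    try:
        s3.head_bucket(Bucket='my-example-bucket')
    except ClientError as error:
        # The error code and message come back in the parsed error response.
        code = error.response['Error']['Code']
        print(f'Request failed with {code}: {error}')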

    documentation feature-request 
    opened by schumannd 38
  • How to Use botocore.response.StreamingBody as stdin PIPE

    How to Use botocore.response.StreamingBody as stdin PIPE

    I want to pipe large video files from AWS S3 into Popen's stdin. This code runs as an AWS Lambda function, so these files won't fit in memory or on the local file system. Also, I don't want to copy these huge files anywhere, I just want to stream the input, process on the fly, and stream the output. I've already got the processing and streaming output bits working. The problem is how to obtain an input stream as a Popen pipe.

    I can access a file in an S3 bucket:

    import boto3
    s3 = boto3.resource('s3')
    response = s3.Object(bucket_name=bucket, key=key).get()
    body = response['Body']  
    

    body is a botocore.response.StreamingBody. I intend to use body something like this:

    from subprocess import Popen, PIPE
    Popen(cmd, stdin=PIPE, stdout=PIPE).communicate(input=body)[0]
    

    But of course body needs to be converted into a file-like object. The question is how?
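
    One hedged sketch of an approach, reusing bucket, key, and cmd from the snippets above and assuming the command's output goes to a file rather than a captured pipe (to avoid a stdin/stdout deadlock):

    import subprocess
    import boto3

    s3 = boto3.resource('s3')
    body = s3.Object(bucket_name=bucket, key=key).get()['Body']

    with open('/tmp/output', 'wb') as out:
        proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=out)
        # Stream the S3 object into the child process in 1 MiB chunks.
        for chunk in iter(lambda: body.read(1024 * 1024), b''):
            proc.stdin.write(chunk)
        proc.stdin.close()
        proc.wait()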

    opened by mslinn 38
  • Initial commit of S3 upload_file/download_file

    Initial commit of S3 upload_file/download_file

    This PR adds support for an intelligent upload_file/download_file method for boto3.

    The module docstring provides some general information and an overview of how to use the module.

    I'd like to get some initial feedback on this. There are unit/integration tests added (there are a few integration tests I haven't fleshed out yet), and the code is fully functional, but I will be pushing some changes in a bit.

    There are two changes I plan on making:

    • I'm going to be changing the logic for the _download_range function. The single lock writer on the file unnecessarily slows down the parallel downloads. I'm likely going to port some version of what the AWS CLI does to improve this.
    • The callback interface may need to change. It requires a lot of information that's not technically necessary and could be provided. It also doesn't handle retries. In order to do this, I might need to change the interface from a simple callback to an actual class that has a few required methods.

    Also, the socket timeouts and bandwidth throttling are not implemented. Those are stretch features I might end up deferring for now.

    There will also be another pull request that integrates this with the s3 client and s3 resource objects.

    cc @kyleknap @danielgtaylor

    enhancement 
    opened by jamesls 34
  • Upload or put object in S3 failing silently

    Upload or put object in S3 failing silently

    I've been trying to upload files from a local folder into folders on S3 using Boto3, and it's failing kinda silently, with no indication of why the upload isn't happening.

    key_name = folder + '/'
    s3_connect = boto3.client('s3', s3_bucket_region,)
    # upload File to S3
    for filename in os.listdir(folder):
        s3_name = key_name + filename
        print folder, filename, key_name, s3_name
        upload = s3_connect.upload_file(
            s3_name, s3_bucket, key_name,
        )
    

    Printing upload just says "None", with no other information. No upload happens. I've also tried using put_object:

    put_obj = s3_connect.put_object(
            Bucket=s3_bucket, 
            Key=key_name,
            Body=s3_name,
        )
    

    and I get an HTTPS response code of 200 - but no files upload.

    First, I'd love to solve this problem, but second, it seems this isn't the right behavior - if an upload doesn't happen, there should be a bit more information about why (although I imagine this might be a limitation of the API?)
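
    For comparison, a hedged sketch of the documented upload_file argument order, Filename (local path) first, then Bucket, then Key; note that in the call above the S3 key name is passed in the Filename position (folder, key_name, s3_bucket, and s3_bucket_region are taken from the snippet above):

    import os
    import boto3

    s3_connect = boto3.client('s3', s3_bucket_region)
    for filename in os.listdir(folder):
        local_path = os.path.join(folder, filename)   # file on disk to upload
        s3_key = key_name + filename                  # destination key in S3
        s3_connect.upload_file(local_path, s3_bucket, s3_key)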

    s3 
    opened by maxpearl 32
  • Strange behavior when trying to create an S3 bucket in us-east-1

    Strange behavior when trying to create an S3 bucket in us-east-1

    Version info:
    boto3 = 0.0.19 (from pip)
    botocore = 1.0.0b1 (from pip)
    Python = 2.7.9 (from Fedora 22)

    I have no problem creating S3 buckets in us-west-1 or us-west-2, but specifying us-east-1 gives InvalidLocationConstraint

    >>> conn = boto3.client("s3")
    >>> conn.create_bucket(
        Bucket='testing123-blah-blah-blalalala', 
        CreateBucketConfiguration={'LocationConstraint': "us-east-1"})
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/lib/python2.7/site-packages/botocore/client.py", line 200, in _api_call
        return self._make_api_call(operation_name, kwargs)
      File "/usr/lib/python2.7/site-packages/botocore/client.py", line 255, in _make_api_call
        raise ClientError(parsed_response, operation_name)
    botocore.exceptions.ClientError: An error occurred (InvalidLocationConstraint) when calling the CreateBucket operation: The specified location-constraint is not valid
    

    Also trying with a s3 client connected directly to us-east-1:

    >>> conn = boto3.client("s3", region_name="us-east-1")
    >>> conn.create_bucket(Bucket='testing123-blah-blah-blalalala', CreateBucketConfiguration={'LocationConstraint': "us-east-1"})
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/lib/python2.7/site-packages/botocore/client.py", line 200, in _api_call
        return self._make_api_call(operation_name, kwargs)
      File "/usr/lib/python2.7/site-packages/botocore/client.py", line 255, in _make_api_call
        raise ClientError(parsed_response, operation_name)
    botocore.exceptions.ClientError: An error occurred (InvalidLocationConstraint) when calling the CreateBucket operation: The specified location-constraint is not valid
    

    When I do not specify a region, the bucket is created in us-east-1 (verified in the web console):

    >>> conn.create_bucket(Bucket='testing123-blah-blah-blalalala')
    {u'Location': '/testing123-blah-blah-blalalala', 'ResponseMetadata': {'HTTPStatusCode': 200, 'HostId': 'Qq2CqKPm4PhADUJ8X+ngxxEE3yRrsT3DOS4TefgzUpYBKzQO/62cQy20yPa1zs7l', 'RequestId': '06B36B1D8B1213C8'}}
    

    ...but the bucket returns None for LocationConstraint:

    >>> conn.get_bucket_location(Bucket='testing123-blah-blah-blalalala')
    {'LocationConstraint': None, 'ResponseMetadata': {'HTTPStatusCode': 200, 'HostId': 'nBGHNu30A/m/RymzuoHLiE2uWuzCsz3v1mcov324r2sMYX7ANq1jOIR0XphWiUIAxDwmxTOW8eA=', 'RequestId': '53A539CC4BCA08C4'}}
    

    us-east-1 is listed as a valid region when I enumerate the regions:

    >>> conn = boto3.client("ec2", region_name="us-east-1")
    >>> [x["RegionName"] for x in conn.describe_regions()["Regions"]]
    ['eu-central-1', 'sa-east-1', 'ap-northeast-1', 'eu-west-1', 'us-east-1', 'us-west-1', 'us-west-2', 'ap-southeast-2', 'ap-southeast-1']
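
    For what it's worth, a hedged sketch of the CreateBucket behaviour implied above, where us-east-1 is treated as the default location and the CreateBucketConfiguration block is only sent for other regions (the bucket and region names are placeholders):

    import boto3

    def create_bucket(name, region):
        client = boto3.client('s3', region_name=region)
        if region == 'us-east-1':
            # us-east-1 is the default; passing it as a LocationConstraint
            # is rejected with InvalidLocationConstraint.
            return client.create_bucket(Bucket=name)
        return client.create_bucket(
            Bucket=name,
            CreateBucketConfiguration={'LocationConstraint': region},
        )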
    
    needs-discussion 
    opened by ghost 32
  • Hang in s3.download_file with Celery worker in version 1.4.0

    Hang in s3.download_file with Celery worker in version 1.4.0

    I've been using lots of boto3 calls in my Flask app for some time, but the switch to the latest boto3 v1.4.0 has broken my Celery workers. Something that may be unique about my app is that I use S3 to download a secure environment variables file before launching my app or workers. It appears that the new boto3 works with my app, but hangs when launching the Celery worker.

    I would temporarily downgrade my boto3 to avoid the problem, but it's been a long time since the last release, and I need the elbv2 support that only comes in 1.4.0.

    I've created a tiny version of my worker (worker2.py) to demonstrate the problem. I've verified that using the previous version boto3 1.3.1 results in the worker launching properly. I see all prints and the Celery worker banner output.

    If I install boto3 1.4.0, then the second print() statement "Download complete" is never reached. Also note that I tried following the new doc example with boto3.resource and using s3.meta.client, but that fails as well.

    #
    # Stub Celery worker to demonstrate bug in Boto3 1.4.0. Works fine with previous version Boto3 1.3.1.
    # Test with: celery worker -A worker2.celery
    #
    from flask import Flask
    from celery import Celery
    import boto3
    import tempfile
    
    celery = Celery(__name__, broker='amqp://guest:[email protected]:5672//')
    
    app = Flask(__name__)
    
    s3 = boto3.client('s3', region_name='us-west-1')
    env_file = 'APPNAME.APPSTAGE.env'
    with tempfile.NamedTemporaryFile() as s3_file:
        print("Downloading file...")
        response = s3.download_file('APPBUCKET', env_file, s3_file.name)
        print("Download complete!")
    

    You can test it by running the following at the command line:

    celery worker -A worker2.celery
    

    Also note that just running the code downloads the file just fine with 1.4.0:

    python worker2.py
    
    opened by dmulter 30
  • KeyError: 'endpoint_resolver'

    KeyError: 'endpoint_resolver'

    Hi,

    I sometimes get that error when trying to call a Lambda function (boto3==1.3.1):

    def lambda_execute(payload):
        import boto3
        client = boto3.client('lambda', aws_access_key_id=KEY, aws_secret_access_key=SECRET, region_name=REGION)
        client.invoke(**payload)
    

    payload is in this format:

    {'FunctionName': fct, 'InvocationType': 'Event', 'LogType': 'None', 'Payload': simplejson.dumps(payload, default=encode_model)}
    

    error seems to be coming from get_component in botocore/session.py

    (screenshot of the error attached, 2016-09-07 11:35:14)

    Can you help?
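
    If this code is called from multiple threads (an assumption, not stated in the report), a hedged sketch of the usual recommendation is to create a dedicated Session per caller instead of using the module-level boto3.client, since client creation through the shared default session is not thread-safe:

    import boto3

    def lambda_execute(payload):
        # Each call builds its own Session and client rather than sharing
        # the default session's components across threads.
        session = boto3.session.Session()
        client = session.client(
            'lambda',
            aws_access_key_id=KEY,
            aws_secret_access_key=SECRET,
            region_name=REGION,
        )
        client.invoke(**payload)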

    question guidance closed-for-staleness 
    opened by CosmicAnalogue465 30
  • recommended method for slowing down speech does not work

    recommended method for slowing down speech does not work

    Describe the bug

    In the documentation listed here

    https://docs.aws.amazon.com/polly/latest/dg/voice-speed-vip.html

    it says to use the following tags to slow speech down

    In some cases, it might help your audience to slow the speaking rate slightly to aid in comprehension.

    However, it does not work. The computer will actually say: 'speak' and 'prosody rate'

    My code is as follows, where s is the string I'm trying to process. I can't use < > in GitHub because they're special characters, so in place of < I will use [

    s = f'[speak][prosody rate="90%"]{s}[/prosody][/speak]'

    I would attach my file but .mp3 are not accepted.
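
    For reference, a hedged sketch of sending SSML to Polly with TextType='ssml', since tags sent as plain text are read aloud literally (the voice and output format here are assumptions):

    import boto3

    polly = boto3.client('polly')
    ssml = '<speak><prosody rate="90%">' + s + '</prosody></speak>'
    response = polly.synthesize_speech(
        Text=ssml,
        TextType='ssml',   # without this, Polly treats the tags as ordinary text
        OutputFormat='mp3',
        VoiceId='Joanna',
    )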

    Expected Behavior

    Output speech text without saying 'speak' and 'prosody'

    Current Behavior

    The output speech includes the spoken words 'speak' and 'prosody'.

    Reproduction Steps

    see above.

    Possible Solution

    No response

    Additional Information/Context

    No response

    SDK version used

    don't know

    Environment details (OS name and version, etc.)

    Mac OS 12.2 Python 3.8

    polly 
    opened by kylefoley76 4
  • Garbage collection using `gc.collect()` is not showing any effect

    Garbage collection using `gc.collect()` is not showing any effect

    Describe the bug

    When we invoke boto3 client methods multiple times or run them in some kind of loop n times, memory accumulates with each iteration. Even if we call gc.collect(), it shows no effect.

    Expected Behavior

    1. Garbage collection should happen properly and all unused resources should be removed.

    Current Behavior

    If we run some boto3 code in a loop n times, memory accumulates with each iteration and gc.collect() does not release the unused memory. At the end of the program gc.collect() returns 0 unreachable objects, but this also doesn't show any change in memory usage.

    Reproduction Steps

    import gc
    import os
    import boto3
    import psutil
    
    gc.set_debug(gc.DEBUG_UNCOLLECTABLE)
    
    boto3.set_stream_logger('')
    
    def get_memory_usage():
        return psutil.Process(os.getpid()).memory_info().rss // 1024 ** 2
    
    
    def test():
        queue_url = 'https://us-east-2.queue.amazonaws.com/916470431480/test.fifo'
        sqs = boto3.client('sqs')
        for i in range(10):
            message = sqs.receive_message(QueueUrl=queue_url)
            if message.get('Messages'):
                print(message)
                recept_handle = message['Messages'][0]['ReceiptHandle']
                sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=recept_handle)
    
            print(f'Iteration - {i + 1} Unreachable Objects: {gc.collect()} and length: {len(gc.garbage)}')
            print(f'Memory usage After: {get_memory_usage()}mb')
    
    
    for _ in range(5):
        print(f'Memory usage Before: {get_memory_usage()}mb')
        test()
        print(f'==================Unreachable Objects: {gc.collect()}==================')
        print(len(gc.garbage))
        print(f'Memory usage After: {get_memory_usage()}mb')
    
        print('\n' * 5)
    

    The issue can be reproduced by running the sample code above.

    Possible Solution

    No response

    Additional Information/Context

    Logs: log.txt

    SDK version used

    1.26.37

    Environment details (OS name and version, etc.)

    Linux 5.15.84-1-MANJARO

    sqs 
    opened by sudouser777 7
  • list_objects response does not decode prefix

    list_objects response does not decode prefix

    Describe the bug

    Hi, I've noticed that in the response of list_objects with a prefix, the client does not decode the prefix. For example, we have these two tests:

    @attr(resource='bucket')
    @attr(method='get')
    @attr(operation='list under prefix')
    @attr(assertion='returns only objects under prefix')
    @attr('fails_on_dbstore')
    def test_bucket_list_prefix_basic():
        key_names = ['foo/bar', 'foo/baz', 'quux']
        bucket_name = _create_objects(keys=key_names)
        client = get_client()
    
        response = client.list_objects(Bucket=bucket_name, Prefix='foo/')
        eq(response['Prefix'], 'foo/')
    
        keys = _get_keys(response)
        prefixes = _get_prefixes(response)
        eq(keys, ['foo/bar', 'foo/baz'])
        eq(prefixes, [])
    
    @attr(resource='bucket')
    @attr(method='get')
    @attr(operation='list under prefix with list-objects-v2')
    @attr(assertion='returns only objects under prefix')
    @attr('list-objects-v2')
    @attr('fails_on_dbstore')
    def test_bucket_listv2_prefix_basic():
        key_names = ['foo/bar', 'foo/baz', 'quux']
        bucket_name = _create_objects(keys=key_names)
        client = get_client()
    
        response = client.list_objects_v2(Bucket=bucket_name, Prefix='foo/')
        eq(response['Prefix'], 'foo/')
    
        keys = _get_keys(response)
        prefixes = _get_prefixes(response)
        eq(keys, ['foo/bar', 'foo/baz'])
        eq(prefixes, [])
    

    The only difference between the two tests is that the first uses list_objects while the second uses list_objects_v2 (the v2 version of list objects); those tests are from the Ceph S3 tests project. I printed the response in both cases and noticed that in the first case the prefix field is not decoded back:

    
    {'ResponseMetadata': {'RequestId': 'lbt12buh-6t7zlf-j0f', 'HostId': 'lbt12buh-6t7zlf-j0f', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-request-id': 'lbt12buh-6t7zlf-j0f', 'x-amz-id-2': 'lbt12buh-6t7zlf-j0f', 'access-control-allow-origin': '*', 'access-control-allow-credentials': 'true', 'access-control-allow-methods': 'GET,POST,PUT,DELETE,OPTIONS', 'access-control-allow-headers': 'Content-Type,Content-MD5,Authorization,X-Amz-User-Agent,X-Amz-Date,ETag,X-Amz-Content-Sha256', 'access-control-expose-headers': 'ETag,X-Amz-Version-Id', 'content-type': 'application/xml', 'content-length': '818', 'date': 'Sun, 18 Dec 2022 07:09:19 GMT', 'connection': 'keep-alive', 'keep-alive': 'timeout=5'}, 'RetryAttempts': 0}, 'IsTruncated': False, 'Marker': '', 'Contents': [{'Key': 'foo/bar', 'LastModified': datetime.datetime(2022, 12, 18, 7, 9, 18, tzinfo=tzlocal()), 'ETag': '"82d0f0fa8551de8b7eb5ecb65eae0261"', 'Size': 7, 'StorageClass': 'STANDARD', 'Owner': {'DisplayName': 'NooBaa', 'ID': '123'}}, {'Key': 'foo/baz', 'LastModified': datetime.datetime(2022, 12, 18, 7, 9, 19, tzinfo=tzlocal()), 'ETag': '"2b92cb3da20fd0dd9b62b614dbcbe9b3"', 'Size': 7, 'StorageClass': 'STANDARD', 'Owner': {'DisplayName': 'NooBaa', 'ID': '123'}}], 'Name': 'ceph-kdwa0al2iozquphj5vjsl42h-1', 'Prefix': 'foo%2F', 'MaxKeys': 1000, 'EncodingType': 'url'}
    
    
    
    {'ResponseMetadata': {'RequestId': 'lbt1ap8q-c0jr0-vk8', 'HostId': 'lbt1ap8q-c0jr0-vk8', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-request-id': 'lbt1ap8q-c0jr0-vk8', 'x-amz-id-2': 'lbt1ap8q-c0jr0-vk8', 'access-control-allow-origin': '*', 'access-control-allow-credentials': 'true', 'access-control-allow-methods': 'GET,POST,PUT,DELETE,OPTIONS', 'access-control-allow-headers': 'Content-Type,Content-MD5,Authorization,X-Amz-User-Agent,X-Amz-Date,ETag,X-Amz-Content-Sha256', 'access-control-expose-headers': 'ETag,X-Amz-Version-Id', 'content-type': 'application/xml', 'content-length': '703', 'date': 'Sun, 18 Dec 2022 07:15:49 GMT', 'connection': 'keep-alive', 'keep-alive': 'timeout=5'}, 'RetryAttempts': 0}, 'IsTruncated': False, 'Contents': [{'Key': 'foo/bar', 'LastModified': datetime.datetime(2022, 12, 18, 7, 15, 49, tzinfo=tzlocal()), 'ETag': '"82d0f0fa8551de8b7eb5ecb65eae0261"', 'Size': 7, 'StorageClass': 'STANDARD'}, {'Key': 'foo/baz', 'LastModified': datetime.datetime(2022, 12, 18, 7, 15, 49, tzinfo=tzlocal()), 'ETag': '"2b92cb3da20fd0dd9b62b614dbcbe9b3"', 'Size': 7, 'StorageClass': 'STANDARD'}], 'Name': 'ceph-5s1koiijx2gac6rn7jvy3cr3-1', 'Prefix': 'foo/', 'MaxKeys': 1000, 'EncodingType': 'url', 'KeyCount': 2}
    
    

    See the difference between 'Prefix': 'foo%2F' (not decoded back) and 'Prefix': 'foo/' (decoded)

    Expected Behavior

    To see the prefix in the response decoded, which means: 'Prefix': 'foo/'

    Current Behavior

    prefix is not decoded in the client: 'Prefix': 'foo%2F'

    Reproduction Steps

    you can use the 2 tests as described above.

    Possible Solution

    The prefix needs to be decoded in the client in list_objects (as it is decoded in list_objects_v2).

    Additional Information/Context

    I was not sure how the last two questions help, but I answered them anyway: SDK version used - server side (the issue is about the boto3 client). Environment details (OS name and version, etc.) - my workstation.

    SDK version used

    2 server 3 client - boto3

    Environment details (OS name and version, etc.)

    MacOS 12.6.1

    s3 
    opened by shirady 2
  • S3 resource meta client copy tagging behaviour not documented

    S3 resource meta client copy tagging behaviour not documented

    Describe the issue

    When using the meta client to copy files from one bucket to another, tags are only copied when the file size of the object is less than 8MB.

    Looking through the documentation, there is nothing mentioned around the tagging behaviour when using this method. Would it be possible to update the documentation to include this behaviour?

    Find below the code snippet that I used to copy the object.

    import boto3
    s3_resource = boto3.resource('s3')
    key = "test.txt"
    copy_source = {
        "Bucket": ORIGIN_BUCKET,
        "Key": key
    }
    
    s3_resource.meta.client.copy(copy_source, DESTINATION_BUCKET, key)
    

    Links

    Link to the description of the method in the docs: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.copy

    Link to the file that led me to test the behaviour with 7MB, 8MB and 9MB files. With only the 7MB file having the tags present on the object in the destination bucket: https://github.com/boto/boto3/blob/develop/boto3/s3/transfer.py#L170
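
    For reference, a hedged sketch of raising the multipart threshold via TransferConfig so an object of this size is copied with a single CopyObject request (which is what appears to preserve the tags above); the 32 MB value is an arbitrary assumption:

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3_resource = boto3.resource('s3')
    key = "test.txt"
    copy_source = {"Bucket": ORIGIN_BUCKET, "Key": key}

    # Objects below multipart_threshold are copied with one CopyObject call
    # instead of a multipart copy.
    config = TransferConfig(multipart_threshold=32 * 1024 * 1024)
    s3_resource.meta.client.copy(copy_source, DESTINATION_BUCKET, key, Config=config)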

    documentation s3 resources p3 
    opened by GCHQDeveloper9491 1
  • ObjectVersion filter method does not respect MaxKeys

    ObjectVersion filter method does not respect MaxKeys

    Describe the bug

    When calling filter(...) with a value for MaxKeys under the limit of 1000, the response includes more items than requested.

    Expected Behavior

    The documentation for the MaxKeys parameter states:

    MaxKeys (integer) -- Sets the maximum number of keys returned in the response. By default the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more. If additional keys satisfy the search criteria, but were not returned because max-keys was exceeded, the response contains <isTruncated>true</isTruncated>. To return the additional keys, see key-marker and version-id-marker.
    

    I would expect that if MaxKeys is set and the filter criteria apply to more items, the response would be limited to the number of requested keys and would include the truncation value as called out in the docs.

    Alternatively, if there are reasons why the API supports it but boto3 does not, I would expect that the boto3 documentation would explain the delta (i.e. "This is deprecated" or "Don't use this, it doesn't do what you expect")

    Current Behavior

    The call to filter(...) returns all items with the given prefix regardless of the value provided for MaxKeys

    Reproduction Steps

    import boto3
    s3_rsrc = boto3.resource('s3')
    
    # Not actual bucket, but can provide via secure channel if required
    # Bucket prefix combo previously initialized with 79 items under the prefix,
    # Each item has ~10 versions
    bucket_name = 'dataset-abc-123456789012-us-west-2'
    prefix = 'AAQICCAV'
    bucket = s3_rsrc.Bucket(bucket_name)
    kwargs = {'Prefix': prefix, 'MaxKeys': 10}
    
    resp = bucket.object_versions.filter(**kwargs)
    
    item_count = 0
    key_count = 0
    last_key = None
    for item in resp:
        if item.key != last_key:
            key_count += 1
            last_key = item.key
        item_count += 1
    
    print(f"Item count: {item_count}, Key count: {key_count}")
    

    Output:

    Item count: 783, Key count: 79
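
    For the narrower goal of capping how many items are iterated, a hedged workaround sketch using the collection's limit() method; the resource collection keeps paginating regardless of MaxKeys, which only bounds each underlying API page (bucket and prefix are taken from the snippet above):

    # Iterate over at most 10 object versions under the prefix.
    for item in bucket.object_versions.filter(Prefix=prefix).limit(10):
        print(item.key)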
    

    Possible Solution

    No response

    Additional Information/Context

    No response

    SDK version used

    1.26.24

    Environment details (OS name and version, etc.)

    MacOS 12.6.1, Python 3.10.8

    bug s3 resources 
    opened by corey-cole 2
Releases(0.0.14)
  • 0.0.14(Apr 9, 2015)

    • feature:Resources: Update to the latest resource models for:
    • AWS CloudFormation
    • Amazon EC2
    • AWS IAM
    • feature:Amazon S3: Add an upload_file and download_file to S3 clients that transparently handle parallel multipart transfers.
    • feature:Botocore: Update to Botocore 0.102.0.
    • Add support for Amazon Machine Learning.
    • Add support for Amazon Workspaces.
    • Update requests to 2.6.0.
    • Update AWS Lambda to the latest API.
    • Update Amazon EC2 Container Service to the latest API.
    • Update Amazon S3 to the latest API.
    • Add DBSnapshotCompleted support to Amazon RDS waiters.
    • Fixes for the REST-JSON protocol.
    Source code(tar.gz)
    Source code(zip)
  • 0.0.13(Apr 3, 2015)

    • feature:Botocore: Update to Botocore 0.100.0.
    • Update AWS CodeDeploy to the latest service API.
    • Update Amazon RDS to support the describe_certificates service operation.
    • Update Amazon Elastic Transcoder to support PlayReady DRM.
    • Update Amazon EC2 to support D2 instance types.
    Source code(tar.gz)
    Source code(zip)
  • 0.0.12(Mar 27, 2015)

    • feature:Resources: Add the ability to load resource data from a has relationship. This saves a call to load when available, and otherwise fixes a problem where there was no way to get at certain resource data. (issue 74,
    • feature:Botocore: Update to Botocore 0.99.0
    • Update service models for Amazon Elastic Transcoder, AWS IAM and AWS OpsWorks to the latest versions.
    • Add deprecation warnings for old interface.
    Source code(tar.gz)
    Source code(zip)
  • 0.0.11(Mar 24, 2015)

    • feature:Resources: Add Amazon EC2 support for ClassicLink actions and add a delete action to EC2 Volume resources.
    • feature:Resources: Add a load operation and user reference to AWS IAM's CurrentUser resource. (issue 72,
    • feature:Resources: Add resources for AWS IAM managed policies. (issue 71)
    • feature:Botocore: Update to Botocore 0.97.0
    • Add new Amazon EC2 waiters.
    • Add support for Amazon S3 cross region replication.
    • Fix an issue where empty config values could not be specified for Amazon S3's bucket notifications. (botocore issue 495)
    • Update Amazon CloudWatch Logs to the latest API.
    • Update Amazon Elastic Transcoder to the latest API.
    • Update AWS CloudTrail to the latest API.
    • Fix bug where explicitly passed profile_name will now override any access and secret keys set in environment variables. (botocore issue 486)
    • Add endpoint_url to client.meta.
    • Better error messages for invalid regions.
    • Fix creating clients with unicode service name.
    Source code(tar.gz)
    Source code(zip)
  • 0.0.10(Mar 24, 2015)

    • bugfix:Documentation: Name collisions are now handled at the resource model layer instead of the factory, meaning that the documentation now uses the correct names. (issue 67)
    • feature:Session: Add a region_name option when creating a session. (issue 69, issue 21)
    • feature:Botocore: Update to Botocore 0.94.0
    • Update to the latest Amazon CloudSearch API.
    • Add support for near-realtime data updates and exporting historical data from Amazon Cognito Sync.
    • Removed the ability to clone a low-level client. Instead, create a new client with the same parameters.
    • Add support for URL paths in an endpoint URL.
    • Multithreading signature fixes.
    • Add support for listing hosted zones by name and getting hosted zone counts from Amazon Route53.
    • Add support for tagging to AWS Data Pipeline.
    Source code(tar.gz)
    Source code(zip)
  • 0.0.9(Feb 20, 2015)

    • feature:Botocore: Update to Botocore 0.92.0
    • Add support for the latest Amazon EC2 Container Service API.
    • Allow calling AWS STS assume_role_with_saml without credentials.
    • Update to latest Amazon CloudFront API
    • Add support for AWS STS regionalized calls by passing both a region name and an endpoint URL. (botocore issue 464)
    • Add support for Amazon Simple Systems Management Service (SSM)
    • Fix Amazon S3 auth errors when uploading large files to the eu-central-1 and cn-north-1 regions. (botocore issue 462)
    • Add support for AWS IAM managed policies
    • Add support for Amazon ElastiCache tagging
    • Add support for Amazon Route53 Domains tagging of domains
    Source code(tar.gz)
    Source code(zip)
  • 0.0.8(Feb 11, 2015)

    • bugfix:Resources: Fix Amazon S3 resource identifier order. (issue 62)
    • bugfix:Resources: Fix collection resource hydration path. (issue 61)
    • bugfix:Resources: Re-enable service-level access to all resources, allowing e.g. obj = s3.Object('bucket', 'key'). (issue 60)
    • feature:Botocore: Update to Botocore 0.87.0
    • Add support for Amazon DynamoDB secondary index scanning.
    • Upgrade to requests 2.5.1.
    • Add support for anonymous (unsigned) clients. (botocore issue 448)
    Source code(tar.gz)
    Source code(zip)
  • 0.0.7(Feb 5, 2015)

    • feature:Resources: Enable support for Amazon Glacier.
    • feature:Resources: Support plural references and nested JMESPath queries for data members when building parameters and identifiers. (issue 52)
    • feature:Resources: Update to the latest resource JSON format. This is a backward-incompatible change as not all resources are exposed at the service level anymore. For example, s3.Object('bucket', 'key') is now s3.Bucket('bucket').Object('key'). (issue 51)
    • feature:Resources: Make resource.meta a proper object. This allows you to do things like resource.meta.client. This is a backward-incompatible change. (issue 45)
    • feature:Dependency: Update to JMESPath 0.6.1
    • feature:Botocore: Update to Botocore 0.86.0
    • Add support for AWS CloudHSM
    • Add support for Amazon EC2 and Autoscaling ClassicLink
    • Add support for Amazon EC2 Container Service (ECS)
    • Add support for encryption at rest and CloudHSM to Amazon RDS
    • Add support for Amazon DynamoDB online indexing.
    • Add support for AWS ImportExport get_shipping_label.
    • Add support for Amazon Glacier.
    • Add waiters for AWS ElastiCache. (botocore issue 443)
    • Fix an issue with Amazon CloudFront waiters. (botocore issue 426)
    • Allow binary data to be passed to UserData. (botocore issue 416)
    • Fix Amazon EMR endpoints for eu-central-1 and cn-north-1. (botocore issue 423)
    • Fix issue with base64 encoding of blob types for Amazon EMR. (botocore issue 413)
    Source code(tar.gz)
    Source code(zip)
  • 0.0.6(Dec 18, 2014)

    • feature:Amazon SQS: Add purge action to queue resources
    • feature:Waiters: Add documentation for client and resource waiters (issue 44)
    • feature:Waiters: Add support for resource waiters (issue 43)
    • bugfix:Installation: Remove dependency on the unused six module (issue 42)
    • feature:Botocore: Update to Botocore 0.80.0
    • Update Amazon Simple Workflow Service (SWF) to the latest version
    • Update AWS Storage Gateway to the latest version
    • Update AWS Elastic MapReduce (EMR) to the latest version
    • Update AWS Elastic Transcoder to the latest version
    • Enable use of page_size for clients (botocore issue 408)
    Source code(tar.gz)
    Source code(zip)
  • 0.0.5(Dec 16, 2014)

    • feature: Add support for batch actions on collections. (issue 32)
    • feature: Update to Botocore 0.78.0
    • Add support for Amazon Simple Queue Service purge queue which allows users to delete the messages in their queue.
    • Add AWS OpsWorks support for registering and assigning existing Amazon EC2 instances and on-premises servers.
    • Fix issue with expired signatures when retrying failed requests (botocore issue 399)
    • Port Route53 resource ID customizations from AWS CLI to Botocore. (botocore issue 398)
    • Fix handling of blob type serialization for JSON services. (botocore issue 397)
    Source code(tar.gz)
    Source code(zip)
  • 0.0.4(Dec 4, 2014)

    • feature: Update to Botocore 0.77.0
    • Add support for Kinesis PutRecords operation. It writes multiple data records from a producer into an Amazon Kinesis stream in a single call.
    • Add support for IAM GetAccountAuthorizationDetails operation. It retrieves information about all IAM users, groups, and roles in your account, including their relationships to one another and their attached policies.
    • Add support for updating the comment of a Route53 hosted zone.
    • Fix base64 serialization for JSON protocol services.
    • Fix issue where certain timestamps were not being accepted as valid input (botocore issue 389)
    • feature: Update Amazon EC2 resource model.
    • feature: Support belongsTo resource reference as well as path specified in an action's resource definition.
    • bugfix: Fix an issue accessing SQS message bodies (issue 33)
    Source code(tar.gz)
    Source code(zip)
  • 0.0.3(Nov 26, 2014)

    • feature: Update to Botocore 0.76.0.
    • Add support for using AWS Data Pipeline templates to create pipelines and bind values to parameters in the pipeline
    • Add support to Amazon Elastic Transcoder client for encryption of files in Amazon S3.
    • Fix issue where Amazon S3 requests were not being resigned correctly when using Signature Version 4. (botocore issue 388)
    • Add support for custom response parsing in Botocore clients. (botocore issue 387)
    Source code(tar.gz)
    Source code(zip)
  • 0.0.2(Nov 26, 2014)

  • 0.0.1(Nov 26, 2014)

aiosql - Simple SQL in Python

aiosql - Simple SQL in Python SQL is code. Write it, version control it, comment it, and run it using files. Writing your SQL code in Python programs

Will Vaughn 1.1k Jan 08, 2023
MinIO Client SDK for Python

MinIO Python SDK for Amazon S3 Compatible Cloud Storage MinIO Python SDK is Simple Storage Service (aka S3) client to perform bucket and object operat

High Performance, Kubernetes Native Object Storage 582 Dec 28, 2022
A Redis client library for Twisted Python

txRedis Asynchronous Redis client for Twisted Python. Install Install via pip. Usage examples can be found in the examples/ directory of this reposito

Dorian Raymer 127 Oct 23, 2022
aiomysql is a library for accessing a MySQL database from the asyncio

aiomysql aiomysql is a "driver" for accessing a MySQL database from the asyncio (PEP-3156/tulip) framework. It depends on and reuses most parts of PyM

aio-libs 1.5k Jan 03, 2023
Python ODBC bridge

pyodbc pyodbc is an open source Python module that makes accessing ODBC databases simple. It implements the DB API 2.0 specification but is packed wit

Michael Kleehammer 2.6k Dec 27, 2022
Familiar asyncio ORM for python, built with relations in mind

Tortoise ORM Introduction Tortoise ORM is an easy-to-use asyncio ORM (Object Relational Mapper) inspired by Django. Tortoise ORM was build with relati

Tortoise 3.3k Dec 31, 2022
PyMongo - the Python driver for MongoDB

PyMongo Info: See the mongo site for more information. See GitHub for the latest source. Documentation: Available at pymongo.readthedocs.io Author: Mi

mongodb 3.7k Jan 08, 2023
A database migrations tool for SQLAlchemy.

Alembic is a database migrations tool written by the author of SQLAlchemy. A migrations tool offers the following functionality: Can emit ALTER statem

SQLAlchemy 1.7k Jan 01, 2023
Dinamopy is a python helper library for dynamodb

Dinamopy is a python helper library for dynamodb. You can define your access patterns in a json file and can use dynamic method names to make operations.

Rasim Andıran 2 Jul 18, 2022
A Telegram Bot to manage Redis Database.

A Telegram Bot to manage Redis database. Direct deploy on heroku Manual Deployment python3, git is required Clone repo git clone https://github.com/bu

Amit Sharma 4 Oct 21, 2022
Prometheus instrumentation library for Python applications

Prometheus Python Client The official Python 2 and 3 client for Prometheus. Three Step Demo One: Install the client: pip install prometheus-client Tw

Prometheus 3.2k Jan 07, 2023
GINO Is Not ORM - a Python asyncio ORM on SQLAlchemy core.

GINO - GINO Is Not ORM - is a lightweight asynchronous ORM built on top of SQLAlchemy core for Python asyncio. GINO 1.0 supports only PostgreSQL with

GINO Community 2.5k Dec 27, 2022
Toolkit for storing files and attachments in web applications

DEPOT - File Storage Made Easy DEPOT is a framework for easily storing and serving files in web applications on Python2.6+ and Python3.2+. DEPOT suppo

Alessandro Molina 139 Dec 25, 2022
GINO Is Not ORM - a Python asyncio ORM on SQLAlchemy core.

GINO - GINO Is Not ORM - is a lightweight asynchronous ORM built on top of SQLAlchemy core for Python asyncio. GINO 1.0 supports only PostgreSQL with

GINO Community 2.5k Dec 29, 2022
Py2neo is a client library and toolkit for working with Neo4j from within Python

Py2neo Py2neo is a client library and toolkit for working with Neo4j from within Python applications. The library supports both Bolt and HTTP and prov

py2neo.org 1.2k Jan 02, 2023
SAP HANA Connector in pure Python

SAP HANA Database Client for Python A pure Python client for the SAP HANA Database based on the SAP HANA Database SQL Command Network Protocol. pyhdb

SAP 299 Nov 20, 2022
A pandas-like deferred expression system, with first-class SQL support

Ibis: Python data analysis framework for Hadoop and SQL engines Service Status Documentation Conda packages PyPI Azure Coverage Ibis is a toolbox to b

Ibis Project 2.3k Jan 06, 2023
A CRUD and REST api with mongodb atlas.

Movies_api A CRUD and REST api with mongodb atlas. Setup First import all the python dependencies in your virtual environment or globally by the follo

Pratyush Kongalla 0 Nov 09, 2022
Import entity definition document into SQLie3. Manage the entity. Also, create a "Create Table SQL file".

EntityDocumentMaker Version 1.00 After importing the entity definition (Excel file), store the data in sqlite3. エンティティ定義(Excelファイル)をインポートした後、データをsqlit

G-jon FujiYama 1 Jan 09, 2022
Pandas Google BigQuery

pandas-gbq pandas-gbq is a package providing an interface to the Google BigQuery API from pandas Installation Install latest release version via conda

Python for Data 345 Dec 28, 2022