A full-featured file system for online data storage

Overview

S3QL

S3QL is a file system that stores all its data online using storage services like Google Storage, Amazon S3, or OpenStack. S3QL effectively provides a virtual drive of dynamic, infinite capacity that can be accessed from any computer with internet access.

S3QL is a standards-conforming, full-featured UNIX file system that is conceptually indistinguishable from any local file system. Furthermore, S3QL has additional features like compression, encryption, data de-duplication, immutable trees and snapshotting, which make it especially suitable for online backup and archival.

S3QL is designed to favor simplicity and elegance over performance and feature-creep. Care has been taken to make the source code as readable and serviceable as possible. Solid error detection and error handling have been included from the very first line, and S3QL comes with extensive automated test cases for all its components.

Features

  • Transparency. Conceptually, S3QL is indistinguishable from a local file system. For example, it supports hardlinks, symlinks, standard UNIX permissions, extended attributes and file sizes up to 2 TB.

  • Dynamic Size. The size of an S3QL file system grows and shrinks dynamically as required.

  • Compression. Before storage, all data may be compressed with the LZMA, bzip2 or deflate (gzip) algorithm.

  • Encryption. After compression (but before upload), all data can be AES-encrypted with a 256-bit key. An additional SHA256 HMAC checksum protects the data against manipulation.

  • Data De-duplication. If several files have identical contents, the redundant data will be stored only once. This works across all files stored in the file system, even if only some parts of the files are identical while other parts differ.

  • Immutable Trees. Directory trees can be made immutable, so that their contents can no longer be changed in any way whatsoever. This can be used to ensure that backups cannot be modified after they have been made.

  • Copy-on-Write/Snapshotting. S3QL can replicate entire directory trees without using any additional storage space. Only when one of the copies is modified does the modified part of the data take up additional storage space. This can be used to create intelligent snapshots that preserve the state of a directory at different points in time using a minimum amount of space.

  • High Performance independent of network latency. All operations that do not write or read file contents (like creating directories or moving, renaming, and changing permissions of files and directories) are very fast because they are carried out without any network transactions.

    S3QL achieves this by saving the entire file and directory structure in a database. This database is locally cached and the remote copy updated asynchronously.

  • Support for low bandwidth connections. S3QL splits file contents into smaller blocks and caches blocks locally. This minimizes both the number of network transactions required for reading and writing data, and the amount of data that has to be transferred when only parts of a file are read or written.
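
The integrity-protection idea behind the encryption feature above (a SHA256 HMAC computed over the stored data) can be sketched with Python's standard library alone. This is a hypothetical illustration: the seal/unseal helpers and the key are invented for this example and do not reflect S3QL's actual on-disk format.

```python
import hmac
import hashlib

def seal(key, data):
    """Append a SHA256 HMAC so that manipulation of `data` can be detected."""
    tag = hmac.new(key, data, hashlib.sha256).digest()
    return data + tag

def unseal(key, blob):
    """Verify the trailing HMAC; raise ValueError if the blob was tampered with."""
    data, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("HMAC mismatch: data was manipulated")
    return data

key = b"0" * 32  # in real use, a randomly generated 256-bit key
blob = seal(key, b"file block contents")
data = unseal(key, blob)  # returns the original data if untampered
```

Any bit flip in the stored blob makes unseal() raise, which is how manipulation of uploaded objects is detected.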
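
The de-duplication and block-splitting features above can be illustrated with a toy content-addressed store. This is a minimal, hypothetical sketch (the tiny block size, function name, and in-memory dict are invented for illustration; S3QL actually tracks blocks in its metadata database and uses much larger blocks):

```python
import hashlib

BLOCK_SIZE = 4  # tiny block size for illustration only

def store(data, block_store):
    """Split `data` into fixed-size blocks and deduplicate by content hash.

    Returns the list of block digests needed to reconstruct the data;
    identical blocks (within or across files) are stored only once.
    """
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        block_store.setdefault(digest, block)  # store each unique block once
        refs.append(digest)
    return refs

block_store = {}
a = store(b"abcdabcdxyz", block_store)   # contains two identical "abcd" blocks
b = store(b"abcdabcd1234", block_store)  # shares its "abcd" blocks with the first file
# block_store now holds only 3 unique blocks for both "files"
```

Because the digest is computed from the block contents, redundancy is eliminated both within a single file and across files, as the feature list describes.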

Development Status

S3QL is considered stable and suitable for production use. Starting with version 2.17.1, S3QL uses semantic versioning. This means that backwards-incompatible versions (e.g., versions that require an upgrade of the file system revision) will be reflected in an increase of the major version number.

Supported Platforms

S3QL is developed and tested under Linux. Users have also reported running S3QL successfully on macOS, FreeBSD and NetBSD. We try to maintain compatibility with these systems, but (due to a lack of pre-release testers) we cannot guarantee that every release will run on all non-Linux systems. Please report any bugs you find, and we will try to fix them.

Typical Usage

Before a file system can be mounted, the backend which will hold the data has to be initialized. This is done with the mkfs.s3ql command. Here we are using the Amazon S3 backend, and nikratio-s3ql-bucket is the S3 bucket in which the file system will be stored.

mkfs.s3ql s3://ap-south-1/nikratio-s3ql-bucket
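
Rather than entering credentials interactively on every invocation, the S3QL commands can read them from the ~/.s3ql/authinfo2 file. A minimal example for the S3 backend might look like the following; the values are placeholders, and the exact keys supported by each backend are listed in the S3QL documentation:

```ini
[s3]
storage-url: s3://ap-south-1/nikratio-s3ql-bucket
backend-login: YOUR_AWS_ACCESS_KEY_ID
backend-password: YOUR_AWS_SECRET_ACCESS_KEY
fs-passphrase: YOUR_FILESYSTEM_PASSPHRASE
```

Since this file holds secrets, it should be readable only by its owner (e.g. chmod 600).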

To mount the S3QL file system stored in the S3 bucket nikratio-s3ql-bucket at the directory /mnt/s3ql, enter:

mount.s3ql s3://ap-south-1/nikratio-s3ql-bucket /mnt/s3ql

Now you can instruct your favorite backup program to run a backup into the directory /mnt/s3ql and the data will be stored on Amazon S3. When you are done, the file system has to be unmounted with

umount.s3ql /mnt/s3ql
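
Once backups are in place, the copy-on-write and immutable-tree features described above are available through the s3qlcp and s3qllock commands. A sketch of a snapshot step (the directory names and date are illustrative):

```shell
# Duplicate the current backup as a copy-on-write snapshot
# (takes no extra storage until one of the copies is modified)
s3qlcp /mnt/s3ql/current /mnt/s3ql/2024-01-01

# Freeze the snapshot so it can no longer be modified
s3qllock /mnt/s3ql/2024-01-01
```

Both commands operate only on directories inside a mounted S3QL file system.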

Need Help?

Please report any bugs you may encounter in the GitHub issue tracker.

Contributing

The S3QL source code is available on GitHub.

Comments
  • Initial support for BackBlaze B2

    Hello,

    Here's a prototype implementation of a BackBlaze B2 backend. It may need more extensive testing, but it responds correctly to the backend's test feature. There are still some things left to do:

    • the backend does not support backslashes in filenames. To pass the unit tests, I replace them with other characters. This does not seem to be a problem, since s3ql does not use backslashes in normal operation, but it remains quite dirty
    • the backend does not support a copy operation. The code downloads and re-uploads the file being copied.
    enhancement 
    opened by sylvainlehmann 39
  • Backblaze B2 Backend

    I tried to implement a backend for Backblaze B2. I tested it a bit and it seems to run, but as I am not (yet) that familiar with python and this project, I am hoping for comments/corrections/suggestions.

    opened by powerpaul17 35
  • mount.s3ql hangs on dugong.HostnameNotResolvable error

    Running 2.8, a number of exceptions like the below occurred, I believe while data was being written to the mount:

    Apr  4 01:47:03 wolfie mount.s3ql[29386]: Thread-5] root.excepthook: Uncaught top-level exception:
    Traceback (most recent call last):
      File "/usr/lib64/python3.6/site-packages/dugong/__init__.py", line 1544, in create_socket
        return socket.create_connection(address)
      File "/usr/lib64/python3.6/socket.py", line 704, in create_connection
        for res in getaddrinfo(host, port, 0, SOCK_STREAM):
      File "/usr/lib64/python3.6/socket.py", line 745, in getaddrinfo
        for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
    socket.gaierror: [Errno -3] Temporary failure in name resolution
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/lib64/python3.6/site-packages/s3ql/mount.py", line 64, in run_with_except_hook
        run_old(*args, **kw)
      File "/usr/lib64/python3.6/threading.py", line 864, in run
        self._target(*self._args, **self._kwargs)
      File "/usr/lib64/python3.6/site-packages/s3ql/block_cache.py", line 409, in _upload_loop
        self._do_upload(*tmp)
      File "/usr/lib64/python3.6/site-packages/s3ql/block_cache.py", line 436, in _do_upload
        % obj_id).get_obj_size()
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/common.py", line 108, in wrapped
        return method(*a, **kw)
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/common.py", line 340, in perform_write
        return fn(fh)
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/comprenc.py", line 371, in __exit__
        self.close()
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/comprenc.py", line 365, in close
        self.fh.close()
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/comprenc.py", line 530, in close
        self.fh.close()
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/common.py", line 108, in wrapped
        return method(*a, **kw)
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/s3c.py", line 948, in close
        headers=self.headers, body=self.fh)
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/gs.py", line 188, in _do_request
        query_string=query_string, body=body)
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/s3c.py", line 480, in _do_request
        query_string=query_string, body=body)
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/s3c.py", line 718, in _send_request
        headers=headers, body=BodyFollowing(body_len))
      File "/usr/lib64/python3.6/site-packages/dugong/__init__.py", line 569, in send_request
        self.timeout)
      File "/usr/lib64/python3.6/site-packages/dugong/__init__.py", line 1495, in eval_coroutine
        if not next(crt).poll(timeout=timeout):
      File "/usr/lib64/python3.6/site-packages/dugong/__init__.py", line 596, in co_send_request
        self.connect()
      File "/usr/lib64/python3.6/site-packages/dugong/__init__.py", line 490, in connect
        self._sock = create_socket((self.hostname, self.port))
      File "/usr/lib64/python3.6/site-packages/dugong/__init__.py", line 1548, in create_socket
        raise HostnameNotResolvable(address[0])
    dugong.HostnameNotResolvable: Host commondatastorage.googleapis.com does not have any ip addresses
    

    When I discovered this later in the morning, I attempted various commands like 'ls' and 's3qlstat', but they would all hang on I/O waits. 'fusermount -u' would simply complain that the filesystem was in use. I had to use 'kill -9' on mount.s3ql, then 'fusermount -u', and finally 'fsck.s3ql'. Everything seemed fine while fsck was uploading dirty blocks, then this...

    Apr  4 09:26:44 wolfie fsck.s3ql[32519]: MainThread] s3ql.fsck.log_error: Writing dirty block 1 of inode 556620 to backend
    Apr  4 09:26:44 wolfie fsck.s3ql[32519]: MainThread] s3ql.fsck.log_error: Writing dirty block 0 of inode 556610 to backend
    Apr  4 09:30:06 wolfie fsck.s3ql[32519]: MainThread] s3ql.fsck.log_error: Writing dirty block 0 of inode 556612 to backend
    Apr  4 09:33:27 wolfie fsck.s3ql[32519]: MainThread] s3ql.fsck.log_error: Writing dirty block 1 of inode 556616 to backend
    Apr  4 09:36:19 wolfie fsck.s3ql[32519]: MainThread] s3ql.fsck.log_error: Writing dirty block 1 of inode 556610 to backend
    Apr  4 09:39:40 wolfie fsck.s3ql[32519]: MainThread] s3ql.fsck.check: Dropping temporary indices...
    Apr  4 09:39:40 wolfie fsck.s3ql[32519]: MainThread] root.excepthook: Uncaught top-level exception:
    Traceback (most recent call last):
      File "/usr/lib/python-exec/python3.6/fsck.s3ql", line 11, in <module>
        load_entry_point('s3ql==2.28', 'console_scripts', 'fsck.s3ql')()
      File "/usr/lib64/python3.6/site-packages/s3ql/fsck.py", line 1322, in main
        fsck.check()
      File "/usr/lib64/python3.6/site-packages/s3ql/fsck.py", line 78, in check
        self.check_cache()
      File "/usr/lib64/python3.6/site-packages/s3ql/fsck.py", line 195, in check_cache
        raise RuntimeError('Strange file in cache directory: %s' % filename)
    RuntimeError: Strange file in cache directory: 550328-1.tmp
    
    bug 
    opened by dgasaway 25
  • WIP: #191: Container friendliness

    Hi,

    Following #191, I finally had time for an MR of "container friendliness" as I so-called it over there.

    This is achieved mostly by implementing a mix of what I did a few weeks back on my own, and recommendations by @d--j in the aforementioned issue.

    While not completely "shell-free", I tried to keep scripting to a minimum for reasonable environment-variables-driven configuration.

    Now there are two things to note:

    • a slightly unfortunate limitation relating to signal handling and the docker entrypoint I ran into, detailed hereafter
    • there are no tests (or CI setup) for this yet, though I do plan to at least have a go at it. However, this would involve some docker-compose (un)fun, which will be a bit tedious, so I'll wait until the implementation seems fine before committing that time :sweat_smile:

    Limitations:

    • you cannot use syslog logging, mostly because setting it up in a docker container would be overly complex, have little value, and considerably complicate and bloat the image
    • you can use the none logging option, but you will then not see the logs, which is due to the next limitation
    • I was not able to come up with a solution using the foreground mode of mount.s3ql

    For the latter, it seems to me that mount.s3ql does indeed terminate on receiving a stop signal, and unmounts the FUSE mount, but doesn't cleanly close the file system. This means you should never actually "cancel" that process yourself, but rather run fusermount/umount.s3ql separately, which will eventually stop it.

    In this setup, a stop signal has the following path:

    1. it is handled by docker (either by Ctrl+C-ing an interactively-attached container, or by docker stop and the like),
    2. the container's entrypoint, dumb-init, ensures it is proxied to entrypoint.sh rather than swallowed by docker's default init (and never seen by the entrypoint),
    3. it is received by entrypoint.sh,
    4. which executes the shutdown hook

    However, to make this work with the foreground mode, the signal needs to somehow be read by entrypoint.sh, and invoke the shutdown hook, then wait for the mount.s3ql process to finish, without the latter ever knowing about the signal...

    Unfortunately I couldn't set this up, even starting mount.s3ql as a background subshell, as it would still always see the signal before the shutdown hook was called...

    In the end, I believe this limitation is a reasonable tradeoff, unless some shell signals/concurrency guru is willing to help out.

    opened by Tristan971 24
  • [Invalid Credentials] s3ql.backends.gs.RequestError after an hour

    Hi, I am new to s3ql. When using a Google bucket with ADC, the mount works, but after an hour it fails with

    ls: cannot open directory '.': Transport endpoint is not connected
    

    Reading the log the error is Unauthorized

    The relevant log file is:

    2019-02-20 18:51:00.053 1467:MainThread s3ql.mount.determine_threads: Using 4 upload threads.
    2019-02-20 18:51:00.054 1467:MainThread s3ql.mount.main: Autodetected 1048532 file descriptors available for cache entries
    2019-02-20 18:51:00.247 1467:MainThread s3ql.backends.gs._get_access_token: Requesting new access token
    2019-02-20 18:51:06.003 1467:MainThread s3ql.mount.get_metadata: Using cached metadata.
    2019-02-20 18:51:06.007 1467:MainThread s3ql.mount.main: Setting cache size to 55240 MB
    2019-02-20 18:51:06.008 1467:MainThread s3ql.mount.main: Mounting gs://bucket/main at /mnt/bucketmain...
    2019-02-20 18:51:06.015 1473:MainThread s3ql.daemonize.detach_process_context: Daemonizing, new PID is 1474
    2019-02-20 19:55:08.263 1474:Thread-6 root.excepthook: Uncaught top-level exception:
    Traceback (most recent call last):
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/mount.py", line 58, in run_with_except_hook
        run_old(*args, **kw)
      File "/usr/lib/python3.6/threading.py", line 864, in run
        self._target(*self._args, **self._kwargs)
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/block_cache.py", line 445, in _upload_loop
        self._do_upload(*tmp)
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/block_cache.py", line 472, in _do_upload
        % obj_id).get_obj_size()
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/backends/common.py", line 108, in wrapped
        return method(*a, **kw)
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/backends/common.py", line 279, in perform_write
        return fn(fh)
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/backends/comprenc.py", line 389, in __exit__
        self.close()
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/backends/comprenc.py", line 383, in close
        self.fh.close()
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/backends/comprenc.py", line 548, in close
        self.fh.close()
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/backends/gs.py", line 933, in close
        self.metadata, size=self.obj_size)
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/backends/common.py", line 108, in wrapped
        return method(*a, **kw)
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/backends/gs.py", line 485, in write_fh
        raise _map_request_error(exc, key) or exc
    s3ql.backends.gs.RequestError: <RequestError, code=401, reason='Unauthorized', message='Invalid Credentials'>
    2019-02-20 19:55:08.321 1474:Thread-5 s3ql.mount.exchook: Unhandled top-level exception during shutdown (will not be re-raised)
    2019-02-20 19:55:08.322 1474:Thread-5 root.excepthook: Uncaught top-level exception:
    

    Now, I don't understand if this is my fault, but I exported

    export GOOGLE_APPLICATION_CREDENTIALS="[PATH]"

    and set the credentials to ADC, with a password, as described in the documentation: http://www.rath.org/s3ql-docs/backends.html

    Thanks

    bug 
    opened by AntonioIbarraOrtiz 19
  • Gdrive Implementation

    Hi, this is my first attempt to implement gdrive, using the base code of the implementation by @mkhon.

    I modified the code of @mkhon to add the following features:

    • [x] S3QL GDrive Full Implementation
    • [x] Gdrive Error Handling
    • [ ] batch requests (better performance when writing/deleting small files; avoids bans for too many requests)
    • [x] oauth client integrated with s3ql
    • [x] avoid unnecessary requests
    • [x] md5 checksum read/write

    I don't expect you to accept the changes now; I want to know if I am on the right track. I want to refactor some things before merging it to main.

    About OAuth: I modified your oauth utility for Google Storage and added a parameter --oauth_type where you can define whether you want to generate a token for Google Storage or Google Drive, and I also added the possibility to use your own clientID/secret. You should modify your clientID to accept Google Drive, because right now you must have your own clientID to generate a token.

    The idea is that the oauth client generates a refresh token, and when you mount s3ql you set the following values: user: your client_id, password: client_secret:refreshToken

    Let me know what you think about this implementation; I'm not an expert in Python, so I suppose there are a lot of things that could be done better.

    enhancement 
    opened by segator 18
  • initial support for Amazon Cloud Drive

    Here's an initial implementation of an Amazon Cloud Drive backend. It stores metadata as a client property, which means it is downloaded along with other basic file info but is pretty much invisible in the web GUI and other clients; that's probably not a big problem (though changing APP_ID would break the fs...)

    Things still left to do:

    • This commit has a temporary app_id and client_id, someone should do a proper app registration, then create a user-friendly-ish webapp to get refresh token from amazon (there's already one here thanks to the acd_cli project).
    • ACD assigns a (random?) id to every file uploaded, and all requests need this id instead of filenames. The filename->id translation normally requires an extra api call, and the latency is horrible. The code now caches the replies (node_cache), but there is still an extra call for each file when it is used for the first time (and this affects new files too, as there is no plain upload-or-overwrite: only an upload that fails if a file with the same name exists, and an overwrite of an existing file by id) and for each delete. Maybe it would be better to mass-download the file list of the whole s3ql directory, answer all these queries locally, and only upload the changes. Since one fs instance can only be mounted by one s3ql process, this shouldn't cause problems, but we probably shouldn't store all metadata in RAM like the current implementation does.
    • No server-side copying. The code now downloads the source and uploads it again, but this breaks the contract in AbstractBackend... At least the rename method is overridden to not use copy.
    opened by DirtYiCE 18
  • Feature request: Backblaze B2 support

    It would be nice to see support for B2. It's one of the newer ones but practically unbeatable in price, and we have success with it using other OSS backup tools like hashbackup or Restic. Docs are at https://www.backblaze.com/b2/docs/

    opened by kobuki 16
  • WIP: Add mock server for OpenStack Swift

    Adds a mock server for OpenStack Swift so that the Swift Backend gets more test coverage in default cases (i.e. without tests on live filesystems).

    For now the mock server does not handle bulk deletes, but it does handle copy via COPY. Ideally it would be good to have three different kinds/configurations of the mock server (one that has no special support for anything, one that supports copy via COPY, and a third that supports bulk delete), but I have no good idea how to do this.

    (Since TravisCI is integrated with GitHub I use GitHub for this pull request.)

    opened by d--j 15
  • Support storing multiple files in the same backend object ("fragments")

    [migrated from BitBucket]

    Storing lots of small files is very inefficient, since every file requires its own block.

    We should add support for fragments, so that multiple files can be stored in the same block.

    With the new bucket interface, we should be able to implement this relatively easily:

    • Upload workers get list of cache entries, new blocks may be coalesced into single object
    • CommitThread() and expire() only call out to worker threads once they have a reasonably big chunk of data ready
    • We keep objects until reference count of all contained blocks is zero
    • Therefore, blocks may continue to exist with refcount=0 and can possibly be reused
    • s3qladm may need a "cleanup" function to get rid of these blocks
    • When downloading object, db can be used to determine which blocks in the object belong to files (and should be added to cache) and which ones can be discarded
    • Minimum size of cache entries passed to workers could be adjusted dynamically based on upload bandwidth, latency, and compression ratio of previous uploads
    enhancement wontfix 
    opened by Nikratio 14
  • Keystone v3 fails and OVH Swift

    Using latest version 3.3.2

    This seems similar to #140. I have been using s3ql with keystone v2 to access OVH object storage for a couple of years without issues. OVH are now moving to keystone v3 only. Accordingly, I have added --backend-options domain=Default to my mount command, as per the documentation, to force v3, but it results in failure with the following log output (same as #140):

    2020-02-07 11:40:03.194 1457:MainThread s3ql.mount.determine_threads: Using 2 upload threads.
    2020-02-07 11:40:03.194 1457:MainThread s3ql.mount.main: Autodetected 4058 file descriptors available for cache entries
    2020-02-07 11:40:03.281 1457:MainThread s3ql.backends.common.get_ssl_context: Reading default CA certificates.
    2020-02-07 11:40:03.287 1457:MainThread s3ql.backends.swift._do_request: started with 'GET', '/', None, {'limit': 1}, None, None
    2020-02-07 11:40:03.288 1457:MainThread s3ql.backends.swift._do_request: no active connection, calling _get_conn()
    2020-02-07 11:40:03.288 1457:MainThread s3ql.backends.swiftks._get_conn: started
    2020-02-07 11:40:03.386 1457:MainThread root.excepthook: No permission to access backend.

    Note that my backend-login is in the form tenant:user

    OVH have not been helpful so far; their guide for using s3ql does not work with v3.

    It may be irrelevant, but there seems to be a subtle difference between the code in backends/swiftks.py and what is suggested by OVH: domain:{name vs. domain:{id

    POST /v3/auth/tokens HTTP/1.1
    Host: auth.cloud.ovh.net
    Content-Length:
    Content-Type: application/json

    {
      "auth": {
        "identity": {
          "methods": [ "password" ],
          "password": {
            "user": {
              "name": "",
              "domain": { "name": "Default" },
              "password": ""
            }
          }
        }
      }
    }

    Does anyone else have it working with OVH and keystone v3?

    opened by drgrumpy 13
  • fsck.s3ql crashes ERROR: Uncaught top-level exception: "Path b'lost+found' does not exist"

    I have an old s3ql filesystem. I don't have the VM anymore, and I closed s3ql with a system shutdown or Ctrl-C (the last time I used it), so the filesystem is not in a clean state. I ran fsck and it failed with "Path b'lost+found' does not exist". This is the second time that I have run fsck without synced metadata; the folder lost+found was created that time.

    Enter backend login:
    Enter backend password:
    Enter file system encryption passphrase:
    Starting fsck of S3URL_REDACTED
    Backend reports that file system is still mounted elsewhere. Either the file system has not been unmounted cleanly or the data has not yet propagated through the backend. In the later case, waiting for a while should fix the problem, in the former case you should try to run fsck on the computer where the file system has been mounted most recently. You may also continue and use whatever metadata is available in the backend. However, in that case YOU MAY LOOSE ALL DATA THAT HAS BEEN UPLOADED OR MODIFIED SINCE THE LAST SUCCESSFULL METADATA UPLOAD. Moreover, files and directories that you have deleted since then MAY REAPPEAR WITH SOME OF THEIR CONTENT LOST.
    Enter "continue, I know what I am doing" to use the outdated data anyway: continue, I know what I am doing

    Downloading and decompressing metadata...
    Reading metadata...
    ..objects..
    ..blocks..
    ..inodes..
    ..inode_blocks..
    ..symlink_targets..
    ..names..
    ..contents..
    ..ext_attributes..
    Creating temporary extra indices...
    Checking lost+found...
    Checking for dirty cache objects...
    Checking names (refcounts)...
    Checking contents (names)...
    WARNING: Content entry for inode 3 refers to non-existing name with id 1, moving to /lost+found/-3
    Dropping temporary indices...
    ERROR: Uncaught top-level exception:
    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/database.py", line 143, in get_row
        row = next(res)
    StopIteration

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/common.py", line 117, in inode_for_path
        inode = conn.get_val("SELECT inode FROM contents_v WHERE name=? AND parent_inode=?",
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/database.py", line 127, in get_val
        return self.get_row(*a, **kw)[0]
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/database.py", line 145, in get_row
        raise NoSuchRowError()
    s3ql.database.NoSuchRowError: Query produced 0 result rows

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/usr/local/bin/fsck.s3ql", line 11, in <module>
        load_entry_point('s3ql==3.8.1', 'console_scripts', 'fsck.s3ql')()
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/fsck.py", line 1289, in main
        fsck.check(check_cache)
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/fsck.py", line 86, in check
        self.check_contents_name()
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/fsck.py", line 323, in check_contents_name
        (id_p_new, newname) = self.resolve_free(b"/lost+found", newname)
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/fsck.py", line 1068, in resolve_free
        inode_p = inode_for_path(path, self.conn)
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/common.py", line 120, in inode_for_path
        raise KeyError('Path %s does not exist' % path)
    KeyError: "Path b'lost+found' does not exist"

    A second fsck

    Enter backend login:
    Enter backend password:
    Enter file system encryption passphrase:
    Starting fsck of S3URL_REDACTED
    Using cached metadata.
    WARNING: Remote metadata is outdated.
    Checking DB integrity...
    Creating temporary extra indices...
    Checking lost+found...
    Checking for dirty cache objects...
    Checking names (refcounts)...
    Checking contents (names)...
    Dropping temporary indices...
    ERROR: Uncaught top-level exception:
    Traceback (most recent call last):
      File "/usr/local/bin/fsck.s3ql", line 11, in <module>
        load_entry_point('s3ql==3.8.1', 'console_scripts', 'fsck.s3ql')()
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/fsck.py", line 1289, in main
        fsck.check(check_cache)
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/fsck.py", line 86, in check
        self.check_contents_name()
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/fsck.py", line 318, in check_contents_name
        path = get_path(inode_p, self.conn)[1:]
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/common.py", line 147, in get_path
        raise RuntimeError('Failed to resolve name "%s" at inode %d to path',
    RuntimeError: ('Failed to resolve name "%s" at inode %d to path', None, 3)

    apt list sqlite3 3.31.1-4ubuntu0.3 amd64 [installed]

    s3ql version s3ql-3.8.1

    Ubuntu 20.04

    I don't remember the s3ql version of the old VM.

    opened by mcdatho 2
  • rsync changes to defunct state while copying from a s3ql to another

    I'm using rsync to copy files from a server A using s3ql to another server B, also using s3ql. My rsync command is executed from the destination server (B) and looks like this: rsync -avz --progress -H -X --partial --one-file-system A:/mnt/s3ql /mnt/s3ql/test

    After a while, the rsync process changes to defunct state and the command freezes.

    I've tried using version 3.0.0 of s3ql but also version 3.8.1.

    I'm using a cache dir of 1G, and here is my mount command: mount.s3ql --allow-other --cachedir=/tmp/cache --cachesize=1024000 --compress=lzma-4 --threads=3 --metadata-upload-interval=72000 local:///mnt/mfsmount /mnt/s3ql

    So far I've tried decreasing the number of threads (previously 8) and decreasing the cache size (previously 8 GB).

    opened by Miyuki0 1
  • B2 backend does not clean up stale upload connections

    The B2 backend maintains a pool of upload URLs and associated connections, which do not get cleaned up after being established unless an error happens.

    This means that if one uses a high thread count (say, threads=32) with the B2 backend, then after a period of intensive I/O, metadata upload may hang for hours while the backend tries all connections one by one, establishing that each of them does not work and closing it, waiting 5 minutes between attempts in the back-off logic:

    Jun 23 06:23:01 stratofortress mount.s3ql[2089141]: Dumping metadata...
    Jun 23 06:23:01 stratofortress mount.s3ql[2089141]: ..objects..
    Jun 23 06:23:01 stratofortress mount.s3ql[2089141]: ..blocks..
    Jun 23 06:23:02 stratofortress mount.s3ql[2089141]: ..inodes..
    Jun 23 06:23:02 stratofortress mount.s3ql[2089141]: ..inode_blocks..
    Jun 23 06:23:03 stratofortress mount.s3ql[2089141]: ..symlink_targets..
    Jun 23 06:23:03 stratofortress mount.s3ql[2089141]: ..names..
    Jun 23 06:23:03 stratofortress mount.s3ql[2089141]: ..contents..
    Jun 23 06:23:03 stratofortress mount.s3ql[2089141]: ..ext_attributes..
    Jun 23 06:23:04 stratofortress mount.s3ql[2089141]: Compressing and uploading metadata...
    Jun 23 06:23:10 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 3)...
    Jun 23 06:23:10 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 4)...
    Jun 23 06:23:10 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 5)...
    Jun 23 06:23:11 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 6)...
    Jun 23 06:23:12 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 7)...
    Jun 23 06:23:14 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 8)...
    Jun 23 06:23:17 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 9)...
    Jun 23 06:23:22 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 10)...
    Jun 23 06:23:35 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 11)...
    Jun 23 06:24:06 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 12)...
    Jun 23 06:25:05 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 13)...
    Jun 23 06:26:33 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 14)...
    Jun 23 06:29:41 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 15)...
    Jun 23 06:35:06 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 16)...
    Jun 23 06:40:19 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 17)...
    Jun 23 06:45:52 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 18)...
    Jun 23 06:50:59 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 19)...
    Jun 23 06:57:27 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 20)...
    Jun 23 07:04:49 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 21)...
    Jun 23 07:11:37 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 22)...
    Jun 23 07:16:48 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 23)...
    Jun 23 07:23:41 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 24)...
    Jun 23 07:29:06 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 25)...
    Jun 23 07:36:26 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 26)...
    Jun 23 07:42:11 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 27)...
    Jun 23 07:47:40 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 28)...
    Jun 23 07:53:26 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 29)...
    Jun 23 08:00:31 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 30)...
    Jun 23 08:07:38 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 31)...
    Jun 23 08:13:44 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 32)...
    Jun 23 08:19:17 stratofortress mount.s3ql[2089141]: Wrote 38.9 MiB of compressed metadata.
    Jun 23 08:19:17 stratofortress mount.s3ql[2089141]: Cycling metadata backups...
    Jun 23 08:19:17 stratofortress mount.s3ql[2089141]: Backing up old metadata...
    

    S3QL should either verify connections before using them, or schedule each established connection to be closed after a period of inactivity.
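
    The second suggestion (closing idle connections instead of retrying them) can be sketched as a small pool wrapper. This is an illustrative sketch, not S3QL's actual B2 backend code; the `UploadConnectionPool` name and the `connect` factory are assumptions made for the example:

```python
import time
from collections import deque

class UploadConnectionPool:
    """Sketch: connections idle longer than max_idle seconds are closed
    and replaced instead of being handed out again."""

    def __init__(self, connect, max_idle=60):
        self.connect = connect    # factory returning a fresh connection
        self.max_idle = max_idle  # idle time after which a connection is dropped
        self._pool = deque()      # (connection, last_used_timestamp)

    def acquire(self):
        # Discard connections that have been idle too long.
        while self._pool:
            conn, last_used = self._pool.popleft()
            if time.monotonic() - last_used <= self.max_idle:
                return conn
            conn.close()          # stale: close now instead of failing 30 retries later
        return self.connect()

    def release(self, conn):
        self._pool.append((conn, time.monotonic()))
```

    With a pool like this, a long idle period after intensive I/O would leave the pool empty rather than full of dead connections, so the next metadata upload starts on a fresh connection immediately.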

    bug 
    opened by intelfx 6
  • Better handle I/O errors in backends

    If the B2 backend encounters an ENOSPACE error while writing into the temporary file, write() will raise the exception, but a subsequent call to close() will result in a checksum error (because the checksum was not updated to reflect the incomplete write to the temporary file), and then a dugong.StateError (probably because, after checking the checksum, we did not read the rest of the response).

    Either write() should update the checksum to reflect the partial data that was actually written (thus eliminating the checksum error on upload), or it should set a flag so that the object is not uploaded at all on close().

    Other backends may have similar issues.

    bug 
    opened by Nikratio 0
  • Running s3qlrm can generate lots of `FileNotFoundError` entries in `mount.log`

    I have found a large number of errors in my ~/.s3ql/mount.log:

    2020-11-04 18:33:37.380 5454:Thread-25 pyfuse3.run: Failed to submit invalidate_entry request for parent inode 200499000, name b'security'
    Traceback (most recent call last):
      File "src/internal.pxi", line 125, in pyfuse3._notify_loop
      File "src/pyfuse3.pyx", line 849, in pyfuse3.invalidate_entry
    FileNotFoundError: [Errno 2] fuse_lowlevel_notify_inval_entry returned: No such file or directory
    

    I suspect these are generated when running s3qlrm shortly before unmount and are harmless. Since invalidate requests are processed in a queue, the kernel may issue forget requests on its own before S3QL gets around to sending the invalidate request to the kernel.

    Still, it would be great to (1) confirm that this is indeed what happens and (2) find a way to avoid the errors.
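
    Assuming the race is indeed benign, one way to avoid the errors would be to treat FileNotFoundError from invalidate_entry as expected. A minimal sketch of such a tolerant notification worker (illustrative only, not pyfuse3's actual implementation):

```python
import queue

def notify_loop(invalidate_entry, work_queue):
    """Drain (parent_inode, name) invalidation requests from a queue;
    a None item stops the loop."""
    while True:
        item = work_queue.get()
        if item is None:
            return
        parent_inode, name = item
        try:
            invalidate_entry(parent_inode, name)
        except FileNotFoundError:
            # The kernel already forgot this entry on its own (e.g. right
            # after s3qlrm, shortly before unmount). There is nothing left
            # to invalidate, so the error is harmless and swallowed.
            pass
```
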

    bug 
    opened by Nikratio 5
  • Various non-deterministic test failures

    2. Test shows the following error:

    tests/t5_cache.py::TestPerstCache::test_cache_flush[True] FAILED                                                                                                                                           [ 87%]
    
    ==================================================================================================== FAILURES ====================================================================================================
    _____________________________________________________________________________________ TestPerstCache.test_cache_flush[True] ______________________________________________________________________________________
    Traceback (most recent call last):
      File "/usr/src/s3ql-3.1/tests/t5_cache.py", line 120, in test_cache_flush
        self.fsck(args=['--keep-cache'])
      File "/usr/src/s3ql-3.1/tests/t4_fuse.py", line 128, in fsck
        assert proc.wait() == expect_retcode
    AssertionError: assert 128 == 0
      -128
      +0
    ---------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------
    Please store the following master key in a safe location. It allows 
    decryption of the S3QL file system in case the storage objects holding 
    this information get corrupted:
    ---BEGIN MASTER KEY---
    dQow xSHp ZzBW QHpF bcRj xjo0 yBVP qw36 gxtI Dr9L vZ0=
    ---END MASTER KEY---
    ---------------------------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------------------------
    WARNING: Maximum object sizes less than 1 MiB will degrade performance.
    WARNING: Deleted spurious object 2
    ================================================================================ 1 failed, 297 passed, 5 skipped in 57.66 seconds ================================================================================
    

    3. Successive test runs show different errors:

    tests/t5_cache.py::TestPerstCache::test_cache_flush[True] FAILED                                                                                                                                           [ 87%]
    
    ==================================================================================================== FAILURES ====================================================================================================
    _____________________________________________________________________________________ TestPerstCache.test_cache_flush[True] ______________________________________________________________________________________
    Traceback (most recent call last):
      File "/usr/src/s3ql-3.1/tests/t5_cache.py", line 123, in test_cache_flush
        assert fh.read() == TEST_DATA
    AssertionError: assert b'\n)(tnuomu....ne/nib/rsu/!#' == b'#!/usr/bin/e...lf.umount()\n'
      At index 0 diff: 10 != 35
      Full diff:
      - (b'\n)(tnuomu.fles        \nATAD_TSET == )(daer.hf tressa            \n:hf sa '
      -  b")'br' ,)'eliftset' ,rid_tnm.fles(niojp(nepo htiw        \n)(tnuom.fles   "
      -  b"     \n)]'ehcac-peek--' ,'etomer-ecrof--'[=sgra                  \n,0=edoc"
      -  b'ter_tcepxe(kcsf.fles        \nderongi si ehcac taht erus ekaM #        \n\n'
      -  b'kab = rid_ehcac.fles            \n)rid_ehcac.fles(eertmr.lituhs          '...
      
      ...Full output truncated (154 lines hidden), use '-vv' to show
    ---------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------
    Please store the following master key in a safe location. It allows 
    decryption of the S3QL file system in case the storage objects holding 
    this information get corrupted:
    ---BEGIN MASTER KEY---
    kYom jZq7 2fqs M8RY wXe4 QC4Y NJmR Yc2E SHxJ J7Dl 7NI=
    ---END MASTER KEY---
    ---------------------------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------------------------
    WARNING: Maximum object sizes less than 1 MiB will degrade performance.
    ================================================================================ 1 failed, 297 passed, 5 skipped in 58.27 seconds ================================================================================
    

    Next run:

    tests/t5_failsafe.py::TestNewerMetadata::test FAILED                                                                                                                                                       [ 89%]
    tests/t5_failsafe.py::TestNewerMetadata::test ERROR                                                                                                                                                        [ 89%]
    
    ===================================================================================================== ERRORS =====================================================================================================
    __________________________________________________________________________________ ERROR at teardown of TestNewerMetadata.test ___________________________________________________________________________________
    Traceback (most recent call last):
      File "/usr/src/s3ql-3.1/tests/pytest_checklogs.py", line 143, in pytest_runtest_teardown
        check_output(item)
      File "/usr/src/s3ql-3.1/tests/pytest_checklogs.py", line 132, in check_output
        check_test_output(capmethod, item)
      File "/usr/src/s3ql-3.1/tests/pytest_checklogs.py", line 106, in check_test_output
        raise AssertionError('Suspicious output to stderr (matched "%s")' % hit.group(0))
    AssertionError: Suspicious output to stderr (matched "ERROR")
    ---------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------
    Please store the following master key in a safe location. It allows 
    decryption of the S3QL file system in case the storage objects holding 
    this information get corrupted:
    ---BEGIN MASTER KEY---
    sp0w ux24 JwJ+ XlRW WHKT lhfW YUH9 /74h hbqg 9tES FeE=
    ---END MASTER KEY---
    ---------------------------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------------------------
    WARNING: Maximum object sizes less than 1 MiB will degrade performance.
    -------------------------------------------------------------------------------------------- Captured stderr teardown --------------------------------------------------------------------------------------------
    ERROR: Remote metadata is newer than local (1555031773 vs 1555031772), refusing to overwrite!
    ERROR: The locally cached metadata will be *lost* the next time the file system is mounted or checked and has therefore been backed up.
    ==================================================================================================== FAILURES ====================================================================================================
    _____________________________________________________________________________________________ TestNewerMetadata.test _____________________________________________________________________________________________
    Traceback (most recent call last):
      File "/usr/src/s3ql-3.1/tests/t5_failsafe.py", line 143, in test
        time.sleep(1)
      File "/usr/local/lib/python3.6/site-packages/_pytest/python_api.py", line 729, in __exit__
        fail(self.message)
      File "/usr/local/lib/python3.6/site-packages/_pytest/outcomes.py", line 117, in fail
        raise Failed(msg=msg, pytrace=pytrace)
    Failed: DID NOT RAISE <class 'PermissionError'>
    ---------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------
    Please store the following master key in a safe location. It allows 
    decryption of the S3QL file system in case the storage objects holding 
    this information get corrupted:
    ---BEGIN MASTER KEY---
    sp0w ux24 JwJ+ XlRW WHKT lhfW YUH9 /74h hbqg 9tES FeE=
    ---END MASTER KEY---
    ---------------------------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------------------------
    WARNING: Maximum object sizes less than 1 MiB will degrade performance.
    =========================================================================== 1 failed, 302 passed, 6 skipped, 1 error in 88.13 seconds ============================================================================
    

    Next run:

    tests/t5_cache.py::TestPerstCache::test_cache_flush_unclean FAILED                                                                                                                                         [ 88%]
    
    ==================================================================================================== FAILURES ====================================================================================================
    ____________________________________________________________________________________ TestPerstCache.test_cache_flush_unclean _____________________________________________________________________________________
    Traceback (most recent call last):
      File "/usr/src/s3ql-3.1/tests/t5_cache.py", line 161, in test_cache_flush_unclean
        args=['--force-remote'])
      File "/usr/src/s3ql-3.1/tests/t4_fuse.py", line 128, in fsck
        assert proc.wait() == expect_retcode
    AssertionError: assert 128 == 0
      -128
      +0
    ---------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------
    Please store the following master key in a safe location. It allows 
    decryption of the S3QL file system in case the storage objects holding 
    this information get corrupted:
    ---BEGIN MASTER KEY---
    /r2k e8L/ 2SUJ 43O8 wSyw 6A3e QtJH ow3u Myr2 T4eI D40=
    ---END MASTER KEY---
    Backend reports that file system is still mounted elsewhere. Either
    the file system has not been unmounted cleanly or the data has not yet
    propagated through the backend. In the later case, waiting for a while
    should fix the problem, in the former case you should try to run fsck
    on the computer where the file system has been mounted most recently.
    You may also continue and use whatever metadata is available in the
    backend. However, in that case YOU MAY LOOSE ALL DATA THAT HAS BEEN
    UPLOADED OR MODIFIED SINCE THE LAST SUCCESSFULL METADATA UPLOAD.
    Moreover, files and directories that you have deleted since then MAY
    REAPPEAR WITH SOME OF THEIR CONTENT LOST.
    Enter "continue, I know what I am doing" to use the outdated data anyway:
    > (--force-remote specified, continuing anyway)
    ---------------------------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------------------------
    WARNING: Maximum object sizes less than 1 MiB will degrade performance.
    WARNING: Deleted spurious object 1
    ================================================================================ 1 failed, 299 passed, 5 skipped in 68.40 seconds ================================================================================
    
    bug 
    opened by estebandf 1
Releases(release-4.0.0)
  • release-4.0.0(Jun 10, 2022)

    • The internal file system revision has changed. File systems created with this version of S3QL are NOT COMPATIBLE with prior S3QL versions.

      Existing file systems must be upgraded before they can be used with current S3QL versions. This procedure is NOT REVERSIBLE.

      To update an existing file system, use the s3qladm upgrade command. This upgrade process updates only the metadata tables and should not take more than a few minutes.

    • Smaller database size and improved performance on metadata operations.

      S3QL was designed to be able to store multiple blocks in the same backend object. However, this feature was never implemented. The necessary abstraction layer has now been removed, which should increase performance and reduce database size.

    • Workarounds for bugs in sqlite 3.38.0 – 3.38.4.

      Sqlite 3.38 introduced bloom filter optimizations. The first patch releases of sqlite 3.38 had some bugs that prevented fsck.s3ql from running properly and made it corrupt the database.

    • Fix handling of rate-limits and responses without Content-Type header

      Some Non-Amazon S3 providers return HTTP 429 when rate-limiting. Additionally, error responses may occasionally come without a Content-Type header. These are now handled, no longer causing file system crashes.

    • This is the last release from the current maintainer. S3QL is no longer maintained or developed. GitHub issue tracking and pull requests have therefore been disabled. The mailing list continues to be available.

      If you would like to take over this project, you are welcome to do so. Please fork it and develop the fork for a while. Once there have been 6 months of reasonable activity, please contact [email protected] and I'll be happy to give you ownership of this repository or replace it with a pointer to the fork.

    Source code(tar.gz)
    Source code(zip)
    s3ql-4.0.0.tar.gz(1.81 MB)
    s3ql-4.0.0.tar.gz.asc(1012 bytes)
  • release-3.8.1(Jan 10, 2022)

    • Update fsck.s3ql to remove empty directories from the local backend's storage directory. As blocks are added, subdirectories are created on demand so that no single directory holds too many blocks; but as blocks are removed from storage, the resulting empty directories are not removed automatically. Over time their number can become quite large, slowing down mount.s3ql a little and fsck.s3ql considerably.

    • Fix for fsck.s3ql removing .tmp files in cache directory.

    • Fix a bug that caused an incorrect size to be recorded for a block when zero bytes were appended to the end of a file via truncate followed by close (the recorded size did not count the zero bytes). Both rsync's -S (sparse) option and VirtualBox trigger this pattern, for example. There is no data corruption, but contrib/fixup_block_sizes.py should be run to fix the recorded sizes.

    • contrib/fixup_block_sizes.py updated to check all block sizes, not just those that are a multiple of 512 bytes, so that it detects blocks affected by the above bug.
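
    The write pattern that triggers the bug can be reproduced in a few lines of Python; `make_sparse_tail` is a hypothetical helper mimicking what rsync -S or VirtualBox do:

```python
import os
import tempfile

def make_sparse_tail(path, data, final_size):
    # Write some data, then extend the file with a trailing hole via
    # truncate() before closing -- no explicit write of the zero bytes.
    with open(path, 'wb') as fh:
        fh.write(data)
        fh.truncate(final_size)

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, 'testfile')
    make_sparse_tail(p, b'x' * 100, 4096)
    # The file size includes the zero tail ...
    assert os.path.getsize(p) == 4096
    # ... and the hole reads back as zero bytes.
    with open(p, 'rb') as fh:
        fh.seek(100)
        assert fh.read() == b'\0' * 3996
```

    On an affected S3QL version, the size recorded for the last block of such a file omitted the zero tail, which is what the fixup script detects and repairs.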

    Source code(tar.gz)
    Source code(zip)
    s3ql-3.8.1.tar.gz(1.81 MB)
    s3ql-3.8.1.tar.gz.asc(1012 bytes)
  • release-3.8.0(Nov 7, 2021)

    • The way to build the documentation has changed. Instead of running setup.py build_sphinx, run ./build_docs.sh. To generate PDF documentation, follow this with cd doc/pdf && make.

    • The s3ql_verify tool is now able to detect the kind of corruption that was introduced by the fsck.s3ql data corruption bug described below.

    • The new contrib/fixup_block_sizes.py tool is now available to fix most of the issues caused by the fsck.s3ql bug described below.

    Source code(tar.gz)
    Source code(zip)
    s3ql-3.8.0.tar.gz(1.81 MB)
    s3ql-3.8.0.tar.gz.asc(1012 bytes)
  • release-3.7.3(Jun 3, 2021)

    This release fixes a DATA CORRUPTION bug in fsck.s3ql that caused the recorded size of uploaded dirty blocks to be rounded up to the next multiple of 512 bytes, effectively appending up to 512 zero-bytes to the end of affected files.

    This problem was introduced in version 3.4.1 (released 2020-05-08) as part of a seemingly very minor improvement to cache usage calculation.

    You can tell that a file has (likely) been affected from fsck.s3ql messages of the form:

    WARNING: Writing dirty block <X> of inode <Y>
    

    followed later by:

    WARNING: Size of inode <Y> (<path_to_file>) does not agree with number of blocks, setting from <reasonable_size> to <size rounded to next multiple of 512>
    
    Source code(tar.gz)
    Source code(zip)
    s3ql-3.7.3.tar.bz2(878.16 KB)
    s3ql-3.7.3.tar.bz2.asc(1012 bytes)
  • release-3.7.2(May 4, 2021)

  • release-3.7.1(Mar 7, 2021)

  • release-3.7.0(Jan 3, 2021)

    • S3QL now supports newer AWS S3 regions like eu-south-1.

    • mount.s3ql now again includes debugging information in its log output when encountering an unexpected exception. This was broken in version 3.4.0, resulting in mount.s3ql seemingly terminating at random in such a situation.

    • mount.s3ql now properly handles SIGTERM (instead of crashing). This means it exits as quickly as possible without data corruption. For a proper unmount, always use umount.s3ql, umount, or fusermount -u and wait for the mount.s3ql process to terminate.

    Source code(tar.gz)
    Source code(zip)
    s3ql-3.7.0.tar.bz2(877.08 KB)
    s3ql-3.7.0.tar.bz2.asc(1012 bytes)
  • release-3.6.0(Nov 9, 2020)

    • Added ability to specify domain-name, project-domain-name and tenant-name as options for the OpenStack Swift (Keystone v3) backend for providers that prefer name to id.

    • The open() syscall supports the O_TRUNC flag now.

    • mount.s3ql now exits gracefully on CTRL-C (INT signal)

    • mount.s3ql now supports the --dirty-block-upload-delay option to influence the time before dirty blocks are written from the cache to the storage backend.

    Source code(tar.gz)
    Source code(zip)
    s3ql-3.6.0.tar.bz2(876.71 KB)
    s3ql-3.6.0.tar.bz2.asc(1012 bytes)
  • release-3.5.1(Sep 4, 2020)

  • release-3.5.0(Jul 15, 2020)

  • release-3.4.1(May 8, 2020)

  • release-3.4.0(Mar 19, 2020)

    • There have been significant changes in the internal implementation. Asynchronous I/O is now used in favor of threads in many places, and libfuse 3.x is used instead of libfuse 2.x.

    • S3QL now uses kernel-side writeback caching, which should significantly improve write performance for small block sizes.

    • The dependency on the llfuse Python module has been dropped. A dependency on the trio (https://github.com/python-trio/trio) and pyfuse3 (https://github.com/libfuse/pyfuse3/) modules has been added instead.

    Source code(tar.gz)
    Source code(zip)
    s3ql-3.4.0.tar.bz2(861.46 KB)
    s3ql-3.4.0.tar.bz2.asc(1012 bytes)
  • release-3.3.2(Oct 20, 2019)

  • release-3.3(Sep 8, 2019)

  • release-3.2(Jul 17, 2019)

  • release-3.1(Mar 31, 2019)

    • Added a new --fs-name option to mount.s3ql allowing users to specify a name for the fuse mount (This name is visible in the first column of the system mount command output).

    • Added support for "Onezone-IA" to the S3 backend.

    • The s3ql_oauth_client command is working again now (it was broken due to changes in Google's OAuth2 workflow).

    • Fixed refresh of Google Storage authentication tokens (expired tokens were resulting in crashes).

    Source code(tar.gz)
    Source code(zip)
    s3ql-3.1.tar.bz2(831.18 KB)
    s3ql-3.1.tar.bz2.asc(1012 bytes)
  • release-3.0(Feb 9, 2019)

    • Added a new --systemd option to simplify running mount.s3ql as a systemd unit.

    • Dropped the --upstart option - upstart seems to be unused and unmaintained.

    • Dropped support for legacy ("API key") authentication for Google Storage. Only oauth2 is supported now. This was necessitated by the switch to Google's native API (before S3QL was using Google's S3 compatibility layer).

    • Command line options specified in the authinfo file (in particular --backend-options) are now parsed correctly.

    • S3QL now uses python-cryptography instead of the (no longer maintained) pycrypto module.

    • The Google Storage backend now supports Application Default Credentials (ADC). To use this, install the google.auth module and use adc as your backend login.

    • umount.s3ql now works correctly on systems where ps doesn't accept a -p option (as long as /proc is available).

    Source code(tar.gz)
    Source code(zip)
    s3ql-3.0.tar.bz2(829.03 KB)
    s3ql-3.0.tar.bz2.asc(1012 bytes)
  • release-2.33(Dec 28, 2018)

  • release-2.32(Nov 6, 2018)

    • Fixed a potential bug in s3qlrm that would probably lead to a file system crash (it has not actually been observed, though).

    • Clarified installation instructions: S3QL requires the official systemd Python module, not the (third-party) module on PyPi.

    • Fixed occasional crashes with a "dugong.StateError" exception. Thanks to Roger Gammans for extensive testing!

    • mount.s3ql and fsck.s3ql got a new --keep-cache option. If this is specified, locally cached data will not be removed on unmount (or after fsck) so that it can be re-used if the file system is mounted again on the same system without having been mounted elsewhere.

    • S3QL now considers all SSL-related errors as temporary and automatically retries (instead of bailing out).

    Source code(tar.gz)
    Source code(zip)
    s3ql-2.32.tar.bz2(1.11 MB)
    s3ql-2.32.tar.bz2.asc(1012 bytes)
  • release-2.31(Oct 8, 2018)

  • release-2.30(Sep 2, 2018)

  • release-2.29(Jul 21, 2018)
