Overview

B2 Command Line Tool

The command-line tool that gives easy access to all of the capabilities of B2 Cloud Storage.

This program provides command-line access to the B2 service.

Documentation

The latest documentation is available on Read the Docs.

Installation

This tool can be installed with:

pip install b2

Usage

b2 authorize-account [-h] [applicationKeyId] [applicationKey]
b2 cancel-all-unfinished-large-files [-h] bucketName
b2 cancel-large-file [-h] fileId
b2 clear-account [-h]
b2 copy-file-by-id [-h] [--metadataDirective {copy,replace}]
                   [--contentType CONTENTTYPE] [--range RANGE] [--info INFO]
                   sourceFileId destinationBucketName b2FileName
b2 create-bucket [-h] [--bucketInfo BUCKETINFO] [--corsRules CORSRULES]
                 [--lifecycleRules LIFECYCLERULES]
                 bucketName bucketType
b2 create-key [-h] [--bucket BUCKET] [--namePrefix NAMEPREFIX]
              [--duration DURATION]
              keyName capabilities
b2 delete-bucket [-h] bucketName
b2 delete-file-version [-h] [fileName] fileId
b2 delete-key [-h] applicationKeyId
b2 download-file-by-id [-h] [--noProgress] fileId localFileName
b2 download-file-by-name [-h] [--noProgress]
                         bucketName b2FileName localFileName
b2 get-account-info [-h]
b2 get-bucket [-h] [--showSize] bucketName
b2 get-file-info [-h] fileId
b2 get-download-auth [-h] [--prefix PREFIX] [--duration DURATION] bucketName
b2 get-download-url-with-auth [-h] [--duration DURATION] bucketName fileName
b2 hide-file [-h] bucketName fileName
b2 list-buckets [-h] [--json]
b2 list-keys [-h] [--long]
b2 list-parts [-h] largeFileId
b2 list-unfinished-large-files [-h] bucketName
b2 ls [-h] [--long] [--json] [--versions] [--recursive] bucketName [folderName]
b2 make-url [-h] fileId
b2 make-friendly-url [-h] bucketName fileName
b2 sync [-h] [--noProgress] [--dryRun] [--allowEmptySource]
        [--excludeAllSymlinks] [--threads THREADS]
        [--compareVersions {none,modTime,size}] [--compareThreshold MILLIS]
        [--excludeRegex REGEX] [--includeRegex REGEX]
        [--excludeDirRegex REGEX] [--excludeIfModifiedAfter TIMESTAMP]
        [--skipNewer | --replaceNewer] [--delete | --keepDays DAYS]
        source destination
b2 update-bucket [-h] [--bucketInfo BUCKETINFO] [--corsRules CORSRULES]
                 [--lifecycleRules LIFECYCLERULES]
                 bucketName bucketType
b2 upload-file [-h] [--noProgress] [--quiet] [--contentType CONTENTTYPE]
               [--minPartSize MINPARTSIZE] [--sha1 SHA1] [--threads THREADS]
               [--info INFO]
               bucketName localFilePath b2FileName
b2 version [-h]

The environment variable B2_ACCOUNT_INFO specifies the SQLite file used to cache authentication information. The default file is: ~/.b2_account_info
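
The lookup described above can be mimicked in a few lines. A minimal sketch of the path resolution (illustrative only, not the tool's actual code):

```python
import os

def account_info_path():
    """Resolve the account-info database path as described above:
    honor B2_ACCOUNT_INFO if set, otherwise fall back to ~/.b2_account_info."""
    return os.path.expanduser(os.environ.get('B2_ACCOUNT_INFO', '~/.b2_account_info'))

# With the variable set, the override wins:
os.environ['B2_ACCOUNT_INFO'] = '/tmp/test_b2_info'
print(account_info_path())  # /tmp/test_b2_info
```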

For more details on one command: b2 <command> --help

When authorizing with application keys, this tool requires that the key have the 'listBuckets' capability so that it can take the bucket names you provide on the command line and translate them into bucket IDs for the B2 Storage service. Each command may require additional capabilities. You can find the details for each command in the help for that command.
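
A pre-flight check along these lines can catch a missing capability before any API round trip. A sketch, with a made-up capability list:

```python
REQUIRED = {'listBuckets'}

def missing_capabilities(key_capabilities, required=REQUIRED):
    """Return, sorted, the capabilities a key lacks for bucket-name resolution."""
    return sorted(required - set(key_capabilities))

# A key created without listBuckets cannot resolve bucket names to IDs:
missing_capabilities(['listFiles', 'readFiles'])   # ['listBuckets']
missing_capabilities(['listBuckets', 'readFiles']) # []
```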

Parallelism and the --threads parameter

Users with high-performance networks, or file sets with very small files, may benefit from increased parallelism. Experiment with the --threads parameter, starting with small values, to determine whether it helps.

Note that using multiple threads will usually be detrimental to the other users on your network.
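
The effect of a --threads style knob can be pictured as a fixed-size worker pool over the file list. A minimal sketch (the transfer function is a stand-in, not the tool's uploader):

```python
from concurrent.futures import ThreadPoolExecutor

def transfer_all(files, transfer_one, threads=10):
    """Run one transfer per file across a fixed-size thread pool,
    collecting results in input order (mirrors a --threads style knob)."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(transfer_one, files))

# Stand-in transfer that just reports the name it "uploaded".
results = transfer_all(['a.txt', 'b.txt', 'c.txt'], lambda name: ('ok', name), threads=2)
```

`pool.map` preserves input order even though the work itself runs concurrently, which keeps reporting deterministic.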

Contrib

bash completion

You can find a bash completion script in the contrib directory. See this for installation instructions.

detailed logs

Verbose logs to stdout can be enabled with the --verbose flag.

A hidden flag --debugLogs can be used to enable logging to a b2_cli.log file (with log rotation at midnight) in the current working directory. Please take care not to launch the tool from the directory that you are syncing, or the logs will get synced to the remote server (unless that is really what you want to do).

For advanced users, a hidden option --logConfig <filename.ini> can be used to enable logging in a user-defined format and verbosity. An example log configuration can be found here.
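
The midnight-rotation behaviour described above corresponds to the standard library's timed rotating handler. A sketch of an equivalent setup (not the tool's actual logging code):

```python
import logging
import logging.handlers

def make_debug_logger(path='b2_cli.log'):
    """File logger that rotates at midnight, similar in spirit to --debugLogs."""
    logger = logging.getLogger('b2_cli_example')
    logger.setLevel(logging.DEBUG)
    # when='midnight' rolls the file over once per day, like the CLI's log rotation.
    handler = logging.handlers.TimedRotatingFileHandler(path, when='midnight')
    handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
    logger.addHandler(handler)
    return logger
```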

Release History

Please refer to the changelog.

Developer Info

Please see our contributing guidelines.

Comments
  • TruncatedOutput: only 1495314753 of 2121389892 bytes read

    I'm able to use "b2 download-file-by-name" to download small files, but when I target a 2.1 GB file, it crashes out randomly midway through. (I have run the command at least 10 times over the course of two days; each time it crashed after having downloaded between 1.4 and 2.0 GB of the 2.1 GB.) Reading through the issues page, it seemed that "b2 sync" was recommended, but the same issue remains, crashing out at about 1.7 GB. Since no one else appears to have this rather fundamental problem, I suspect it's related to my region/ISP/home network. Still, any help would be appreciated. I have attached a debug log, and pasted a typical command-line response here. Thanks in advance

    b2_cli.log

    CMD:

    C:\Users\Desto>b2 download-file-by-name PeterBackup1 PB1/PB.000.000.000.004.pb8 "C:\Users\Desto\Desktop\Matlab\Projects 2018\Peter Backup\PB1_BackupCheck\temp\PB.000.000.000.004.pb8"

    Output:

    -snip- C:\Users\Desto\Desktop\Matlab\Projects 2018\Peter Backup\PB1_BackupCheck\temp\PB.000.000.000.004.pb8: 70%|7| 1.50G/2.12G [05:37<03:15, 3.20MB/s]
    ERROR:b2.console_tool:ConsoleTool command error
    Traceback (most recent call last):
      File "c:\python27\lib\site-packages\b2\console_tool.py", line 1399, in run_command
        return command.run(args)
      File "c:\python27\lib\site-packages\b2\console_tool.py", line 532, in run
        bucket.download_file_by_name(args.b2FileName, download_dest, progress_listener)
      File "c:\python27\lib\site-packages\logfury\v0_1\trace_call.py", line 84, in wrapper
        return function(*wrapee_args, **wrapee_kwargs)
      File "c:\python27\lib\site-packages\b2\bucket.py", line 168, in download_file_by_name
        url, download_dest, progress_listener, range_
      File "c:\python27\lib\site-packages\logfury\v0_1\trace_call.py", line 84, in wrapper
        return function(*wrapee_args, **wrapee_kwargs)
      File "c:\python27\lib\site-packages\b2\transferer\transferer.py", line 115, in download_file_from_url
        range_, bytes_read, actual_sha1, metadata
      File "c:\python27\lib\site-packages\b2\transferer\transferer.py", line 122, in _validate_download
        raise TruncatedOutput(bytes_read, metadata.content_length)
    TruncatedOutput: only 1495314753 of 2121389892 bytes read
    ERROR: only 1495314753 of 2121389892 bytes read

    bug help wanted 
    opened by Desto7 62
  • b2 sync gets "destination file is newer"

    After starting b2 sync using command line tool, leaving it to run for a period of time, and then cancelling it, I use b2 sync --dryRun to see how more files are needed to be uploaded. Usually, it lists them. Recently, the command I used and the error I received were:

    [email protected]:/media/tom/FreeAgent Drive/Images/100 Images/Canon Elph100HS/7z$ b2 sync --dryRun . b2://CanonElph100HS/7z
    ERROR:b2.console_tool:ConsoleTool command error
    Traceback (most recent call last):
      File "build/bdist.linux-x86_64/egg/b2/console_tool.py", line 896, in run_command
        return command.run(args)
      File "build/bdist.linux-x86_64/egg/b2/console_tool.py", line 700, in run
        dry_run=args.dryRun,
      File "/usr/local/lib/python2.7/dist-packages/logfury/v0_1/trace_call.py", line 84, in wrapper
        return function(*wrapee_args, **wrapee_kwargs)
      File "build/bdist.linux-x86_64/egg/b2/sync/sync.py", line 211, in sync_folders
        source_folder, dest_folder, args, now_millis, reporter
      File "build/bdist.linux-x86_64/egg/b2/sync/sync.py", line 149, in make_folder_sync_actions
        sync_type, source_file, dest_file, source_folder, dest_folder, args, now_millis
      File "build/bdist.linux-x86_64/egg/b2/sync/sync.py", line 105, in make_file_sync_actions
        for action in policy.get_all_actions():
      File "build/bdist.linux-x86_64/egg/b2/sync/policy.py", line 93, in get_all_actions
        if self._should_transfer():
      File "build/bdist.linux-x86_64/egg/b2/sync/policy.py", line 46, in _should_transfer
        return self.files_are_different(self._source_file, self._dest_file, self._args)
      File "build/bdist.linux-x86_64/egg/b2/sync/policy.py", line 79, in files_are_different
        raise DestFileNewer(dest_file.name,)
    DestFileNewer: destination file is newer: 2011.7z.006
    ERROR: destination file is newer: 2011.7z.006
    [email protected]:/media/tom/FreeAgent Drive/Images/100 Images/Canon Elph100HS/7z$
    

    Using --replaceNewer gets me back to a list.

    But why is this happening at all? Every file on b2, as a copy of what I'm trying to backup to the cloud, will be newer than the original on my drive. When I view the file named in the error message, I don't see any record of the source system's datetime stamp for the file. What am I failing to understand here?
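
The check that trips here can be pictured as a modification-time comparison. A sketch of the policy's shape only, not the exact sync code (b2 records the source file's modification time as metadata at upload time and compares against that, so a destination that looks "newer" usually points to clock or metadata skew rather than the file actually changing in the cloud):

```python
class DestFileNewer(Exception):
    """Mirrors the exception name seen in the traceback above."""

def files_are_different(source_mtime, dest_mtime):
    """Upload when the source is newer; raise when the destination is newer
    (the default policy, before --skipNewer/--replaceNewer are considered)."""
    if dest_mtime > source_mtime:
        raise DestFileNewer('destination file is newer')
    return source_mtime > dest_mtime

files_are_different(200, 100)  # True  -> would upload
files_are_different(100, 100)  # False -> unchanged, skip
```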

    opened by kazsulec 61
  • [Errno 104] Connection reset by peer

    Hi,

    I am getting quite often the following error when uploading files: urllib2.URLError: <urlopen error [Errno 104] Connection reset by peer>

    What can I do to troubleshoot the issue ?

    Thanks

    invalid/environmental 
    opened by pgonin 37
  • Sync regularly fails with `Connection aborted: BadStatusLine`

    I haven't been able to run 'b2 sync' for quite a while -- it always fails with the error below. Not immediately - there's always a delay of a few minutes. I guess while it works out which files to sync or something?

    Sometimes it'll upload one, maybe two, files, but mostly it fails at the first hurdle. Files being uploaded are about 4 MB on average, going over a 0.5 Mbps ADSL connection. Yay, Australia ;(

    b2 version: 1.1.0

    SYNCARGS="--threads 1 --delete --skipNewer --noProgress"
    b2 sync $SYNCARGS \
        /a/path/here/ b2://bucketname-here/subdirectory/
    
    ERROR:b2.console_tool:ConsoleTool command error
    Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/b2/console_tool.py", line 1074, in run_command
        return command.run(args)
      File "/usr/local/lib/python2.7/dist-packages/b2/console_tool.py", line 857, in run
        allow_empty_source=allow_empty_source
      File "/usr/local/lib/python2.7/dist-packages/logfury/v0_1/trace_call.py", line 84, in wrapper
        return function(*wrapee_args, **wrapee_kwargs)
      File "/usr/local/lib/python2.7/dist-packages/b2/sync/sync.py", line 261, in sync_folders
        source_folder, dest_folder, args, now_millis, reporter
      File "/usr/local/lib/python2.7/dist-packages/b2/sync/sync.py", line 135, in make_folder_sync_actions
        dest_file) in zip_folders(source_folder, dest_folder, reporter, exclusions, inclusions):
      File "/usr/local/lib/python2.7/dist-packages/b2/sync/sync.py", line 93, in zip_folders
        current_b = next_or_none(iter_b)
      File "/usr/local/lib/python2.7/dist-packages/b2/sync/sync.py", line 36, in next_or_none
        return six.advance_iterator(iterator)
      File "/usr/local/lib/python2.7/dist-packages/b2/sync/folder.py", line 219, in all_files
        self.folder_name, show_versions=True, recursive=True, fetch_count=1000
      File "/usr/local/lib/python2.7/dist-packages/b2/bucket.py", line 227, in ls
        self.id_, start_file_name, start_file_id, fetch_count
      File "/usr/local/lib/python2.7/dist-packages/b2/session.py", line 38, in wrapper
        return f(api_url, account_auth_token, *args, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/b2/raw_api.py", line 370, in list_file_versions
        maxFileCount=max_file_count
      File "/usr/local/lib/python2.7/dist-packages/b2/raw_api.py", line 139, in _post_json
        return self.b2_http.post_json_return_json(url, headers, params)
      File "/usr/local/lib/python2.7/dist-packages/b2/b2http.py", line 299, in post_json_return_json
        return self.post_content_return_json(url, headers, data, try_count, params)
      File "/usr/local/lib/python2.7/dist-packages/b2/b2http.py", line 272, in post_content_return_json
        response = _translate_and_retry(do_post, try_count, post_params)
      File "/usr/local/lib/python2.7/dist-packages/b2/b2http.py", line 116, in _translate_and_retry
        return _translate_errors(fcn, post_params)
      File "/usr/local/lib/python2.7/dist-packages/b2/b2http.py", line 76, in _translate_errors
        raise B2ConnectionError(str(e0))
    B2ConnectionError: Connection error: ('Connection aborted.', BadStatusLine("''",))
    ERROR: Connection error: ('Connection aborted.', BadStatusLine("''",))
    
    bug help wanted 
    opened by TJC 24
  • Encrypting files for uploads

    I've added some encryption classes and functions. The UploadEncryptedSourceWrapper class can wrap any of the existing upload sources and encrypt them while reading/uploading. To test this you can use: upload_source = UploadEncryptedSourceWrapper(upload_source, EncryptionContext()) At the moment just a dummy key is used for encryption. Later the encryption context should be managed somewhere in the bucket; I'm not exactly sure where to put this. How do we initialize an encrypted bucket, and how do we hide the fact that the bucket is actually encrypted when using the API?
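
The wrapping idea can be sketched independently of the PR: a file-like object whose read() transforms bytes on the way through. A dummy XOR stands in for real encryption here, just as the PR uses a dummy key; the actual UploadEncryptedSourceWrapper interface may differ:

```python
import io

class EncryptingStream:
    """Wrap a binary stream and transform bytes as they are read.
    The XOR 'cipher' here is a placeholder, not real encryption."""

    def __init__(self, inner, key=0x42):
        self.inner = inner
        self.key = key

    def read(self, size=-1):
        data = self.inner.read(size)
        return bytes(b ^ self.key for b in data)

wrapped = EncryptingStream(io.BytesIO(b'hello'))
ciphertext = wrapped.read()
# XOR with the same key restores the plaintext:
assert bytes(b ^ 0x42 for b in ciphertext) == b'hello'
```

Because the wrapper only needs read(), it composes with any existing upload source without that source knowing encryption is happening.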

    opened by svonohr 24
  • using b2 command line tool as a library

    I would like to implement something in Python using b2 and its command line tool. The current implementation already lets me import it (after I symlink it to b2.py, that is) and call functions on it, which is good; however, for any application which is not console-based or requires some resiliency against errors, it is impossible to use due to the current architecture. It is also not possible to test it automatically in isolation. Clearly the current codebase is already designed, at least partially, for integration with other Python code.

    The problem is with two layers being mixed with each other. One layer is what performs operations on the backend (let's call it controller) and the other is what displays the retrieved information (let's call it view). For example, if I would like to create a GUI application which would present the result of ls, I can't, because the function uses print to return results to the user. Here the layers should be divided so that the backend function returns an iterator of entries (objects), which would then be printed by the view function. If someone would need to call a library equivalent of ls, he would call the backend one and then display the result in the GUI (or web page or whatever he is trying to build).

    Another example is that if something does not exist or there is some other problem, the current tool calls sys.exit(). If the user supplies invalid data to the GUI application, as an application developer I want to display an error message, not shut down the whole program. Therefore I'd expect the library functions to raise an exception on error, so that it can be handled properly.
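
The split being proposed can be sketched in a dozen lines (the names and the in-memory store are illustrative, not the eventual library API):

```python
def list_entries(store, bucket):
    """Controller: yield entries and raise on error -- no printing, no sys.exit()."""
    if bucket not in store:
        raise KeyError('no such bucket: %s' % bucket)
    for name in sorted(store[bucket]):
        yield name

def print_entries(store, bucket):
    """View: one of many possible front ends (console, GUI, web) over the controller."""
    for name in list_entries(store, bucket):
        print(name)

store = {'photos': ['b.jpg', 'a.jpg']}
entries = list(list_entries(store, 'photos'))  # ['a.jpg', 'b.jpg']
```

A GUI or web front end would consume the same generator and render the entries its own way, and a missing bucket surfaces as a catchable exception rather than a process exit.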

    It is possible to implement most of the functionality of b2 command line tool again in a way which has properly separated layers and allows for console, gui, web page and other types of outputs, as well as for automated testing in isolation.

    On the other hand, the same changes could be developed here in this repository, with keeping most of the interface as it is. To be precise, I'd contribute most (if not all) of the code and tests required to do this properly. As it is a larger amount of work, I'd like to discuss it first here, so if my view and views of official maintainers are not aligned, it can be compared before much work is put into development.

    Please comment on the above.

    opened by ppolewicz 24
  • b2 get-bucket size info

    Hello,

    It would be useful for the get-bucket command to return all the information shown on the web UI; the size and number of files are the most important pieces currently missing.

    Best Regards

    enhancement 
    opened by lmbss 22
  • ERROR: Connection error: Max retries exceeded with url

    Hi, I am very frequently getting this error, maybe 20 times a day. I have a bash script that executes the sync via the CLI, and I frequently get emails from cron with this error:

    I have tried to contact BackBlaze support but they weren't very helpful and just said to report the issue here.

    ERROR: Connection error: HTTPSConnectionPool(host='api001.backblazeb2.com', port=443): Max retries exceeded with url: /b2api/v1/b2_list_file_versions (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x807ae3cd0>: Failed to establish a new connection: [Errno 60] Operation timed out',))
    
    invalid/environmental 
    opened by clintinthecode 20
  • Some files upload, some others won't

    Ran b2 sync on CentOS 7 and it's not performing well. It also seems to use a lot of memory.

    Using https://api.backblazeb2.com
    upload 2017-08-17/accounts/ecotaxi/meta/mailserver                     
    ERROR:b2.sync.action:an exception occurred in a sync action GB   14 B/s
    Traceback (most recent call last):
      File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 42, in run
        self.do_action(bucket, reporter)
      File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 84, in do_action
        progress_listener=SyncFileReporter(reporter)
      File "/usr/lib/python2.7/site-packages/logfury-0.1.2-py2.7.egg/logfury/v0_1/trace_call.py", line 84, in wrapper
        return function(*wrapee_args, **wrapee_kwargs)
      File "build/bdist.linux-x86_64/egg/b2/bucket.py", line 373, in upload
        upload_source, file_name, content_type, file_info, progress_listener
      File "build/bdist.linux-x86_64/egg/b2/bucket.py", line 447, in _upload_large_file
        ) for (part_index, part_range) in enumerate(part_ranges)
      File "/usr/lib/python2.7/site-packages/futures-3.1.1-py2.7.egg/concurrent/futures/thread.py", line 122, in submit
        self._adjust_thread_count()
      File "/usr/lib/python2.7/site-packages/futures-3.1.1-py2.7.egg/concurrent/futures/thread.py", line 138, in _adjust_thread_count
        t.start()
      File "/usr/lib64/python2.7/threading.py", line 746, in start
        _start_new_thread(self.__bootstrap, ())
    error: can't start new thread
    b2_upload(/bkp/2017-08-17/accounts/deltamat.tar.gz, 2017-08-17/accounts/deltamat.tar.gz, 1502954072220): error("can't start new thread",) can't start new thread
    ERROR:b2.sync.action:an exception occurred in a sync action GB   87.4 MB/s
    Traceback (most recent call last):
      File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 42, in run
        self.do_action(bucket, reporter)
      File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 84, in do_action
        progress_listener=SyncFileReporter(reporter)
      File "/usr/lib/python2.7/site-packages/logfury-0.1.2-py2.7.egg/logfury/v0_1/trace_call.py", line 84, in wrapper
        return function(*wrapee_args, **wrapee_kwargs)
      File "build/bdist.linux-x86_64/egg/b2/bucket.py", line 368, in upload
        progress_listener
      File "/usr/lib/python2.7/site-packages/futures-3.1.1-py2.7.egg/concurrent/futures/thread.py", line 122, in submit
        self._adjust_thread_count()
      File "/usr/lib/python2.7/site-packages/futures-3.1.1-py2.7.egg/concurrent/futures/thread.py", line 138, in _adjust_thread_count
        t.start()
      File "/usr/lib64/python2.7/threading.py", line 746, in start
        _start_new_thread(self.__bootstrap, ())
    error: can't start new thread
    b2_upload(/bkp/2017-08-17/accounts/ecotaxi/mysql-timestamps/ecotaxi_taxiapp, 2017-08-17/accounts/ecotaxi/mysql-timestamps/ecotaxi_taxiapp, 1502955340352): error("can't start new thread",) can't start new thread
    ERROR:b2.sync.action:an exception occurred in a sync action GB   87.4 MB/s
    Traceback (most recent call last):
      File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 42, in run
        self.do_action(bucket, reporter)
      File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 84, in do_action
        progress_listener=SyncFileReporter(reporter)
      File "/usr/lib/python2.7/site-packages/logfury-0.1.2-py2.7.egg/logfury/v0_1/trace_call.py", line 84, in wrapper
        return function(*wrapee_args, **wrapee_kwargs)
      File "build/bdist.linux-x86_64/egg/b2/bucket.py", line 368, in upload
        progress_listener
      File "/usr/lib/python2.7/site-packages/futures-3.1.1-py2.7.egg/concurrent/futures/thread.py", line 122, in submit
        self._adjust_thread_count()
      File "/usr/lib/python2.7/site-packages/futures-3.1.1-py2.7.egg/concurrent/futures/thread.py", line 138, in _adjust_thread_count
        t.start()
      File "/usr/lib64/python2.7/threading.py", line 746, in start
        _start_new_thread(self.__bootstrap, ())
    error: can't start new thread
    b2_upload(/bkp/2017-08-17/accounts/ecotaxi/mysql-timestamps/ecotaxi_taxitest, 2017-08-17/accounts/ecotaxi/mysql-timestamps/ecotaxi_taxitest, 1502955360432): error("can't start new thread",) can't start new thread
    upload 2017-08-17/accounts/ecotaxi/meta/homedir_paths                     
    upload 2017-08-17/accounts/ecotaxi/logs/ftp.radioecotaxi.com-ftp_log.offsetftpbytes
    ERROR:b2.sync.action:an exception occurred in a sync action GB   77.7 MB/s
    Traceback (most recent call last):
      File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 42, in run
        self.do_action(bucket, reporter)
      File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 84, in do_action
        progress_listener=SyncFileReporter(reporter)
      File "/usr/lib/python2.7/site-packages/logfury-0.1.2-py2.7.egg/logfury/v0_1/trace_call.py", line 84, in wrapper
        return function(*wrapee_args, **wrapee_kwargs)
      File "build/bdist.linux-x86_64/egg/b2/bucket.py", line 368, in upload
        progress_listener
      File "/usr/lib/python2.7/site-packages/futures-3.1.1-py2.7.egg/concurrent/futures/thread.py", line 122, in submit
        self._adjust_thread_count()
      File "/usr/lib/python2.7/site-packages/futures-3.1.1-py2.7.egg/concurrent/futures/thread.py", line 138, in _adjust_thread_count
        t.start()
      File "/usr/lib64/python2.7/threading.py", line 746, in start
        _start_new_thread(self.__bootstrap, ())
    error: can't start new thread
    b2_upload(/bkp/2017-08-17/system/dirs/_var_lib_rpm.tar.gz, 2017-08-17/system/dirs/_var_lib_rpm.tar.gz, 1502953509563): error("can't start new thread",) can't start new thread
    ERROR:b2.sync.action:an exception occurred in a sync action GB   77.6 MB/s
    Traceback (most recent call last):
      File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 42, in run
        self.do_action(bucket, reporter)
      File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 84, in do_action
        progress_listener=SyncFileReporter(reporter)
      File "/usr/lib/python2.7/site-packages/logfury-0.1.2-py2.7.egg/logfury/v0_1/trace_call.py", line 84, in wrapper
        return function(*wrapee_args, **wrapee_kwargs)
      File "build/bdist.linux-x86_64/egg/b2/bucket.py", line 373, in upload
        upload_source, file_name, content_type, file_info, progress_listener
      File "build/bdist.linux-x86_64/egg/b2/bucket.py", line 429, in _upload_large_file
        upload_source, file_name, file_info, part_ranges
      File "build/bdist.linux-x86_64/egg/b2/bucket.py", line 480, in _find_unfinished_file
        sha1_sum = hex_sha1_of_stream(f, part_length)
      File "build/bdist.linux-x86_64/egg/b2/utils.py", line 123, in hex_sha1_of_stream
        data = input_stream.read(to_read)
    MemoryError
    b2_upload(/bkp/2017-08-17/system/dirs/_var_lib_mysql_.tar.gz, 2017-08-17/system/dirs/_var_lib_mysql_.tar.gz, 1502953485379): MemoryError() 
    upload 2017-08-17/accounts/ecotaxi/mysql-timestamps/ecotaxi_as30o         
    upload 2017-08-17/accounts/ecotaxi/ips/related_ips                       
    upload 2017-08-17/accounts/ecotaxi/mysql-timestamps/ecotaxi_aAd23        
    upload 2017-08-17/accounts/ecotaxi/meta/hostname                         
    upload 2017-08-17/accounts/ecotaxi/logs/radioecotaxi.com-bytes_log       
    upload 2017-08-17/accounts/ecotaxi/logs/ftp.radioecotaxi.com-ftp_log      
    upload 2017-08-17/system/dirs/_var_cpanel.tar.gz                           
    upload 2017-08-17/accounts/ecotaxi.tar.gz                                  
    ERROR:b2.console_tool:ConsoleTool command error                            
    Traceback (most recent call last):
      File "build/bdist.linux-x86_64/egg/b2/console_tool.py", line 1009, in run_command
        return command.run(args)
      File "build/bdist.linux-x86_64/egg/b2/console_tool.py", line 798, in run
        dry_run=args.dryRun,
      File "/usr/lib/python2.7/site-packages/logfury-0.1.2-py2.7.egg/logfury/v0_1/trace_call.py", line 84, in wrapper
        return function(*wrapee_args, **wrapee_kwargs)
      File "build/bdist.linux-x86_64/egg/b2/sync/sync.py", line 262, in sync_folders
        raise CommandError('sync is incomplete')
    CommandError: sync is incomplete
    ERROR: sync is incomplete
    
    invalid/environmental 
    opened by shishanyu 19
  • Feature request: Optional range parameter for download request

    Hi!

    Developer of B2Fuse here. An idea was put forth to use B2 CLI Python API as backend for B2Fuse to handle integration against B2. This is an excellent idea but requires an additional feature from B2 CLI. In order to use B2 CLI Python API in B2Fuse the download call needs to be able to request part of a file. This should be implemented as an optional feature, as in some cases it will also be necessary to request entire files.
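
On the wire, a partial-file download maps to an HTTP Range header. A sketch of just the header arithmetic (HTTP byte ranges are inclusive):

```python
def range_header(start, length):
    """Build an HTTP Range value for `length` bytes starting at offset `start`.
    HTTP byte ranges are inclusive, hence end = start + length - 1."""
    if length <= 0:
        raise ValueError('length must be positive')
    return 'bytes=%d-%d' % (start, start + length - 1)

range_header(0, 1024)   # 'bytes=0-1023'
range_header(4096, 100) # 'bytes=4096-4195'
```

A FUSE layer like B2Fuse could translate each read(offset, size) call into one such range, fetching only the blocks the kernel asks for.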

    Would this be possible to add?

    Best Sondre

    enhancement 
    opened by falense 19
  • Library layer - first part

    This PR is WIP (created for easy early review). Please do not merge it before it is ready.

    This will eventually implement #4.

    Task list:

    • [x] Create library access skeleton
    • [x] Implement B2Api.list_buckets()
    • [x] Implement B2Api.get_bucket_by_id()
    • [x] Implement B2Api.create_bucket()
    • [x] Implement B2Api.delete_bucket()
    • [x] Implement B2Api.make_url()
    • [x] Add api to bucket constructor
    • [x] Implement AbstractBucket.hide_file()
    • [x] drop AbstractBucket.getUploadUrl() (accidentally copied from Java)
    • [x] implement B2Api.delete_file_version()
    • [x] implement bucket.set_type()
    • [x] implement InMemoryCache.save_bucket()
    • [x] remove FileInfo
    • [x] implement clear_account (?)
    • [x] rethink BucketType enum
    • [x] rethink AbstractBucket class hierarchy
    • [x] implement api.download_file_by_id()
    • [x] implement AbstractBucket.download_file_by_name()
    • [x] implement bucket.list_file_names()
    • [x] implement bucket.list_file_versions()
    • [x] implement bucket.upload_file()
    • [x] refactor "400 bad request" exception handling
    • [x] remove ConsoleTool.info temporary property

    Unfinished items will be done in another PR

    opened by ppolewicz 19
  • Add --recursive, --force and --quiet options to delete-bucket command

    Users sometimes need to delete a bucket and all the objects in that bucket. It is laborious to write a script to do so, and the use of b2 sync is not obvious. Add the following options to delete-bucket:

    • --recursive delete all file versions in the bucket, then delete the bucket itself. The user is prompted for confirmation, unless the --force option is supplied. The command prints the details of each file version (using the same format as the ls command) as it is deleted, unless the --quiet option is specified.
    • --force do not prompt for confirmation when doing a recursive delete.
    • --quiet do not print details of file versions as they are deleted.

    The following description is in terms of B2 Native API operations, though the command will call the equivalent methods in b2-sdk-python:

    When --recursive is specified, after the confirmation step, the command will call b2_list_file_versions on the bucket once, and loop through the results, calling b2_delete_file_version for each file version and, if --quiet was not specified, printing the details of the file version. If either b2_list_file_versions or b2_delete_file_version returns an error, the command will terminate, reporting that error.

    Once the command has called b2_delete_file_version for each file version returned by b2_list_file_versions, it will call b2_delete_bucket to delete the bucket itself, reporting any error returned by that operation.

    Note that b2_delete_bucket may fail due to the bucket not being empty, as files may be uploaded to the bucket after b2_list_file_versions was called. In this case, the command will call b2_list_file_versions again, accompanying the error message from b2_delete_bucket with the message One or more files were uploaded while the bucket contents were being deleted: and the results of the b2_list_file_versions call.
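
The loop described above can be exercised against a toy in-memory client (the method names mimic the Native API operations; the real command would go through b2-sdk-python):

```python
class FakeB2:
    """Minimal stand-in for the three Native API operations the description uses."""

    def __init__(self, versions):
        self.versions = list(versions)  # (file_name, file_id) pairs
        self.bucket_deleted = False

    def list_file_versions(self):
        return list(self.versions)

    def delete_file_version(self, file_name, file_id):
        self.versions.remove((file_name, file_id))

    def delete_bucket(self):
        if self.versions:  # mirrors b2_delete_bucket failing on a non-empty bucket
            raise RuntimeError('bucket not empty')
        self.bucket_deleted = True

def delete_bucket_recursive(client, quiet=True):
    """--recursive flow: delete every listed version, then the bucket itself."""
    for file_name, file_id in client.list_file_versions():
        client.delete_file_version(file_name, file_id)
        if not quiet:
            print(file_name, file_id)
    client.delete_bucket()

b2 = FakeB2([('a.txt', '1'), ('a.txt', '2'), ('b.txt', '3')])
delete_bucket_recursive(b2)
assert b2.bucket_deleted and b2.versions == []
```

Note that the toy client has no concurrent uploads, so the not-empty retry path described above never triggers here; the real command would have to re-list and report as specified.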

    opened by metadaddy 3
  • permission denied

    Hi

    I am using tool like below

    wget -q https://github.com/Backblaze/B2_Command_Line_Tool/releases/latest/download/b2-linux -O /usr/bin/backblaze
    chmod +x /usr/bin/backblaze

    backblaze authorize-account xxx xxx

    and it gives error

    Failed to execv() /tmp/staticx-pDHABe/b2: Permission denied

    Any help

    invalid/environmental 
    opened by perochak 1
  • argparse.ArgumentError when running on Python 3.11

    Running b2 authorize-account under Python 3.11.0 fails with the following traceback:

    Traceback (most recent call last):
      File "/proj/venv311/bin/b2", line 8, in <module>
        sys.exit(main())
                 ^^^^^^
      File "/proj/venv311/lib/python3.11/site-packages/b2/console_tool.py", line 2986, in main
        exit_status = ct.run_command(sys.argv)
                      ^^^^^^^^^^^^^^^^^^^^^^^^
      File "/proj/venv311/lib/python3.11/site-packages/b2/console_tool.py", line 2812, in run_command
        args = B2.get_parser().parse_args(argv[1:])
               ^^^^^^^^^^^^^^^
      File "/proj/venv311/lib/python3.11/site-packages/b2/console_tool.py", line 551, in get_parser
        subcommand.get_parser(subparsers=subparsers, parents=parents, for_docs=for_docs)
      File "/proj/venv311/lib/python3.11/site-packages/b2/console_tool.py", line 527, in get_parser
        parser = subparsers.add_parser(
                 ^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/uninen/.pyenv/versions/3.11.0/lib/python3.11/argparse.py", line 1185, in add_parser
        raise ArgumentError(self, _('conflicting subparser: %s') % name)
    argparse.ArgumentError: argument command: conflicting subparser: authorize-account
    

    When switching back to 3.10 it works fine.

    bug 
    opened by Uninen 1
  • ERROR: Incomplete sync: sync is incomplete ( Already failed: 500 internal_error incident id 730b04157be6-adbb001b0523ff77)

    Hello.

    I'm getting errors when uploading to Backblaze. The following is in the logs. Is there anything I can do to get it to work?

    Software version:

    # b2 version
    b2 command line tool, version 3.1.0
    

    Logs:

    updated: 0/1 files   927 / 928 MB   861 kB/s
    compare: 1/1 files   updated: 0/1 files   927 / 928 MB   862 kB/s
    compare: 1/1 files   updated: 0/1 files   928 / 928 MB   862 kB/s
    compare: 1/1 files   updated: 0/1 files   928 / 928 MB   862 kB/s
    compare: 1/1 files   updated: 0/1 files   928 / 928 MB   862 kB/s
    compare: 1/1 files   updated: 1/1 files   928 / 928 MB   862 kB/s
    
    ERROR:b2sdk.sync.action:an exception occurred in a sync action
    Traceback (most recent call last):
      File "b2sdk/sync/action.py", line 49, in run
      File "b2sdk/sync/action.py", line 140, in do_action
      File "logfury/_logfury/trace_call.py", line 86, in wrapper
      File "b2sdk/bucket.py", line 528, in upload
      File "logfury/_logfury/trace_call.py", line 86, in wrapper
      File "b2sdk/bucket.py", line 578, in create_file
      File "b2sdk/bucket.py", line 662, in _create_file
      File "logfury/_logfury/trace_call.py", line 86, in wrapper
      File "b2sdk/transfer/emerge/emerger.py", line 71, in emerge
      File "b2sdk/transfer/emerge/executor.py", line 72, in execute_emerge_plan
      File "b2sdk/transfer/emerge/executor.py", line 217, in execute_plan
      File "b2sdk/transfer/emerge/executor.py", line 217, in <listcomp>
      File "b2sdk/utils/__init__.py", line 38, in interruptible_get_result
      File "concurrent/futures/_base.py", line 445, in result
      File "concurrent/futures/_base.py", line 390, in __get_result
      File "concurrent/futures/thread.py", line 52, in run
      File "b2sdk/transfer/outbound/upload_manager.py", line 168, in _upload_part
    b2sdk.exception.AlreadyFailed: Already failed: 500 internal_error incident id 730b04157be6-adbb001b0523ff77
    b2_upload(/opt/backuptmp/tmp/datafile.gz.gpg, autobackup/foldername/datafile.gz.gpg, 1665104760873): AlreadyFailed('500 internal_error incident id 730b04157be6-adbb001b0523ff77') Already failed: 500 internal_error incident id 730b04157be6-adbb001b0523ff77
    
    compare: 1/1 files   updated: 1/1 files   928 / 928 MB   862 kB/s
    
    ERROR: Incomplete sync: sync is incomplete
    command /usr/local/bin/b2 sync /opt/backuptmp/tmp b2://bucketname/autobackup/foldername failed with 1
    

    Best regards, Alojzij

    more-information-needed 
    opened by predkambrij 1
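    The AlreadyFailed in the traceback above follows a common pattern for concurrent multi-part uploads: once one part fails (here with the server's 500 internal_error), the remaining parts abort immediately instead of retrying. An illustrative sketch of that pattern (not the actual b2sdk code; names and the simulated failure are invented):

    ```python
    import concurrent.futures
    import threading

    class AlreadyFailed(Exception):
        """Raised by parts that give up because an earlier part failed."""

    failed = threading.Event()

    def upload_part(part_no):
        # Later parts check the shared flag and bail out without touching
        # the server once any part has failed.
        if failed.is_set():
            raise AlreadyFailed(f"part {part_no}: an earlier part failed")
        if part_no == 2:  # simulate a 500 internal_error from the server
            failed.set()
            raise RuntimeError("500 internal_error")
        return part_no

    # max_workers=1 keeps the execution order deterministic for the demo.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        futures = [pool.submit(upload_part, n) for n in range(1, 5)]
        for f in futures:
            try:
                f.result()
            except Exception as exc:
                print(type(exc).__name__, exc)
    ```

    Parts 3 and 4 fail with AlreadyFailed even though only part 2 hit the server error, which matches the log line above.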
  • Bump tabulate from 0.8.10 to 0.9.0

    Bumps tabulate from 0.8.10 to 0.9.0.

    Changelog

    Sourced from tabulate's changelog.

    • 0.9.0:
      • Drop support for Python 2.7, 3.5, 3.6.
      • Migrate to pyproject.toml project layout (PEP 621).
      • New output formats: asciidoc, various *grid and *outline formats.
      • New output features: vertical row alignment, separating lines.
      • New input format: list of dataclasses (Python 3.7 or later).
      • Support infinite iterables as row indices.
      • Improve column width options.
      • Improve support for ANSI escape sequences and document the behavior.
      • Various bug fixes.
    Commits
    • bf58e37 version bump to 0.9.0, update README (Benchmark, Contributors), CHANGELOG
    • fd0a34c Merge pull request #201 from astanin/dev-pep621
    • 0a6554e appveyor: upgrade setuptools before build (should fix UNKNOWN package name)
    • d99d9ae ignore ImportError when importing version number
    • 3e45eac update appveyor.yml to use pyproject.toml instead of setup.py
    • 05e88d2 fix test_cli - change script path, do not import .version if init.py is r...
    • 4e2eeb1 update tox.ini - use a virtual environment to build a source dist from the so...
    • 6e37802 Merge pull request #179 from KOLANYCH-libs:pyproject.toml
    • 9172378 fix tests failing after PR#183 (remove 1 space from the expected values)
    • 930a943 reformat with black, fix flake warnings
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 0
Releases (v3.6.0)

Owner: Backblaze (Backblaze Cloud Backup & Cloud Storage)