s3-credentials

A tool for creating credentials for accessing S3 buckets

For project background, see s3-credentials: a tool for creating credentials for S3 buckets on my blog.

⚠️ Warning

I am not an AWS security expert. You should review how this tool works carefully before using it against your own AWS account.

If you are an AWS security expert I would love to get your feedback!

Installation

Install this tool using pip:

$ pip install s3-credentials

Configuration

This tool uses boto3 under the hood, which supports a number of different ways of providing your AWS credentials. If you have an existing ~/.aws/config or ~/.aws/credentials file the tool will use that - otherwise you can set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables before calling this tool.
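For example, to supply credentials via environment variables (the values shown are placeholders):

$ export AWS_ACCESS_KEY_ID="AKIA..."
$ export AWS_SECRET_ACCESS_KEY="..."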

Usage

The s3-credentials create command is the core feature of this tool. Pass it one or more S3 bucket names and it will create a new user with permission to access just those specific buckets, then create access credentials for that user and output them to your console.

Make sure to record the SecretAccessKey because it will only be displayed once and cannot be recreated later on.

In this example I create credentials for reading and writing files in my static.niche-museums.com S3 bucket:

% s3-credentials create static.niche-museums.com

Created user: s3.read-write.static.niche-museums.com with permissions boundary: arn:aws:iam::aws:policy/AmazonS3FullAccess
Attached policy s3.read-write.static.niche-museums.com to user s3.read-write.static.niche-museums.com
Created access key for user: s3.read-write.static.niche-museums.com
{
    "UserName": "s3.read-write.static.niche-museums.com",
    "AccessKeyId": "AKIAWXFXAIOZOYLZAEW5",
    "Status": "Active",
    "SecretAccessKey": "...",
    "CreateDate": "2021-11-03 01:38:24+00:00"
}

The command has several additional options:

  • --username TEXT: The username to use for the user that is created by the command (or the username of an existing user if you do not want to create a new one). If omitted, a default such as s3.read-write.static.niche-museums.com will be used.
  • -c, --create-bucket: Create the buckets if they do not exist. Without this, any missing buckets will be treated as an error.
  • --read-only: The user should only be allowed to read files from the bucket.
  • --write-only: The user should only be allowed to write files to the bucket, but not read them. This is useful for logging use-cases.
  • --bucket-region: If creating buckets, the region in which they should be created.
  • --silent: Don't output details of what is happening, just output the JSON for the created access credentials at the end.
  • --user-permissions-boundary: Custom permissions boundary to use for users created by this tool. This will default to restricting those users to only interacting with S3, taking the --read-only option into account. Use none to create users without any permissions boundary at all.
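For example, to create read-only credentials for a new bucket in a specific region while suppressing progress output (the bucket name here is illustrative):

s3-credentials create my-example-bucket \
  --read-only \
  --create-bucket \
  --bucket-region us-east-1 \
  --silent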

Here's the full sequence of events that takes place when you run this command:

  1. Confirm that each of the specified buckets exists. If any do not and --create-bucket was passed, create them - otherwise exit with an error.
  2. If a username was not specified, determine a username using the s3.$permission.$buckets format.
  3. If a user with that username does not exist, create one with an S3 permissions boundary that respects the --read-only option - unless --user-permissions-boundary=none was passed (or a custom permissions boundary string).
  4. For each specified bucket, add an inline IAM policy to the user that gives them permission to either read-only, write-only or read-write against that bucket.
  5. Create a new access key for that user and output the key and its secret to the console.
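As a concrete illustration of step 2, assuming the naming pattern above, a --read-only run against a single bucket should derive its default username as follows (a sketch, not verified output):

$ s3-credentials create static.niche-museums.com --read-only
# Default username derived from s3.$permission.$buckets:
#   s3.read-only.static.niche-museums.com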

Other commands

whoami

To see which user you are authenticated as:

s3-credentials whoami

This will output JSON representing the currently authenticated user.
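The exact fields depend on the underlying AWS API call used by your version of the tool; a representative response might look something like this (values are illustrative):

{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/s3.read-write.static.niche-museums.com"
}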

list-users

To see a list of all users that exist for your AWS account:

s3-credentials list-users

This will return pretty-printed JSON objects by default.

Add --nl to collapse these to single lines as valid newline-delimited JSON.

Add --array to output a valid JSON array of objects instead.
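Newline-delimited output combines well with line-oriented tools. For example, assuming jq is installed (each IAM user object includes a UserName field):

s3-credentials list-users --nl | jq -r '.UserName'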

list-buckets

Shows a list of all buckets in your AWS account.

s3-credentials list-buckets

Accepts the same --nl and --array options as list-users.
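For example, to capture all bucket metadata as a single JSON document:

s3-credentials list-buckets --array > buckets.json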

list-user-policies

To see a list of inline policies belonging to users:

% s3-credentials list-user-policies s3.read-write.static.niche-museums.com

User: s3.read-write.static.niche-museums.com
PolicyName: s3.read-write.static.niche-museums.com
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListObjectsInBucket",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::static.niche-museums.com"
            ]
        },
        {
            "Sid": "AllObjectActions",
            "Effect": "Allow",
            "Action": "s3:*Object",
            "Resource": [
                "arn:aws:s3:::static.niche-museums.com/*"
            ]
        }
    ]
}

You can pass any number of usernames here. If you don't specify a username the tool will loop through every user belonging to your account:

s3-credentials list-user-policies

delete-user

In trying out this tool it's possible you will create several different user accounts that you later decide to clean up.

Deleting AWS users is a little fiddly: you first need to delete their access keys, then their inline policies and finally the user themselves.

The s3-credentials delete-user command handles this for you:

% s3-credentials delete-user s3.read-write.simonw-test-bucket-10
User: s3.read-write.simonw-test-bucket-10
  Deleted policy: s3.read-write.simonw-test-bucket-10
  Deleted access key: AKIAWXFXAIOZK3GPEIWR
  Deleted user

You can pass it multiple usernames to delete multiple users at a time.
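For example, to clean up two experimental users in one run (usernames illustrative):

s3-credentials delete-user s3.read-write.test-bucket-1 s3.read-write.test-bucket-2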

Development

To contribute to this tool, first check out the code. Then create a new virtual environment:

cd s3-credentials
python -m venv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
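
There is also an integration test suite (added in 0.5, per the release notes below) that exercises a real AWS account and deletes any resources it creates afterwards; run it with:

pytest --integration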

Comments
  • `s3-credentials create` command

    This is the command which creates a user and returns credentials for a specified bucket, optionally also creating the bucket.

    See initial design notes in #1.

    enhancement 
    opened by simonw 22
  • Standard default output should be a valid JSON array

    I just spotted list-buckets has the same not-quite-newline-delimited JSON output format, which is a bad default. I should fix that too.

    Originally posted by @simonw in https://github.com/simonw/s3-credentials/issues/28#issuecomment-1014838721

    enhancement 
    opened by simonw 15
  • Research creating expiring credentials using `sts.assume_role()`

    The initial reason for creating this tool was that I wanted to be able to create long-lived (never expiring) tokens for the kinds of use-cases described in this post: https://simonwillison.net/2021/Nov/3/s3-credentials/

    Expiring credentials are fantastic for all sorts of other use-cases. It would be great if this tool could optionally create those instead of creating long-lived credentials.

    This would mean the tool didn't have to create users at all (when used in that mode) - it could create a role and then create temporary access credentials for that role using sts.assume_role(): https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts.html#STS.Client.assume_role

    enhancement research 
    opened by simonw 15
  • `s3-credentials put-objects` command

    It's frustrating when using s3-credentials put-object that you have to specify the key name each time, rather than deriving that from the filename:

    s3-credentials put-object simonwillison-cors-allowed-public \
      click_default_group-1.2.2-py3-none-any.whl \
      /tmp/click-default-group/dist/click_default_group-1.2.2-py3-none-any.whl
    

    One way to fix this would be with a s3-credentials put-objects which works like this:

    s3-credentials put-objects simonwillison-cors-allowed-public /tmp/click-default-group/dist/click_default_group-1.2.2-py3-none-any.whl
    

    It could accept multiple files (hence the plural name) and could also accept directories and recursively upload their contents.

    enhancement 
    opened by simonw 13
  • Make it easier to add extra policy statements

    The current --policy option lets you set a custom policy, but leaves it to you to define one.

    I find myself wanting to mix in the following to the policy that I use, for s3-ocr:

    https://docs.aws.amazon.com/textract/latest/dg/security_iam_id-based-policy-examples.html#security_iam_async-actions

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "textract:StartDocumentTextDetection",
                    "textract:StartDocumentAnalysis",
                    "textract:GetDocumentTextDetection",
                    "textract:GetDocumentAnalysis"
                ],
                "Resource": "*"
            }
        ]
    }
    

    Would be nice if there was a neat way to do this.

    enhancement research 
    opened by simonw 10
  • Mechanism for running tests against a real AWS account

    The tests for this project currently run against mocks - which is good, because I don't like the idea of GitHub Action tests hitting real APIs.

    But... this project is about building securely against AWS. As such, automated tests that genuinely exercise a live AWS account (and check that the resulting permissions behave as expected) would be incredibly valuable for growing my confidence that this tool works as advertised.

    These tests would need quite a high level of administrative access, because they need to be able to create users, roles etc.

    I don't like the idea of storing my own AWS administrator account credentials in a GitHub Actions secret though. I think I'll write these tests such that they can be run outside of GitHub Actions, maybe configured via environment variables that allow other project contributors to run tests against their own accounts.

    tests 
    opened by simonw 10
  • Stop using action wildcards and start explicitly listing permissions

    See https://github.com/simonw/s3-credentials/issues/11#issuecomment-959844042 for context.

    The read-write policy currently uses "Action": "s3:*Object" - and the read-only one uses "Action": "s3:GetObject*".

    This is pretty gross - surely explicitly listing the allowed actions is better practice?

    • [x] #23
    • [x] #24
    • [x] #25
    research 
    opened by simonw 10
  • Support configuring the bucket as a website

    It would be useful to have an opt-in option for saying "this bucket should be configured as a website" - because setting that up without a tool is quite fiddly.

    https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteAccessPermissionsReqd.html has the details:

    When you configure a bucket as a static website, if you want your website to be public, you can grant public read access. To make your bucket publicly readable, you must disable block public access settings for the bucket and write a bucket policy that grants public read access.

    See #20 for "block public access" setting, and #19 for bucket policies.

    enhancement 
    opened by simonw 9
  • Work-in-progress create command

    Refs #3. This is implemented... but it doesn't seem to work - when I copy and paste the credentials into Transmit it refuses to connect.

    • [x] Get it working
    • [x] Add tests
    • [x] Add documentation
    opened by simonw 8
  • Manually test --prefix against litestream.io

    Originally posted by @simonw in https://github.com/simonw/s3-credentials/pull/39#issuecomment-1014857276

    Splitting this into a separate issue mainly so I can clearly document how to use Litestream in the comments here.

    Goal is to confirm that S3 credentials created using s3-credentials create ... --prefix litestream-test/ can be used with Litestream to back up a SQLite database to that path within the bucket.

    research tests 
    opened by simonw 7
  • Apply jdub policy suggestions

    https://github.com/simonw/s3-credentials/blob/main/s3_credentials/policies.py

    My suggestions:

    • specify individual actions explicitly (no wildcards)
    • separate permissions by resource (Buckets vs. Objects)
    • Sid is unnecessary

    Your read/write policy is good, but instead of *Object, list GetObject and PutObject.

    Your read-only policy would be better written like your read/write policy, one section for the bucket permission (ListBucket), one for the object permission (which should be GetObject, no wildcard).

    Your write-only policy is great as is.

    You may want to add additional permissions to let clients set ACLs. But if it's all simple object-by-object stuff, these very simple policies are great.

    Originally posted by @jdub in https://github.com/simonw/s3-credentials/issues/7#issuecomment-958651592

    enhancement research 
    opened by simonw 7
  • Add s3:PutObjectAcl to write policies

    This came up here:

    • https://github.com/simonw/public-notes/issues/9#issuecomment-1328567164

    It turned out django-storages needs a write policy that includes s3:PutObjectAcl: https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html#iam-policy

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:GetObjectAcl",
                    "s3:GetObject",
                    "s3:ListBucket",
                    "s3:DeleteObject",
                    "s3:PutObjectAcl"
                ],
                "Principal": {
                    "AWS": "arn:aws:iam::example-AWS-account-ID:user/example-user-name"
                },
                "Resource": [
                    "arn:aws:s3:::example-bucket-name/*",
                    "arn:aws:s3:::example-bucket-name"
                ]
            }
        ]
    }
    

    Looks like I should add s3:GetObjectAcl to the default read policies too.

    enhancement 
    opened by simonw 3
  • Add the options to add tags to the created resources

    Hello! I am looking into using s3-credentials for my projects. I use tags to identify resources in different ways, such as how they were created or the project they belong to. I was wondering if there is planned support for adding tags to the resources created, or if you would be open to a contribution in that area.

    enhancement 
    opened by threkk 3
  • `get-objects/put-objects` `--skip` and `--skip-hash` options

    Idea:

    • --skip to skip downloading a file if it already exists with the same filename
    • --skip-hash to skip downloading a file if it already exists AND the MD5 hash has not changed (more expensive as needs to calculate the local hash)

    Originally posted by @simonw in https://github.com/simonw/s3-credentials/issues/78#issuecomment-1248398247

    enhancement 
    opened by simonw 1
  • Provide a `--profile` option to allow AWS profile selection

    Users with multiple AWS accounts can declare named profiles to manage the different sets of credentials/regions. It would be ideal if s3-credentials accepted a --profile argument, just like the aws command line tool.

    enhancement 
    opened by nk9 3
  • Bad session token error masked if not creating a new bucket

    If a bad or expired session token is set and --create-bucket isn't specified, the create command fails with a misleading error claiming the bucket doesn't exist. If --create-bucket is specified, a traceback with more information is shown instead:

    export AWS_ACCESS_KEY_ID="..."
    export AWS_SECRET_ACCESS_KEY="..."
    export AWS_SESSION_TOKEN="EXPIRED_TOKEN" 
    
    $ s3-credentials create --username USERNAME BUCKET
    Error: Bucket does not exist: BUCKET - try --create-bucket to create it
    
    $ s3-credentials create --create-bucket --username USERNAME BUCKET
    Traceback (most recent call last):
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/bin/s3-credentials", line 8, in <module>
        sys.exit(cli())
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/lib/python3.9/site-packages/click/core.py", line 1128, in __call__
        return self.main(*args, **kwargs)
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/lib/python3.9/site-packages/click/core.py", line 1053, in main
        rv = self.invoke(ctx)
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/lib/python3.9/site-packages/click/core.py", line 1659, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/lib/python3.9/site-packages/click/core.py", line 1395, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/lib/python3.9/site-packages/click/core.py", line 754, in invoke
        return __callback(*args, **kwargs)
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/lib/python3.9/site-packages/s3_credentials/cli.py", line 314, in create
        s3.create_bucket(Bucket=bucket, **kwargs)
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/lib/python3.9/site-packages/botocore/client.py", line 391, in _api_call
        return self._make_api_call(operation_name, kwargs)
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/lib/python3.9/site-packages/botocore/client.py", line 719, in _make_api_call
        raise error_class(parsed_response, operation_name)
    botocore.exceptions.ClientError: An error occurred (ExpiredToken) when calling the CreateBucket operation: The provided token has expired.
    
    bug 
    opened by kimvanwyk 3

Releases
  • 0.14(Sep 15, 2022)

    • s3-credentials put-objects command (docs) for uploading more than one file or directory to an S3 bucket at a time. #68
    • s3-credentials get-objects command (docs) for downloading multiple files from an S3 bucket. #78
  • 0.13(Aug 12, 2022)

    • Documentation now lives on a dedicated documentation website: https://s3-credentials.readthedocs.io/ #71
    • s3-credentials create ... --website --create-bucket now creates an S3 bucket that is configured to act as a website, with index.html as the index page and error.html as the page used for any errors. #21
    • s3-credentials list-buckets --details now returns the bucket region and the URL to the website, if it is configured to act as a website. #77
    • Fixed a bug where list-bucket would return an error if the bucket (or specified --prefix) was empty. #76
  • 0.12.1(Aug 1, 2022)

    • Using the --policy or --statement options now implies --user-permissions-boundary=none. Previously it was easy to use these options to accidentally create credentials that did not work as expected since they would have a default permissions boundary that locked them down to only being able to access S3. #74
    • The s3-credentials.AmazonS3FullAccess role created by this tool in order to issue temporary credentials previously used the default MaxSessionDuration value of 3600, preventing it from creating credentials that could last more than an hour. This has been increased to 12 hours. See this issue comment for instructions on fixing your existing role if this bug is affecting your account. #75
  • 0.12(Jun 30, 2022)

    • New --statement JSON option for both the s3-credentials create and s3-credentials policy commands, allowing one or more additional policy statements (provided as JSON strings) to be added to the generated IAM policy. #72
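    A hedged sketch of the new option in use - the statement JSON here is illustrative, borrowing a Textract action from the issue above:

    s3-credentials create my-bucket \
      --statement '{"Effect": "Allow", "Action": "textract:GetDocumentAnalysis", "Resource": "*"}'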
  • 0.11(May 1, 2022)

  • 0.10(Jan 25, 2022)

  • 0.9(Jan 18, 2022)

    See Weeknotes: s3-credentials prefix and Datasette 0.60 for extra background on these new features.

    • New --prefix myprefix/ option to s3-credentials create, which configures the credentials to only allow access to keys within the S3 bucket that start with the provided prefix. #12
    • s3-credentials policy --prefix myprefix/ command for generating and outputting a JSON policy that is restricted to the specified prefix. You can see examples in the README.
    • New list-bucket command for listing the contents of a specified bucket. #28
    • The list-users, list-buckets and list-bucket commands all default to outputting an indented JSON array - previously they outputted indented JSON objects separated by newlines. The --nl option can be used to return newline-delimited single line JSON objects. The new --csv and --tsv options can be used to return CSV or TSV output. #48
  • 0.8(Dec 7, 2021)

    • s3-credentials create my-bucket --public option for creating public buckets, which allow anyone with knowledge of a filename to download that file. This works by attaching this public bucket policy to the bucket after it is created. #42
    • s3-credentials put-object now sets the Content-Type header on the uploaded object. The type is detected based on the filename, or can be specified using the new --content-type option. #43
    • s3-credentials policy my-bucket --public-bucket outputs the public bucket policy that would be attached to a bucket of that name. #44
  • 0.7(Nov 30, 2021)

    • s3-credentials policy command, to output the JSON policy that would be used directly to the terminal. #37
    • README now includes examples of the three different policies. #36
    • s3-credentials put-object and s3-credentials get-object commands for uploading and downloading files from an S3 bucket. #38
  • 0.6(Nov 18, 2021)

    • create --dry-run option outputs a summary of changes that would be made to an AWS account without applying them. #35
    • s3-credentials whoami command now uses sts.GetCallerIdentity, which means it works with any kind of access key. #33
  • 0.5(Nov 11, 2021)

    • New s3-credentials create --duration 20m option. This creates temporary credentials that only last for the specified time, by creating a role and using STS.AssumeRole() to retrieve credentials. #27
    • Redesigned read-only and read-write policies to no longer use wildcards and instead explicitly list allowed actions. #15
    • Commands now accept an optional --auth file/path.json option to specify a JSON or INI file containing the credentials to use. #29
    • New s3-credentials list-buckets --details option to include ACLs, website configuration and bucket policies. #22
    • New s3-credentials create --format ini option for outputting INI format instead of JSON. #17
    • Now uses botocore.stub in some of the tests - thanks, Niko Abeler. #16
    • Added integration tests, run using pytest --integration, which exercise the tool against an AWS account and delete any created resources afterwards. #30
    • Added tips section to the README, including how to access CloudTrail
  • 0.4(Nov 4, 2021)

    • New options for authenticating with AWS: --access-key, --secret-key, --session-token, --endpoint-url. #2
    • Various improvements to JSON policies - thanks, @jdub! #11
    • --policy filename.json option for specifying a custom JSON policy. #14
  • 0.3(Nov 3, 2021)

  • 0.2(Nov 3, 2021)

  • 0.1(Nov 3, 2021)

    • Initial release
    • s3-credentials create name-of-bucket creates a new user with read-write access only to the specified S3 bucket, creates an access key for that user and outputs it to the console. #3
    • s3-credentials list-users lists all of the users for the current AWS account. #4
    • s3-credentials list-user-policies lists inline policies for the specified users, or all users. #5
    • s3-credentials whoami shows information about the currently authenticated user.
Owner

Simon Willison