Full-stack, modern web application generator, using FastAPI, PostgreSQL as the database, Docker, automatic HTTPS, and more.

Overview

Full Stack FastAPI and PostgreSQL - Base Project Generator

Generate a backend and frontend stack using Python, including interactive API documentation.

Interactive API documentation

(screenshot)

Alternative API documentation

(screenshot)

Dashboard Login

(screenshot)

Dashboard - Create User

(screenshot)

Features

  • Full Docker integration (Docker based).
  • Docker Swarm Mode deployment.
  • Docker Compose integration and optimization for local development.
  • Production ready Python web server using Uvicorn and Gunicorn.
  • Python FastAPI backend:
    • Fast: Very high performance, on par with NodeJS and Go (thanks to Starlette and Pydantic).
    • Intuitive: Great editor support. Completion everywhere. Less time debugging.
    • Easy: Designed to be easy to use and learn. Less time reading docs.
    • Short: Minimize code duplication. Multiple features from each parameter declaration.
    • Robust: Get production-ready code. With automatic interactive documentation.
    • Standards-based: Based on (and fully compatible with) the open standards for APIs: OpenAPI and JSON Schema.
    • Many other features, including automatic validation, serialization, interactive documentation, authentication with OAuth2 JWT tokens, etc. (see the short sketch after this list).
  • Secure password hashing by default.
  • JWT token authentication.
  • SQLAlchemy models (independent of Flask extensions, so they can be used with Celery workers directly).
  • Basic starting models for users (modify and remove as you need).
  • Alembic migrations.
  • CORS (Cross Origin Resource Sharing).
  • Celery worker that can import and use models and code from the rest of the backend selectively.
  • REST backend tests based on Pytest, integrated with Docker, so you can test the full API interaction, independent of the database. As it runs in Docker, it can build a new data store from scratch each time (so you can use ElasticSearch, MongoDB, CouchDB, or whatever you want, and just test that the API works).
  • Easy Python integration with Jupyter Kernels for remote or in-Docker development with extensions like Atom Hydrogen or Visual Studio Code Jupyter.
  • Vue frontend:
    • Generated with Vue CLI.
    • JWT Authentication handling.
    • Login view.
    • After login, main dashboard view.
    • Main dashboard with user creation and editing.
    • Self-user editing (each user can edit their own profile).
    • Vuex.
    • Vue-router.
    • Vuetify for beautiful material design components.
    • TypeScript.
    • Docker server based on Nginx (configured to play nicely with Vue-router).
    • Docker multi-stage building, so you don't need to save or commit compiled code.
    • Frontend tests run at build time (can also be disabled).
    • Made as modular as possible, so it works out of the box, but you can re-generate with Vue CLI or create it as you need, and re-use what you want.
    • It's also easy to remove it if you have an API-only app; check the instructions in the generated README.md.
  • PGAdmin for the PostgreSQL database; you can easily modify it to use PHPMyAdmin and MySQL instead.
  • Flower for Celery jobs monitoring.
  • Load balancing between frontend and backend with Traefik, so you can have both under the same domain, separated by path, but served by different containers.
  • Traefik integration, including Let's Encrypt HTTPS certificates automatic generation.
  • GitLab CI (continuous integration), including frontend and backend testing.
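
To illustrate the backend features above, here is a minimal, generic FastAPI sketch (not the generated project's actual code; the Item model is purely illustrative). A single Pydantic model and path operation declaration already gives you validation, serialization, and interactive documentation:

# Minimal FastAPI sketch: one Pydantic model + one path operation gives
# request validation, response serialization, and interactive docs.
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Item(BaseModel):
    title: str
    description: Optional[str] = None  # optional field with a default


@app.post("/items/", response_model=Item)
def create_item(item: Item) -> Item:
    # The request body is validated and parsed into `item` automatically;
    # the response is serialized back to JSON against `response_model`.
    return item

Running this with Uvicorn serves the interactive documentation at /docs and the alternative documentation at /redoc.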

How to use it

Go to the directory where you want to create your project and run:

pip install cookiecutter
cookiecutter https://github.com/tiangolo/full-stack-fastapi-postgresql

Generate passwords

You will be asked to provide passwords and secret keys for several components. Open another terminal and run:

openssl rand -hex 32
# Outputs something like: 99d3b1f01aa639e4a76f4fc281fc834747a543720ba4c8a8648ba755aef9be7f

Copy the output and use it as a password / secret key. Run the command again to generate another secure key.
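
If you prefer to stay in Python, the standard library's secrets module produces an equivalent 32-byte hex key (a small sketch, not part of the generator itself):

# Equivalent to `openssl rand -hex 32`: 32 random bytes printed as 64 hex characters.
import secrets

print(secrets.token_hex(32))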

Input variables

The generator (cookiecutter) will ask you for some data that you might want to have at hand before generating the project.

The input variables, with their default values (some auto-generated), are listed below (a short sketch of how the backend might read them follows the list):

  • project_name: The name of the project

  • project_slug: The development-friendly name of the project. By default, based on the project name.

  • domain_main: The domain where the project will be deployed for production (from the branch production), used by the load balancer, backend, etc. By default, based on the project slug.

  • domain_staging: The domain where the project will be deployed for staging (before production) (from the branch master). By default, based on the main domain.

  • docker_swarm_stack_name_main: The name of the stack while deploying to Docker in Swarm mode for production. By default, based on the domain.

  • docker_swarm_stack_name_staging: The name of the stack while deploying to Docker in Swarm mode for staging. By default, based on the domain.

  • secret_key: Backend server secret key. Use the method above to generate it.

  • first_superuser: The first superuser generated; with it you will be able to create more users, etc. By default, based on the domain.

  • first_superuser_password: First superuser password. Use the method above to generate it.

  • backend_cors_origins: Origins (domains, more or less) that are enabled for CORS (Cross Origin Resource Sharing). This allows a frontend on one domain (e.g. https://dashboard.example.com) to communicate with this backend, which could be living on another domain (e.g. https://api.example.com). It can also be used to allow your local frontend (with a custom hosts domain mapping, as described in the project's README.md), which could be living at http://dev.example.com:8080, to communicate with the backend at https://stag.example.com. Notice the http vs https and the dev. prefix for local development vs the "staging" stag. prefix. By default, it includes origins for production, staging, and development, with ports commonly used during local development by several popular frontend frameworks (Vue with :8080, React, Angular).

  • smtp_port: Port to use to send emails via SMTP. By default 587.

  • smtp_host: Host to use to send emails; it will be given by your email provider (e.g. Mailgun, Sparkpost).

  • smtp_user: The user to use in the SMTP connection. The value will be given by your email provider.

  • smtp_password: The password to be used in the SMTP connection. The value will be given by the email provider.

  • smtp_emails_from_email: The email account to use as the sender in the notification emails; it would typically be an info@ style address at your domain.

  • postgres_password: Postgres database password. Use the method above to generate it. (You could easily modify it to use MySQL, MariaDB, etc).

  • pgadmin_default_user: PGAdmin default user, to log-in to the PGAdmin interface.

  • pgadmin_default_user_password: PGAdmin default user password. Generate it with the method above.

  • traefik_constraint_tag: The tag to be used by the internal Traefik load balancer (for example, to divide requests between backend and frontend) for production. Used to separate this stack from any other stack you might have. This should identify each stack in each environment (production, staging, etc).

  • traefik_constraint_tag_staging: The Traefik tag to be used while on staging.

  • traefik_public_constraint_tag: The tag that should be used by stack services that should communicate with the public.

  • flower_auth: Basic HTTP authentication for Flower, in the form user:password. By default: "admin:changethis".

  • sentry_dsn: Key URL (DSN) of Sentry, for live error reporting. You can use the open source version or a free account. E.g.: https://1234abcd:[email protected]/30.

  • docker_image_prefix: Prefix to use for Docker image names. If you are using GitLab Docker registry it would be based on your code repository. E.g.: git.example.com/development-team/my-awesome-project/.

  • docker_image_backend: Docker image name for the backend. By default, it will be based on your Docker image prefix, e.g.: git.example.com/development-team/my-awesome-project/backend. And depending on your environment, a different tag will be appended ( prod, stag, branch ). So, the final image names used will be like: git.example.com/development-team/my-awesome-project/backend:prod.

  • docker_image_celeryworker: Docker image for the celery worker. By default, based on your Docker image prefix.

  • docker_image_frontend: Docker image for the frontend. By default, based on your Docker image prefix.
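
All of these values end up as environment variables read by the generated project (the release notes below mention a single .env file and Pydantic's BaseSettings for configuration). The following is a minimal sketch of that settings pattern; the field names are illustrative and may not match the generated project's exact names:

# Sketch of reading generated environment variables with Pydantic's BaseSettings
# (pydantic v1 style); field names here are illustrative, not authoritative.
from typing import List

from pydantic import BaseSettings


class Settings(BaseSettings):
    SECRET_KEY: str
    FIRST_SUPERUSER: str
    FIRST_SUPERUSER_PASSWORD: str
    # Complex fields are parsed from JSON by default, e.g. '["https://dashboard.example.com"]'
    BACKEND_CORS_ORIGINS: List[str] = []
    SMTP_HOST: str = ""
    SMTP_PORT: int = 587
    POSTGRES_PASSWORD: str

    class Config:
        env_file = ".env"  # values can also come from the generated .env file


settings = Settings()  # raises a validation error if required values are missing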

How to deploy

This stack can be adjusted and used with several deployment options that are compatible with Docker Compose, but it is designed to be used in a cluster controlled with pure Docker in Swarm Mode with a Traefik main load balancer proxy handling automatic HTTPS certificates, using the ideas from DockerSwarm.rocks.

Please refer to DockerSwarm.rocks to see how to deploy such a cluster in 20 minutes.

More details

After using this generator, your new project (the directory created) will contain an extensive README.md with instructions for development, deployment, etc. You can pre-read the project README.md template here too.

Sibling project generators

Release Notes

Latest Changes

  • Update issue-manager. PR #211.
  • Add GitHub Sponsors button. PR #201.
  • Add consistent errors for env vars not set. PR #200.
  • Upgrade Traefik to version 2, keeping in sync with DockerSwarm.rocks. PR #199.
  • Add docs about reporting test coverage in HTML. PR #161.
  • Run tests with TestClient. PR #160.
  • Refactor backend:
    • Simplify configs for tools and format to better support editor integration.
    • Add mypy configurations and plugins.
    • Add types to all the codebase.
    • Update types for SQLAlchemy models with plugin.
    • Update and refactor CRUD utils.
    • Refactor DB sessions to use dependencies with yield (see the sketch after this changelog list).
    • Refactor dependencies, security, CRUD, models, schemas, etc., to simplify code and improve autocompletion.
    • Change from PyJWT to Python-JOSE as it supports additional use cases.
    • Fix JWT tokens using user email/ID as the subject in sub.
    • PR #158.
  • Add docs about removing the frontend, for an API-only app. PR #156.
  • Simplify scripts and development, update docs and configs. PR #155.
  • Simplify docker-compose.*.yml files, refactor deployment to reduce config files. PR #153.
  • Simplify env var files, merge to a single .env file. PR #151.
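
For reference, the "dependency with yield" DB session refactor mentioned above follows a well-known FastAPI pattern; here is a minimal sketch (the connection URL and names such as get_db and SessionLocal are illustrative, not necessarily this project's exact code):

# Minimal sketch of the "dependency with yield" DB-session pattern in FastAPI.
from typing import Generator

from fastapi import Depends, FastAPI
from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session, sessionmaker

engine = create_engine("postgresql://postgres:changethis@db/app", pool_pre_ping=True)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

app = FastAPI()


def get_db() -> Generator[Session, None, None]:
    db = SessionLocal()
    try:
        yield db  # the path operation below runs with this session
    finally:
        db.close()  # the session is always closed after the response


@app.get("/health")
def health(db: Session = Depends(get_db)) -> dict:
    db.execute(text("SELECT 1"))  # simple round trip to check the connection
    return {"status": "ok"}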

0.5.0

  • Make the Traefik public network a fixed default of traefik-public as done in DockerSwarm.rocks, to simplify development and iteration of the project generator. PR #150.
  • Update to PostgreSQL 12. PR #148. by @RCheese.
  • Use Poetry for package management. Initial PR #144 by @RCheese.
  • Fix Windows line endings for shell scripts after project generation with Cookiecutter hooks. PR #149.
  • Upgrade Vue CLI to version 4. PR #120 by @br3ndonland.
  • Remove duplicate login tag. PR #135 by @Nonameentered.
  • Fix showing the email in the dashboard when the user has no full name. PR #129 by @rlonka.
  • Format code with Black and Flake8. PR #121 by @br3ndonland.
  • Simplify SQLAlchemy Base class. PR #117 by @airibarne.
  • Update CRUD utils for users, handling password hashing. PR #106 by @mocsar.
  • Use . instead of source for interoperability. PR #98 by @gucharbon.
  • Use Pydantic's BaseSettings for settings/configs and env vars. PR #87 by @StephenBrown2.
  • Remove package-lock.json to let everyone lock their own versions (depending on OS, etc).
  • Simplify Traefik service labels PR #139.
  • Add email validation. PR #40 by @kedod.
  • Fix typo in README. PR #83 by @ashears.
  • Fix typo in README. PR #80 by @abjoker.
  • Fix function name read_item and response code. PR #74 by @jcaguirre89.
  • Fix typo in comment. PR #70 by @daniel-butler.
  • Fix Flower Docker configuration. PR #37 by @dmontagu.
  • Add new CRUD utils based on DB and Pydantic models. Initial PR #23 by @ebreton.
  • Add normal user testing Pytest fixture. PR #20 by @ebreton.

0.4.0

  • Fix security on resetting a password. Receive token as body, not query. PR #34.

  • Fix security on resetting a password. Receive it as body, not query. PR #33 by @dmontagu.

  • Fix SQLAlchemy class lookup on initialization. PR #29 by @ebreton.

  • Fix SQLAlchemy operation errors on database restart. PR #32 by @ebreton.

  • Fix locations of scripts in generated README. PR #19 by @ebreton.

  • Forward arguments from script to pytest inside container. PR #17 by @ebreton.

  • Update development scripts.

  • Read Alembic configs from env vars. PR #9 by @ebreton.

  • Create DB Item objects from all Pydantic model's fields.

  • Update Jupyter Lab installation and util script/environment variable for local development.

0.3.0

  • PR #14:

    • Update CRUD utils to use types better.
    • Simplify Pydantic model names, from UserInCreate to UserCreate, etc.
    • Upgrade packages.
    • Add new generic "Items" models, crud utils, endpoints, and tests. To facilitate re-using them to create new functionality. As they are simple and generic (not like Users), it's easier to copy-paste and adapt them to each use case.
    • Update endpoints/path operations to simplify code and use new utilities, prefix and tags in include_router.
    • Update testing utils.
    • Update linting rules, relax vulture to reduce false positives.
    • Update migrations to include new Items.
    • Update project README.md with tips about how to start with backend.
  • Upgrade Python to 3.7 as Celery is now compatible too. PR #10 by @ebreton.

0.2.2

0.2.1

  • Fix documentation for path operation to get user by ID. PR #4 by @mpclarkson in FastAPI.

  • Set /start-reload.sh as a command override for development by default.

  • Update generated README.

0.2.0

PR #2:

  • Simplify and update backend Dockerfiles.
  • Refactor and simplify backend code, improve naming, imports, modules and "namespaces".
  • Improve and simplify Vuex integration with TypeScript accessors.
  • Standardize frontend components layout, buttons order, etc.
  • Add local development scripts (to develop this project generator itself).
  • Add logs to startup modules to detect errors early.
  • Improve FastAPI dependency utilities, to simplify and reduce code (to require a superuser).

0.1.2

  • Fix path operation to update self-user, set parameters as body payload.

0.1.1

Several bug fixes since initial publication, including:

  • Order of path operations for users.
  • Frontend sending login data in the correct format.
  • Add https://localhost variants to CORS.

License

This project is licensed under the terms of the MIT license.

Comments
  • sqlalchemy queue pool limit lockup/timeout

    While doing some testing to see how this FastAPI-with-SQLAlchemy setup would hold up, my server seemed to lock up when running 100 concurrent requests. If I ran the requests sequentially it was totally fine.

    sqlalchemy.exc.TimeoutError: QueuePool limit of size 5 overflow 10 reached,

    Is it possible I've made some error while trying to mimic the structure of the code base? Or is it possible the app can lock up in the middleware with the SQLAlchemy session implementation?

    Has anyone tested the performance of this cookiecutter project?

    answered 
    opened by jklaw90 61
  • Is this project still maintained?

    Hello,

    I see that there are a bunch of PRs and the master branch has not been updated since 2020.

    Should someone make an official fork of this? Or something? It's a very good project; it would be sad if it became obsolete :/

    opened by sorasful 19
  • This cookiecutter but without the frontend

    For a few of my projects, I have used this cookiecutter but removed the frontend part.

    Would it be useful to create a cookiecutter that followed this project, but removed the frontend? It would allow the project to focus on purely the API, with the frontend in a separate project.

    It would also make it easier to swap out the dashboard part and use something like https://github.com/marmelab/react-admin in a separate project.

    opened by nyejon 18
  • Simplify CRUD definitions, make a clearer distinction between schemas and models

    This PR facilitates the creation of the basic CRUD operations for new db_models objects.

    Motivation: get DRYer

    If you don't care for IDE-embedded auto-completion, you will not need to copy/paste app.crud.items.py into an app.crud.your_model.py file anymore. One line is sufficient in app.crud.__init__.py (with the appropriate import):

    your_model = CrudBase(YourModel)
    

    If you care about "intellisense", then you will still have a bit of copy-pasting for the sake of auto-completion, and leverage your favorite IDE. But you will still gain time (and lines of code)

    Solution: The PR therefore defines a CrudBase class, whose instances provide the get/get_multi/create/update/delete/remove operations for a given SQLAlchemy object. I have tried to be as detailed as possible in the documentation of app.crud.base.py.

    Along with this new tool, I have added a new SubItem model that showcases the usage of CrudBase. I have also modified Item with the same objective. Tests have been added or updated accordingly.

    because we should be DRY 🚀

    Bonus 1: reflect the FastAPI documentation by using schemas (for Pydantic) and models (for SQLAlchemy)

    I have updated the tree structure to reflect the latest documentation from FastAPI (https://fastapi.tiangolo.com/tutorial/sql-databases/), hence renaming models to schemas and db_models to models.

    Bonus 2: run the tests more easily

    When running sh tests.sh from the root directory, arguments are forwarded to scripts/test.sh and then to .../test-start.sh.

    That allows the user to pass whatever arguments they would pass to pytest, like -x -v -k filter --pdb (pick your favorite).

    When sh tests.sh fails and you want to quickly run the tests again and again (without deleting the entire testing-project and generating it again, including the Docker image), you can use sh test-again.sh, which will rsync the testing-project source code with your latest modifications.

    Bonus 3: easy casting from schema to model (and vice versa)

    Not only have I tried to be very explicit about which kind of object is used, i.e.

    from app.models import subitem as models_subitem
    from app.schemas import subitem as schemas_subitem
    
    subitem = CrudBase(models_subitem.SubItem, schemas_subitem.SubItem)
    

    But the models also provide 2 functions to ease the casting:

    from fastapi.encoders import jsonable_encoder  # needed by from_schema below


    class CustomBase(object):
    
        def to_schema(self, schema_cls):
            return schema_cls(**self.__dict__)
    
        @classmethod
        def from_schema(cls, schema_obj):
            return cls(**jsonable_encoder(schema_obj))
    

    which allows you to easily cast a model instance to any schema you provide, or create the model object from a schema instance.

    Those 2 functions are used in the base CRUD definitions of update and create, and make sure that all types will be properly cast (in particular types that are not directly JSON-serializable, like datetime or enums).

    Bonus 4: normal user for tests

    without the need to create a new one in each and every test...

    That basically comes from PR #20

    opened by ebreton 16
  • [BUG] SQLAlchemy InvalidRequestError on relationships if one of the model is not yet imported

    Describe the bug

    The application crashes at start-up, when initializing data, if:

    1. a relationship is defined ...
    2. ... with a model not already imported at the time of execution.
    backend_1        | INFO:__main__:Starting call to '__main__.init', this is the 2nd time calling it.
    backend_1        | INFO:__main__:Service finished initializing
    backend_1        | INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
    backend_1        | INFO  [alembic.runtime.migration] Will assume transactional DDL.
    backend_1        | INFO  [alembic.runtime.migration] Running upgrade  -> d4867f3a4c0a, First revision
    backend_1        | INFO  [alembic.runtime.migration] Running upgrade d4867f3a4c0a -> ea9cad5d9292, Added SubItem models
    backend_1        | INFO:__main__:Creating initial data
    backend_1        | Traceback (most recent call last):
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/ext/declarative/clsregistry.py", line 294, in __call__
    backend_1        |     x = eval(self.arg, globals(), self._dict)
    backend_1        |   File "<string>", line 1, in <module>
    backend_1        | NameError: name 'SubItem' is not defined
    ...
    backend_1        | sqlalchemy.exc.InvalidRequestError: When initializing mapper mapped class Item->item, expression 'SubItem' failed to locate a name ("name 'SubItem' is not defined"). If this is a class name, consider adding this relationship() to the <class 'app.db_models.item.Item'> class after both dependent classes have been defined.
    base-project_backend_1 exited with code 1
    

    To Reproduce

    Create a new db_models/subitems.py (it could be copied from item.py).

    It is important that this new model has a relationship to another one, e.g. Item.

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.orm import relationship
    
    from app.db.base_class import Base
    
    
    class SubItem(Base):
        id = Column(Integer, primary_key=True, index=True)
        title = Column(String, index=True)
        description = Column(String, index=True)
        item_id = Column(Integer, ForeignKey("item.id"))
        item = relationship("Item", back_populates="subitems")
    

    Adapt db_models/item.py with the new relationship

    ...
    
    class Item(Base):
        ...
        subitems = relationship("SubItem", back_populates="item")
    

    Declare the new SubItem in db/base.py as per the documentation

    # Import all the models, so that Base has them before being
    # imported by Alembic
    from app.db.base_class import Base  # noqa
    from app.db_models.user import User  # noqa
    from app.db_models.item import Item  # noqa
    from app.db_models.subitem import SubItem  # noqa
    

    Re-build and start the application. The full traceback follows

    backend_1        | INFO:__main__:Creating initial data
    backend_1        | Traceback (most recent call last):
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/ext/declarative/clsregistry.py", line 294, in __call__
    backend_1        |     x = eval(self.arg, globals(), self._dict)
    backend_1        |   File "<string>", line 1, in <module>
    backend_1        | NameError: name 'SubItem' is not defined
    backend_1        |
    backend_1        | During handling of the above exception, another exception occurred:
    backend_1        |
    backend_1        | Traceback (most recent call last):
    backend_1        |   File "/app/app/initial_data.py", line 21, in <module>
    backend_1        |     main()
    backend_1        |   File "/app/app/initial_data.py", line 16, in main
    backend_1        |     init()
    backend_1        |   File "/app/app/initial_data.py", line 11, in init
    backend_1        |     init_db(db_session)
    backend_1        |   File "/app/app/db/init_db.py", line 12, in init_db
    backend_1        |     user = crud.user.get_by_email(db_session, email=config.FIRST_SUPERUSER)
    backend_1        |   File "/app/app/crud/user.py", line 16, in get_by_email
    backend_1        |     return db_session.query(User).filter(User.email == email).first()
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/scoping.py", line 162, in do
    backend_1        |     return getattr(self.registry(), name)(*args, **kwargs)
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 1543, in query
    backend_1        |     return self._query_cls(entities, self, **kwargs)
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 168, in __init__
    backend_1        |     self._set_entities(entities)
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 200, in _set_entities
    backend_1        |     self._set_entity_selectables(self._entities)
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 231, in _set_entity_selectables
    backend_1        |     ent.setup_entity(*d[entity])
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 4077, in setup_entity
    backend_1        |     self._with_polymorphic = ext_info.with_polymorphic_mappers
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 855, in __get__
    backend_1        |     obj.__dict__[self.__name__] = result = self.fget(obj)
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/mapper.py", line 2135, in _with_polymorphic_mappers
    backend_1        |     configure_mappers()
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/mapper.py", line 3229, in configure_mappers
    backend_1        |     mapper._post_configure_properties()
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/mapper.py", line 1947, in _post_configure_properties
    backend_1        |     prop.init()
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/interfaces.py", line 196, in init
    backend_1        |     self.do_init()
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py", line 1860, in do_init
    backend_1        |     self._process_dependent_arguments()
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py", line 1922, in _process_dependent_arguments
    backend_1        |     self.target = self.entity.persist_selectable
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 855, in __get__
    backend_1        |     obj.__dict__[self.__name__] = result = self.fget(obj)
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py", line 1827, in entity
    backend_1        |     argument = self.argument()
    backend_1        |   File "/usr/local/lib/python3.7/site-packages/sqlalchemy/ext/declarative/clsregistry.py", line 306, in __call__
    backend_1        |     % (self.prop.parent, self.arg, n.args[0], self.cls)
    backend_1        | sqlalchemy.exc.InvalidRequestError: When initializing mapper mapped class Item->item, expression 'SubItem' failed to locate a name ("name 'SubItem' is not defined"). If this is a class name, consider adding this relationship() to the <class 'app.db_models.item.Item'> class after both dependent classes have been defined.
    base-project_backend_1 exited with code 1
    

    Expected behavior: the application should have started normally.

    Additional context: In most real use cases, the application would define some CRUD operations on the newly defined model and consequently import it here and there, thus making it available at the time the initial data is created.

    Nevertheless, the error is so annoying and obscure (when it happens) that it deserves a safeguard (see my PR for a suggestion).

    opened by ebreton 14
  • [Question] Why is there no pipenv / poetry for installing dependencies?

    I would assume that one or the other (or even a requirements.txt) would be used for setting up the Python dependencies.

    I've seen so MANY nice libraries/abstractions already used throughout this cookiecutter that I'm surprised there is no strict way of controlling Python package dependency versions.

    opened by stratosgear 13
  • Psycopg2 Error while deploying to AWS ECS FARGATE

    I am a newbie to Docker and backend web development, so this issue might seem trivial. I am working on deploying the project to ECS Fargate using ecs-cli. However, ecs-cli does not support several Docker parameters (e.g. network). So in order to deploy it to ECS Fargate, I pushed all the images to AWS Elastic Container Registry and then created a new docker-compose file as follows:

    version: '3'
    services:
        backend:
            image: 123456789123.dkr.ecr.ap-south-1.amazonaws.com/fast-api/backend
            ports:
                - "8888:8888"
            env_file:
                - .env
            environment:
                - SERVER_NAME=${DOMAIN?Variable not set}
                - SERVER_HOST=https://${DOMAIN?Variable not set}
                # Allow explicit env var override for tests
                - SMTP_HOST=${SMTP_HOST}
            logging:
                driver: awslogs
                options: 
                    awslogs-group: tutorialv3
                    awslogs-region: ap-south-1
                    awslogs-stream-prefix: fastapi-docker
        celeryworker:
            image: 123456789123.dkr.ecr.ap-south-1.amazonaws.com/fast-api/celeryworker
            env_file:
                - .env  
            environment:
                - SERVER_NAME=${DOMAIN?Variable not set}
                - SERVER_HOST=https://${DOMAIN?Variable not set}
                # Allow explicit env var override for tests
                - SMTP_HOST=${SMTP_HOST?Variable not set}
            logging:
                driver: awslogs
                options: 
                    awslogs-group: tutorialv3
                    awslogs-region: ap-south-1
                    awslogs-stream-prefix: fastapi-docker
        
        frontend:
            image: 123456789123.dkr.ecr.ap-south-1.amazonaws.com/fast-api/frontend
            logging:
                driver: awslogs
                options: 
                    awslogs-group: tutorialv3
                    awslogs-region: ap-south-1
                    awslogs-stream-prefix: fastapi-docker
    
        queue:
            image: 123456789123.dkr.ecr.ap-south-1.amazonaws.com/fast-api/rabbitmq
            logging:
                driver: awslogs
                options: 
                    awslogs-group: tutorialv3
                    awslogs-region: ap-south-1
                    awslogs-stream-prefix: fastapi-docker
                
        pgadmin:
            image: 123456789123.dkr.ecr.ap-south-1.amazonaws.com/fast-api/pgadmin4
            ports:
                - "5050:5050"
            env_file:
                - .env
            logging:
                driver: awslogs
                options: 
                    awslogs-group: tutorialv3
                    awslogs-region: ap-south-1
                    awslogs-stream-prefix: fastapi-docker
    
        db:
            image: 123456789123.dkr.ecr.ap-south-1.amazonaws.com/fast-api/db
            ports:
                - "5432:5432"
            env_file:
                - .env
            environment:
                - PGDATA=/var/lib/postgresql/data/pgdata
            logging:
                driver: awslogs
                options: 
                    awslogs-group: tutorialv3
                    awslogs-region: ap-south-1
                    awslogs-stream-prefix: fastapi-docker
    
        flower:
            image: 123456789123.dkr.ecr.ap-south-1.amazonaws.com/fast-api/flower
            ports:
                - "5555:5555"
            env_file:
                - .env
            command:
                - "--broker=amqp://[email protected]:5672//"
    

    Along with this there are 2 other files:

    • .env
    • ecs-params.yml (contains the task definitions for the containers, with memory & CPU units; the format is linked in the guide below)

    Using the guide, I am able to deploy the containers and the cluster reaches a stable state with all containers running. However, I am not able to log in. Upon checking the logs, I came across this for the backend and celeryworker: INFO:__main__:Starting call to '__main__.init', this is the 300th time calling it. ERROR:__main__:(psycopg2.OperationalError) could not translate host name "db" to address: Name or service not known. Since the networks parameter is not supported, the containers are not able to communicate with each other and the backend / celeryworker psycopg2 clients are not able to connect to the db.

    Since all the containers are deployed in a single cluster, they can communicate over localhost, and networking is possible only through awsvpc. Is the Traefik network mandatory for the project to run on ECS Fargate? Is there a workaround to use the 'networks' parameter in docker compose? Is BACKEND_CORS_ORIGIN preventing the backend container from accessing the db?

    opened by umang-mistry-bo 12
  • Ability to run backend separately

    It's been up to 2 days now that I've been struggling with the Postgres Docker container, and it would have been great if there were any guidelines / a ready compose file to launch it locally and debug it locally.

    opened by orihomie 12
  • backend does not start

    docker-compose up -d starts each one of the services, but the backend service fails with the following:

    docker-compose logs backend

    Attaching to iaas_backend_1
    backend_1        | Checking for script in /app/prestart.sh
    backend_1        | There is no script /app/prestart.sh
    backend_1        | INFO: Uvicorn running on http://0.0.0.0:80 (Press CTRL+C to quit)
    backend_1        | INFO: Started reloader process [1]
    backend_1        | ERROR: Error loading ASGI app. Import string ":app" must be in format "<module>:<attribute>".
    backend_1        | INFO: Stopping reloader process [1]
    

    I tried to get into the backend's bash to try debugging, but it also fails:

    docker-compose exec backend bash

    ERROR: No container found for backend_1

    opened by monatis 11
  • Backend not starting

    Hello, I've tried running the project locally on a Windows server (using cookiecutter), but the backend is not starting. docker logs shows: Checking for script in /app/prestart.sh Running script /app/prestart.sh : not foundad.sh: 2: /app/prestart.sh:

    opened by olegasdo 11
  • Poetry RuntimeError workaround

    It all started when I tried to use a celery.chord to run two tasks in parallel, followed by another task to consume their result. I received an exception similar to the one described here: https://stackoverflow.com/q/45240564/95989.

    This issue is not about that. I found this comment https://github.com/tiangolo/full-stack-fastapi-postgresql/issues/293#issuecomment-708665327 , which appeared to resolve a similar issue:

    After deleting all the volumes and images and rebuilding everything again with docker-compose up -d, it works now!

    So I decided to try that as well. Unfortunately, upon attempting to rebuild, I encountered this error:

    #7 3.111 Installing version: 1.1.6
    #7 3.111   - Downloading poetry-1.1.6-linux.tar.gz (72.33MB)
    #7 3.111 Traceback (most recent call last):
    #7 3.111   File "<stdin>", line 1086, in <module>
    #7 3.111   File "<stdin>", line 1082, in main
    #7 3.111   File "<stdin>", line 363, in run
    #7 3.111   File "<stdin>", line 528, in install
    #7 3.111   File "<stdin>", line 548, in make_lib
    #7 3.111   File "<stdin>", line 625, in _make_lib
    #7 3.111 RuntimeError: Hashes for poetry-1.1.6-linux.tar.gz do not match: a812d3e0e1ff93b6a69fa04bd2fdd81bd502d8788314387fb554b5807c2628f6 != 763eae856b0eca592c0fecb0247d8486b8e72eb35215be2fe993a32e9c1f1c06
    

    Guessing that it had something to do with Docker caching, I tried appending ?foo=bar to the URL of get-poetry.py in backend.dockerfile in order to bust the cache:

     11 RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py?foo=bar | POETRY_HOME=/opt/poetry python && \
    ...
    

    Unfortunately this did not resolve the issue.

    Disconcertingly, I'm not able to find any results for this issue online, i.e. https://www.google.com/search?q=%22get-poetry.py%22+%22runtimeerror%22+%22hashes%22+%22do+not+match%22 returns only four results, none of which are helpful.

    The source appears to be https://github.com/python-poetry/poetry/blob/master/get-poetry.py#L624 , and that line has been unchanged for three years.

    Any suggestions would be appreciated!

    opened by abrichr 10
  • Forked rebuild & upgraded to FastAPI 0.88, SQLAlchemy 1.4, & Nuxt 3.0

    I forked this repo more than a year ago for a major project and, with help from @br3ndonland, consolidated it on his Inboard FastAPI images, added support for Neo4j (so I could use it for search), and fixed some of the things that were broken.

    With several big generational updates to core dependencies, I've spent the last two weeks updating and refactoring. All dependencies are upgraded to the latest long-term-support and working versions, and I built a tiny blog-type app as the base stack. It is still a Cookiecutter template.

    Key updates:

    • Inboard 0.10.4 -> 0.37.0, including FastAPI 0.88
    • SQLAlchemy 1.3 -> 1.4
    • Celery 4.4 -> 5.2
    • Postgresql 12 -> 14
    • Neo4j pinned to 5.2.0
    • Nuxt.js 2.5 -> 3.0
    • Pinia for state management (replaces Vuex)
    • Vee-Validate 3 -> 4
    • Tailwind 2.2 -> 3.2
    • Authentication refresh token tables and schemas for long-term issuing of a new JWT access token.

    (screenshot)

    I also rebuilt the auth 'n auth to make it a bit more reliable. I haven't rebuilt the tests yet.

    Here are instructions for use, and I'd appreciate you trying it out and letting me know whether it is helpful.

    Projects it was used for:

    • openLocal.uk - visual data explorer for a quarterly-updated commercial location database for England and Wales.
    • Qwyre.com - online ereader and collaborative self-publishing.

    Next project for the rebuild: Enqwyre.com - implementation of a "no code" method for schema-to-schema data transformations for interoperability.

    opened by turukawa 1
  • Error when running docker compose up?

    I'm a beginner; I used the commands in this workflow:

    pip install cookiecutter
    cookiecutter https://github.com/tiangolo/full-stack-fastapi-postgresql
    
    # cd to project and run with docker compose up
    docker compose up
    

    (screenshot: error-docker-compose)

    opened by zergreen 2
  • Need to use "importlib-metadata<5.0" to use Celery

    I ran into an issue when running docker-compose up -d:

    ImportError: cannot import name 'Celery' from 'celery' (/usr/local/lib/python3.7/site-packages/celery/__init__.py)
    

    I fixed it in this way:

    (at data-backend/app/pyproject.toml)

    ...
    [tool.poetry.dependencies]
    python = "^3.7"
    uvicorn = "^0.11.3"
    fastapi = "^0.54.1"
    python-multipart = "^0.0.5"
    email-validator = "^1.0.5"
    requests = "^2.23.0"
    celery = "^4.4.2"
    passlib = {extras = ["bcrypt"], version = "^1.7.2"}
    tenacity = "^6.1.0"
    pydantic = "^1.4"
    emails = "^0.5.15"
    raven = "^6.10.0"
    gunicorn = "^20.0.4"
    jinja2 = "^2.11.2"
    psycopg2-binary = "^2.8.5"
    alembic = "^1.4.2"
    sqlalchemy = "^1.3.16"
    pytest = "^5.4.1"
    python-jose = {extras = ["cryptography"], version = "^3.1.0"}
    importlib-metadata = "<5.0"
    ...
    
    

    Reference Issue

    https://github.com/celery/celery/issues/7783

    opened by whatisand 1
  • Project Broken Due To Docker / Docker Compose - Unsupported Compose File Version - Root Additional Property Name Is Not Allowed

    This project does not work anymore.

    With a clean cookiecutter, if you follow the directions on dockerswarm.rocks, you get the error: "Unsupported compose file version 1.0".

    This happens when you get to docker stack in /scripts/deploy.sh

    I think it's because docker stack and docker compose are not compatible.

    If you try to add a version to the top of the file after the compose config, i.e. ...

    (echo -e "version: '3.8'\n"; DOMAIN=${DOMAIN?Variable not set} \
    TRAEFIK_TAG=${TRAEFIK_TAG?Variable not set} \
    STACK_NAME=${STACK_NAME?Variable not set} \
    TAG=${TAG?Variable not set} \
    docker-compose \
    -f docker-compose.yml \
    config)
    

    ... then you still get this error "(root) additional property **name** is not allowed"

    investigate 
    opened by akamspsw 3