
Overview

Project Scope

The scope of this project is to build a data warehouse on Google Cloud Platform that helps answer common business questions as well as power dashboards. To do that, a conceptual data model and a data pipeline will be defined.

Architecture

Data is uploaded to a Google Cloud Storage bucket. GCS acts as the data lake where all raw files are stored. Data is then loaded into staging tables on BigQuery. The ETL process takes data from those staging tables and creates data mart tables. An Airflow instance can be deployed on a Google Compute Engine instance or locally to orchestrate the pipeline.

Here are the justifications for the technologies used:

  • Google Cloud Storage: acts as the data lake; object storage that scales to hold any volume of raw files.
  • Google BigQuery: acts as the database engine for the data warehouse, data marts, and ETL processes. BigQuery is a serverless solution that can easily and effectively process petabyte-scale datasets.
  • Apache Airflow: orchestrates the workflow by issuing load commands to BigQuery and SQL queries for the ETL process. Airflow does not process any data itself, which allows the architecture to scale (see the load sketch just after this list).
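
To make the "Airflow only issues commands" point concrete, here is a minimal sketch, using the google-cloud-bigquery Python client, of the load path the architecture describes: a raw file already sitting in the GCS data lake is loaded straight into a BigQuery staging table, so no data ever flows through the orchestrator. The staging table name is an illustrative assumption; the bucket, file, and dataset names are the ones used elsewhere in this project.

# Sketch only: load a raw CSV from the data lake into a BigQuery staging table.
# BigQuery reads the file directly from GCS, so the machine running this code
# (or Airflow, in the real pipeline) never touches the data itself.
from google.cloud import bigquery

client = bigquery.Client(project="cloud-data-lake")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition="WRITE_TRUNCATE",
)

load_job = client.load_table_from_uri(
    "gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv",
    "IMMIGRATION_DWH_STAGING.AIRPORT_CODES",  # assumed staging table name
    job_config=job_config,
)
load_job.result()  # block until the load job finishes

print(client.get_table("IMMIGRATION_DWH_STAGING.AIRPORT_CODES").num_rows, "rows loaded")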

Data Model

The database is designed following the star-schema principle, with 1 fact table and 7 dimension tables. A sample query against this schema follows the table list below.

image

  • F_IMMIGRATION_DATA: contains immigration information such as arrival date, departure date, visa type, gender, country of origin, etc.
  • D_TIME: contains date dimensions for the date columns
  • D_PORT: contains port_id and port_name
  • D_AIRPORT: contains airports within a state
  • D_STATE: contains state_id and state_name
  • D_COUNTRY: contains country_id and country_name
  • D_WEATHER: contains average weather for a state
  • D_CITY_DEMO: contains demographic information for a city
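
As an example of the kind of business question this schema is meant to answer, the sketch below counts arrivals by destination state and country of origin. The join key column names on the fact table (state_id, country_id) are assumptions inferred from the dimension descriptions above and may differ from the actual warehouse columns.

# Sketch only: a typical star-schema query, run through the BigQuery client.
# Column names on F_IMMIGRATION_DATA are assumed, not taken from the project.
from google.cloud import bigquery

client = bigquery.Client(project="cloud-data-lake")

QUERY = """
SELECT s.state_name,
       c.country_name,
       COUNT(*) AS arrivals
FROM `IMMIGRATION_DWH.F_IMMIGRATION_DATA` f
JOIN `IMMIGRATION_DWH.D_STATE`   s ON f.state_id   = s.state_id
JOIN `IMMIGRATION_DWH.D_COUNTRY` c ON f.country_id = c.country_id
GROUP BY s.state_name, c.country_name
ORDER BY arrivals DESC
LIMIT 10
"""

for row in client.query(QUERY).result():
    print(row.state_name, row.country_name, row.arrivals)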

Data pipeline

This project uses Airflow for orchestration.

image

A DummyOperator, start_pipeline, kicks off the pipeline, followed by 4 load operations. These operations load data from the GCS bucket into BigQuery staging tables. The immigration_data is loaded as parquet files, while the others are CSV formatted. Row-count checks run after each load to BigQuery.
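
For reference, here is a minimal sketch of one load-and-check pair, assuming the apache-airflow-providers-google package. The real tasks live in dags/cloud-data-lake-pipeline.py; the DAG settings, task ids, and staging table name below are illustrative, and the operators default to the google_cloud_default connection configured later in the setup.

# Sketch only, not the project's exact DAG: load the parquet immigration data
# from the GCS data lake into a BigQuery staging table, then check that rows
# actually arrived. The staging table name is an assumption.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="load_sketch",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    load_immigration_data = GCSToBigQueryOperator(
        task_id="load_immigration_data",
        bucket="cloud-data-lake-gcp",
        source_objects=["immigration_data/*.parquet"],
        destination_project_dataset_table="cloud-data-lake.IMMIGRATION_DWH_STAGING.IMMIGRATION_DATA",
        source_format="PARQUET",
        write_disposition="WRITE_TRUNCATE",
    )

    # BigQueryCheckOperator fails the task if the first value returned is
    # falsy, so a zero row count stops the pipeline here.
    check_immigration_data = BigQueryCheckOperator(
        task_id="check_immigration_data",
        sql="SELECT COUNT(*) FROM `IMMIGRATION_DWH_STAGING.IMMIGRATION_DATA`",
        use_legacy_sql=False,
    )

    load_immigration_data >> check_immigration_data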

Next, the pipeline loads 3 master data objects from the I94 data dictionary. Then the F_IMMIGRATION_DATA table is created and checked to make sure it contains no duplicates. The other dimension tables are also created, and the pipeline finishes.
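
The fact-table step can be sketched the same way. The SQL below is a simplified stand-in for the project's actual transformation, the table name inside the staging dataset is an assumption, and the duplicate check simply asserts that every row of F_IMMIGRATION_DATA is unique.

# Sketch only: build F_IMMIGRATION_DATA from staging, then fail the run if any
# duplicate rows remain. In the real pipeline these tasks share the DAG with the
# load tasks; a separate minimal DAG is used here to stay self-contained.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryCheckOperator,
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="fact_table_sketch",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    create_f_immigration_data = BigQueryInsertJobOperator(
        task_id="create_f_immigration_data",
        configuration={
            "query": {
                # Simplified stand-in for the project's actual fact-table SQL.
                "query": """
                    CREATE OR REPLACE TABLE `IMMIGRATION_DWH.F_IMMIGRATION_DATA` AS
                    SELECT DISTINCT * FROM `IMMIGRATION_DWH_STAGING.IMMIGRATION_DATA`
                """,
                "useLegacySql": False,
            }
        },
    )

    # TRUE when no row appears twice; FALSE fails the check and the pipeline.
    check_no_duplicates = BigQueryCheckOperator(
        task_id="check_f_immigration_data_duplicates",
        sql="""
            SELECT COUNT(*) = COUNT(DISTINCT TO_JSON_STRING(t))
            FROM `IMMIGRATION_DWH.F_IMMIGRATION_DATA` AS t
        """,
        use_legacy_sql=False,
    )

    create_f_immigration_data >> check_no_duplicates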

Scenarios

Data increase by 100x

The current infrastructure can easily support a 100x increase in data size. GCS and BigQuery can handle petabyte-scale data. Airflow is not a bottleneck since it only issues commands to other services.

Pipelines run daily at 7 AM. How would the dashboard update? Would it still work?

Schedule the DAG to run daily at 7 AM. Set up DAG retries and email/Slack notifications on failure.
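
As a sketch, the DAG-level settings for that schedule and failure handling could look like the following; the dag_id, email address, and retry values are illustrative, not the project's actual configuration.

# Sketch only: run daily at 07:00, retry failed tasks, and email on failure.
# A Slack alert could be wired in through on_failure_callback if preferred.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.dummy import DummyOperator

default_args = {
    "owner": "airflow",
    "retries": 3,                         # retry each failed task 3 times
    "retry_delay": timedelta(minutes=5),  # wait 5 minutes between retries
    "email": ["data-team@example.com"],   # placeholder address
    "email_on_failure": True,
}

with DAG(
    dag_id="cloud_data_lake_pipeline",    # illustrative id
    default_args=default_args,
    start_date=datetime(2021, 1, 1),
    schedule_interval="0 7 * * *",        # every day at 7 AM
    catchup=False,
) as dag:
    start_pipeline = DummyOperator(task_id="start_pipeline")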

Make it available to 100+ people

BigQuery auto-scales, so it can easily handle 100+ people accessing it. If more people or services need access to the database, we can add steps to write the results to a horizontally scalable NoSQL database such as Datastore, Cassandra, or Bigtable.

Project Instructions

GCP setup

Follow these steps:

  • Create a project on GCP
  • Enable billing by adding a credit card (you have free credits worth $300)
  • Navigate to IAM and create a service account
  • Grant the service account the Project Owner role. This is convenient for this project but not recommended for a production system. Create a key for the account and keep it somewhere safe.

Create a bucket on your project and upload the data with the following structure:

gs://cloud-data-lake-gcp/airports/:
gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv
gs://cloud-data-lake-gcp/airports/airport_codes.json

gs://cloud-data-lake-gcp/cities/:
gs://cloud-data-lake-gcp/cities/us-cities-demographics.csv
gs://cloud-data-lake-gcp/cities/us_cities_demo.json

gs://cloud-data-lake-gcp/immigration_data/:
gs://cloud-data-lake-gcp/immigration_data/part-00000-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00001-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00002-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00003-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00004-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00005-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00006-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00007-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00008-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00009-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00010-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00011-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00012-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00013-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet

gs://cloud-data-lake-gcp/master_data/:
gs://cloud-data-lake-gcp/master_data/I94ADDR.csv
gs://cloud-data-lake-gcp/master_data/I94CIT_I94RES.csv
gs://cloud-data-lake-gcp/master_data/I94PORT.csv

gs://cloud-data-lake-gcp/weather/:
gs://cloud-data-lake-gcp/weather/GlobalLandTemperaturesByCity.csv
gs://cloud-data-lake-gcp/weather/temperature_by_city.json

You can copy the data to your own bucket by running the following:

gsutil cp -r gs://cloud-data-lake-gcp/ gs://{your_bucket_name}
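
If you want to confirm the files landed in your bucket, a quick sanity check with the google-cloud-storage client (a sketch, not part of the project code) is:

# Sketch only: list what the copy put under immigration_data/ in your bucket.
from google.cloud import storage

client = storage.Client(project="cloud-data-lake")         # your project id
for blob in client.list_blobs("cloud-data-lake-gcp",        # your bucket name
                              prefix="immigration_data/"):
    print(blob.name, blob.size)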

Local setup

Clone the project and set up the local environment as follows:

Install Docker if it's not already installed. You can find the resources to do that here.

Install the Astronomer CLI following the instructions here.

Run the following commands to bring up the Airflow instance:

astro d start

You can look at the logs by running make logs if you need to debug something. You can access and manage the pipeline by entering the following address in a browser:

localhost:8080/admin/

If everything is set up correctly, you will see the following screen:

image

Navigate to Admin -> Connections and paste in the credentials for the following two connections: bigquery_default and google_cloud_default

image

Navigate to the main DAG at dags/cloud-data-lake-pipeline.py and change the following parameters to match your own setup:

project_id = 'cloud-data-lake'
staging_dataset = 'IMMIGRATION_DWH_STAGING'
dwh_dataset = 'IMMIGRATION_DWH'
gs_bucket = 'cloud-data-lake-gcp'

You can then trigger the DAG and the pipeline will run.

The data warehouse

The final data warehouse looks like this: img
