Overview

S22-W4111-HW-1-0:
W4111 - Intro to Databases HW0 and HW1

Introduction

This project is the implementation template for HW 0 and HW 1 for both the programming and non-programming tracks.

HW 0 - All Students

You have completed the first step, which is cloning the project template.

Note: You are Columbia students. You should be able to install software and follow instructions.

MySQL:

  • Download the installation files for MySQL Community Server.

    • Make sure you download for the correct operating system.
    • If you are on Mac, make sure you choose the correct architecture. ARM is for Apple silicon. x86 is for Intel-based Macs.
    • On Windows, you can download and use the MSI.
  • Follow the installation instructions for MySQL. There are official instructions and many online tutorials.

  • Remember the root user ID and password that you set during installation. Also, choose "Legacy Authentication" when prompted.

    • If you forget your root user or password, you are on your own. The TAs and I will not fix any problems due to forgetting the information.
    • Also, if you say something like, "It did not prompt me for a user ID and password when I installed ...," we will laugh. We will say something like, "Sure. 20 million MySQL installations asked for the information, but it decided not to ask you."
    • If you tell us that you are sure that you are entering the correct user ID and password, we will laugh. We will say something like, "Which is more likely: that a DATABASE forgot something, or that you did?"
  • You only need to install the server. All other software packages are optional. A quick way to verify your server and root credentials from Python is sketched after this list.
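
The sketch below is an optional sanity check, not part of the assignment. It assumes the third-party pymysql package, which is not required by the course and would have to be installed separately (for example, pip install pymysql); the password shown is a placeholder for the root password you chose during installation.

```python
# Minimal connectivity check for a local MySQL server.
# Assumes the optional pymysql package is installed; replace the
# password placeholder with the root password you set during installation.
import pymysql

connection = pymysql.connect(
    host="localhost",
    port=3306,
    user="root",
    password="your-root-password",  # placeholder -- use your own
)
try:
    with connection.cursor() as cursor:
        cursor.execute("SELECT VERSION()")
        print("Connected to MySQL server version:", cursor.fetchone()[0])
finally:
    connection.close()
```

If this prints a version string, the server is running and your credentials work.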

Anaconda:

  • I strongly recommend uninstalling any existing version of Anaconda. If you choose not to uninstall previous versions, you may hit issues, and you are on your own if those issues are due to conflicting versions of Anaconda during the semester.

  • Download the most recent version of Anaconda.

  • Follow the installation instructions. Choose "Install for me" when prompted. If you hit a problem and I find your Anaconda installation in the wrong directory, you are on your own. If you say something like, "But, it did not give me that option," you can guess what will happen. A quick way to check which interpreter you are actually running is sketched below.
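
As an optional check, running the snippet below (in a notebook cell or from the command line) prints the interpreter path and version; if Anaconda installed correctly, the path should point into your Anaconda or conda environment directory.

```python
# Sanity check: print the interpreter that is actually running.
import sys

print("Interpreter:", sys.executable)
print("Version:", sys.version)
```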

DataGrip:

  • Download DataGrip. Make sure you choose the correct operating system and processor architecture.

  • Follow the installation instructions.

  • Apply for a student license.

  • When you receive confirmation of your student license, set the license information in DataGrip.

HW0: Non-Programming

Step 1: Initial Files

  1. Create a folder in the project of the form <uni>_src, where <uni> is your UNI. I created an example, which is dff9_src.

  2. Create a file in that directory of the form <uni>_HW0.

  3. Copy the Jupyter notebook file dff9_src/dff9_HW0.ipynb into the directory you created, replacing dff9 in the file name with your UNI.

  4. Do the same for dff9_HW0.py. (A small script that automates these copies is sketched after this list.)
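
If you prefer to script these steps, the sketch below creates the <uni>_src folder and copies the two template files with renamed file names. It assumes you run it from the repository root; "abc1234" is a placeholder UNI, not a real one.

```python
# Hypothetical helper that creates <uni>_src and copies the template
# files from dff9_src, renaming them for your UNI.
import shutil
from pathlib import Path

uni = "abc1234"  # replace with your UNI
src_dir = Path("dff9_src")
dst_dir = Path(f"{uni}_src")
dst_dir.mkdir(exist_ok=True)

for template in ("dff9_HW0.ipynb", "dff9_HW0.py"):
    target = dst_dir / template.replace("dff9", uni)
    shutil.copyfile(src_dir / template, target)
    print("Created", target)
```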

Step 2: Jupyter Notebook

  • Start Anaconda.

  • Open Jupyter Notebook in Anaconda.

  • Navigate to the directory where you cloned the repository, and then go into the folder you created.

  • Open the notebook (the file ending in .ipynb).

  • The remaining steps in HW0: Non-Programming are in the notebook that you opened.

HW 0: Programming

  • Complete the steps for HW0: Non-Programming.

  • The programming track is not "harder" than non-programming. The initial setup is a little more work, however.

  • Download and install PyCharm. Make sure you install the Professional Edition.

  • Follow the instructions to set the license key using the JetBrains account you used to get the DataGrip license.

  • Start PyCharm, navigate to and open the project that you cloned from GitHub.

  • Follow the instructions for creating a new virtual Conda environment for the project.

  • Select the root folder in the project, right-click, and add a new Python Package named <uni>_web_src. My example is dff9_web_src.

  • Copy the files from dff9_web_src into the package you created.

  • Follow the instructions for adding a package to your virtual environment. You should add the package flask.

  • Right-click on the file application.py that you copied and select Run. A console window will open and show a URL. Copy the URL.

  • Open a browser. Paste the URL and append '/health'. My URL looks like http://172.20.1.14:5000/health. Yours may be a little different.

  • Hit enter. You should see a health message. Take a screenshot of the browser window and add the image file to the directory you created. (A minimal sketch of what application.py does is shown after this list.)
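
For reference, the application you copy should behave roughly like the minimal Flask sketch below: it exposes a /health route that returns a small status message. This is an illustration of the idea, not the actual contents of dff9_web_src/application.py.

```python
# Minimal Flask application with a /health route, similar in spirit to
# the application.py you copy into <uni>_web_src. Illustrative sketch only.
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/health")
def health():
    # Returning a simple JSON status confirms the server is up.
    return jsonify({"status": "healthy"})


if __name__ == "__main__":
    # host="0.0.0.0" makes the printed URL reachable from your browser
    # the way the instructions describe; the port may differ on your machine.
    app.run(host="0.0.0.0", port=5000)
```

Running a file like this prints a URL in the console; appending /health to that URL in a browser returns the status message described above.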

Owner

Donald F. Ferguson
Senior Technical Fellow, Chief SW Architect, Ansys, Inc.; Adjunct Professor, Dept. of Computer Science, Columbia University; CTO and Co-Founder, Seeka.TV