dbt (data build tool) adapter for Oracle Autonomous Database

Overview

dbt-oracle version 1.0.0

dbt (data build tool) adapter for the Oracle database. dbt "adapters" are responsible for adapting dbt's functionality to a given database. See https://docs.getdbt.com/docs/contributing/building-a-new-adapter

Prior to version 1.0.0, dbt-oracle was created and maintained by Indicium in their GitHub repo. Contributors to that repo are credited with laying the groundwork and maintaining the adapter up to version 0.4.3. From version 1.0.0 onwards, dbt-oracle is maintained and distributed by Oracle.

What is dbt?

dbt does the T in ELT (Extract, Load, Transform). To work with dbt, you need a copy of your data already loaded in your warehouse.

dbt features

  • With dbt, you can express all transforms with SQL SELECT statements
    • Different materialization strategies
      • view
      • table
      • incremental; selectively rebuilds only new rows
      • ephemeral; one model is interpolated into dependent models as a Common Table Expression (CTE)
    • No need to write boilerplate code
      • All code to create tables or views is generated using macros.
    • Idempotence; rerun models
      • If your source data were to stop updating, successive runs of your transformations would still result in the same tables and views in your warehouse.
      • If your production deployment of your transformations were interrupted, the next run of the transformations would result in the same tables and views as if the deployment had not been interrupted.
      • If you manually triggered transformations between scheduled runs, the scheduled run would result in the same tables and views as if the manual runs had not been triggered.
    • All transformation code is accessible and can be version controlled.
  • Dependency resolution
    • Use the ref() function: select * from {{ ref('MODEL_NAME') }}
    • dbt automatically resolves dependencies between models and builds a Directed Acyclic Graph (DAG). Each path in the DAG can be independently executed using multiple threads.
    • Interpolates the database schema name
  • Includes a built-in testing framework to check model accuracy
    • not null
    • unique
    • accepted values
    • relationships
    • custom tests (see the singular test sketch after this list)
  • Generate documentation for your project and render it as a website.
  • Use macros to write reusable SQL
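
For example, a custom (singular) test is simply a SQL file under tests/ that selects rows violating an assumption; dbt reports the test as failed if any rows are returned. A minimal sketch, assuming the sales_internet_channel model from the example below and a hypothetical amount_sold column:

--tests/assert_no_negative_amounts.sql (hypothetical singular test)
-- dbt treats any returned row as a test failure
SELECT *
FROM {{ ref('sales_internet_channel') }}
WHERE amount_sold < 0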

An example

dbt model

--models/sales_internet_channel.sql
{{ config(materialized='table') }}
WITH sales_internet AS (
       SELECT * FROM {{ source('sh_database', 'sales') }}
       WHERE channel_id = 4 )
SELECT * FROM sales_internet

dbt compiles the above SQL template into the DDL statement below.

CREATE TABLE dbt_test.sales_internet_channel AS
WITH sales_internet AS (
         SELECT * from sh.sales
         WHERE channel_id = 4 )
SELECT * FROM sales_internet
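
The incremental materialization mentioned in the feature list can be sketched in a similar way: on repeated runs, only rows newer than what is already in the target table are processed. This is a minimal sketch, assuming a time_id column exists in the source (the model name and unique_key are hypothetical):

--models/sales_internet_incremental.sql
{{ config(materialized='incremental', unique_key='sales_id') }}
SELECT * FROM {{ source('sh_database', 'sales') }}
WHERE channel_id = 4
{% if is_incremental() %}
  -- on incremental runs, only pick up rows newer than the ones already loaded
  AND time_id > (SELECT MAX(time_id) FROM {{ this }})
{% endif %}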

For dbt documentation, refer to https://docs.getdbt.com/docs/introduction

Installation

dbt-oracle can be installed via the Python Package Index (PyPI) using pip

pip install -U dbt-oracle

Support

dbt-oracle provides support for the following:

  • Python versions 3.6, 3.7, 3.8 and 3.9
  • Autonomous Database versions 19c and 21c
  • OS
    • Linux
    • MacOS
    • Windows

Core dependencies

dbt-oracle requires the following three Python packages.

dbt-core

  • Open source framework for data transformation
  • Jinja Templating and core SQL compilation logic
  • The latest version of dbt-core is preferred; from version 1.0.0, dbt-core supports Python 3.7 or higher
  • For Python 3.6, pip will fall back to version 0.21.1 of dbt-core

cx-Oracle

  • Python driver for Oracle Database

dataclasses; python_version < '3.7'

  • The dataclasses package was added to the Python standard library in Python 3.7. This is a conditional dependency, required only for Python 3.6

Getting Started

Create a dbt project for the Oracle database using the dbt init command. The init command is interactive and will help you get started with a new project.

dbt init will:

  • ask you the name of the project
  • ask you which database adapter you are using, i.e. oracle
  • prompt you to specify the necessary connection details

This example shows the initialization of the test project dbt_oracle_test_project:

>> dbt init

Running with dbt=1.0.4
Enter a name for your project (letters, digits, underscore): dbt_oracle_test_project
Which database would you like to use?
[1] oracle
  Enter a number: 1
  protocol (tcp or tcps) [tcps]: 
  host (adb.<oci-region>.oraclecloud.com) [{{ env_var('DBT_ORACLE_HOST') }}]: 
  port [1522]: 
  user [{{ env_var('DBT_ORACLE_USER') }}]: 
  password [{{ env_var('DBT_ORACLE_PASSWORD') }}]: 
  service (service name in tnsnames.ora) [{{ env_var('DBT_ORACLE_SERVICE') }}]: 
  dbname (database name in which dbt objects should be created) [{{ env_var('DBT_ORACLE_DATABASE') }}]: 
  schema (database schema in which dbt objects should be created) [{{ env_var('DBT_ORACLE_SCHEMA') }}]: 
  threads (1 or more) [1]: 4
Profile dbt_oracle_test_project written to ~/.dbt/profiles.yml using target's profile_template.yml and your supplied values. Run 'dbt debug' to validate the connection.
Your new dbt project "dbt_oracle_test_project" was created!

The dbt init command will then:

  1. Create the following folder, named after the project, with sample files to get you started

     ├── README.md
     ├── analyses
     ├── dbt_project.yml
     ├── macros
     ├── models
     │   └── example
     ├── seeds
     ├── snapshots
     └── tests
    
  2. Create a connection profile on your local machine. The default location is ~/.dbt/profiles.yml

    Next, configure the connection-related parameters and test whether the dbt connection works using the dbt debug command:

    >> dbt debug
     
    os info: macOS-11.6-x86_64-i386-64bit
       Using profiles.yml file at ~/.dbt/profiles.yml
       Using dbt_project.yml file at /dbt_oracle_test_project/dbt_project.yml
       Configuration:
        profiles.yml file [OK found and valid]
        dbt_project.yml file [OK found and valid]
       Required dependencies:
       - git [OK found]
       Connection:
        user: ***
        database: ga01d76d2ecd5e0_db202112221108
        schema: ***
        protocol: tcps
        host: adb.us-ashburn-1.oraclecloud.com
        port: 1522
        service: <service_name>_high.adb.oraclecloud.com
        connection_string: None
        shardingkey: []
        supershardingkey: []
        cclass: None
        purity: None
        Connection test: [OK connection ok]
    
       All checks passed!
    

Documentation [TODO]

Link to the homepage - https://oracle.github.io/dbt-oracle

Link to documentation - https://dbt-oracle.readthedocs.io

Contributing

This project welcomes contributions from the community. Before submitting a pull request, please review our contribution guide.

Security

Please consult the security guide for our responsible security vulnerability disclosure process.

License

dbt-oracle is licensed under the Apache 2.0 License.

Comments
  • DBT Snapshot filling in dbt_valid_to for most recent record

    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Current Behavior

    After running the dbt snapshot command and building a snapshot table with the invalidate_hard_deletes parameter set to True, various records are having their dbt_valid_to dates populated with the latest dbt_updated_at value, even though these records are still considered valid and have not been removed from their source (staging) tables. We are using the following code to build each snapshot:

    {% snapshot tmp_device_ad %}
    
    {{
        config(
               target_schema = 'dbt',
               strategy = 'timestamp',
               unique_key = 'snap_id',
               updated_at = 'update_dt',
               invalidate_hard_deletes = True
              )
    }}
    
    select ad.*
    
      from {{ ref('stg_device_ad') }} ad
    
    {% endsnapshot %}
    

    The number of records affected is inconsistent between snapshots (sometimes a few dozen, sometimes a few hundred, from tables of up to 30k rows), but it appears to affect the same records on each run.

    Expected Behavior

    When building a dbt snapshot table with the invalidate_hard_deletes parameter set to True, I expect dbt to fill in the dbt_valid_to value ONLY if a record no longer exists in the source table, OR if a record has changed in the source table, so that a new record is created in the snapshot table and the previous record is marked as invalid. An active record in the snapshot table SHOULD NOT be marked with a dbt_valid_to date; instead it should have a value of null.

    Steps To Reproduce

    1. Environment: Windows 10 Enterprise 21H2 19044.2251
    2. Config:
    {% snapshot tmp_device_ad %}
    
    {{
        config(
               target_schema = 'dbt',
               strategy = 'timestamp',
               unique_key = 'snap_id',
               updated_at = 'update_dt',
               invalidate_hard_deletes = True
              )
    }}
    
    select ad.*
    
      from {{ ref('stg_device_ad') }} ad
    
    {% endsnapshot %}
    
    3. Run: dbt snapshot
    4. No errors are generated

    Relevant log output using --debug flag enabled

    No response

    Environment

    - OS: Windows 10 Enterprise 21H2 19044.2251
    - Python: 3.9.11
    - dbt: 1.3.0
    

    What Oracle database version are you using dbt with?

    19c

    Additional Context

    This issue looks to be similar, if not identical, to #2390 from dbt-core a few years ago that was resolved in v0.17.0. I've created a fork to play around with this and see if the two issues are related.

    bug 
    opened by dhodge250 26
  • Fix renaming the old_relation in Oracle 12c+ when using DBT table materialization with missing database name in profiles.yml

    Hi,

    first of all, thank you very much for releasing an official DBT Oracle DB adapter/profile with an up-to-date DBT version!!! 😍 I tried this adapter today locally together with some Airbyte modifications and found that version 1.0.0 does not rename the already existing destination table to a backup table when using DBT table materialization and therefore I get the following error:

    1 of 1 ERROR creating table model staging.test................................................................. [ERROR in 0.96s]
    2022-05-04 08:41:55 normalization > 08:41:55  oracle adapter: Oracle error: ORA-00955: name is already used by an existing object
    

    With dbt run --debug

    Database Error in model test (models/generated/airbyte_tables/staging/test.sql)
    2022-05-04 08:42:00 normalization > 08:41:55.566833 [error] [MainThread]:   ORA-00955: name is already used by an existing object
    2022-05-04 08:42:00 normalization > 08:41:55.567157 [error] [MainThread]:   compiled SQL at ../build/run/airbyte_utils/models/generated/airbyte_tables/staging/test.sql
    

    This is caused by the fact that the database name is passed to the function adapter.get_relation(), which should not be the case for Oracle DB. https://github.com/oracle/dbt-oracle/blob/v1.0.0/dbt/include/oracle/macros/materializations/table/table.sql#L22 In Oracle DB the database name is never used as a prefix in a SELECT, as it does not even work: ✅ schema.relation, ❌ database.schema.relation

    Also, in the DBT/Jinja macro oracle__list_relations_without_caching, the filter on table_catalog (database) should be removed, as usually no one will pass the database name in profiles.yml.
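
    For illustration, the direction of the fix discussed here is to resolve the existing relation without the database component. A minimal sketch in a materialization macro might look like this (illustrative only, not the exact patch):

    {#- resolve the existing relation without a database prefix (illustrative) -#}
    {%- set old_relation = adapter.get_relation(database=none,
                                                schema=this.schema,
                                                identifier=this.identifier) -%}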

    Fixes #4 Fixes techindicium/dbt-oracle#8 Fixes techindicium/dbt-oracle#32

    opened by ThoSap 21
  • [Bug] get_relation reporting dbt found an approximate match

    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Current Behavior

    Hello, I am the author of the dbt_constraints package and I am attempting to add support for dbt-oracle.

    I am receiving the following error when my code runs adapter.get_relation():

    Encountered an error:
    Compilation Error in operation dbt_constraints-on-run-end-0 (./dbt_project.yml)
      When searching for a relation, dbt found an approximate match. Instead of guessing 
      which relation to use, dbt will move on. Please delete DBT_USER.dim_customers, or rename it to be less ambiguous.
      Searched for: dbt_user.dim_customers
      Found: DBT_USER.dim_customers
    

    I have a schema named DBT_USER and I have specified the schema in upper case in my profile.

    I believe the root cause is how dbt-oracle changes the case of objects in your catalog.sql:

              select
                  lower(tables.table_catalog) as "table_database",
                  lower(tables.table_schema) as "table_schema",
                  lower(tables.table_name) as "table_name",
                  lower(tables.table_type) as "table_type",
                  all_tab_comments.comments as "table_comment",
                  lower(columns.column_name) as "column_name",
                  ordinal_position as "column_index",
                  lower(case
                    when data_type like '%CHAR%'
                    then data_type || '(' || cast(char_length as varchar(10)) || ')'
                    else data_type
                  end) as "column_type",
                  all_col_comments.comments as "column_comment",
                  tables.table_schema as "table_owner"
              from tables
    

    I work for Snowflake and our database uses exactly the same conventions as Oracle for how identifiers are treated with and without quotes. Identifiers become upper case if not quoted and case sensitive if quoted. Therefore, I think it can be helpful to compare some of your code against the dbt-Labs code for dbt-Snowflake. In the dbt-Labs implementation of dbt-Snowflake they do not adjust the case of our databases, schemas, tables, or columns in the catalog.sql

    There could also be an issue with the oracle__get_columns_in_relation macro:

          select
              lower(column_name) as "name",
              lower(data_type) as "type",
              char_length as "character_maximum_length",
              numeric_precision as "numeric_precision",
              numeric_scale as "numeric_scale"
          from columns
    

    In the snowflake__get_columns_in_relation macro dbt-labs uses DESCRIBE TABLE to retrieve columns and they do not change the case of column names or data types (upper case just like Oracle).

    I reviewed all the SQL lookups dbt Labs uses for Snowflake and I consistently see they use UPPER() = UPPER() in their WHERE clauses, but in all other cases their metadata lookups preserve the case for databases, schemas, tables, and columns. I recommend dbt-oracle do the same.
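
    For illustration, a case-insensitive metadata lookup of the kind recommended here could compare identifiers like this (the query is only indicative of the pattern, not the adapter's actual SQL):

    select owner, table_name
      from all_tables
     where upper(owner) = upper('{{ relation.schema }}')
       and upper(table_name) = upper('{{ relation.identifier }}')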

    Expected Behavior

    No response

    Steps To Reproduce

    No response

    Relevant log output using --debug flag enabled

    No response

    Environment

    - OS: MacOS
    - Python: 3.8.13
    - dbt: 1.0.6
    

    What Oracle database version are you using dbt with?

    Oracle XE 21c

    Additional Context

    No response

    bug 
    opened by sfc-gh-dflippo 16
  • [Bug] Incremental upsert strategy fails with ORA-00911 & ORA-01747

    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Current Behavior

    If a table contains column names such as

    • "DESC"
    • "_AIRBYTE_AB_ID"
    • "_AIRBYTE_EMITTED_AT"
    • "_AIRBYTE_NORMALIZED_AT"

    the dbt-oracle incremental strategy fails with ORA-01747: invalid user.table.column, table.column, or column specification (for example, for the column name desc -> "DESC"), or with ORA-00911: invalid character for the required internal Airbyte system columns, e.g. "_AIRBYTE_AB_ID" or "_AIRBYTE_NORMALIZED_AT".

    This is due to the fact that the following macro call returns the columns lowercase and unquoted instead of uppercase and quoted, regardless of whether the OracleQuotePolicy for schema and identifier is set to true. https://github.com/oracle/dbt-oracle/blob/7348c2b21883c156583adc8282876c7a993eb13d/dbt/include/oracle/macros/materializations/incremental/helpers.sql#L37-L38

    Set schema and identifier to true

    https://github.com/oracle/dbt-oracle/blob/7348c2b21883c156583adc8282876c7a993eb13d/dbt/include/oracle/dbt_project.yml#L6-L9 https://github.com/oracle/dbt-oracle/blob/7348c2b21883c156583adc8282876c7a993eb13d/dbt_adbs_test_project/dbt_project.yml#L8-L11

    The two for loops that fill in the columns for the when matched then update set and when not matched then insert(...) values(...) clauses are causing the issue, as they produce, for example:

    merge into staging.ln_tfgld008 target
      using staging.o$pt_ln_tfgld008125639 temp
      on (temp."_AIRBYTE_AB_ID" = target."_AIRBYTE_AB_ID")
    when matched then
      update set
        target.bloc = temp.bloc,
        target.desc = temp.desc, -- desc is not an issue as long it is quoted "DESC"
        target.timestamp = temp.timestamp, -- timestamp surprisingly is not an issue and does not trigger ORA-01747
        target._airbyte_ab_id = temp._airbyte_ab_id, -- triggers ORA-00911 without quoting
        target._airbyte_emitted_at = temp._airbyte_emitted_at, -- triggers ORA-00911 without quoting
        target._airbyte_normalized_at = temp._airbyte_normalized_at, -- triggers ORA-00911 without quoting
        target._airbyte_ln_tfgld008_hashid = temp._airbyte_ln_tfgld008_hashid -- triggers ORA-00911 without quoting
    when not matched then
      insert(bloc, desc, timestamp, _airbyte_ab_id, _airbyte_emitted_at, _airbyte_normalized_at, _airbyte_ln_tfgld008_hashid)
      values(
          temp.bloc,
          temp.desc,
          temp.username,
          temp.timestamp,
          temp._airbyte_ab_id,
          temp._airbyte_emitted_at,
          temp._airbyte_normalized_at,
          temp._airbyte_ln_tfgld008_hashid
      )
    

    Expected Behavior

    The dbt-oracle incremental upsert strategy should not fail for users who do not have control over the table and column names. If a column name uses a reserved keyword or starts with an invalid character, dbt-oracle should by default treat the column as quoted and uppercase.
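
    For illustration, a version of the generated MERGE with the problematic columns quoted and uppercased would look roughly like this (abbreviated, using the same hypothetical table and column names as above):

    merge into staging.ln_tfgld008 target
      using staging.o$pt_ln_tfgld008125639 temp
      on (temp."_AIRBYTE_AB_ID" = target."_AIRBYTE_AB_ID")
    when matched then
      update set
        target.bloc = temp.bloc,
        target."DESC" = temp."DESC",
        target."_AIRBYTE_EMITTED_AT" = temp."_AIRBYTE_EMITTED_AT"
    when not matched then
      insert (bloc, "DESC", "_AIRBYTE_AB_ID", "_AIRBYTE_EMITTED_AT")
      values (temp.bloc, temp."DESC", temp."_AIRBYTE_AB_ID", temp."_AIRBYTE_EMITTED_AT")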

    Steps To Reproduce

    1. Set the schema and identifier flag to true https://github.com/oracle/dbt-oracle/blob/7348c2b21883c156583adc8282876c7a993eb13d/dbt_adbs_test_project/dbt_project.yml#L8-L11
    2. Add one of the above columns that trigger the ORA error to https://github.com/oracle/dbt-oracle/blob/7348c2b21883c156583adc8282876c7a993eb13d/dbt_adbs_test_project/seeds/seed.csv
    3. Add the column to the incremental materialization test https://github.com/oracle/dbt-oracle/blob/7348c2b21883c156583adc8282876c7a993eb13d/dbt_adbs_test_project/models/us_product_sales_channel_ranking.sql
    4. Run the tests https://github.com/oracle/dbt-oracle/blob/7348c2b21883c156583adc8282876c7a993eb13d/dbt_adbs_test_project/README.md

    Relevant log output using --debug flag enabled

    No response

    Environment

    - OS: Oracle Linux Server 8.6
    - Python: 3.9.13
    - dbt: 1.0.6
    

    What Oracle database version are you using dbt with?

    Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production

    Additional Context

    No response

    enhancement testing fixed 
    opened by ThoSap 15
  • Creation of Constraints fails

    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Current Behavior

    For some models I create primary keys with the following macro:

    {% macro primary_key(this, column) %}
    {% set index_name = 'PK_'+ this.name %}
    
    DECLARE
        constraint_name_exists  EXCEPTION;
        PRAGMA EXCEPTION_INIT(constraint_name_exists, -2264);
        name_exists  EXCEPTION;
        PRAGMA EXCEPTION_INIT(name_exists, -955);
    BEGIN
        EXECUTE IMMEDIATE 'alter table {{this.name}} add constraint {{index_name}} primary key ({{column}}) nologging';
    END;
    {% endmacro %}
    

    When running the model for the first time everything works fine; after the second run, the macro throws exception ORA-02264.

    It looks as if the constraints are not deleted before the table is recreated, so an object with the constraint's name still exists in the database. And due to the renaming of the table from table__dbt_tmp to table, the constraint is no longer assigned to the table either.

    Expected Behavior

    Delete all existing constraints and indexes during renaming of the table in oracle__rename_relation:

    {% macro oracle__rename_relation(from_relation, to_relation) -%}
        {% call statement('drop_constraints') -%}
        BEGIN
            FOR r IN (SELECT table_name, constraint_name FROM user_constraints WHERE table_name = upper('{{ from_relation.include(False, False, True).quote(schema=False, identifier=False) }}')) LOOP
                EXECUTE IMMEDIATE 'alter table ' || r.table_name || ' drop constraint ' || r.constraint_name || ' CASCADE';
            END LOOP;
        END;
      {%- endcall %}
    
      {% call statement('drop_indexes') -%}
        BEGIN
            FOR r IN (SELECT table_name, index_name FROM user_indexes WHERE table_name = upper('{{ from_relation.include(False, False, True).quote(schema=False, identifier=False) }}')) LOOP
                EXECUTE IMMEDIATE 'DROP INDEX ' || r.index_name;
            END LOOP;
        END;
      {%- endcall %}
    
      {% call statement('rename_relation') -%}
        ALTER {{ from_relation.type }} {{ from_relation }} rename to {{ to_relation.include(schema=False) }}
      {%- endcall %}
    {% endmacro %}
    

    Steps To Reproduce

    No response

    Relevant log output using --debug flag enabled

    No response

    Environment

    - OS:Windows
    - Python: 3.8.10
    - dbt: 1.2.1
    

    What Oracle database version are you using dbt with?

    21c

    Additional Context

    No response

    bug 
    opened by sandhein 8
  • Feature enhancement to support quoting for column names

    • This addresses the dbt-oracle issue https://github.com/oracle/dbt-oracle/issues/8
    • The Oracle Adapter class exposes a new method called should_identifier_be_quoted(identifier), which returns true if the identifier is either in the list of Oracle keywords or does not start with a letter. This can be used in macros in the following manner:
      •      {% if adapter.should_identifier_be_quoted(col) == true %}
                 {% set quoted_column_name = quote ~ col ~ quote %}
             {% else %}
                 {% set quoted_column_name = col %}
             {% endif %}
        
    • The macro {{ column.name }} will return the quoted name if the column name is either in the list of Oracle keywords or does not start with a letter
    • List of Oracle keywords from the official documentation is defined in dbt/adapters/oracle/keyword_catalog.py
    • Handled column quoting in incremental materialization macros
    OCA Verified 
    opened by aosingh 8
  • [Bug] don't drop temporary table o$pt

    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Current Behavior

    The global temporary table is not deleted after running the model, due to an ORA-14452 error in the drop logic at [dbt-oracle/blob/main/dbt/include/oracle/macros/adapters.sql](https://github.com/oracle/dbt-oracle/blob/7348c2b21883c156583adc8282876c7a993eb13d/dbt/include/oracle/macros/adapters.sql#L259).

    example: create global temporary table TEST (object_id number, object_name varchar2(30)) on commit PRESERVE rows;

    Such tables get bound to the session with the first insert, and this binding is only removed when the session exits or by a truncate. If the binding exists, DDL cannot be performed on these temporary tables; if you try DDL on such a table you receive ORA-14452: attempt to create, alter or drop an index on temporary table already in use.

    Add the following before https://github.com/oracle/dbt-oracle/blob/7348c2b21883c156583adc8282876c7a993eb13d/dbt/include/oracle/macros/materializations/incremental/incremental.sql#L63:

    {% for rel in to_drop %} {% do adapter.truncate_relation(rel) %} {% endfor %}

    Expected Behavior

    The temporary table should be dropped.

    Steps To Reproduce

    1. create model materialized='incremental'

    Relevant log output using --debug flag enabled

    No response

    Environment

    - OS:MacOs
    - Python:3.9
    - dbt-core: 1.1.0
    

    What Oracle database version are you using dbt with?

    12.1.0

    Additional Context

    No response

    bug 
    opened by AlexMorti 7
  • Update adapters.sql

    These changes address issues when using custom schemas.

    1 - The create schema call verifies the database. As of 1.0.7, the Oracle adapter allows the database parameter to be unset, since Oracle doesn't allow a statement prefix to include the database name. A trickle-down effect of that change is that the 'create schema' check no longer works, hence why those lines are removed. There is even a dummy no-op already in that method, since you can't create schemas the dbt way in Oracle (schemas are users).

    2 - Once #1 is addressed and a custom schema is used, the rename command will not find the temp model table, since the schema is not passed as part of the rename statement. Furthermore, the rename statement will not work because Oracle doesn't support renaming tables that way. ALTER TABLE to the rescue.

    Changed from:

    rename <table_original_name> to <table_new_name>

    to:

    alter table <schema>.<original_name> rename to <table_new_name>

    Tested with Python 3.9.1, dbt-core 1.0.7, dbt-oracle 1.0.2.

    OCA Verified 
    opened by algol68 6
  • [Bug] "Compilation error" on second run: "When searching for a relation, dbt found an approximate match. Instead of guessing which relation to use, dbt will move on."

    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Current Behavior

    I am a beginner, trying to make a minimal working project using real data from an Oracle db that I have access to. My project consists of only a single model, which simply selects two columns from an existing table in my Oracle db.

    the model is called DBT_TEST_DM_SUBSET.sql

    its contents are simply this: SELECT MATERIAL_NUMBER_DESC, MATERIAL_GROUP_DESC FROM FCDR_V6_DM_MVW

    FCDR_V6_DM_MVW is a table (actually a materialized view) in a schema called EXPLORER.

    My project's name is "dan_test". When I run this at the command line: dbt run --select dan_test

    the first time everything is OK, and the view called DBT_TEST_DM_SUBSET is correctly created in my Oracle db. The second time I run dbt run, I get the compilation error quoted in the title (the original report included a screenshot).

    Using the flag "--full-refresh" does not help. I am aware this error often has to do with lower/upper casing. However, in my profiles.yml, my schema is also all-caps: EXPLORER. The model-defining file name itself is also in all-caps: DBT_TEST_DM_SUBSET.sql

    What could be causing this error? Because I am a raw beginner, I have no way of knowing if this is a bug, or if I have configured something incorrectly.

    I am on the brand-new version of dbt-oracle (1.3.0), but I also had the same problem before updating.

    Expected Behavior

    No response

    Steps To Reproduce

    No response

    Relevant log output using --debug flag enabled

    No response

    Environment

    - OS: macOS 12.6
    - Python: 3.8
    - dbt: 1.3.0
    

    What Oracle database version are you using dbt with?

    12c

    Additional Context

    No response

    bug 
    opened by iron0012 4
  • invalidate_hard_deletes=True directive does not work

    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Current Behavior

    If we have the following snapshot config:

    {{
        config(
          target_database='psbi',
          target_schema='dwh',
          unique_key='ID',
    
          strategy='check',
          check_cols='all',
          invalidate_hard_deletes=True
        )
    }}
    

    and run

    dbt snapshot

    we obtain the following error:

    04:46:44  Database Error in snapshot fct_piano_car (snapshots/fct_piano_car.sql)
    04:46:44    ORA-00905: missing keyword
    

    Expected Behavior

    No response

    Steps To Reproduce

    No response

    Relevant log output using --debug flag enabled

    No response

    Environment

    - OS: Ubuntu 20.04
    - Python:  3.8.10
    - dbt: 1.1.1
    - dbt-oracle: 1.1.1
    

    What Oracle database version are you using dbt with?

    19c

    Additional Context

    none

    bug testing fixed 
    opened by gbonazzoli 4
  • [Feature] oracle__make_temp_relation identifier uniqueness improvements

    Describe the Feature

    Hello. At the moment the temp relation's unique identifier uses the date string "%H%M%S", but when we run dbt in parallel for the same model with incremental materialization, this can cause an issue where the temporary table already exists. It's known that dbt doesn't officially support parallel runs, but in some cases they can be quite useful. Is it possible to add milliseconds ("%H%M%S%f"), or maybe replace that identifier with something else to make it more robust? A sketch of the idea follows.
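
    A minimal sketch of how such a suffix could be generated in a Jinja macro, assuming dbt's modules.datetime context (illustrative only, not the adapter's actual oracle__make_temp_relation implementation):

    {%- macro temp_relation_suffix() -%}
      {#- sub-second precision reduces the chance of name collisions in parallel runs -#}
      {{- modules.datetime.datetime.now().strftime('%H%M%S%f') -}}
    {%- endmacro -%}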

    Describe alternatives you've considered

    No response

    Who will this benefit?

    No response

    Anything else?

    No response

    enhancement 
    opened by nzlysenko 3
  • dbt-oracle 1.3.1 problem "oracle adapter: Oracle error: ORA-00972: identifier is too long" on dbt snapshot

    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Current Behavior

    Same project:

    dbt-oracle 1.3.0 works fine:

    [email protected]:~/airflow/dags/SALES_ORDERS_MANAGEMENT/snapshots# dbt snapshot
    05:54:07  oracle adapter: Running in cx mode
    05:54:08  Running with dbt=1.3.0
    05:54:08  Found 1 model, 0 tests, 3 snapshots, 0 analyses, 324 macros, 2 operations, 0 seed files, 15 sources, 0 exposures, 0 metrics
    05:54:08  
    05:54:08  
    05:54:08  Running 1 on-run-start hook
    05:54:08  1 of 1 START hook: dbt_project.on-run-start.0 .................................. [RUN]
    05:54:08  1 of 1 OK hook: dbt_project.on-run-start.0 ..................................... [OK in 0.00s]
    05:54:08  
    05:54:08  Concurrency: 8 threads (target='prod')
    05:54:08  
    05:54:08  1 of 3 START snapshot psofa.sales_orders ....................................... [RUN]
    05:54:08  2 of 3 START snapshot psofa.sales_orders_appog ................................. [RUN]
    05:54:08  3 of 3 START snapshot psofa.uniters_orders ..................................... [RUN]
    05:54:14  1 of 3 OK snapshotted psofa.sales_orders ....................................... [OK in 5.32s]
    05:54:14  2 of 3 OK snapshotted psofa.sales_orders_appog ................................. [OK in 5.90s]
    05:54:22  3 of 3 OK snapshotted psofa.uniters_orders ..................................... [OK in 13.28s]
    05:54:22  
    05:54:22  Running 1 on-run-end hook
    05:54:22  1 of 1 START hook: dbt_project.on-run-end.0 .................................... [RUN]
    05:54:22  1 of 1 OK hook: dbt_project.on-run-end.0 ....................................... [OK in 0.00s]
    05:54:22  
    05:54:22  
    05:54:22  Finished running 3 snapshots, 2 hooks in 0 hours 0 minutes and 14.15 seconds (14.15s).
    05:54:22  
    05:54:22  Completed successfully
    05:54:22  
    05:54:22  Done. PASS=3 WARN=0 ERROR=0 SKIP=0 TOTAL=3

    dbt-oracle 1.3.1 gives the following output:

    [email protected]:~/airflow/dags/SALES_ORDERS_MANAGEMENT/snapshots# dbt snapshot
    05:43:33  oracle adapter: Running in cx mode
    05:43:33  Running with dbt=1.3.1
    05:43:33  Found 1 model, 0 tests, 3 snapshots, 0 analyses, 325 macros, 2 operations, 0 seed files, 15 sources, 0 exposures, 0 metrics
    05:43:34  Running 1 on-run-start hook
    05:43:34  1 of 1 START hook: dbt_project.on-run-start.0 .................................. [RUN]
    05:43:34  1 of 1 OK hook: dbt_project.on-run-start.0 ..................................... [OK in 0.00s]
    05:43:34  Concurrency: 8 threads (target='prod')
    05:43:34  1 of 3 START snapshot psofa.sales_orders ....................................... [RUN]
    05:43:34  2 of 3 START snapshot psofa.sales_orders_appog ................................. [RUN]
    05:43:34  3 of 3 START snapshot psofa.uniters_orders ..................................... [RUN]
    05:43:34  oracle adapter: Oracle error: ORA-00972: identificativo troppo lungo
    05:43:34  3 of 3 ERROR snapshotting psofa.uniters_orders ................................. [ERROR in 0.46s]
    05:43:35  oracle adapter: Oracle error: ORA-00972: identificativo troppo lungo
    05:43:35  2 of 3 ERROR snapshotting psofa.sales_orders_appog ............................. [ERROR in 0.84s]
    05:43:39  1 of 3 OK snapshotted psofa.sales_orders ....................................... [OK in 4.96s]
    05:43:39  Running 1 on-run-end hook
    05:43:39  1 of 1 START hook: dbt_project.on-run-end.0 .................................... [RUN]
    05:43:39  1 of 1 OK hook: dbt_project.on-run-end.0 ....................................... [OK in 0.00s]
    05:43:39  Finished running 3 snapshots, 2 hooks in 0 hours 0 minutes and 5.84 seconds (5.84s).
    05:43:39  Completed with 2 errors and 0 warnings:
    05:43:39  Database Error in snapshot uniters_orders (snapshots/uniters_orders.sql)
    05:43:39    ORA-00972: identificativo troppo lungo
    05:43:39  Database Error in snapshot sales_orders_appog (snapshots/sales_orders_appog.sql)
    05:43:39    ORA-00972: identificativo troppo lungo
    05:43:39  Done. PASS=1 WARN=0 ERROR=2 SKIP=0 TOTAL=3

    Expected Behavior

    It is logical to expect the same behaviour.

    Steps To Reproduce

    1. pip install --upgrade dbt-oracle==1.3.1
    2. dbt snapshot

    Relevant log output using --debug flag enabled

    [email protected]:~/airflow/dags/SALES_ORDERS_MANAGEMENT/snapshots# dbt debug
    05:58:19  Running with dbt=1.3.1
    dbt version: 1.3.1
    python version: 3.8.10
    python path: /root/dbt/bin/python3
    os info: Linux-5.4.17-2136.307.3.5.el8uek.x86_64-x86_64-with-glibc2.29
    Using profiles.yml file at /root/.dbt/profiles.yml
    Using dbt_project.yml file at /root/airflow/dags/SALES_ORDERS_MANAGEMENT/dbt_project.yml
    
    05:58:19  oracle adapter: Running in cx mode
    Configuration:
      profiles.yml file [OK found and valid]
      dbt_project.yml file [OK found and valid]
    
    Required dependencies:
     - git [OK found]
    
    Connection:
      user: psofa
      database: psdb
      schema: psofa
      protocol: tcp
      host: 172.16.3.32
      port: 1521
      tns_name: PSDBPROD
      service: psdb
      connection_string: None
      shardingkey: []
      supershardingkey: []
      cclass: None
      purity: None
      retry_count: 1
      retry_delay: 3
      Connection test: [OK connection ok]
    
    All checks passed!
    

    Environment

    - OS: Ubuntu 20.04.5 LTS
    - Python: Python 3.8.10
    - dbt: 1.3.1
    

    What Oracle database version are you using dbt with?

    11.2.0.4

    Additional Context

    No response

    bug 
    opened by gbonazzoli 5
  • [Bug] multi schema materializations not supported without administrative privileges

    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Current Behavior

    multi schema materializations not supported without administrative privileges

    Expected Behavior

    DBT should be able to populate pre-created schemas without the administrative permissions required to execute a CREATE TABLE in those schemas.

    A proxy connect is not a viable workaround, as it is explicit and cannot be inferred implicitly, as far as I understand.

    Steps To Reproduce

    DBT can nicely generate schemas in other databases (Postgres, Snowflake, ...). However, this is tricky, as my DBT account has no administrative permissions. Rather, it is confined to 3 schemas: a_raw, b_staging, c_mart. While my user foo[a_raw] has proxy-connect privileges to all of the a, b, c schemas, DBT only has a single target (i.e. foo[a_raw]) that can be registered. This means I would need to break down the dbt run into 3 separate calls to the CLI, and also that the nice selector for upstream models no longer works due to insufficient permissions. How can I change the situation to allow the dbt user (foo) to read/write all a, b, c schemas? My DBA tells me that Oracle cannot grant CREATE permissions to any user other than the specific schema user (and administrators), but he is not going to hand me administrative privileges. What can I do to return to a dbt-native situation?

    This means that when explicitly changing the target user from foo[a_raw] to foo[b_staging] to foo[c_mart], CREATE permissions are there; otherwise they are not.

    Relevant log output using --debug flag enabled

    No response

    Environment

    - OS: windows
    - Python: 3.10
    - dbt: 1.3.1
    

    What Oracle database version are you using dbt with?

    11 and hopefully soon 19c

    Additional Context

    No response

    bug 
    opened by geoHeil 0
  • [Feature] affected row count

    Describe the Feature

    I would like to see a feature that shows affected row count, similar to sql%rowcount from PL/SQL. I'm aware of the variable "rows_affected" in run_results.json but currently it is always showing 0.

    Describe alternatives you've considered

    No response

    Who will this benefit?

    No response

    Anything else?

    No response

    enhancement 
    opened by Mirko-T 5
  • Bump certifi from 2021.10.8 to 2022.12.7 in /doc

    Bumps certifi from 2021.10.8 to 2022.12.7.

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    OCA Verified dependencies 
    opened by dependabot[bot] 0
  • [Feature] Adding a WHERE condition during MERGE update

    Hi,

    It would be nice to be able to control the incremental merge strategy by adding a WHERE clause to the WHEN MATCHED THEN part of the merge statement. As I see on the dbt-oracle GitHub, it seems the adapter doesn't support it.

    ON (src.id = tgt.id AND src.date = tgt.date)
    WHEN MATCHED THEN
        UPDATE SET
            tgt.end_date = src.end_date
        WHERE
            src.end_date <> tgt.end_date
    WHEN NOT MATCHED THEN
        INSERT (tgt.id, tgt.date, tgt.end_date)
        VALUES (src.id, src.date, src.end_date)
    

    I was advised in the #db-oracle channel on dbt Slack to raise a feature request so it gets tracked.

    Describe alternatives you've considered

    This is the response I got from @aosingh in the #db-oracle Slack channel:

    dbt-oracle implements the functionality conforming to the standard defined by dbt.
    However, I see that you need support for an arbitrary where clause condition during MERGE update. We will need to think about the interface to implement this. One way could be to introduce a dbt-oracle specific parameter. The challenge is to make this a generic config which can work for any where condition. The where condition can be on the target data or source data or both.
    You can also raise a feature request here - https://github.com/oracle/dbt-oracle/issues so that this gets tracked
    

    Who will this benefit?

    No response

    Anything else?

    No response

    enhancement 
    opened by philiphagglund 0
Releases (v1.3.1)
  • v1.3.1(Dec 16, 2022)

    This is a minor release upgrade with the following updates

    • Tracing is enabled for database connection from dbt-oracle client. Session information can be found in V$SESSION

    • Upgraded dependencies oracledb==1.2.1 and dbt-core==1.3.1

    • Fix for issue https://github.com/oracle/dbt-oracle/issues/52: dbt snapshots timestamp strategy where new records were being wrongly invalidated. The issue occurs only when update_dt, the column used to track changes, is of DATE type. Casting DATE to VARCHAR during hash (dbt_scd_id) computation drops the time part. The fix is to CAST update_dt to TIMESTAMP to preserve the time part (see the sketch after this list).

    • Addresses feature request https://github.com/oracle/dbt-oracle/issues/55 Global temporary relation name includes milliseconds o$pt_<base_relation_identifier>_<%H%M%S%f>
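
    For illustration, the casting difference behind the snapshot fix can be seen with a query like the following (with default NLS_DATE_FORMAT settings):

      -- CAST to VARCHAR2 formats a DATE with NLS_DATE_FORMAT, typically dropping the time part,
      -- while CAST to TIMESTAMP preserves the time component
      select cast(sysdate as varchar2(30))                                   as date_as_varchar,
             to_char(cast(sysdate as timestamp), 'YYYY-MM-DD HH24:MI:SS.FF') as date_as_timestamp
        from dual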

    Source code(tar.gz)
    Source code(zip)
    dbt-oracle-1.3.1.tar.gz(46.75 KB)
    dbt_oracle-1.3.1-py3-none-any.whl(89.74 KB)
  • v1.3.0(Oct 26, 2022)

    This is a minor release upgrade with the following updates

    • dbt-core and dbt-tests-adapter upgraded to the latest version v1.3.0
    • Streamline incremental materialization as per new dbt requirements
    • Consolidated timestamp based macros in dbt/include/oracle/util/timestamps.sql
    • Added new macros for data types. For example, BigInt and Int in dbt/include/oracle/util/data_types.sql
    • Added 11 new functional test cases. We now have a total of 71 test cases in the suite

    Note: Python models are not yet supported in dbt-oracle==1.3.0

    Source code(tar.gz)
    Source code(zip)
    dbt-oracle-1.3.0.tar.gz(46.48 KB)
    dbt_oracle-1.3.0-py3-none-any.whl(89.48 KB)
  • v1.2.1(Oct 12, 2022)

  • v1.2.0(Sep 7, 2022)

    Following features and fixes are included in this release

    • Dependencies dbt-core and dbt-tests-adapter are upgraded to the latest version 1.2.1
    • The following cross-db macros are implemented in the adapter and tested accordingly (a usage sketch follows this list)
      String macros
       - concat
       - hash
       - position
       - right
       - length
       - replace
      
      String literal macros
       - escape_single_quotes
       - string_literal
      
      Cast
       - cast_bool_to_text
      
      Date and Time
       - dateadd
       - date_trunc
       - last_day
      
      Set 
       - except
       - intersect
      
    • New basic tests BaseDocsGenerate and BaseDocsGenReferences are added to the test suite
    • Management of access grants is now supported and tested. You can use this with different materialization strategies, i.e. tables, views, incremental materialization and snapshots.
    • New parameters (retry_count and retry_wait) are supported in connection profile to support connection retries
    • Fix for “where clause” config in generic tests
    • Fix for dbt docs generation (catalog) where columns did not appear for some tables.
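
    For illustration, these cross-db macros can be used in a model roughly as follows (model and column names are hypothetical; depending on the dbt-core version they may also be invoked without the dbt namespace):

      -- models/cross_db_macro_demo.sql (hypothetical)
      select {{ dbt.concat(["cust_first_name", "cust_last_name"]) }} as full_name,
             {{ dbt.dateadd('day', 7, 'order_date') }}               as one_week_later,
             {{ dbt.date_trunc('month', 'order_date') }}             as order_month
        from {{ ref('stg_customers_orders') }}  -- hypothetical upstream model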
    Source code(tar.gz)
    Source code(zip)
    dbt-oracle-1.2.0.tar.gz(46.27 KB)
    dbt_oracle-1.2.0-py3-none-any.whl(86.33 KB)
  • v1.1.2(Aug 3, 2022)

    This release includes the following features and fixes

    • Better support for quoting of database object names (relations and columns). This also fixes a bug raised by @ThoSap, allowing the use of keywords and special characters in column names. For example, Airbyte requires columns like _AIRBYTE_AB_ID, _AIRBYTE_EMITTED_AT, _AIRBYTE_NORMALIZED_AT and DESC for internal purposes

    • Fixes a bug to invalidate hard deletes during dbt snapshots.

    • Upgraded dependencies to the following versions

      • dbt-core==1.1.2
      • dbt-adapter-tests==1.1.2
      • oracledb==1.0.3
    Source code(tar.gz)
    Source code(zip)
    dbt-oracle-1.1.2.tar.gz(40.24 KB)
    dbt_oracle-1.1.2-py3-none-any.whl(70.27 KB)
  • v1.1.1(Jul 15, 2022)

    This release includes following features and fixes

    • Addresses feature request https://github.com/oracle/dbt-oracle/issues/17 - dbt-oracle now supports python-oracledb which is the renamed, major release of Oracle's popular cx_Oracle interface.

      • It is strongly recommended to use this driver, as it vastly simplifies dbt-oracle installation and makes installing the Oracle Client Libraries optional. The Python process communicates directly with the Oracle database.
      • To enable the new driver, set the environment variable ORA_PYTHON_DRIVER_TYPE=thin
    • Fixes https://github.com/oracle/dbt-oracle/issues/29 to include size, scale and precision in {{ column.data_type }}

    • Addresses feature request https://github.com/oracle/dbt-oracle/issues/31 - support for the merge_update_columns config in incremental materialization (see the sketch below)
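
    For illustration, merge_update_columns restricts which columns the MERGE updates when a match is found; a hypothetical incremental model config:

      {{ config(materialized='incremental',
                unique_key='order_id',
                merge_update_columns=['order_status', 'updated_at']) }}
      select order_id, order_status, updated_at
        from {{ ref('stg_orders') }}  -- hypothetical upstream model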

    Source code(tar.gz)
    Source code(zip)
    dbt-oracle-1.1.1.tar.gz(35.16 KB)
    dbt_oracle-1.1.1-py3-none-any.whl(53.05 KB)
  • v1.1.0(Jun 27, 2022)

    This release upgrades dbt-core requirement to the latest version v1.1.1 and addresses the feature request https://github.com/oracle/dbt-oracle/issues/19

    • Migrated the testing framework to the new fixture-based dbt-tests-adapter==1.1.1. As part of Continuous Integration (CI) we test against 25 test cases defined in the adapter suite. Prior to this release we tested with 12 test cases.
    • Dropped support for Python 3.6
    • Added support for Python 3.10
    • Incremental materialization now supports a list of unique keys. This translates to AND-ed predicates in the MERGE ... ON clause (see the sketch after this list)
    • Incremental materialization now supports the on_schema_change configuration to sync the source and target schemas. The following values can be used: ['sync_all_columns', 'append_new_columns', 'fail', 'ignore']
    • pytest.ini configuration file now simplifies unit test configuration
    • Fixes bug https://github.com/oracle/dbt-oracle/issues/26 raised by AlexMorti
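
    For illustration, both options can be combined in an incremental model config roughly as follows (model and column names are hypothetical):

      -- models/orders_incremental.sql (hypothetical)
      {{ config(materialized='incremental',
                unique_key=['order_id', 'order_line'],
                on_schema_change='append_new_columns') }}
      select order_id, order_line, order_status, updated_at
        from {{ ref('stg_orders') }}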
    Source code(tar.gz)
    Source code(zip)
    dbt-oracle-1.1.0.tar.gz(31.64 KB)
    dbt_oracle-1.1.0-py3-none-any.whl(47.72 KB)
  • v1.0.4(Jun 17, 2022)

  • v1.0.3(Jun 16, 2022)

    dbt-oracle v1.0.3 is a maintenance release with the following fixes and features

    1. Clean up O$PT temporary tables which are created in dbt snapshots and incremental materialization

      • Fixes https://github.com/oracle/dbt-oracle/issues/12 raised by AlexMorti
    2. Support to create dbt models in custom schema other than the target schema.

      • Macros in adapter.sql use {schema}.{table} instead of just {table} to refer to the relation wherever possible

      • Fixes https://github.com/oracle/dbt-oracle/issues/14

      • Fixes https://github.com/oracle/dbt-oracle/issues/2

      • Addresses issues raised in PR https://github.com/oracle/dbt-oracle/pull/15 by @algol68

    3. dbt-oracle is now integrated with dbt_constraints

      • Fixes https://github.com/oracle/dbt-oracle/issues/5 raised by danflippo
    4. Includes a change by @ThoSap to remove whitespace control characters from MERGE jinja macro used in incremental materialization

    5. Simplify view creation using CREATE or REPLACE view clause

    6. The metadata retrieved using list_relations_without_caching, get_columns_in_a_relation and catalog.sql is no longer converted to lowercase. It is kept in the default UPPERCASE as returned by the database

    7. _make_match_kwargs() is overridden in oracle/impl.py to make the match kwargs uppercase

    8. quote_seed_column() is overridden in oracle/impl.py to make quote_columns=False as default value

    9. Simplified README.md. Most of the documentation is moved under dbt docs Oracle Profile

    10. The sample dbt_adbs_test_project now includes a model with custom schema and generate_schema_name macro to decide how the dbt schema should be resolved.

    Source code(tar.gz)
    Source code(zip)
    dbt-oracle-1.0.3.tar.gz(29.81 KB)
    dbt_oracle-1.0.3-py3-none-any.whl(44.60 KB)
  • v1.0.2(May 26, 2022)

  • v1.0.1(May 24, 2022)

    Release v1.0.1 is a maintenance release.

    • Fixes bug https://github.com/oracle/dbt-oracle/issues/4 Renaming the old_relation does not work in Oracle 12c+ when using DBT table materialization with missing database name in profiles.yml
    • Continuous Integration Support using GitHub Actions to run dbt adapter test suite against an Oracle Express (XE) database instance. The test suite is run for Python 3.6, 3.7, 3.8 and 3.9
    • Oracle Contributor Agreement (OCA) bot integration by Oracle GitHub Organization (OGHO). For every PR raised this will verify if the external contributors have signed the OCA or not.
    Source code(tar.gz)
    Source code(zip)
    dbt-oracle-1.0.1.tar.gz(34.36 KB)
    dbt_oracle-1.0.1-py3-none-any.whl(46.86 KB)
  • v1.0.0(May 4, 2022)

    • Python versions
      • Python 3.6, 3.7, 3.8 and 3.9 are supported.
      • Removed support for Python 3.5. Python 3.5 reached end-of-life in September 2020. Previous releases of dbt-oracle supported Python 3.5
    • Enhancements
      • Following dependencies are upgraded
        • cx_Oracle v8.3.0
        • dbt-core v1.0.6
      • Following development dependencies are removed
        • watchdog
        • bumpversion
        • flake8
        • docutils
        • Sphinx
      • Added conditional dependency on dataclasses package for Python 3.6. dataclasses package was included in the standard library from Python 3.7
      • Fallback to dbt-core v0.21.1 for Python 3.6
      • Added support to connect to a shard
      • Added support for Database Resident Connection Pooling (DRCP)
      • Remove hardcoded configurations. Configurations should be specified using environment variables and follow the prefix pattern DBT_ORACLE_*
      • PEP-517 and PEP-518 compliant build system.
        • Introduced pyproject.toml file to specify build dependencies.
        • Modified setup.py and setup.cfg to define dynamic and static metadata respectively.
      • tox automation to test the adapter plugin for Python versions 3.6, 3.7, 3.8 and 3.9
    • Fixes
      • Fix: ORA-12537 for OracleConnectionMethod.HOST
      • Fix: Generic tests and singular tests. Introduced macro oracle__get_test_sql and changed macro signatures in schema_tests.sql
      • Fix: tests/oracle.dbtspec::test_dbt_snapshot_strategy_check_cols - ORA-00942: table or view does not exist
      • Fix: tests/oracle.dbtspec::test_dbt_ephemeral_data_tests - ORA-32034: unsupported use of WITH clause
      • Fix: ORA-00933: SQL command not properly ended raised in https://github.com/techindicium/dbt-oracle/issues/26
      • Fix: Return an object of type AdapterResponse in adapter's get_response(cls, cursor) method. Cursor's rowcount attribute is included in the AdapterResponse
      • Commented the method list_relations_without_caching in dbt/adapters/oracle/impl.py
    • Integration Testing with Autonomous Database Service (ADBS)
      • The test project dbt_adbs_test_project can be used to perform integration testing with Oracle's Autonomous Database Cloud Service. Following features are tested
    Source code(tar.gz)
    Source code(zip)
    dbt-oracle-1.0.0.tar.gz(33.38 KB)
    dbt_oracle-1.0.0-py3-none-any.whl(45.91 KB)