Hue Editor: Open source SQL Query Assistant for Databases/Warehouses

Overview



Query. Explore. Share.

The Hue Editor is a mature open source SQL Assistant for querying any database or data warehouse.

Many companies and organizations use Hue to quickly answer questions via self-service querying.

  • 1000+ customers
  • Top Fortune 500

are executing thousands of queries daily. Hue ships in data platforms like Cloudera, Google Dataproc, Amazon EMR, and Open Data Hub.

Hue is also ideal for building your own Cloud SQL Editor, and any contributions are welcome.

Read more on: gethue.com


Getting Started

You can start Hue in the three ways described below. Once it is set up, configure Hue to point to the databases you want to query.

Quick Demos:

The Forum is here in case you are looking for help.

Docker

Start Hue in a single click with the Docker Guide or the video blog post.

docker run -it -p 8888:8888 gethue/hue:latest

Hue should now be up and running on your default Docker IP on port 8888: http://localhost:8888!

Then read more about the configuration options.
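
For example, a minimal sketch of pointing the container at your own settings (assuming, as the Docker guide describes, that extra ini files under /usr/share/hue/desktop/conf are picked up):

docker run -it -p 8888:8888 \
  -v $(pwd)/hue.ini:/usr/share/hue/desktop/conf/z-hue.ini \
  gethue/hue:latest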

Kubernetes

helm repo add gethue https://helm.gethue.com
helm repo update
helm install hue gethue/hue

Read more about configurations at tools/kubernetes.
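
To customize the chart, the standard Helm workflow applies: dump the default values, edit them, and install with your overrides (the available keys are defined by the chart itself):

helm show values gethue/hue > values.yaml
# edit values.yaml, then:
helm install hue gethue/hue -f values.yaml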

Development

First install the dependencies, clone the Hue repo, build the apps, and start the development server.

# install the dependencies first (see the development documentation)
git clone https://github.com/cloudera/hue.git
cd hue
make apps
build/env/bin/hue runserver

Now Hue should be running on http://localhost:8000!

Read more in the development documentation.
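
As an aside, the dev server is Django's runserver, so the usual address:port argument works if you need to reach it from another machine, e.g. binding to all interfaces:

build/env/bin/hue runserver 0.0.0.0:8000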

Note: for a very quick start that skips installing a dev environment altogether, go with the Dev Docker image.

Community

License

Apache License, Version 2.0

Comments
  • Building a debian package with Python 3.7 deps

    Describe the bug:

Hi! This is more a request to figure out the best procedure to build Hue from source and then package it (in my case, in a deb package). From the documentation I see that make apps is needed to populate the build directory, and I thought that the Debian package only needed to copy that directory onto the target system, but then I realized that the procedure is more complex.

    I also tried with PREFIX=/some/path PYTHON_VER=python3.7 make install and ended up in:

      File "/home/vagrant/hue-release-4.7.1/desktop/core/ext-py/MySQL-python-1.2.5/setup_posix.py", line 2, in <module>
        from ConfigParser import SafeConfigParser
    ModuleNotFoundError: No module named 'ConfigParser'
    

I checked the ext-py directories and some of them do not seem ready for Python 3, so I am wondering if I am doing the right steps.
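
    For context, that traceback is the classic Python 2 to 3 rename: the ConfigParser module became configparser in Python 3, so py2-only code under ext-py fails at import time. A minimal compatibility shim looks like this (illustrative only; it does not make the bundled MySQL-python Python 3 ready):

      try:
          from configparser import ConfigParser  # Python 3
      except ImportError:
          from ConfigParser import SafeConfigParser as ConfigParser  # Python 2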

    If I follow https://github.com/cloudera/hue#building-from-source everything works fine (so the dev server comes up without any issue).

    Steps to reproduce it?

    Hue source version 4.7.1, then:

    PYTHON_VER=python3.7 make apps
    PYTHON_VER=python3.7 PREFIX=/some/local/path make install
    

    Followed https://docs.gethue.com/administrator/installation/install/

Ideally, what I want to achieve is something similar to the debian package that Cloudera releases with CDH, but built for Python 3.7 and some other dependencies (like MariaDB dev packages instead of MySQL, etc.).

    Any help would be really appreciated :)

    opened by elukey 28
  • add Apache Flink connector to Hue

    Is the issue already present in https://github.com/cloudera/hue/issues or discussed in the forum https://discourse.gethue.com?

    no

    What is the Hue version or source? (e.g. open source 4.5, CDH 5.16, CDP 1.0...)

    n/a

    Is there a way to help reproduce it?

    n/a

    opened by bowenli86 26
  • Oozie job browser's graph does not show red/green bars for failed/successful subworkflows

    Describe the bug:

While testing a failed coordinator run with the new Hue version, I noticed that the Oozie Job Browser graph no longer shows red/green bars under the failed/completed subworkflows. It is more difficult to find the problem and debug it under these conditions :(

    To visualize:

Current behavior: (screenshot)

    Expected/old behavior: (screenshot)

    I noticed the following JS error though:

    hue.errorcatcher.34bb8f5ecd32.js:31 Hue detected a Javascript error:  https://hue-next.wikimedia.org/static/desktop/js/bundles/hue/vendors~hue~notebook~tableBrowser-chunk-4d9d2c608a19e4e1ab7a.51951e55bebc.js 1705 2 Uncaught Error: Syntax error, unrecognized expression: [id^[email protected]_partition]
    window.onerror @ hue.errorcatcher.34bb8f5ecd32.js:31
    jquery.js:1677 Uncaught Error: Syntax error, unrecognized expression: [id^[email protected]_partition]
        at Function.Sizzle.error (jquery.js:1677)
        at Sizzle.tokenize (jquery.js:2377)
        at Sizzle.select (jquery.js:2838)
        at Function.Sizzle [as find] (jquery.js:894)
        at jQuery.fn.init.find (jquery.js:3095)
        at new jQuery.fn.init (jquery.js:3205)
        at jQuery (jquery.js:157)
        at <anonymous>:66:25
        at Object.D (knockout-latest.js:11)
        at Object.success (<anonymous>:59:26)
        at fire (jquery.js:3496)
        at Object.fireWith [as resolveWith] (jquery.js:3626)
        at done (jquery.js:9786)
        at XMLHttpRequest.<anonymous> (jquery.js:10047)
    

    Steps to reproduce it?

Simply navigate to a URL like /hue/jobbrowser/jobs/0009826-200915132022208-oozie-oozi-W#!id=0009824-200915132022208-oozie-oozi-W.

    Hue version or source? (e.g. open source 4.5, CDH 5.16, CDP 1.0...). System info (e.g. OS, Browser...).

    4.7.1

    Stale 
    opened by elukey 25
  • Add a dask-sql parser and connector

    From Nils:

Thanks to @romain for his nice comment on this Medium post about dask-sql, where he asks whether it would be a good idea to add dask-sql as a parser and a connector to Hue. I would be very happy to collaborate on this! Thank you very much for proposing this.

    Looking at the code (and your very nice documentation), I have the feeling that adding a new "dask-sql" component should be rather straightforward - whereas adding the SQL parser would probably be a bit more work. I would probably start with the presto dialect and first remove everything that dask-sql does not implement (so far).

Creating a correct parser seems like a really large task to me, and I would be happy for any suggestions on how to properly keep Hue in sync with the SQL that dask-sql understands, and on how to speed up the development.

I would also be happy for any guidance on how to reasonably split this large task, and for your view on whether this is a good idea in general.

    Thanks!
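
    As a rough illustration of the connector half (an assumption on my part, not a committed design): since dask-sql exposes a presto-compatible server, a SQLAlchemy-style interpreter entry in hue.ini could plausibly look like the sketch below, where host, port, and catalog are placeholders.

      [[[dasksql]]]
        name=dask-sql
        interface=sqlalchemy
        # placeholder endpoint for a dask-sql server speaking the presto protocol
        options='{"url": "presto://localhost:8080/catalog/default"}'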

    connector Stale 
    opened by romainr 21
  • Presto SQLAlchemy Interface over Https fails to connect to the server

    Is the issue already present in https://github.com/cloudera/hue/issues or discussed in the forum https://discourse.gethue.com?

Describe the bug: for an SSL-enabled Presto cluster, Hue's SQLAlchemy interface fails to connect.

Steps to reproduce it? Our Presto cluster is SSL and LDAP enabled, so we configured it as below in the Notebook section:

Case 1:

      [[[presto]]]
      interface=sqlalchemy
      name=Presto
      options='{"url": "presto://user:[email protected]:8443/hive/dwh"}'

    In this case, the exception in the Hue logs is:

      (exceptions.ValueError) Protocol must be https when passing a password [SQL: SHOW SCHEMAS]

    Case 2:

      [[[presto]]]
      interface=sqlalchemy
      name=Presto
      options='{"url": "presto://username:[email protected]://presto coordinator:8443/hive/dwh"}'

    In this case, the error is:

      ValueError: invalid literal for int() with base 10: ''
      [14/Aug/2020 10:59:28 -0700] decorators ERROR Error running autocomplete
      Traceback (most recent call last):
        File "/usr/share/hue/desktop/libs/notebook/src/notebook/decorators.py", line 114, in wrapper
          return f(*args, **kwargs)
        File "/usr/share/hue/desktop/libs/notebook/src/notebook/api.py", line 729, in autocomplete
          autocomplete_data = get_api(request, snippet).autocomplete(snippet, database, table, column, nested)
        File "/usr/share/hue/desktop/libs/notebook/src/notebook/connectors/sql_alchemy.py", line 113, in decorator
          raise QueryError(message)
      QueryError: invalid literal for int() with base 10: ''
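
    For what it is worth, PyHive's Presto dialect raises "Protocol must be https when passing a password" unless the protocol is passed explicitly, so a hedged sketch of a working entry forwards it through connect_args (host and credentials are placeholders, and the exact escaping is an assumption to check against the connector docs):

      [[[presto]]]
        name=Presto
        interface=sqlalchemy
        # placeholder host and credentials; protocol is passed to the dialect
        options='{"url": "presto://user:password@presto-coordinator:8443/hive/dwh", "connect_args": "{\"protocol\": \"https\"}"}'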

    Hue version or source? Docker image of hue 4.7.1 deployed in Kubernetes

    opened by DRavikanth 20
  • HUE-2962 [desktop] - 'Config' object has no attribute 'get'

    Hello @spaztic1215

I am trying to upgrade to the latest version of Hue, and I think commit 8c88233 (HUE-2962 [desktop] Add configuration property definition for HS2) introduced an error in "connectors/hiveserver2.py" when it tries to read the Hive configuration.

    I am running without Impala (app_blacklist=impala,security,search,rdbms,zookeeper)

    Have you experienced this?

    Traceback (most recent call last):
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/wsgiserver.py", line 1215, in communicate
        req.respond()
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/wsgiserver.py", line 576, in respond
        self._respond()
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/wsgiserver.py", line 588, in _respond
        response = self.wsgi_app(self.environ, self.start_response)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/wsgi.py", line 206, in __call__
        response = self.get_response(request)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/base.py", line 194, in get_response
        response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/base.py", line 236, in handle_uncaught_exception
        return callback(request, **param_dict)
      File "/usr/local/hue.prod/desktop/core/src/desktop/views.py", line 331, in serve_500_error
        return render("500.mako", request, {'traceback': traceback.extract_tb(exc_info[2])})
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/django_util.py", line 227, in render
        **kwargs)
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/django_util.py", line 148, in _render_to_response
        return django_mako.render_to_response(template, *args, **kwargs)
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/django_mako.py", line 125, in render_to_response
        return HttpResponse(render_to_string(template_name, data_dictionary), **kwargs)
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/django_mako.py", line 114, in render_to_string_normal
        result = template.render(**data_dict)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/template.py", line 443, in render
        return runtime._render(self, self.callable_, args, data)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/runtime.py", line 786, in _render
        **_kwargs_for_callable(callable_, data))
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/runtime.py", line 818, in _render_context
        _exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/runtime.py", line 844, in _exec_template
        callable_(context, *args, **kwargs)
      File "/tmp/tmpkguRIt/desktop/500.mako.py", line 40, in render_body
        __M_writer(unicode( commonheader(_('500 - Server error'), "", user) ))
      File "/usr/local/hue.prod/desktop/core/src/desktop/views.py", line 409, in commonheader
        'is_ldap_setup': 'desktop.auth.backend.LdapBackend' in desktop.conf.AUTH.BACKEND.get()
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/django_mako.py", line 112, in render_to_string_normal
        template = lookup.get_template(template_name)
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/django_mako.py", line 89, in get_template
        return real_loader.get_template(uri)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/lookup.py", line 245, in get_template
        return self._load(srcfile, uri)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/lookup.py", line 311, in _load
        **self.template_args)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/template.py", line 321, in __init__
        module = self._compile_from_file(path, filename)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/template.py", line 379, in _compile_from_file
        module = compat.load_module(self.module_id, path)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/compat.py", line 55, in load_module
        return imp.load_source(module_id, path, fp)
      File "/tmp/tmpkguRIt/desktop/common_header.mako.py", line 25, in <module>
        home_url = url('desktop.views.home')
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/django_mako.py", line 131, in url
        return reverse(view_name, args=args, kwargs=view_args)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/urlresolvers.py", line 536, in reverse
        return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs))
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/urlresolvers.py", line 403, in _reverse_with_prefix
        self._populate()
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/urlresolvers.py", line 303, in _populate
        lookups.appendlist(pattern.callback, (bits, p_pattern, pattern.default_args))
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/urlresolvers.py", line 230, in callback
        self._callback = get_callable(self._callback_str)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/utils/functional.py", line 32, in wrapper
        result = func(*args)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/urlresolvers.py", line 97, in get_callable
        mod = import_module(mod_name)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/utils/importlib.py", line 40, in import_module
        __import__(name)
      File "/usr/local/hue.prod/desktop/core/src/desktop/configuration/api.py", line 29, in <module>
        from notebook.connectors.hiveserver2 import HiveConfiguration, ImpalaConfiguration
      File "/usr/local/hue.prod/desktop/libs/notebook/src/notebook/connectors/hiveserver2.py", line 77, in <module>
        class HiveConfiguration(object):
      File "/usr/local/hue.prod/desktop/libs/notebook/src/notebook/connectors/hiveserver2.py", line 103, in HiveConfiguration
        "options": [config.lower() for config in hive_settings.get()]
    AttributeError: 'Config' object has no attribute 'get'
    
    opened by syepes 20
  • Receiving Error "Incorrect string value: '\xC5\x9F\xC4\xB1k ...' for column 'query' at row 1" while running query from HUE Hive Query editor

We are running a simple select query with one condition. The query runs fine from the Hive CLI, Beeline, and Zeppelin, but when running the same query from the Hue Hive query editor we receive the error below:

      "Incorrect string value: '\xC5\x9F\xC4\xB1k ...' for column 'query' at row 1"

      SELECT * from <table_name> WHERE <column_name> = 'Turkish Characters';

    Hue version: 3.9.0

    Notes:

    1. The data contains some special (Turkish) characters; not sure how Hue handles this type of character.
    2. Hue runs in the web browser, which is in UTF-8.
    3. Looking for any Hue configuration for Turkish character encoding.

    During initial research, this issue seems to be related to HUE-4889. Any help?
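
    One hedged lead (an assumption, not a confirmed diagnosis): the column name 'query' in the message suggests that Hue's own backing database is rejecting the text while saving the query history, which typically happens when a MySQL-backed Hue database does not use a UTF-8 character set. A sketch of the usual fix, assuming a MySQL database named hue:

      mysql -u root -p -e "ALTER DATABASE hue CHARACTER SET utf8 COLLATE utf8_general_ci;"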

    Stale 
    opened by shyamshaw 18
  • Hue 3.8.0 - Map reduce job in workflow is not using the job properties

A MapReduce job that is part of a workflow is not submitting the job properties to Oozie/YARN. When I submit the following as part of a workflow, I get an exception about the output dir. A similar job in Job Designer succeeds without any issues.

    Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.MapReduceMain], main() threw exception, Output directory not set in JobConf.
      org.apache.hadoop.mapred.InvalidJobConfException: Output directory not set in JobConf.
      at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:118)
      at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:460)
      at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
      at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    

(screenshot)

On a side note, is there a way to export the Oozie workflow XML? I don't see it in 3.8.0, but I have it in my existing 3.7.1 installation.

Also, the import feature is missing: (screenshot)

    opened by skkolli 18
  • CircleCI: Add build jobs that run on Linux ARM64

    What changes were proposed in this pull request?

This is the third part of #2531! It is based on top of #2555, so it should be merged after #2555!

    This PR introduces the build jobs on Linux ARM64 and a new workflow that runs these jobs every Sunday at 1AM UTC.

    How was this patch tested?

    Only CircleCI's config.yml has been modified. CircleCI jobs should still work!

    feature tools roadmap 
    opened by martin-g 17
  • While creating the JDBC connection string, getting the error: invalid syntax (jdbc.py, line 47)

Please find below the error message seen while accessing data through the JDBC connection string.

JDBC connection string:

      [[[Avalanche]]]
      name=Avalanche
      interface=jdbc
      options='{"url": "jdbc:ingres://av-0pq13o8h1zrp.avdev.actiandatacloud.com:27839/db", "driver": "com.ingres.jdbc.IngresDriver", "user": "*", "password": "**"}'

    Error Message:

exceptions_renderable ERROR Potential trace: [
      <FrameSummary file /home/actian/Hue_server/hue/desktop/libs/hadoop/src/hadoop/yarn/resource_manager_api.py, line 172 in _execute>,
      <FrameSummary file /home/actian/Hue_server/hue/desktop/core/src/desktop/lib/rest/resource.py, line 157 in get>,
      <FrameSummary file /home/actian/Hue_server/hue/desktop/core/src/desktop/lib/rest/resource.py, line 78 in invoke>,
      <FrameSummary file /home/actian/Hue_server/hue/desktop/core/src/desktop/lib/rest/resource.py, line 105 in _invoke>,
      <FrameSummary file /home/actian/Hue_server/hue/desktop/core/src/desktop/lib/rest/http_client.py, line 229 in execute>]

    [10/Dec/2020 06:26:27 -0800] cluster INFO Resource Manager not available, trying another RM: YARN RM returned a failed response: HTTPConnectionPool(host='localhost', port=8088): Max retries exceeded with url: /ws/v1/cluster/apps?doAs=admin&user.name=hue&user=admin&finalStatus=UNDEFINED&limit=1000&startedTimeBegin=1607005587000 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f98a04db5b0>: Failed to establish a new connection: [Errno 111] Connection refused')).

    Stale 
    opened by balajivenka 17
  • MockRequest instance has no attribute 'fs'

    Hue version 4.8.0 (docker)

When I execute a script in the Pig editor or a Spark program in the Spark editor, I always get the message "MockRequest instance has no attribute 'fs'". It seems there is no impact on the behaviour. In the logs, there is this error:

    [09/Feb/2021 10:30:48 +0100] decorators   ERROR    Error running fetch_result_data
    Traceback (most recent call last):
      File "/usr/share/hue/desktop/libs/notebook/src/notebook/decorators.py", line 114, in wrapper
        return f(*args, **kwargs)
      File "/usr/share/hue/desktop/libs/notebook/src/notebook/api.py", line 318, in fetch_result_data
        response = _fetch_result_data(request.user, notebook, snippet, operation_id, rows=rows, start_over=start_over)
      File "/usr/share/hue/desktop/libs/notebook/src/notebook/api.py", line 329, in _fetch_result_data
        'result': get_api(request, snippet).fetch_result(notebook, snippet, rows, start_over)
      File "/usr/share/hue/desktop/libs/notebook/src/notebook/models.py", line 517, in get_api
        return ApiWrapper(request, snippet)
      File "/usr/share/hue/desktop/libs/notebook/src/notebook/models.py", line 503, in __init__
        self.api = _get_api(request, snippet)
      File "/usr/share/hue/desktop/libs/notebook/src/notebook/connectors/base.py", line 436, in get_api
        return OozieApi(user=request.user, request=request)
      File "/usr/share/hue/desktop/libs/notebook/src/notebook/connectors/oozie_batch.py", line 58, in __init__
        self.fs = self.request.fs
    AttributeError: MockRequest instance has no attribute 'fs'
    

What is the reason for this message? Maybe a misconfiguration in Hue? Thanks in advance.
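
    Judging from the traceback alone (an assumption, not the project's actual fix), OozieApi reads request.fs unconditionally while the notebook code sometimes hands it a bare MockRequest; a defensive lookup would avoid the AttributeError. A self-contained illustration, with hypothetical names:

      class MockRequest(object):
          def __init__(self, user):
              self.user = user  # note: no 'fs' attribute is ever set

      request = MockRequest(user='demo')
      fs = getattr(request, 'fs', None)  # yields None instead of raising AttributeError
      print(fs)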

    opened by stephbat 16
  • Fix aws region attribute in values.yaml

    What changes were proposed in this pull request?

    • Fix aws region attribute in values.yaml

    How was this patch tested?

    • Manual

There seems to be a typo in the Helm chart's values.yaml: the configmap-hue template reads .Values.aws.region.
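
    For reference, a sketch of the shape the template expects (key names inferred from the PR description; the region value is just an example):

      aws:
        region: us-east-1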

Please review.

    opened by a1tair6 1
  • [frontend] fix multi session id mismatch error

    What changes were proposed in this pull request?

I created this PR to fix https://github.com/cloudera/hue/issues/3145. But I am not so familiar with JavaScript and the original session logic here; maybe there are better solutions.

• Remove the code that updates an existing session's id after execution.

    How was this patch tested?

    manual tests

    Please review Hue Contributing Guide before opening a pull request.

    opened by sumtumn 1
  • Notebook multisession error: Session matching query does not exist.

Is the issue already present in https://github.com/cloudera/hue/issues or discussed in the forum https://discourse.gethue.com? No.

    Describe the bug: the session ids mismatch in the notebook, which leads to the following exception in the Hue server:

Traceback (most recent call last):
      File "/usr/lib/emr/current/hue/desktop/libs/notebook/src/notebook/decorators.py", line 119, in wrapper
        return f(*args, **kwargs)
      File "/usr/lib/emr/current/hue/desktop/libs/notebook/src/notebook/api.py", line 236, in execute
        response = _execute_notebook(request, notebook, snippet)
      File "/usr/lib/emr/current/hue/desktop/libs/notebook/src/notebook/api.py", line 161, in _execute_notebook
        response['handle'] = interpreter.execute(notebook, snippet)
      File "/usr/lib/emr/current/hue/desktop/libs/notebook/src/notebook/connectors/hiveserver2.py", line 99, in decorator
        return func(*args, **kwargs)
      File "/usr/lib/emr/current/hue/desktop/libs/notebook/src/notebook/connectors/hiveserver2.py", line 322, in execute
        _session = self._get_session_by_id(notebook, snippet['type'])
      File "/usr/lib/emr/current/hue/desktop/libs/notebook/src/notebook/connectors/hiveserver2.py", line 721, in _get_session_by_id
        return Session.objects.get(**filters)
      File "/usr/lib/emr/current/hue/build/env/lib/python2.7/site-packages/Django-1.11.29-py2.7.egg/django/db/models/manager.py", line 85, in manager_method
        return getattr(self.get_queryset(), name)(*args, **kwargs)
      File "/usr/lib/emr/current/hue/build/env/lib/python2.7/site-packages/Django-1.11.29-py2.7.egg/django/db/models/query.py", line 380, in get
        self.model._meta.object_name
    DoesNotExist: Session matching query does not exist.

    Steps to reproduce it?

    1. create a blank notebook
    2. add a hive snippet, which invokes api /notebook/api/create_session, the response is
    {"status": 0, "session": {"reuse_session": false, "session_id": "2b47d9a6457354c0:9f8030b890dbe8af", "properties": [{"multiple": true, "defaultValue": [], "value": [], "nice_name": "Files", "key": "files", "help_text": "Add one or more files, jars, or archives to the list of resources.", "type": "hdfs-files"}, {"multiple": true, "defaultValue": [], "value": [], "nice_name": "Functions", "key": "functions", "help_text": "Add one or more registered UDFs (requires function name and fully-qualified class name).", "type": "functions"}, {"nice_name": "Settings", "multiple": true, "key": "settings", "help_text": "Hive and Hadoop configuration properties.", "defaultValue": [], "type": "settings", "options": ["hive.map.aggr", "hive.exec.compress.output", "hive.exec.parallel", "hive.execution.engine", "mapreduce.job.queuename"], "value": []}], "configuration": {"hive.map.aggr": "true", "hive.execution.engine": "tez", "mapreduce.job.queuename": "default", "hive.exec.parallel": "true", "hive.exec.compress.output": "false", "hive.server2.thrift.resultset.default.fetch.size": "1000"}, "type": "hive", "id": 101}}
    

The session id is 101.

    3. execute the hive sql; it passes.
    4. add a sparksql snippet, which also invokes the api /notebook/api/create_session; the response is

    {"status": 0, "session": {"reuse_session": false, "session_id": "4745b846bfcd60a7:ab1a5701d2f35983", "properties": [{"multiple": true, "defaultValue": [], "value": [], "nice_name": "Files", "key": "files", "help_text": "Add one or more files, jars, or archives to the list of resources.", "type": "hdfs-files"}, {"multiple": true, "defaultValue": [], "value": [], "nice_name": "Functions", "key": "functions", "help_text": "Add one or more registered UDFs (requires function name and fully-qualified class name).", "type": "functions"}, {"nice_name": "Settings", "multiple": true, "key": "settings", "help_text": "Hive and Hadoop configuration properties.", "defaultValue": [], "type": "settings", "options": ["hive.map.aggr", "hive.exec.compress.output", "hive.exec.parallel", "hive.execution.engine", "mapreduce.job.queuename"], "value": []}], "configuration": {}, "type": "sparksql", "id": 103}}
    

The session id is 103.

    5. execute one sparksql query; it passes.
    6. re-execute the hive sql; the error happens: Session matching query does not exist.

I found this happens because the session ids of hive and sparksql mismatch. E.g. re-executing the hive sql invokes the api /notebook/api/execute/hive, and the notebook field in the payload is

    {"id":null,"uuid":"05536e0a-016b-4b38-b547-b492733fe6c9","parentSavedQueryUuid":null,"isSaved":false,"sessions":[{"type":"hive","properties":[{"multiple":true,"defaultValue":[],"value":[],"nice_name":"Files","key":"files","help_text":"Add one or more files, jars, or archives to the list of resources.","type":"hdfs-files"},{"multiple":true,"defaultValue":[],"value":[],"nice_name":"Functions","key":"functions","help_text":"Add one or more registered UDFs (requires function name and fully-qualified class name).","type":"functions"},{"nice_name":"Settings","multiple":true,"key":"settings","help_text":"Hive and Hadoop configuration properties.","defaultValue":[],"type":"settings","options":["hive.map.aggr","hive.exec.compress.output","hive.exec.parallel","hive.execution.engine","mapreduce.job.queuename"],"value":[]}],"reuse_session":false,"session_id":"4745b846bfcd60a7:ab1a5701d2f35983","configuration":{"hive.map.aggr":"true","hive.execution.engine":"tez","mapreduce.job.queuename":"default","hive.exec.parallel":"true","hive.exec.compress.output":"false","hive.server2.thrift.resultset.default.fetch.size":"1000"},"id":103},{"type":"sparksql","properties":[{"multiple":true,"defaultValue":[],"value":[],"nice_name":"Files","key":"files","help_text":"Add one or more files, jars, or archives to the list of resources.","type":"hdfs-files"},{"multiple":true,"defaultValue":[],"value":[],"nice_name":"Functions","key":"functions","help_text":"Add one or more registered UDFs (requires function name and fully-qualified class name).","type":"functions"},{"nice_name":"Settings","multiple":true,"key":"settings","help_text":"Hive and Hadoop configuration properties.","defaultValue":[],"type":"settings","options":["hive.map.aggr","hive.exec.compress.output","hive.exec.parallel","hive.execution.engine","mapreduce.job.queuename"],"value":[]}],"reuse_session":false,"session_id":"4745b846bfcd60a7:ab1a5701d2f35983","configuration":{},"id":103}],"type":"notebook","name":"My Notebook"}
    

We can see that the session ids of hive and sparksql are both 103, instead of hive 101 and sparksql 103.
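
    To make the mismatch concrete, a minimal sketch (a hypothetical helper, not Hue's code) of selecting the session by snippet type instead of trusting a shared id:

      # pick the session whose type matches the snippet, rather than reusing
      # whatever session_id the frontend last wrote into every session entry
      def session_for_snippet(notebook, snippet_type):
          return next((s for s in notebook['sessions'] if s['type'] == snippet_type), None)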

    Hue version or source? (e.g. open source 4.5, CDH 5.16, CDP 1.0...). System info (e.g. OS, Browser...). open source 4.9.0

    editor 
    opened by sumtumn 1
  • [raz] Fix non-ascii directory creation in S3

    What changes were proposed in this pull request?

    • Creating a directory with non-ascii characters was throwing a signature mismatch error.

• On further investigation, we found a signature mismatch for the GET requests that go through the boto2 client implementation in Hue: the signed headers from RAZ did not match what S3 was generating to verify them.

• This is only happening for GET requests, so most likely all GET-method operations fail with this error.

• The next step was to see why this signature mismatch was happening for the path: was it the path we sent to RAZ for signed-header generation, or the path in the S3-side request where S3 verifies the header?

• After narrowing down the issue, we found that we need to fully unquote the path before sending it to RAZ, so that signed headers are generated for the correct path and S3 can verify them (see the sketch below). We didn't touch the path sent to the S3 side.
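
    A minimal sketch of what "fully unquote" could mean here (my reading of the description, not the patch itself): keep decoding until the path stops changing, so double-encoded characters are resolved before the signed headers are generated.

      from urllib.parse import unquote

      def fully_unquote(path):
          # repeat until stable, e.g. '%25C5%259F' -> '%C5%9F' -> 'ş'
          prev = None
          while prev != path:
              prev, path = path, unquote(path)
          return path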

    How was this patch tested?

    Tested E2E in a live cluster.

    Please review Hue Contributing Guide before opening a pull request.

    opened by Harshg999 0
  • [docs] Update docs to use latest npm version v5

    What changes were proposed in this pull request?

The gethue npm package was upgraded to v5; this PR updates the docs to match.
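
    For anyone following along, pulling that major version in is the standard npm incantation (the version range is assumed from the PR title):

      npm install gethue@^5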

    How was this patch tested?

    By running hugo server

    opened by sreenaths 1
Releases (latest: release-4.10.0)
  • release-4.10.0 (Jun 11, 2021)

    This release is 4.10.

    Please see more at:

• https://docs.gethue.com/releases/release-notes-4.10.0/
    • https://gethue.com/blog/hue-4-10-sql-scratchpad-component-rest-api-small-file-importer-slack-app/

    And feel free to send feedback!

    Hue Team

  • release-4.9.0 (Feb 2, 2021)

    This release is 4.9.

    Please see more at:

• https://docs.gethue.com/releases/release-notes-4.9.0/
    • https://gethue.com/blog/hue-4-9-sql-dialects-phoenix-dasksql-flink-components/

    And feel free to send feedback!

    Hue Team

  • release-4.8.0 (Sep 23, 2020)

    This release is 4.8.

    Please see more at:

    • https://docs.gethue.com/releases/release-notes-4.8.0/
    • https://gethue.com/blog/hue-4-8-phoenix-flink-sparksql-components/

    And feel free to send feedback!

    Hue Team

  • release-4.7.1 (Jun 26, 2020)

    This release contains 4.7 and additional fixes.

    Please see more at:

    • https://docs.gethue.com/releases/release-notes-4.7.0/
    • https://gethue.com/blog/hue-4-7-and-its-improvements-are-out/

    And feel free to send feedback.

    Hue Team
