A Python and R autograding solution

Overview

Otter-Grader

Otter-Grader is a lightweight, modular, open-source autograder developed by the Data Science Education Program at UC Berkeley. It is designed to work with classes at any scale by abstracting away the autograding internals in a way that is compatible with any instructor's assignment distribution and collection pipeline. Otter supports local grading through parallel Docker containers, grading on the autograder platforms of third-party learning management systems (LMSs), the deployment of an Otter-managed grading virtual machine, and a client package that allows students to run public checks on their own machines. Otter is designed to grade Python scripts and Jupyter Notebooks, and is compatible with several LMSs, including Canvas and Gradescope.

Documentation

The documentation for Otter can be found here.

Contributing

See CONTRIBUTING.md.

Changelog

See CHANGELOG.md.

Comments
  • Make otter-grader installable in JupyterLite

    Is your feature request related to a problem? Please describe.

JupyterLite is a full-blown scientific Python environment running entirely in your browser, no server required! You can try a demo here.

    If you try to install otter-grader with:

    import micropip
    await micropip.install('otter-grader')
    

    It fails while trying to install the pypdf2 package. JupyterLite can only install pure-Python packages (other than the compiled packages it ships with, such as numpy and pandas). So these packages will need to be made optional wherever possible.
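The usual way to make such a dependency optional is a guarded import: the heavy package is imported lazily, and only the code paths that actually need it fail when it is missing. A minimal sketch (the `merge_pdfs` helper is hypothetical, not Otter's actual code):

```python
# Guarded-import sketch: the optional dependency is probed once, and
# feature code raises a clear error only when the feature is used.
try:
    import PyPDF2  # not installable in pure-Python environments like JupyterLite
    HAS_PDF = True
except ImportError:
    HAS_PDF = False

def merge_pdfs(paths):
    """Hypothetical helper that needs PDF support."""
    if not HAS_PDF:
        raise ImportError(
            "PDF support requires PyPDF2; install it separately to use this feature."
        )
    merger = PyPDF2.PdfMerger()
    for path in paths:
        merger.append(path)
    return merger
```

With this pattern, `micropip.install('otter-grader')` could succeed without pypdf2, and only PDF-related features would be unavailable.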

    Describe the solution you'd like

    Figure out which packages are required for our user-facing functionality, and try making everything else optional.

    Describe alternatives you've considered

    1. Don't support jupyterlite

    Additional context

    /cc @ericvd-ucb who really wants this, and @jptio the core dev of jupyterlite.

    enhancement 
    opened by yuvipanda 24
  • Otter assign - tests fail when answers are correct, "Error: object not found"

    Describe the bug I am following the otter sample at https://github.com/ucbds-infra/ottr-sample to create an assignment, and all of my tests are failing on Gradescope in the tested Rmd submission (a submission with all correct solutions). It is notable that all tests pass successfully when testing locally using otter run lab04.Rmd. I am attaching a ZIP file containing the master notebook file that I am using and a screenshot of the failing tests. It seems that the tests are unable to recognize any of the variables that are created in the immediately preceding solution cell.

    Archive.zip

    Screen Shot 2021-07-03 at 12 45 21 AM

    To Reproduce Steps to reproduce the behavior:

    1. Unzip the Archive.zip attached. cd into its directory.
    2. Use Otter assign to prepare the autograder.zip file: otter assign lab04.Rmd dist. Note that the header of the master notebook has the following:
    BEGIN ASSIGNMENT
    requirements: requirements.R
    generate: true
    
    3. Upload the autograder.zip from dist/autograder to Gradescope.
    4. Test the autograder using the solution Rmd file.
    5. The above error is produced.

    Expected behavior All tests should pass successfully for the tested submission .Rmd file on Gradescope.

    Versions Python version: 3.9.5 Otter-Grader version: 2.2.2

    bug 
    opened by jerrybonnell 19
  • Issues running tests with Rmd notebooks

    Hello,

    I've gotten through the full workflow with the ipynb demo files in the tutorial documentation, but I want to use otter to grade Rmd notebooks.

    I have the following setup so far (all files can be found on my github):

    cd ~/Documents/otter-test
    
    .
    ├── TestAssignment.Rmd
    ├── dist
    │   ├── autograder
    │   │   ├── TestAssignment.Rmd
    │   │   └── tests
    │   │       ├── Question1.R
    │   │       └── Question2.R
    │   └── student
    │       ├── TestAssignment.Rmd
    │       └── tests
    │           ├── Question1.R
    │           └── Question2.R
    └── submissions
        ├── Student1-Correct-TestAssignment.Rmd
        └── Student2-Wrong-TestAssignment.Rmd
    
    

    Question1 has one open and one hidden test, and Question2 has 2 hidden tests. The student distribution notebook looks fine to me, and I copied them and put in answers to make the two submission files. I've run into three issues:

    1. In R, the relative path to the tests does not work in the student notebook; I need the full path (not the end of the world; the dist/autograder notebook can use relative paths fine):
    > setwd("~/Documents/otter-test/dist/student/")
    > 
    > library(testthat)
    > library(ottr)
    >
    > ice_cream <- "vanilla" # YOUR CODE HERE
    > . = ottr::check("tests/Question1.R")
    cannot open file 'tests/Question1.R': No such file or directoryError in file(filename, "r") : cannot open the connection
    > getwd()
    [1] "/Users/hgibling/Documents/otter-test/dist/student"
    > . = ottr::check("~/Documents/otter-test/dist/student/tests/Question1.R")
    All tests passed!
    
    2. While the tests work appropriately in the original raw notebook, in the student notebooks (and in the dist/autograder notebook) the tests pass even when the answer is blatantly wrong:
    > getwd()
    [1] "/Users/hgibling/Documents/otter-test/dist/student"
    > ice_cream <- 13 # YOUR CODE HERE
    > . = ottr::check("~/Documents/otter-test/dist/student/tests/Question1.R")
    All tests passed!
    
    ###
    
    > setwd("~/Documents/otter-test/dist/autograder/")
    > ice_cream <- "chocolate" #SOLUTION
    > . = ottr::check("tests/Question1.R")
    All tests passed!
    > ice_cream <- 13 #SOLUTION
    > . = ottr::check("tests/Question1.R")
    All tests passed!
    
    3. I imagine the previous issue might have something to do with not being able to run otter check in the terminal. Since I'm pretending to be a student, I am supplying the path to the student tests created by otter assign in dist:
    cd ~/Documents/otter-test/submissions
    otter check Student1-Correct-TestAssignment.Rmd -t ~/Documents/otter-test/dist/student/tests -q Question1 
    
    Traceback (most recent call last):
      File "/Users/hgibling/miniconda3/bin/otter", line 8, in <module>
        sys.exit(cli())
      File "/Users/hgibling/miniconda3/lib/python3.7/site-packages/click/core.py", line 1137, in __call__
        return self.main(*args, **kwargs)
      File "/Users/hgibling/miniconda3/lib/python3.7/site-packages/click/core.py", line 1062, in main
        rv = self.invoke(ctx)
      File "/Users/hgibling/miniconda3/lib/python3.7/site-packages/click/core.py", line 1668, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/Users/hgibling/miniconda3/lib/python3.7/site-packages/click/core.py", line 1404, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/Users/hgibling/miniconda3/lib/python3.7/site-packages/click/core.py", line 763, in invoke
        return __callback(*args, **kwargs)
      File "/Users/hgibling/miniconda3/lib/python3.7/site-packages/otter/cli.py", line 55, in check_cli
        return check(*args, **kwargs)
      File "/Users/hgibling/miniconda3/lib/python3.7/site-packages/otter/check/__init__.py", line 77, in main
        raise e
      File "/Users/hgibling/miniconda3/lib/python3.7/site-packages/otter/check/__init__.py", line 50, in main
        assert os.path.isfile(test_path), "Test {} does not exist".format(question)
    AssertionError: Test Question1.R does not exist
    

    I'm not sure what's happening. Any ideas?

    bug question 
    opened by hgibling 13
  • Showing output to students for debugging purposes

    Hi team,

    I am using Gradescope, and I would like to display the autograder output to students when a notebook fails to run, so they can debug their own notebooks. Currently, only instructors can see the log of the errors; the students cannot.

    According to Gradescope, I need to change "stdout_visibility" in the results.json file, but there is no results.json file when I run otter assign.

    How can I configure this? Thanks, Quan
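For reference, Otter generates results.json at grading time rather than at otter assign time; output visibility is controlled through Otter's autograder configuration instead. A sketch of an otter_config.json (verify the exact keys against the docs for your Otter version):

```json
{
    "show_stdout": true,
    "show_hidden": false
}
```

The same keys can typically be set under the generate section of the assignment metadata so that otter assign bakes them into the autograder zip.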

    question 
    opened by quan3010 12
  • Add download HTML attribute to download links for JupyterLab.

    As explained here by @jasongrout, JupyterLab will handle download links correctly if the download attribute is set. We will continue investigating ways to smooth this experience in JupyterLab, as it does come up in other scenarios for our users, but for our use case with otter for grading, this simple fix should do the trick.

    Closes #339.

    bug 
    opened by fperez 12
  • grader generates empty pdfs when grading open questions

    Describe the bug

    I have open questions in notebooks and I grade zip files. The grader works fine, but when supplying the --pdfs option, it generates empty pdfs. I could not find information on how to solve this in the documentation.

    To Reproduce

    My otter_config.json contains

    {
      "pdf": true,
      "zips": true
    }
    

    The zipped notebooks contain cells of the form

    <!-- BEGIN QUESTION -->
    <!--
    BEGIN QUESTION
    name: q
    manual: True
    -->
    

    And answers of the form

    **My Answer:**

    I run otter grade -z --pdfs

    Expected behavior

    I would like to get the grading summary and generated pdfs in the submissions_pdfs folder. The grading summary is fine, but all the pdfs in the generated folder are empty.

    Versions

    Python version: 3.8.10 Otter-Grader version: 3.2.1

    bug wontfix 
    opened by shaolintl 10
  • `otter generate autograder` generates buggy `autograder.zip`

    otter generate autograder generates autograder.zip with both requirements.txt and requirements.r.

    Unfortunately, I am not permitted to share the code, but I imagine this will reproduce the same situation after you cd into any Jupyter Notebook Python (not R) assignment:

    conda activate base
    otter generate autograder
    

    The problem is that Gradescope programming assignments don't like this zip file. It's hard to copy/paste the Gradescope Docker build output, but here are the last few lines (all somewhat garbled):

    ==> For changes to take effect, close and re-open your current shell. <==
    
    Cloning into '/autograder/source/ottr'...
    ERROR conda.cli.main_run:execute(32): Subprocess for 'conda run ['Rscript', '-e', 'devtools::install\\(\\)']' command failed.  (See above for error)
    Error in loadNamespace(j <- i[[1L]], c(lib.loc, .libPaths()), versionCheck = vI[[j]]) : 
      there is no package called ‘usethis’
    Calls: :: ... loadNamespace -> withRestarts -> withOneRestart -> doWithOneRestart
    Execution halted
    
    ERROR conda.cli.main_run:execute(32): Subprocess for 'conda run ['Rscript', '-e', 'devtools::install\\(\\)']' command failed.  (See above for error)
    Error in loadNamespace(j <- i[[1L]], c(lib.loc, .libPaths()), versionCheck = vI[[j]]) : 
      there is no package called ‘usethis’
    Calls: :: ... loadNamespace -> withRestarts -> withOneRestart -> doWithOneRestart
    Execution halted
    

    Rscript should not be run at all for a Python assignment, so it's likely an easy fix.

    I can get around it quite easily by just deleting requirements.r before I upload the zip file to Gradescope.
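The deletion can also be done directly on the generated archive without rebuilding it. A sketch assuming the Info-ZIP zip/unzip CLIs are available (the requirements file contents here are placeholders):

```shell
# Build a stand-in autograder.zip containing both requirements files.
printf 'pandas\n' > requirements.txt
printf 'install.packages("testthat")\n' > requirements.r
zip -q autograder.zip requirements.txt requirements.r

# The workaround: delete requirements.r from the zip before uploading.
zip -q -d autograder.zip requirements.r

# Confirm only requirements.txt remains.
unzip -l autograder.zip
```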

    Versions Otter: 1.1.6 Python: 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0]

    bug 
    opened by tbrown122387 10
  • UnicodeDecodeError

    Describe the bug A UnicodeDecodeError occurs when running grader.check_all() in a student Jupyter notebook in JupyterLab Desktop on Windows 11 if a question uses Japanese.

    • No such error occurs for grader.check() on individual questions.
    • No such error occurs if grader.check_all() on the same file is run in JupyterLab Desktop on a Mac.

    To Reproduce Steps to reproduce the behavior:

    1. otter assign the attached file on a Mac.
    2. Open a student Jupyter notebook in JupyterLab Desktop on Windows 11 Pro.
    3. Run all commands, including grader.check_all().
    4. See the error message attached below.

    Expected behavior No such error occurs

    Versions otter-grader 4.0.1 JupyterLab Desktop 3.4.5-1

    Additional context

    cp932.ipynb.txt

    The error message

    ---------------------------------------------------------------------------
    UnicodeDecodeError                        Traceback (most recent call last)
    Input In [5], in <cell line: 1>()
    ----> 1 grader.check_all()
    
    File ~\AppData\Roaming\jupyterlab-desktop\jlab_server\lib\site-packages\otter\check\utils.py:151, in grading_mode_disabled(wrapped, self, args, kwargs)
        149 if type(self)._grading_mode:
        150     return
    --> 151 return wrapped(*args, **kwargs)
    
    File ~\AppData\Roaming\jupyterlab-desktop\jlab_server\lib\site-packages\otter\check\utils.py:188, in logs_event.<locals>.event_logger(wrapped, self, args, kwargs)
        186 except Exception as e:
        187     self._log_event(event_type, success=False, error=e)
    --> 188     raise e
        190 else:
        191     self._log_event(event_type, results=results, question=question, shelve_env=shelve_env)
    
    File ~\AppData\Roaming\jupyterlab-desktop\jlab_server\lib\site-packages\otter\check\utils.py:182, in logs_event.<locals>.event_logger(wrapped, self, args, kwargs)
        179     question, results, shelve_env = wrapped(*args, **kwargs)
        181 else:
    --> 182     results = wrapped(*args, **kwargs)
        183     shelve_env = {}
        184     question = None
    
    File ~\AppData\Roaming\jupyterlab-desktop\jlab_server\lib\site-packages\otter\check\notebook.py:438, in Notebook.check_all(self)
        432 """
        433 Runs all tests on this notebook. Tests are run against the current global environment, so any
        434 tests with variable name collisions will fail.
        435 """
        436 self._log_event(EventType.BEGIN_CHECK_ALL)
    --> 438 tests = list_available_tests(self._path, self._resolve_nb_path(None, fail_silently=True))
        440 global_env = inspect.currentframe().f_back.f_back.f_back.f_globals
        442 self._logger.debug(f"Found available tests: {', '.join(tests)}")
    
    File ~\AppData\Roaming\jupyterlab-desktop\jlab_server\lib\site-packages\otter\check\utils.py:221, in list_available_tests(tests_dir, nb_path)
        218         raise ValueError("Tests directory does not exist and no notebook path provided")
        220     with open(nb_path) as f:
    --> 221         nb = json.load(f)
        223     tests = list(nb["metadata"][NOTEBOOK_METADATA_KEY]["tests"].keys())
        225 return sorted(tests)
    
    File ~\AppData\Roaming\jupyterlab-desktop\jlab_server\lib\json\__init__.py:293, in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
        274 def load(fp, *, cls=None, object_hook=None, parse_float=None,
        275         parse_int=None, parse_constant=None, object_pairs_hook=None, **kw):
        276     """Deserialize ``fp`` (a ``.read()``-supporting file-like object containing
        277     a JSON document) to a Python object.
        278 
       (...)
        291     kwarg; otherwise ``JSONDecoder`` is used.
        292     """
    --> 293     return loads(fp.read(),
        294         cls=cls, object_hook=object_hook,
        295         parse_float=parse_float, parse_int=parse_int,
        296         parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
    
    UnicodeDecodeError: 'cp932' codec can't decode byte 0x82 in position 508: illegal multibyte sequence
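The last frame is the likely culprit: the notebook is opened without an explicit encoding, so Python falls back to the Windows locale codec (cp932 here) even though notebooks are UTF-8. A minimal sketch of the distinction, using a temporary stand-in file rather than a real notebook:

```python
import json
import tempfile

# A notebook-like JSON payload containing Japanese text, written as UTF-8.
nb_json = '{"cells": [], "metadata": {"title": "課題"}}'
with tempfile.NamedTemporaryFile(
    "w", encoding="utf-8", suffix=".ipynb", delete=False
) as f:
    f.write(nb_json)
    path = f.name

# open(path) with no encoding= uses the platform default (cp932 on Japanese
# Windows), which fails on multibyte UTF-8 sequences. Passing
# encoding="utf-8" explicitly reads the file correctly on every platform.
with open(path, encoding="utf-8") as f:
    nb = json.load(f)

print(nb["metadata"]["title"])  # 課題
```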
    
    bug 
    opened by spring-haru 9
  • Otter grading fails when checking more than 2 matplotlib objects

    Describe the bug

    I'm trying to create exercises for students to try different visualizations with matplotlib. I'm checking their work by saving the output to an axis object and then interrogating its properties.

    In general:

    ax_a = sns.barplot(data = table.to_df(), x = 'col2', y = 'col1', hue='col3') # SOLUTION
    

    and then in the tests

    # Test #
    xticks = {tick.get_text() for tick in ax_a.get_xticklabels()}
    expected = {'True', 'False'}
    assert (xticks == expected), 'Wrong patient ticks, did you plot right independent variable?'
    

    This all works great.

    Then, for some reason, when more than 2 plots are in the notebook, the otter autochecker fails, yet it generates notebooks that pass when run in interactive Jupyter notebooks.

    To Reproduce

    I've attached a minimal_example.ipynb file: minimal_example.ipynb.txt. Currently, when running otter assign it fails, yet it creates an autograder solution that passes in the distribution folder.

    If you delete the third question, otter assign runs correctly. It doesn't even matter if it's part of the doc tests. Just generating the third plot causes the previous two to fail. This is even more perplexing because I intentionally saved each plot into its own axis so I could robustly validate them even if the student ran things out of order.

    Expected behavior otter assign should succeed with 3 or more plots in the notebook.

    Versions

    Interactive notebooks: otter-grader 1.1.6 and the general Colab environment.

    Generating/grading: otter-grader 2.1.7, Python 3.7.10

    Additional context

    Is there an alternative strategy for validating the "correct plot" was made aside from manual mode?
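For context, the property-based approach described above can be made fully self-contained. A sketch with hypothetical data, using the Agg backend so no display is needed:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display required
import matplotlib.pyplot as plt

# Save the plot into its own Axes object, then interrogate its properties
# instead of comparing rendered images.
fig, ax = plt.subplots()
ax.bar([0, 1], [3, 5])
ax.set_xticks([0, 1])
ax.set_xticklabels(["False", "True"])
ax.set_xlabel("col2")

xticks = {tick.get_text() for tick in ax.get_xticklabels()}
assert xticks == {"True", "False"}, "Wrong ticks, did you plot the right variable?"
assert ax.get_xlabel() == "col2"
```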

    bug 
    opened by JudoWill 9
  • Optimize & fix docker grading

    Hello 😀 There have been some minor issues using the provided setup.sh, and I would like to provide a fix:

    • Because the otter-grader base image is Ubuntu 20.04, texlive-generic-recommended is replaced by texlive-plain-generic
    • Use wkhtmltox_0.12.6-1.focal_amd64.deb instead of wkhtmltox_0.12.6-1.bionic_amd64.deb
    • Installing libcurl4-gnutls-dev and libcurl4-openssl-dev at once causes conflicts.
    • You can reduce the number of image layers by merging the RUN commands in the Dockerfile
    opened by fritterhoff 9
  • Speed up building R autograders on gradescope

    Describe the bug Building an R autograder on Gradescope takes about 25 min each time, while building a Python one takes about 5 min. There are two steps where R autograders on Gradescope get stuck for ~10 min with no additional output in the console:

    Just after installing the conda packages (this takes about 3 seconds in Python): [screenshot]

    Just after running apt-get clean (this takes about 2-3 minutes for Python): [screenshot]

    To Reproduce Upload the example R autograder config with no additional packages to Gradescope.

    Expected behavior Faster build times for R would be very helpful. From the sample setup.sh in the docs it seems like several of the steps could be skipped with access to the ucbdsinfra/otter-grader container image. Is there any way this could be distributed and uploaded to Gradescope as part of the zip file (or something along those lines), so that the wait time is less than 5 min?

    It would also be great if there could be output to console during the most time-consuming steps to know what is taking such a long time.

    Versions 3.10, 4.0.2

    enhancement 
    opened by joelostblom 8
  • Error encountered while generating and submitting PDF

    Describe the bug

    Error encountered while generating and submitting PDF: There was an error generating your LaTeX; showing full error message: Failed to run "['xelatex', 'notebook.tex', '-quiet']" command: This is XeTeX, Version 3.141592653-2.6-0.999993 (TeX Live 2022/dev/Debian) (preloaded format=xelatex) restricted \write18 enabled.

    If the error above is related to xeCJK or fandol in LaTeX and you don't require this functionality, try running again with no_xecjk set to True or the --no-xecjk flag.

    To Reproduce Steps to reproduce the behavior:

    1. Prepare the assignment notebook
    2. Run otter assign
    3. Upload the zip file under autograder to Gradescope
    4. Test the autograder by submitting a solution notebook

    Expected behavior A pdf should be generated and submitted to another Gradescope assignment.

    Versions Python: 3.7.4 Otter: 4.2.1 nbconvert: 6.4.4

    Additional context Besides the failure mentioned above, I am also wondering how I can set no_xecjk to True. I can do that locally, but it is not one of the configurations in the Assignment Metadata.

    Also I have tried these two commands:

    otter export -v   --no-xecjk .\Homework1_F22_Solution.ipynb dist
    
    otter export -v  --exporter html  .\Homework1_F22_Solution.ipynb dist
    

    These give Failed to run "xelatex notebook.tex -quiet" command: and otter.export.utils.WkhtmltopdfNotFoundError: PDF via HTML indicated but wkhtmltopdf not found, respectively.

    bug 
    opened by ericchouzyb 0
  • Executing otter-grader v4.2.1 for RMD files fails to generate different versions (student, grader) of the RMD master file

    Describe the bug Executing the ottr-sample with a new otter installation (v4.0). After executing the following command in the console: otter assign hw01.Rmd dist

    with otter v4.2.1 the process runs silently without any errors (even with the --verbose and --debug flags). However, the resulting Rmd files are identical: no extraction of the R blocks and otter-grader instructions is performed in either version.

    To Reproduce Steps to reproduce the behavior:

    1. Install otter-grader v4.2.1
    2. Download the ottr-sample directory
    3. Execute otter assign hw01.Rmd dist in the ottr-sample directory
    4. There is no difference between ./dist/autograder/hw01.Rmd and ./dist/student/hw01.Rmd

    Expected behavior If there is any problem converting the Rmd file, the command should report an error.

    Versions python = 3.8.5 otter-grader = 4.2.1

    Additional context I downgraded otter-grader to v3.3.0, and the parsing of the Rmd document works fine.

    bug 
    opened by opterix 0
  • Can't create raw cells in Deepnote/Colab

    From an instructor that filled out the Otter-Grader Adoption Survey:

    One pain point of Otter is that it requires using Raw cells for denoting start/end of sections. Some notebook tools (Deepnote, Colab, etc) don't allow for the creation of Raw cells, which makes authoring notebooks using Otter v1 impossible.

    enhancement 
    opened by surajrampure 1
  • Multiline statements cause test failures

    Describe the bug Having any multiline statement in a test cell leads to a test failure because of unmatched parentheses, although the code works fine when executed in the Jupyter Notebook.

    For example, a test that looks like this:

    assert y.find(
        'a'
    ) == 1
    

    Will work fine in Jupyter and with otter assign, but when doing otter run it will throw the following error:

    /home/joel/miniconda3/envs/573/lib/python3.10/site-packages/nbformat/__init__.py:128: MissingIDFieldWarning: Code cell is missing an id field, this will become a hard error in future nbformat versions. You may want to use `normalize()` on your notebooks before validations (available since nbformat 5.1.4). Previous versions of nbformat are fixing this issue transparently, and will stop doing so in the future.
      validate(nb)
    Traceback (most recent call last):
      File "/home/joel/miniconda3/envs/573/bin/otter", line 8, in <module>
        sys.exit(cli())
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
        return self.main(*args, **kwargs)
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/click/core.py", line 1055, in main
        rv = self.invoke(ctx)
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/click/core.py", line 1657, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/click/core.py", line 760, in invoke
        return __callback(*args, **kwargs)
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/otter/cli.py", line 32, in wrapper
        return f(*args, **kwargs)
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/otter/cli.py", line 65, in assign_cli
        return assign(*args, **kwargs)
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/otter/assign/__init__.py", line 177, in main
        run_tests(
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/otter/assign/utils.py", line 216, in run_tests
        raise RuntimeError(f"Some autograder tests failed in the autograder notebook:\n" + \
    RuntimeError: Some autograder tests failed in the autograder notebook:
        q1 results:
            q1 - 1 result:
                ❌ Test case failed
                Trying:
                    assert y.find(
                        'a'
                Expecting nothing
                **********************************************************************
                Line 1, in q1 0
                Failed example:
                    assert y.find(
                        'a'
                Exception raised:
                    Traceback (most recent call last):
                      File "/home/joel/miniconda3/envs/573/lib/python3.10/doctest.py", line 1350, in __run
                        exec(compile(example.source, filename, "single",
                      File "<doctest q1 0[0]>", line 1
                        assert y.find(
                                     ^
                    SyntaxError: '(' was never closed
                Trying:
                    ) == 1
                Expecting nothing
                **********************************************************************
                Line 3, in q1 0
                Failed example:
                    ) == 1
                Exception raised:
                    Traceback (most recent call last):
                      File "/home/joel/miniconda3/envs/573/lib/python3.10/doctest.py", line 1350, in __run
                        exec(compile(example.source, filename, "single",
                      File "<doctest q1 0[1]>", line 1
                        ) == 1
                        ^
                    SyntaxError: unmatched ')'
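The traceback points at doctest, which treats each unprefixed physical line as a separate example; that is why the multi-line statement gets split into two unparseable fragments. A small sketch of the parsing behavior (standard-library doctest, not Otter's internals):

```python
import doctest

parser = doctest.DocTestParser()

# With "..." continuation prompts, the multi-line assert parses as a single
# example. Without them, each physical line stands alone, producing exactly
# the unmatched-parenthesis errors shown in the traceback above.
src = '''
>>> y = "bar"
>>> assert y.find(
...     'a'
... ) == 1
'''
examples = parser.get_examples(src)
print(len(examples))  # 2 examples: the assignment and one multi-line assert
```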
    

    To Reproduce Set up a notebook question with a test cell that spans multiple lines (screenshot attached).

    Expected behavior It is useful to be able to split code over multiple lines to make it more readable when doing method chaining and similar, so it would be great if this did not lead to an error but worked, since it is valid Python syntax.

    Versions Otter 4.2.0 Python 3.10.x

    bug 
    opened by joelostblom 0
  • Add progress indicator to `otter assign`

    Is your feature request related to a problem? Please describe. On long assignments, it can be hard to know whether otter assign is stuck or it is just taking a long time to go through all the questions. Getting some feedback on the grader's progress would be nice.

    Describe the solution you'd like It would be convenient if otter assign printed which question it is executing. A fancier version would add a progress meter such as https://tqdm.github.io/. Something like this should work well:

    from tqdm import tqdm

    ...

    question_ids = ...  # all question ids in the assignment
    question_progress_bar = tqdm(question_ids)
    for question_id in question_progress_bar:
        question_progress_bar.set_description(question_id)
        ...

    enhancement 
    opened by joelostblom 1
  • Make `# SOLUTION` compatible with multi-line assignment

    Is your feature request related to a problem? Please describe. It would be convenient if # SOLUTION tags worked when the assigned expression is split over multiple lines, e.g.

    x = {  # SOLUTION
        'one': 1,
        'two': 2
    }
    

    which currently becomes:

    x = ...
        'one': 1,
        'two': 2
    }
    

    Describe the solution you'd like I would like the output of the above to be:

    x = ...
    

    Describe alternatives you've considered I am currently doing this, which works but is a bit more typing:

    x = {  # SOLUTION
    # BEGIN SOLUTION NO PROMPT
        'one': 1,
        'two': 2
    }
    # END SOLUTION
    
    enhancement 
    opened by joelostblom 0
Releases: v4.2.1
Owner: Infrastructure Team at UC Berkeley Data Science Education Program