Repository of best practices for deep learning in Julia, inspired by fastai

Overview

FastAI

Docs: Stable | Dev

FastAI.jl is inspired by fastai and is a repository of best practices for deep learning in Julia. Its goal is to make it easy to create state-of-the-art models: FastAI enables the design, training, and delivery of deep learning models that compete with the best in class, using just a few lines of code.

Install with

using Pkg
Pkg.add("FastAI")

or try it out with this Google Colab template.

As an example, here is how to train an image classification model:

using FastAI
data, blocks = loaddataset("imagenette2-160", (Image, Label))
method = ImageClassificationSingle(blocks)
learner = methodlearner(method, data, callbacks=[ToGPU()])
fitonecycle!(learner, 10)
showoutputs(method, learner)

Please read the documentation for more information and see the setup instructions.

Comments
  • Add Time Series Block

    Add Time Series Block

    Added the Time Series Container and Block. It is capable of loading all datasets from timeseriesclassification. The .ts files are loaded using a Julia translation of this method.

    I have also added a basic test case for the recipe. This allows us to do the following

    using FastAI
    
    data, blocks = load(datarecipes()["ecg5000"])
    nobs(data)
    sample = getobs(data, 10)
    series, class = sample
    

    Just wanted to get some initial thoughts on the work; there might be more changes as I continue to work on the other parts.

    opened by codeboy5 23
  • TagBot trigger issue

    TagBot trigger issue

    This issue is used to trigger TagBot; feel free to unsubscribe.

    If you haven't already, you should update your TagBot.yml to include issue comment triggers. Please see this post on Discourse for instructions and more details.

    If you'd like for me to do this for you, comment TagBot fix on this issue. I'll open a PR within a few hours, please be patient!

    opened by JuliaTagBot 17
  • Blocks and container added for Text Dataset

    Blocks and container added for Text Dataset

    Registered the NLP (text) datasets to be added in the upcoming months and added functions for the blocks of the Text dataset. All the registered NLP datasets, along with their forthcoming models, will be added. I am exploring JuliaText, MLUtils, and other packages along with FastAI concepts so that these datasets work well with Flux. Since almost all the text datasets are in CSV format, it will be easy to load them and create the corresponding containers; I am working on further concepts to implement these text datasets.

    Currently I have added the entire basic structure of the text data, comprising the blocks and the containers. I have done a lot of research over the past week (understanding the FastAI docs and codebase). I am currently working on adding the textrow block along with recipes.jl, and on two datasets, "imdb" and "amazon_review_full"; both have different folder structures, so different blocks will be required. I am also going through the two papers that built state-of-the-art models for these two datasets and working on their implementation. Any reviews thus far will be appreciated.

    Reopened PR #100; I needed to delete that repo due to a merging issue.

    opened by arcAman07 12
  • InceptionTime Model for Time Series

    InceptionTime Model for Time Series

    This PR will contain the implementation of the InceptionTime model and its use for classification and regression tasks.

    Some of the commits from PR #253 are also in this PR, but I will take care of them once that PR is merged.

    opened by codeboy5 9
  • Added Model for Time Series Classification

    Added Model for Time Series Classification

    I have added the code for a basic RNN Model for the task of time series classification.

    > data, blocks = load(datarecipes()["ecg5000"]);
    > task = TSClassificationSingle(blocks, data);
    > model = FastAI.taskmodel(task);
    > traindl, validdl = taskdataloaders(data, task, 32);
    > callbacks = [ToGPU(), Metrics(accuracy)];
    > learner = Learner(model, tasklossfn(task); data=(traindl, validdl), optimizer=ADAM(), callbacks = callbacks);
    > fitonecycle!(learner, 10, 0.033)
    

    As I discussed with @darsnack, the idea is to add an encoding that does the reshaping to (features, batch, time) instead of doing it inside the RNN model, as sketched below. Working on that right now.
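
    For reference, a minimal sketch of the reshaping such an encoding would perform (array sizes are illustrative only, and this is not the final encoding's code):

    # A batch of univariate series stored as a (time, batch) matrix:
    batch = rand(Float32, 140, 32)                         # 140 time steps, 32 series

    # Target layout is (features, batch, time); univariate series have one feature.
    x = reshape(batch, 1, size(batch, 1), size(batch, 2))  # (features, time, batch)
    x = permutedims(x, (1, 3, 2))                          # (features, batch, time)
    size(x)                                                # (1, 32, 140)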

    I have removed the type parameter from StackedLSTM as it was redundant.

    opened by codeboy5 9
  • FastAI seems very slow compared to "vanilla" Flux

    FastAI seems very slow compared to "vanilla" Flux

    When I try to train a simple ResNet on the CIFAR10 dataset, FastAI seems very slow compared to Flux (≈ 9-19 times slower). It seems it could be a garbage-collector problem, because with Flux I can use a batch size of 512, while with FastAI I can't exceed 128 without getting an out-of-memory error.

    FastAI code:

    using FastAI
    using ResNet9 # Pkg.add(url = "https://github.com/a-r-n-o-l-d/ResNet9.jl", rev="v0.1.1")
    
    data, blocks = loaddataset("cifar10", (Image, Label))
    method = ImageClassificationSingle(blocks)
    model = resnet9(inchannels=3, nclasses=10, dropout=0.0)
    learner = methodlearner(method, data; 
        lossfn=Flux.crossentropy,
        callbacks=[ToGPU()],
        batchsize=16,
        model=model,
        optimizer=Descent())
    
    @time fitonecycle!(learner, 5, 1f-3, pct_start=0.5, divfinal=100, div=100)
    

    Flux code:

    using Flux
    using Flux: DataLoader, onehotbatch
    using Augmentor
    using MLDatasets
    using ParameterSchedulers
    using ParameterSchedulers: Scheduler
    using ResNet9 # Pkg.add(url = "https://github.com/a-r-n-o-l-d/ResNet9.jl", rev="v0.1.1")
    
    normpip = SplitChannels() |> PermuteDims(3, 2, 1) |> ConvertEltype(Float32)
    
    labels = CIFAR10.classnames() .|> Symbol
    
    function datasets(batchsize)
        train = let
            x = CIFAR10.traintensor() |> CIFAR10.convert2image
            y = map(i -> labels[i + 1], CIFAR10.trainlabels())
            DataLoader((x, y), batchsize = batchsize, shuffle = true, partial = false)
        end
    
        test = let
            x = CIFAR10.testtensor() |> CIFAR10.convert2image
            y = map(i -> labels[i + 1], CIFAR10.testlabels())
            DataLoader((x, y), batchsize = batchsize)
        end
        
        train, test
    end
    
    function minibatch(x, y)
        h, w, n = size(x)
        xb = Array{Float32}(undef, w, h, 3, n)
        augmentbatch!(CPUThreads(), xb, x, normpip)
        yb = onehotbatch(y, labels)
        xb, yb
    end
    
    function train!(model, optimiser, nepochs)
        loss_hist = []
        loss(x, y) = Flux.crossentropy(model(x), y)
        ps = params(model)
        for e in 1:nepochs
            # Training phase
            tloss = 0
            trainmode!(model)
            for (x, y) ∈ train
                x, y = minibatch(x, y) |> gpu
                gs = gradient(ps) do
                    l = loss(x, y)
                    tloss += l
                    l
                end
                Flux.Optimise.update!(optimiser, ps, gs)
            end
            tloss /= length(train)
            # Validation phase
            testmode!(model)
            vloss = 0
            for (x, y) ∈ test
                x, y = minibatch(x, y) |> gpu
                vloss += loss(x, y)
            end
            vloss /= length(test)
            push!(loss_hist, (tloss, vloss))
        end
        
        loss_hist
    end
    
    train, test = datasets(16)
    nepochs = 5
    s = Triangle(λ0 = 1f-5, λ1 = 1f-3, period = nepochs * length(train))
    opt = Scheduler(s, Descent())
    model = resnet9(inchannels = 3, nclasses = 10, dropout = 0.0) |> gpu
    @time train!(model, opt, nepochs)
    

    Results on an RTX 2080 Ti:
    FastAI: 1841.008685 seconds (3.92 G allocations: 212.561 GiB, 59.59% gc time, 0.00% compilation time)
    Flux: 98.444806 seconds (106.49 M allocations: 16.643 GiB, 3.58% gc time, 2.58% compilation time)

    Results on a Quadro P5000:
    FastAI: 1574.714976 seconds (3.92 G allocations: 212.473 GiB, 11.08% gc time)
    Flux: 177.416636 seconds (105.55 M allocations: 16.639 GiB, 2.05% gc time, 1.42% compilation time)

    opened by a-r-n-o-l-d 9
  • Discriminative learning rates

    Discriminative learning rates

    Discriminative learning rates means using different learning rates for different parts of a model, so-called layer groups. This is used in fastai when fine-tuning models.
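
    For illustration, here is a minimal hand-rolled sketch of the idea in plain Flux, splitting a toy model into two layer groups and giving each group its own optimizer. The model shapes, learning rates, and two-group split are assumptions for the sketch, not FastAI.jl's eventual API.

    using Flux

    # Two layer groups: a "backbone" and a "head" (toy model; shapes are illustrative).
    backbone = Chain(Dense(10, 32, relu), Dense(32, 32, relu))
    head = Dense(32, 2)
    model = Chain(backbone, head)

    loss(x, y) = Flux.logitcrossentropy(model(x), y)

    x = rand(Float32, 10, 8)
    y = Flux.onehotbatch(rand(1:2, 8), 1:2)

    # One optimizer per layer group, with a smaller learning rate for the backbone.
    opt_backbone = Descent(1f-4)
    opt_head = Descent(1f-2)

    gs = gradient(() -> loss(x, y), Flux.params(model))

    # Apply each group's optimizer only to that group's parameters.
    Flux.Optimise.update!(opt_backbone, Flux.params(backbone), gs)
    Flux.Optimise.update!(opt_head, Flux.params(head), gs)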

    enhancement numfocusgrant fastai-parity 
    opened by lorenzoh 9
  • word and character level word tokenizer.

    word and character level word tokenizer.

    Some miscellaneous changes with the tokenizer being the main focus. Implements the LearnBase getobs and nobs methods. Uses the WordTokenizers module.
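
    For context, a rough sketch of what word- and character-level tokenization with WordTokenizers could look like (the example text and the naive character-level scheme are assumptions, not necessarily what this PR implements):

    using WordTokenizers  # provides `tokenize`

    text = "FastAI.jl brings fastai-style workflows to Julia."

    word_tokens = tokenize(text)   # word-level tokens
    char_tokens = collect(text)    # character-level tokens (naive sketch)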

    Closes #24

    opened by SamuelzXu 9
  • Add Container and Block for Text

    Add Container and Block for Text

    Tried starting with a simple textual recipe based on the ImageFolders dataset recipe. This specifically works for imdb and similar datasets. Any feedback is highly appreciated.

    opened by Chandu-4444 8
  • Added Time Series Container and Block

    Added Time Series Container and Block

    Added the Time Series Container and Block. Currently it can only load univariate time series. This is work in progress for issue #155. I was planning to add a loaddataset function for such datasets. Currently all datasets have the same root URL: "https://s3.amazonaws.com/fast-ai-". For time series datasets the root URL is different, so I think we can proceed by adding a root_url field to the FastAIDataset structure (rough sketch below). How does this sound?
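
    To make the proposal concrete, a purely illustrative sketch (field names, dataset names, and the URL are hypothetical, not the actual FastAIDataset definition):

    # Hypothetical sketch: carry the root URL in each dataset entry instead of
    # hardcoding a single value for all datasets.
    Base.@kwdef struct DatasetEntry
        name::String
        datadepname::String
        root_url::String = "https://s3.amazonaws.com/fast-ai-"   # current default
    end

    # A time-series dataset could then point at a different host:
    entry = DatasetEntry(name = "ECG5000", datadepname = "timeseries-ecg5000",
                         root_url = "https://example.com/timeseries/")   # placeholder URL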

    opened by codeboy5 8
  • Problem in ResNet50 backbone of "Image segmentation" example

    Problem in ResNet50 backbone of "Image segmentation" example

    I suspect that the following code line from the Image segmentation example

    backbone = Metalhead.ResNet50(pretrain=true).layers[1:end-3]
    

    is not doing what is intended, since ResNet50 from Metalhead (https://github.com/darsnack/Metalhead.jl/tree/darsnack/vision-refactor) returns a 2-item Chain (backbone and head), and the 1:end-3 indexing returns an empty Chain.

    Funnily enough, with the model returned by methodmodel (basically a Conv((1,1), 3=>32)) the example still works and is able to produce some image segmentation (does it work like just a pixel color indexer?).

    I'd say the expected code should be something like Metalhead.ResNet50(pretrain=true).layers[1].layers, and I would open a PR, but I'm not sure since, with that, the example fails later in the training loop.
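
    Putting the two observations side by side (this only restates the behaviour described above; the suggested fix is unverified):

    using Metalhead

    model = Metalhead.ResNet50(pretrain = true)

    # `model.layers` is a 2-item Chain (backbone, head), so dropping the last
    # three entries leaves an empty Chain, as reported above:
    model.layers[1:end-3]

    # Suspected fix from this issue (unverified; training reportedly fails later):
    backbone = model.layers[1].layers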

    opened by cdsousa 7
  • CompatHelper: bump compat for MLUtils to 0.4, (keep existing compat)

    CompatHelper: bump compat for MLUtils to 0.4, (keep existing compat)

    This pull request changes the compat entry for the MLUtils package from 0.2.6, 0.3 to 0.2.6, 0.3, 0.4. This keeps the compat entries for earlier versions.

    Note: I have not tested your package with this new compat entry. It is your responsibility to make sure that your package tests pass before you merge this pull request.

    opened by github-actions[bot] 0
  • loading datasets fails under proxy, but Base.download works

    loading datasets fails under proxy, but Base.download works

    Package Version

    0.5.0

    Julia Version

    1.8.3

    OS / Environment

    Windows10

    Describe the bug

    The downloads do not work under a proxy, although Base.download and Downloads.download work just fine. HTTP_PROXY and HTTPS_PROXY are set properly, e.g. ENV["HTTP_PROXY"] = "http://127.0.0.1:3128".

    using FastAI
    
    imagenette2_url = "https://s3.amazonaws.com/fast-ai-imageclas/imagenette2-160.tgz"
    FastAI.load(datasets()["imagenette2-160"])
    
    Do you want to download the dataset from https://s3.amazonaws.com/fast-ai-imageclas/imagenette2-160.tgz to "D:\z_installed_programs\julia-depot\datadeps\fastai-imagenette2-160"?
    [y/n]
    y
    ERROR: HTTP.Exceptions.RequestError(HTTP.Messages.Request:
    """
    GET /fast-ai-imageclas/imagenette2-160.tgz HTTP/1.1
    Host: s3.amazonaws.com
    Accept: */*
    User-Agent: HTTP.jl/1.8.3
    Content-Length: 0
    
    [Message Body was streamed]""", Base.IOError("X509 - Certificate verification failed, e.g. CRL, CA or signature check failed", -9984))
    

    Using Downloads.download or Base.download works just fine under the same proxy conditions.

    import Downloads
    imagenette2_url = "https://s3.amazonaws.com/fast-ai-imageclas/imagenette2-160.tgz"
    Downloads.download(imagenette2_url, "imagenette2-160.tgz")
    

    Steps to Reproduce

    see above

    Expected Results

    see above

    Observed Results

    see above

    Relevant log output

    see above

    bug 
    opened by MariusDrulea 0
  • the dataset is deleted right after download in Windows10

    the dataset is deleted right after download in Windows10

    Package Version

    0.5.0

    Julia Version

    1.8.3

    OS / Environment

    Windows10

    Describe the bug

    I just ran the following code to download the coco_sample dataset: FastAI.load(datasets()["coco_sample"]). The download is successful. After the download, 7zip is called to unpack the archive. After the unzipping, the following error occurs. It looks like the script tries to delete the folder it just created, fastai-coco_sample. This happens with all the datasets.

    ERROR: LoadError: IOError: rm("D:\\z_installed_programs\\julia-depot\\datadeps\\fastai-coco_sample"): resource busy or locked (EBUSY)
    

    Note that I have Julia's DEPOT_PATH environment variable set to D:\\z_installed_programs\\julia-depot instead of the default home directory of the user.

    Steps to Reproduce

    using FastAI
    FastAI.load(datasets()["coco_sample"])
    

    Expected Results

    get the coco sample dataset on the PC

    Observed Results

    The archive of the coco sample is downloaded and unzipped, then the error occurs, and the fastai-coco_sample folder containing the archive and the unzipped data is deleted.

    Relevant log output

    ERROR: LoadError: IOError: rm("D:\\z_installed_programs\\julia-depot\\datadeps\\fastai-coco_sample"): resource busy or locked (EBUSY)
    Stacktrace:
      [1] uv_error
        @ .\libuv.jl:97 [inlined]
      [2] rm(path::String; force::Bool, recursive::Bool)
        @ Base.Filesystem .\file.jl:306
      [3] checkfor_mv_cp_cptree(src::String, dst::String, txt::String; force::Bool)
        @ Base.Filesystem .\file.jl:330
      [4] #mv#15
        @ .\file.jl:425 [inlined]
      [5] (::FastAI.Datasets.var"#10#11")(f::String)
        @ FastAI.Datasets D:\z_installed_programs\julia-depot\packages\FastAI\as9UG\src\datasets\fastaidatasets.jl:261
      [6] #16
        @ D:\z_installed_programs\julia-depot\packages\DataDeps\ae6dT\src\resolution_automatic.jl:122 [inlined]
      [7] cd(f::DataDeps.var"#16#17"{FastAI.Datasets.var"#10#11", String}, dir::String)
        @ Base.Filesystem .\file.jl:101
      [8] run_post_fetch(post_fetch_method::FastAI.Datasets.var"#10#11", fetched_path::String)
        @ DataDeps D:\z_installed_programs\julia-depot\packages\DataDeps\ae6dT\src\resolution_automatic.jl:119
      [9] download(datadep::DataDeps.DataDep{String, String, typeof(DataDeps.fetch_default), FastAI.Datasets.var"#10#11"}, localdir::String; remotepath::String, i_accept_the_terms_of_use::Nothing, skip_checksum::Bool)
        @ DataDeps D:\z_installed_programs\julia-depot\packages\DataDeps\ae6dT\src\resolution_automatic.jl:84
     [10] download
        @ D:\z_installed_programs\julia-depot\packages\DataDeps\ae6dT\src\resolution_automatic.jl:63 [inlined]
     [11] handle_missing
        @ D:\z_installed_programs\julia-depot\packages\DataDeps\ae6dT\src\resolution_automatic.jl:10 [inlined]
     [12] _resolve
        @ D:\z_installed_programs\julia-depot\packages\DataDeps\ae6dT\src\resolution.jl:83 [inlined]
     [13] resolve(datadep::DataDeps.DataDep{String, String, typeof(DataDeps.fetch_default), FastAI.Datasets.var"#10#11"}, inner_filepath::String, calling_filepath::String)
        @ DataDeps D:\z_installed_programs\julia-depot\packages\DataDeps\ae6dT\src\resolution.jl:29
     [14] resolve(datadep_name::String, inner_filepath::String, calling_filepath::String)
        @ DataDeps D:\z_installed_programs\julia-depot\packages\DataDeps\ae6dT\src\resolution.jl:54
     [15] resolve
        @ D:\z_installed_programs\julia-depot\packages\DataDeps\ae6dT\src\resolution.jl:73 [inlined]
     [16] makeavailable
        @ D:\z_installed_programs\julia-depot\packages\FastAI\as9UG\src\datasets\loaders.jl:46 [inlined]
     [17] loaddata(loader::FastAI.Datasets.DataDepLoader)
        @ FastAI.Datasets D:\z_installed_programs\julia-depot\packages\FastAI\as9UG\src\datasets\loaders.jl:50
     [18] (::FastAI.Registries.var"#8#13")(row::NamedTuple{(:id, :description, :size, :tags, :package, :downloaded, :loader), Tuple{String, Union{Missing, String}, Union{Missing, String}, Vector{String}, Module, Bool, FastAI.Datasets.DatasetLoader}})
        @ FastAI.Registries D:\z_installed_programs\julia-depot\packages\FastAI\as9UG\src\Registries\datasets.jl:38
     [19] load(entry::FeatureRegistries.RegistryEntry; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
        @ FeatureRegistries D:\z_installed_programs\julia-depot\packages\FeatureRegistries\FBMLI\src\registry.jl:135
     [20] load
        @ D:\z_installed_programs\julia-depot\packages\FeatureRegistries\FBMLI\src\registry.jl:135 [inlined]
    
    bug 
    opened by MariusDrulea 2
  • Add Metalhead.jl models to model registry

    Add Metalhead.jl models to model registry

    This populates the model registry from #267 with models from Metalhead.jl.

    Depends on #267 as well as unreleased [email protected] . Possibly supersedes https://github.com/FluxML/Metalhead.jl/pull/153.

    See #267 for usage.

    PR Checklist

    • [ ] Tests are added
    • [ ] Documentation, if applicable
    opened by lorenzoh 0
  • Add a feature registry for models

    Add a feature registry for models

    Implements #246.

    PR Checklist

    • [x] Tests are added
    • [x] Documentation (this is an internal change for now, docs will be added in follow-up when functionality is made available and used in domain package)

    Usage examples

    From #269:

    
    using FastAI: models
    # loading this adds the models to registry
    using FastVision
    
    
    # Load original model, 1000 output classes, no weights (`ResNet(18)`):
    load(models()["metalhead/resnet18"]);
    
    # Load original model, 1000 output classes, with weights (`ResNet(18), pretrain=true`):
    load(models()["metalhead/resnet18"], pretrained = true);
    
    # Load only backbone, without weights:
    load(models()["metalhead/resnet18"], variant = "backbone");
    
    # Load only backbone, with weights:
    load(models()["metalhead/resnet18"], pretrained = true, variant = "backbone");
    
    # Load model for task, adapting layers as necessary:
    task = ImageClassificationSingle((256, 256), 1:5, C = Gray{N0f8}) # input with 1 color channel, 5 classes
    load(models()["metalhead/resnet18"], input = task.blocks.x, output = task.blocks.y)
    # Also works with pretrained weights
    load(models()["metalhead/resnet18"], pretrained = true, input = task.blocks.x, output = task.blocks.y)
    
    # Correct variants are selected automatically given the blocks:
    load(models()["metalhead/resnet18"], output = FastAI.ConvFeatures)  # uses backbone variant
    
    
    # Support for multiple checkpoints, selectable by name:
    load(models()["metalhead/resnet18"], checkpoint = "imagenet1k")
    
    

    Docs

    The proposed interface is well-described by the registry description, pasted below:

    A FeatureRegistry for models. Allows you to find and load models for various learning tasks using a unified interface. Call models() to see a table view of available models:

    using FastAI
    models()
    

    Which models are available depends on the loaded packages. For example, FastVision.jl adds vision models from Metalhead to the registry. Index the registry with a model ID to get more information about that model:

    using FastAI: models
    using FastVision  # loading the package extends the list of available models
    
    models()["metalhead/resnet18"]
    

    If you've selected a model, call load to then instantiate a model:

    model = load("metalhead/resnet18")
    

    By default, load loads a default version of the model without any pretrained weights.

    load(model) also accepts keyword arguments that allow you to specify variants of the model and weight checkpoints that should be loaded.

    Loading a checkpoint of pretrained weights:

    • load(entry; pretrained = true): Use any pretrained weights, if they are available.
    • load(entry; checkpoint = "checkpoint-name"): Use the weights with given name. See entry.checkpoints for available checkpoints (if any).
    • load(entry; pretrained = false): Don't use pretrained weights

    Loading a model variant for a specific task:

    • load(entry; input = ImageTensor, output = OneHotLabel): Load a model variant matching an input and output block.
    • load(entry; variant = "backbone"): Load a model variant by name. See entry.variants for available variants.
    opened by lorenzoh 2
Releases(v0.5.0)
  • v0.5.0(Oct 22, 2022)

    FastAI v0.5.0

    Diff since v0.4.3

    Closed issues:

    • Move datasets to MLDatasets.jl (#22)
    • MLUtils.jl transition (#196)
    • Functions getcoltypes and gettransformdict are not exported properly (#210)
    • Makie 0.17 support (#224)
    • Keypoint regression example: The input graph contains at least one loop (#231)
    • Log to TensorBoard link in TOC (#232)
    • Log to TensorBoard link in TOC (#233)
    • Docs aren't working correctly. (#237)
    • Make a subpackage for Makie support (#241)

    Merged pull requests:

    • Update TagBot.yml (#226) (@lorenzoh)
    • Improve onboarding experience (#227) (@lorenzoh)
    • Switch LearnBase + MLDataPattern + DataLoaders -> MLUtils (#229) (@lorenzoh)
    • Fix link to TensorBoard how-to (#234) (@lorenzoh)
    • Add Time Series Block (#239) (@codeboy5)
    • Move domain-specific functionality to subpackages (#240) (@lorenzoh)
    • CompatHelper: bump compat for UnicodePlots to 3, (keep existing compat) (#244) (@github-actions[bot])
    • Text classification task (#245) (@Chandu-4444)
    • Use Adam instead of ADAM (#247) (@lorenzoh)
    • CompatHelper: add new compat entry for TextAnalysis at version 0.7, (keep existing compat) (#249) (@github-actions[bot])
    • Added Model for Time Series Classification (#253) (@codeboy5)
    • InceptionTime Model for Time Series (#256) (@codeboy5)
    • Fix the broken link in the README (#257) (@nathanaelbosch)
    • CompatHelper: bump compat for PrettyTables to 2, (keep existing compat) (#260) (@github-actions[bot])
    • Fix _segmentationloss for 3D images (#261) (@itan1)
    • Update Pollen.jl documentation (#262) (@lorenzoh)
    • Fix UNet for 3D convolutions (specify ndim to convxlayer and ResBlock) (#263) (@itan1)
  • v0.4.3(May 14, 2022)

    FastAI v0.4.3

    Diff since v0.4.2

    Closed issues:

    • Register Everything (#206)

    Merged pull requests:

    • Add Container and Block for Text (#207) (@Chandu-4444)
    • Feature registries (#208) (@lorenzoh)
    • CompatHelper: add new compat entry for FeatureRegistries at version 0.1, (keep existing compat) (#225) (@github-actions[bot])
  • v0.4.2(Apr 30, 2022)

    FastAI v0.4.2

    Diff since v0.4.1

    Closed issues:

    • Keypoint Regression example in documentation (#218)
    • Documentation link broken for Custom Learning tasks (#220)

    Merged pull requests:

    • Keypoint regression example (#221) (@itan1)
    • Update to new Pollen template (#222) (@lorenzoh)
    • Add FluxTraining 0.3 compatibility (#223) (@lorenzoh)
  • v0.4.1(Apr 20, 2022)

    FastAI v0.4.1

    Diff since v0.4.0

    Closed issues:

    • Support for non-supervised learning tasks (#165)
    • LoadError in some pages of documentation. (#192)
    • Drop DLPipelines.jl (#197)
    • Update for Flux 0.13 (#201)
    • Error in reproducing the "Data containers" tutorial: "key :nothing not found" (#209)
    • Missing ProgressBars.jl import for Vision.imagedatasetstats (#214)
    • Use JpegTurbo.jl to load .jpg images (#216)

    Merged pull requests:

    • Add Flux 0.13 compatibility (#202) (@lorenzoh)
    • New documentation frontend (#203) (@lorenzoh)
    • Pollen docs update part 2 (#213) (@lorenzoh)
    • Ports over _predictx from DLPipelines.jl (#215) (@lorenzoh)
    • Fix progress bar in imagedatasetstats (#217) (@lorenzoh)
    • add ImageIO backend (#219) (@johnnychen94)
  • v0.4.0(Mar 19, 2022)

    FastAI v0.4.0

    Diff since v0.3.0

    Closed issues:

    • N-dimensional CNN models (#137)
    • Better visualization/interpretation API (#154)
    • FastAI seems very slow compared to "vanilla" Flux (#187)

    Merged pull requests:

    • change julia compat to 1.6 (#175) (@EvoArt)
    • Add support for 3D convolutional backbones (#181) (@lorenzoh)
    • Move domain-specific functionality to submodules (#186) (@lorenzoh)
    • Make block learning methods more modular (#188) (@lorenzoh)
    • CompatHelper: bump compat for LearnBase to 0.6, (keep existing compat) (#189) (@github-actions[bot])
    • Make UNet closer to fastai (#190) (@lorenzoh)
    • CompatHelper: bump compat for CSV to 0.10, (keep existing compat) (#191) (@github-actions[bot])
    • Add imdb_sample recipe (#193) (@Chandu-4444)
    • Add food-101 recipe (#194) (@Chandu-4444)
    • Any-length dimensions for Bounded (#195) (@lorenzoh)
    • Removing DLPipelines.jl; Learning method -> Learning task (#198) (@lorenzoh)
  • v0.3.0(Dec 11, 2021)

    FastAI v0.3.0

    Diff since v0.2.0

    v0.3.0

    Added

    • A new API for visualizing data. See this issue for motivation; a short usage sketch follows this list. This includes:

      • High-level functions for visualizing data related to a learning method: showsample, showsamples, showencodedsample, showencodedsamples, showbatch, showprediction, showpredictions, showoutput, showoutputs, showoutputbatch
      • Support for multiple backends, including a new text-based show backend that you can use to visualize data in a non-graphical environment. This is also the default unless Makie is imported.
      • Functions for showing blocks directly: showblock, showblocks
      • Interfaces for extension: ShowBackend, showblock!, showblocks!
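
      A short usage sketch of the new API, reusing the quickstart example from the README above (API as in the current README quickstart; details may have differed slightly at this version):

      using FastAI
      data, blocks = loaddataset("imagenette2-160", (Image, Label))
      method = ImageClassificationSingle(blocks)
      learner = methodlearner(method, data)
      fitonecycle!(learner, 1)
      showoutputs(method, learner)   # visualize model outputs with the active show backend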

    Removed

    • The old visualization API incl. all its plot* methods: plotbatch, plotsample, plotsamples, plotpredictions

    Closed issues:

    • Visualization functions are not working (#184)

    Merged pull requests:

    • CompatHelper: bump compat for CSV to 0.9, (keep existing compat) (#168) (@github-actions[bot])
    • New interpretation/Visualization API (#176) (@lorenzoh)
    • CompatHelper: add new compat entry for InlineTest at version 0.2, (keep existing compat) (#177) (@github-actions[bot])
    • CompatHelper: add new compat entry for ImageInTerminal at version 0.4, (keep existing compat) (#178) (@github-actions[bot])
    • CompatHelper: add new compat entry for Requires at version 1, (keep existing compat) (#179) (@github-actions[bot])
    • CompatHelper: add new compat entry for UnicodePlots at version 2, (keep existing compat) (#180) (@github-actions[bot])
    • Make Only more generic (#182) (@lorenzoh)
    • v0.3.0 (#185) (@lorenzoh)
  • v0.2.0(Sep 21, 2021)

    FastAI v0.2.0

    Diff since v0.1.0

    0.2.0

    Added

    Changed

    • Documentation sections to reference FasterAI interfaces:
    • Breaking changes to methodlearner (see the sketch after this list):
      • now accepts callbacks as a keyword argument
      • validdata is no longer a keyword argument
      • model and backbone are now keyword arguments; isbackbone was removed. If neither backbone nor model is given, blockbackbone supplies the default backbone.
      • see the updated docstring for details
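
      A minimal sketch of the updated call, reusing the quickstart example from the README (keyword names as listed above; other details may have differed slightly at this version):

      using FastAI
      data, blocks = loaddataset("imagenette2-160", (Image, Label))
      method = ImageClassificationSingle(blocks)

      # callbacks, batchsize, and model/backbone are now keyword arguments:
      learner = methodlearner(method, data; callbacks = [ToGPU()], batchsize = 16)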

    Closed issues:

    • FasterAI: a roadmap to user-friendly, high-level interfaces (#148)
    • Makie crashes precompilation (#156)
    • Problem in ResNet50 backbone of "Image segmentation" example (#169)

    Merged pull requests:

    • Add tabular model (#124) (@manikyabard)
    • Add tabular learning methods (#141) (@manikyabard)
    • Fix Pkg usage in developing.md (#149) (@amqdn)
    • Remove unused notebooks in docs (#150) (@amqdn)
    • FasterAI (#151) (@lorenzoh)
    • Fix WrapperBlock behavior and add Many (#158) (@lorenzoh)
    • Documentation for FasterAI updates (#161) (@lorenzoh)
    • CompatHelper: add new compat entry for "Setfield" at version "0.7" (#162) (@github-actions[bot])
    • Adding some more dataset recipes (#163) (@lorenzoh)
    • Breaking API changes to methodlearner (#164) (@lorenzoh)
    • CompatHelper: bump compat for IndirectArrays to 1, (keep existing compat) (#166) (@github-actions[bot])
    • update get_emb_sz method (#167) (@manikyabard)
    • Better docstrings for data block functions (#170) (@lorenzoh)
    • Setup step for encodings (#171) (@lorenzoh)
    • CompatHelper: bump compat for Setfield to 0.8, (keep existing compat) (#172) (@github-actions[bot])
    • Fix #169 by constructing pretrained backbone correctly (#173) (@lorenzoh)
    • Release v0.2.0 (#174) (@lorenzoh)
  • v0.1.0(Jul 28, 2021)

    FastAI v0.1.0

    Closed issues:

    • First pass at datasets and data loaders (#1)
    • Thank you! (#7)
    • MIT vs Apache license (#8)
    • Some maintenance and housekeeping things (#9)
    • Contributions? (#15)
    • Tutorial errors out (#19)
    • Add TableDataset (#23)
    • Learning Method: Single-label image classification (#25)
    • Learning method: Image segmentation (#29)
    • Learning Method: Keypoint Regression (#30)
    • Use case: Siamese networks for image similarity (#31)
    • Improve projective data augmentation (#32)
    • Learning Method: Multi-label image classification (#33)
    • Weight decay option in fitonecycle! (#34)
    • Discriminative learning rates (#35)
    • Augmentor.jl (#68)
    • Data Block API (#135)
    • Malformed FluxTraining compat requirement in Project.toml (#138)
    • Two FluxTraining entry in Project.toml (#139)

    Merged pull requests:

    • CompatHelper: add new compat entry for "Revise" at version "2.7" (#2) (@github-actions[bot])
    • CompatHelper: add new compat entry for "StatsBase" at version "0.33" (#3) (@github-actions[bot])
    • CompatHelper: add new compat entry for "Infiltrator" at version "0.3" (#4) (@github-actions[bot])
    • CompatHelper: add new compat entry for "Flux" at version "0.11" (#5) (@github-actions[bot])
    • CompatHelper: add new compat entry for "Zygote" at version "0.5" (#6) (@github-actions[bot])
    • Simplied Recorder and Metric (#10) (@opus111)
    • Use DocStringExtensions to remove manual types and signatures (#11) (@ToucheSir)
    • New README for FastAI (#14) (@opus111)
    • FastAI.jl revamp (#17) (@lorenzoh)
    • update docs deps (#18) (@lorenzoh)
    • Fix the Quickstart tutorial and make the docs refer to this repo instead of a fork (#20) (@dave7895)
    • added TableDataset (#26) (@manikyabard)
    • tutorial errors: changed {load->get}classesclassification (#58) (@SamuelzXu)
    • move docs to Pollen (#60) (@lorenzoh)
    • Lo/fix test (#66) (@lorenzoh)
    • Better vision augmentations (#67) (@lorenzoh)
    • CompatHelper: bump compat for "LearnBase" to "0.4" (#113) (@github-actions[bot])
    • CompatHelper: bump compat for "MosaicViews" to "0.3" (#115) (@github-actions[bot])
    • Move to DataAugmentation v0.2.0 (#116) (@lorenzoh)
    • Small fixes (#117) (@lorenzoh)
    • WIP: Ongoing development (#118) (@lorenzoh)
    • Develop (#119) (@lorenzoh)
    • CompatHelper: add new compat entry for "ShowCases" at version "0.1" (#120) (@github-actions[bot])
    • CompatHelper: add new compat entry for "JLD2" at version "0.4" (#121) (@github-actions[bot])
    • Replace AbstractPlotting.jl with Makie.jl (#122) (@lorenzoh)
    • CompatHelper: add new compat entry for "Makie" at version "0.13" (#123) (@github-actions[bot])
    • CompatHelper: bump compat for "Makie" to "0.14" (#125) (@github-actions[bot])
    • Docs: how-to for logging (#126) (@lorenzoh)
    • Docs: tutorial on dataset presizing (#127) (@lorenzoh)
    • CompatHelper: add new compat entry for "CSV" at version "0.8" (#128) (@github-actions[bot])
    • CompatHelper: add new compat entry for "DataFrames" at version "1" (#129) (@github-actions[bot])
    • Update keypoint regression tutorial to include custom learning method and plotting functions. (#130) (@lorenzoh)
    • Remove all reference to LearningTask. (#131) (@lorenzoh)
    • Update to FluxTraining.jl v0.2.0 interfaces (#134) (@lorenzoh)
    • Data block API (#136) (@lorenzoh)
    • Get ready for release of 0.1.0 (#145) (@lorenzoh)
Owner
FluxML (The Elegant Machine Learning Stack)