Open-source code for multiple papers from the Microsoft Research Asia DKI group

Overview

📫 Paper Code Collection (MSRA DKI Group)

License: MIT

This repo hosts the open-source code for multiple papers from the Microsoft Research Asia DKI Group. You can find the code for each paper below:

News

Release (Click Title to Locate the Code)

  • Learning Algebraic Recombination for Compositional Generalization: Neural sequence models exhibit limited compositional generalization ability in semantic parsing tasks. Compositional generalization requires algebraic recombination, i.e., dynamically recombining structured expressions in a recursive manner. However, most previous studies mainly concentrate on recombining lexical units, which is an important but not sufficient part of algebraic recombination. In this paper, we propose LeAR, an end-to-end neural model to learn algebraic recombination for compositional generalization. The key insight is to model the semantic parsing task as a homomorphism between a latent syntactic algebra and a semantic algebra, thus encouraging algebraic recombination. Specifically, we learn two modules jointly: a Composer for producing latent syntax, and an Interpreter for assigning semantic operations. Experiments on two realistic and comprehensive compositional generalization benchmarks demonstrate the effectiveness of our model.
  • "What Do You Mean by That?" A Parser-Independent Interactive Approach for Enhancing Text-to-SQL: In Natural Language Interfaces to Databases (NLIDB) systems, the text-to-Structured Query Language (SQL) technique allows users to query databases by using natural language questions. Though significant progress in this area has been made recently, most parsers may fall short when they deal with real systems. One main reason stems from the difficulty of fully understanding the user's natural language questions. In this paper, we include human in the loop and present a novel parser-independent interactive approach (PIIA) that interacts with users using multi-choice questions and can easily work with arbitrary parsers. Experiments were conducted on two cross-domain datasets, the WikiSQL and the more complex Spider, with five state-of-the-art parsers. These demonstrated that PIIA is capable of enhancing the text-to-SQL performance with limited interaction turns by using both simulation and human evaluation.
  • Incomplete Utterance Rewriting as Semantic Segmentation: In recent years, the task of incomplete utterance rewriting has attracted considerable attention. Previous works usually cast it as a machine translation task and employ a sequence-to-sequence architecture with a copy mechanism. In this paper, we present a novel and extensive approach that formulates it as a semantic segmentation task. Instead of generating from scratch, this formulation introduces edit operations and shapes the problem as the prediction of a word-level edit matrix (a toy sketch of this formulation follows this list). Benefiting from being able to capture both local and global information, our approach achieves state-of-the-art performance on several public datasets. Furthermore, our approach is four times faster than the standard approach in inference.
  • Hierarchical Poset Decoding for Compositional Generalization in Language: We formalize human language understanding as a structured prediction task where the output is a partially ordered set (poset). Current encoder-decoder architectures do not take the poset structure of semantics into account properly, thus suffering from poor compositional generalization ability. In this paper, we propose a novel hierarchical poset decoding paradigm for compositional generalization in language. Intuitively: (1) the proposed paradigm enforces partial permutation invariance in semantics, thus avoiding overfitting to bias ordering information; (2) the hierarchical mechanism allows to capture high-level structures of posets. We evaluate our proposed decoder on Compositional Freebase Questions (CFQ), a large and realistic natural language question answering dataset that is specifically designed to measure compositional generalization. Results show that it outperforms current decoders.
  • Compositional Generalization by Learning Analytical Expressions: Compositional generalization is a basic but essential intellective capability of human beings, which allows us to recombine known parts readily. However, existing neural network based models have been proven to be extremely deficient in such a capability. Inspired by work in cognition which argues compositionality can be captured by variable slots with symbolic functions, we present a refreshing view that connects a memory-augmented neural model with analytical expressions, to achieve compositional generalization. Our model consists of two cooperative neural modules Composer and Solver, fitting well with the cognitive argument while still being trained in an end-to-end manner via a hierarchical reinforcement learning algorithm. Experiments on a well-known benchmark SCAN demonstrate that our model seizes a great ability of compositional generalization, solving all challenges addressed by previous works with 100% accuracies.
  • How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context: Semantic parsing in context is challenging since there are complex contextual phenomena. Previous works verified their proposed methods in limited scenarios, which motivates us to conduct an exploratory study on context modeling methods under real-world semantic parsing in context. We present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. We evaluate 13 context modeling methods on two large complex cross-domain datasets, and our best model achieves state-of-the-art performances on both datasets with significant improvements. Furthermore, we summarize the most frequent contextual phenomena, with a fine-grained analysis on representative models, which may shed light on potential research directions.
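
A toy sketch of the word-level edit matrix formulation mentioned in the incomplete utterance rewriting entry above (an illustrative example with made-up tokens and a hypothetical label scheme, assuming only numpy; it is not the released model code):

    # Toy illustration of a word-level edit matrix for utterance rewriting.
    # Rows are context words, columns are current-utterance words; a non-zero
    # cell marks an edit operation (the label scheme below is hypothetical).
    import numpy as np

    context = ["I", "like", "the", "beatles"]           # flattened context turns
    current = ["why", "do", "you", "like", "them"]      # incomplete utterance
    # target rewrite: "why do you like the beatles"

    SUBSTITUTE = 1  # replace a current-utterance word with a context span
    edit_matrix = np.zeros((len(context), len(current)), dtype=int)

    # "them" should be substituted by the context span "the beatles"
    for ci, word in enumerate(context):
        if word in ("the", "beatles"):
            edit_matrix[ci, current.index("them")] = SUBSTITUTE

    print(edit_matrix)
    # A model trained under this formulation predicts such a matrix for each
    # (context, current utterance) pair instead of generating the rewrite
    # token by token.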

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Question

If you have any questions or find any bugs, please go ahead and open an issue. Issues are also welcome as a discussion forum.

If you want to contact the author, please email: qian DOT liu AT buaa.edu.cn.

Comments
  • A question about `python predict.py`


    Hello! Thank you for open-sourcing this work! After running `python install -r requirement.txt` and then `cd src && python predict.py`, I got an error; it seems the model "../pretrained_weights/multi_bert.tar.gz" was not loaded.

    Really looking forward to your reply! Happy May Day holiday!

    Model name 'bert-base-chinese' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese.tar.gz' was a path or url but couldn't find any file associated to this path or url.
    Traceback (most recent call last):
      File "predict.py", line 28, in <module>
        manager = PredictManager("../pretrained_weights/multi_bert.tar.gz")
      File "predict.py", line 12, in __init__
        archive = load_archive(archive_file)
      File "/home/cingti/anaconda3/envs/cpu-py36/lib/python3.6/site-packages/allennlp/models/archival.py", line 230, in load_archive
        cuda_device=cuda_device)
      File "/home/cingti/anaconda3/envs/cpu-py36/lib/python3.6/site-packages/allennlp/models/model.py", line 327, in load
        return cls.by_name(model_type)._load(config, serialization_dir, weights_file, cuda_device)
      File "/home/cingti/anaconda3/envs/cpu-py36/lib/python3.6/site-packages/allennlp/models/model.py", line 265, in _load
        model = Model.from_params(vocab=vocab, params=model_params)
      File "/home/cingti/anaconda3/envs/cpu-py36/lib/python3.6/site-packages/allennlp/common/from_params.py", line 365, in from_params
        return subclass.from_params(params=params, **extras)
      File "/home/cingti/anaconda3/envs/cpu-py36/lib/python3.6/site-packages/allennlp/common/from_params.py", line 386, in from_params
        kwargs = create_kwargs(cls, params, **extras)
      File "/home/cingti/anaconda3/envs/cpu-py36/lib/python3.6/site-packages/allennlp/common/from_params.py", line 133, in create_kwargs
        kwargs[name] = construct_arg(cls, name, annotation, param.default, params, **extras)
      File "/home/cingti/anaconda3/envs/cpu-py36/lib/python3.6/site-packages/allennlp/common/from_params.py", line 229, in construct_arg
        return annotation.from_params(params=subparams, **subextras)
      File "/home/cingti/anaconda3/envs/cpu-py36/lib/python3.6/site-packages/allennlp/common/from_params.py", line 365, in from_params
        return subclass.from_params(params=params, **extras)
      File "/home/cingti/anaconda3/envs/cpu-py36/lib/python3.6/site-packages/allennlp/modules/text_field_embedders/basic_text_field_embedder.py", line 168, in from_params
        token_embedders[key] = TokenEmbedder.from_params(vocab=vocab, params=embedder_params)
      File "/home/cingti/anaconda3/envs/cpu-py36/lib/python3.6/site-packages/allennlp/common/from_params.py", line 365, in from_params
        return subclass.from_params(params=params, **extras)
      File "/home/cingti/anaconda3/envs/cpu-py36/lib/python3.6/site-packages/allennlp/common/from_params.py", line 388, in from_params
        return cls(**kwargs)  # type: ignore
      File "/home/cingti/anaconda3/envs/cpu-py36/lib/python3.6/site-packages/allennlp/modules/token_embedders/bert_token_embedder.py", line 272, in __init__
        for param in model.parameters():
    AttributeError: 'NoneType' object has no attribute 'parameters'
    
    opened by cingtiye 18
  • MCD2 and MCD3 specific data processing?


    Hi authors, @SivilTaram

    I see there is some specialized logic to process the CFQ dataset for the MCD2 and MCD3 datasets. We are confused about why this special path is present. Why did you add this special logic? What would the behavior be if you preprocessed MCD2 and MCD3 with the MCD1 preprocessing code paths?

    https://github.com/microsoft/ContextualSP/blob/73c8b03c8d633be4145df8564c7bc952ef5b0079/poset_decoding/preprocess_hierarchical_inference.py#L61-L67

    https://github.com/microsoft/ContextualSP/blob/b4890daf936e394e0c577401415a75028e638f4b/poset_decoding/data/generate_phrase_table.py#L731-L735

    Thanks, Paras

    opened by parasj 8
  • LogiGAN: dataset creation


    Hi! Thank you for sharing the code for the LogiGAN paper. I'm having trouble creating the training set. In particular:

    1. Here the code refers to a non-existent script. I have replaced the commands with "python corpus_construction.py --start 0 --end 500 --indicator_type conclusion &"; is this the right way to do it?
    2. elastic_search/build_gen_train/ver_train refers to files that do not exist in the bookcorpus, and there are no instructions on how to create them. Is there a script or link to generate the gan_corpus_new/beta/gen_train_B.jsonl and gan_corpus_new/beta/ver_train.jsonl files?
    opened by Golovneva 6
  • about prediction problem


    I'm not very familiar with the AllenNLP API. How do you use the prediction code? I wrote the following code, which reports a "TypeError: is_bidirectional() missing 1 required positional argument: 'self'" error:

    @Predictor.register("rewrite")
    class RewritePredictor(Predictor):

        @overrides
        def _json_to_instance(self, json_dict: JsonDict) -> Instance:
            """
            Expects JSON that looks like `{"source": "..."}`.
            """
            context = json_dict["context"]
            current = json_dict["current"]
            # placeholder
            # restate_utt = "hi"
            restate_utt = json_dict["restate_utt"]
            return self._dataset_reader.text_to_instance(context, current, restate_utt, training=False)
    

    inputs = {
        "context": '浙 江 省 温 州 市 鹿 城 区 有 好 天 气 这 种 天 气 最 适 合 出 门 了 骑 骑 车 兜 兜 风',
        "current": '明 天 天 气 咋 样',
        "restate_utt": 'hi'
    }
    model = UnifiedFollowUp(Vocabulary, Seq2SeqEncoder, TextFieldEmbedder)
    dataset_reader = RewriteDatasetReader()

    pred_fun = RewritePredictor(model=model, dataset_reader=dataset_reader)
    result = pred_fun._json_to_instance(inputs)

    opened by LLLLLLoki 6
  • Is the model for 'awakening_latent_grounding' open source?


    Hi! Your paper has inspired me a lot, and I am very grateful.

    I wonder if the code for this paper will be open-sourced, since it hasn't been updated on GitHub yet.

    opened by Soul-Immortal 5
  • Semantic parsing in context: predicted SQL queries


    Hello everyone. In the Semantic Parsing in Context repository, predicted SQL queries with WHERE clauses are never correct. Example: for "what is the abbreviation for Jetblue?", the predicted query is "SELECT airlines.abbreviation FROM airlines WHERE airlines.airline = 1". As you can see, the value associated with WHERE is 1 instead of Jetblue, and it's the same for all queries with WHERE. Is there a way to resolve this? Thanks in advance.

    opened by eche043 5
  • Error while training using turn.none.jsonnet


    Hi,

    I am trying to run the code with turn.none.jsonnet, and I am getting the following error:

    Traceback (most recent call last):
      File "./dataset_reader/sparc_reader.py", line 143, in build_instance
        sql_query_list=sql_query_list
      File "./dataset_reader/sparc_reader.py", line 417, in text_to_instance
        action_non_terminal, action_seq, all_valid_actions = world.get_action_sequence_and_all_actions()
      File "./context/world.py", line 155, in get_action_sequence_and_all_actions
        action_sequence = self.sql_converter.translate_to_intermediate(self.sql_clause)
      File "./context/converter.py", line 87, in translate_to_intermediate
        return self._process_statement(sql_clause=sql_clause)
      File "./context/converter.py", line 117, in _process_statement
        inter_seq.extend(self._process_root(sql_clause))
      File "./context/converter.py", line 657, in _process_root
        step_inter_seq = _process_step(cur_state)
      File "./context/converter.py", line 511, in _process_step
        return call_back_mapping[step_state](sql_clause)
      File "./context/converter.py", line 262, in _process_join
        if self.col_names[col_ind].refer_table.name == join_tab_name:
    KeyError: 'C'

    After this, the code fails with:

      File "site-packages/allennlp/data/vocabulary.py", line 399, in from_instances
        instance.count_vocab_items(namespace_token_counts)
    AttributeError: 'NoneType' object has no attribute 'count_vocab_items'

    I have downloaded the GloVe embeddings into the glove folder, and the dataset is in dataset_sparc along with the code. Do you have any suggestions as to what might be the issue?

    Thanks

    opened by ndmanifold 5
  • Multiple replace positions in the same current position


    Hi,

    You pointed out in your paper that there can be multiple inserts at the same location, but what if context_seq has more than one replace at the same current_seq [m,n] position?

    Thanks!

    opened by lianzhaoy 4
  • Unable to reproduce LANE results on SCAN length split


    Hello,

    I am running the training command as instructed for LANE: `CUDA_VISIBLE_DEVICES=0 python main.py --mode train --checkpoint length_model --task length`, but I am not able to get the expected results. I tried this with random seeds 1 and 2. I am attaching the log files (length_model_seed2.log, length_model.log); can you please let me know what I might be doing wrong?

    opened by ag1988 4
  • Questions about RUN model


    Dear Dr. Liu Qian,

    I appreciate your work; combining semantic segmentation from CV with NLP is a fantastic idea. I have run the code and have some questions, and I hope you can help me. Thank you very much. The questions are below:

    1. When the similarity function is used to build the feature map, the pixel values for "ellipsis" and "coreference" will be close; how can semantic segmentation distinguish between them while predicting?
    2. Related to the first question: since the similarity function is used to build the feature map, the pixel values of repeated words in the context utterance will also be close, which means they are easily predicted as the same class, resulting in duplicated operations in the output. This could be due to the invariance that is inherent to CNNs.
    3. Do you have any other tricks? I still cannot reproduce your result after retraining with "train_multi.sh" several times.

    Best Regards, Yong

    opened by chuckhope 2
  • Bump joblib from 1.1.0 to 1.2.0 in /robustness_of_text_to_sql/CTA


    Bumps joblib from 1.1.0 to 1.2.0.

    Changelog

    Sourced from joblib's changelog.

    Release 1.2.0

    • Fix a security issue where eval(pre_dispatch) could potentially run arbitrary code. Now only basic numerics are supported. joblib/joblib#1327

    • Make sure that joblib works even when multiprocessing is not available, for instance with Pyodide joblib/joblib#1256

    • Avoid unnecessary warnings when workers and main process delete the temporary memmap folder contents concurrently. joblib/joblib#1263

    • Fix memory alignment bug for pickles containing numpy arrays. This is especially important when loading the pickle with mmap_mode != None as the resulting numpy.memmap object would not be able to correct the misalignment without performing a memory copy. This bug would cause invalid computation and segmentation faults with native code that would directly access the underlying data buffer of a numpy array, for instance C/C++/Cython code compiled with older GCC versions or some old OpenBLAS written in platform specific assembly. joblib/joblib#1254

    • Vendor cloudpickle 2.2.0 which adds support for PyPy 3.8+.

    • Vendor loky 3.3.0 which fixes several bugs including:

      • robustly forcibly terminating worker processes in case of a crash (joblib/joblib#1269);

      • avoiding leaking worker processes in case of nested loky parallel calls;

      • reliably spawning the correct number of reusable workers.

    Commits
    • 5991350 Release 1.2.0
    • 3fa2188 MAINT cleanup numpy warnings related to np.matrix in tests (#1340)
    • cea26ff CI test the future loky-3.3.0 branch (#1338)
    • 8aca6f4 MAINT: remove pytest.warns(None) warnings in pytest 7 (#1264)
    • 067ed4f XFAIL test_child_raises_parent_exits_cleanly with multiprocessing (#1339)
    • ac4ebd5 MAINT add back pytest warnings plugin (#1337)
    • a23427d Test child raises parent exits cleanly more reliable on macos (#1335)
    • ac09691 [MAINT] various test updates (#1334)
    • 4a314b1 Vendor loky 3.2.0 (#1333)
    • bdf47e9 Make test_parallel_with_interactively_defined_functions_default_backend timeo...
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 1
  • Bump certifi from 2021.10.8 to 2022.12.7 in /robustness_of_text_to_sql/CTA


    Bumps certifi from 2021.10.8 to 2022.12.7.

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump tensorflow from 2.2.0rc4 to 2.9.3 in /poset_decoding


    Bumps tensorflow from 2.2.0rc4 to 2.9.3.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.9.3

    Release 2.9.3

    This release introduces several vulnerability fixes:

    TensorFlow 2.9.2

    Release 2.9.2

    This release introduces several vulnerability fixes:

    ... (truncated)

    Changelog

    Sourced from tensorflow's changelog.

    Release 2.9.3

    This release introduces several vulnerability fixes:

    Release 2.8.4

    This release introduces several vulnerability fixes:

    ... (truncated)

    Commits
    • a5ed5f3 Merge pull request #58584 from tensorflow/vinila21-patch-2
    • 258f9a1 Update py_func.cc
    • cd27cfb Merge pull request #58580 from tensorflow-jenkins/version-numbers-2.9.3-24474
    • 3e75385 Update version numbers to 2.9.3
    • bc72c39 Merge pull request #58482 from tensorflow-jenkins/relnotes-2.9.3-25695
    • 3506c90 Update RELEASE.md
    • 8dcb48e Update RELEASE.md
    • 4f34ec8 Merge pull request #58576 from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...
    • 6fc67e4 Replace CHECK with returning an InternalError on failing to create python tuple
    • 5dbe90a Merge pull request #58570 from tensorflow/r2.9-7b174a0f2e4
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • custom dataset creation for unisar


    Hello @SivilTaram,

    I have two tables, and they can be linked with a primary and foreign key. I would like to use unisar on these tables. Could you please share steps or hints on how I can create a custom dataset for use with unisar? I appreciate your help :)

    opened by srewai 8
Releases(logigan)
  • logigan(Nov 2, 2022)

  • lemon_data(Oct 24, 2022)

  • multi.bert(Oct 27, 2020)

    {
        "ROUGE": 0.8954699040374693,
        "_ROUGE1": 0.9248370079585566,
        "_ROUGE2": 0.8548729804396925,
        "EM": 0.4933385579937304,
        "_P1": 0.7443478260869565,
        "_R1": 0.6512335615693946,
        "F1": 0.694684366123703,
        "_P2": 0.6040515653775322,
        "_R2": 0.5369713506139154,
        "F2": 0.5685396504405605,
        "_P3": 0.515867089789061,
        "_R3": 0.4619306310071041,
        "F3": 0.48741126151946734,
        "_BLEU1": 0.9203424772022362,
        "_BLEU2": 0.8919446800461631,
        "_BLEU3": 0.8644065076063657,
        "BLEU4": 0.836555297206264,
        "loss": 0.012869063752786444
    }
    
    Source code(tar.gz)
    Source code(zip)
    multi_bert.tar.gz(371.92 MB)
  • multi(Oct 27, 2020)

    {
        "ROUGE": 0.8761914250271031,
        "_ROUGE1": 0.9096109449987978,
        "_ROUGE2": 0.8299625766685176,
        "EM": 0.43260188087774293,
        "_P1": 0.663828689370485,
        "_R1": 0.5592870340180415,
        "F1": 0.6070901905267504,
        "_P2": 0.5118311660626732,
        "_R2": 0.43674397453387903,
        "F2": 0.4713156990724837,
        "_P3": 0.42003325833903943,
        "_R3": 0.3588800668616799,
        "F3": 0.38705606634216694,
        "_BLEU1": 0.9049646708813686,
        "_BLEU2": 0.8726836197111325,
        "_BLEU3": 0.8417857314177102,
        "BLEU4": 0.810576724115273,
        "loss": 0.01269258979597015
    }
    
    Source code(tar.gz)
    Source code(zip)
    multi.tar.gz(11.04 MB)
  • rewrite.bert(Oct 27, 2020)

    {
        "ROUGE": 0.9394040084189113,
        "_ROUGE1": 0.961865057419486,
        "_ROUGE2": 0.9113051224617216,
        "EM": 0.688,
        "_P1": 0.9451903332806824,
        "_R1": 0.8668694770389685,
        "F1": 0.9043373129817137,
        "_P2": 0.8648273949812838,
        "_R2": 0.7989241803278688,
        "F2": 0.8305705345849144,
        "_P3": 0.8075098814229249,
        "_R3": 0.7449860216360763,
        "F3": 0.774988935954985,
        "_BLEU1": 0.9405510823944796,
        "_BLEU2": 0.9172718486250105,
        "_BLEU3": 0.8932687251641028,
        "BLEU4": 0.8691863201601382,
        "loss": 0.2084200546145439
    }
    
    Source code(tar.gz)
    Source code(zip)
    rewrite_bert.tar.gz(416.07 MB)
  • rewrite(Oct 27, 2020)

    {
        "ROUGE": 0.8927719935645371,
        "_ROUGE1": 0.9235141523874184,
        "_ROUGE2": 0.8481181915297369,
        "EM": 0.536,
        "_P1": 0.8750417083750417,
        "_R1": 0.7598145733738955,
        "F1": 0.8133674497945259,
        "_P2": 0.7759128386336867,
        "_R2": 0.6749487704918032,
        "F2": 0.7219178082191782,
        "_P3": 0.7060550971892043,
        "_R3": 0.6137109517442567,
        "F3": 0.6566523605150214,
        "_BLEU1": 0.8953007499145113,
        "_BLEU2": 0.8627271154439679,
        "_BLEU3": 0.8298150265279266,
        "BLEU4": 0.7961327186223426,
        "loss": 0.07524961760888497
    }
    
    Source code(tar.gz)
    Source code(zip)
    rewrite.tar.gz(51.82 MB)
  • canard(Oct 27, 2020)

    {
        "ROUGE": 0.7672702010110017,
        "_ROUGE1": 0.8060129493808116,
        "_ROUGE2": 0.6272019845050271,
        "EM": 0.1825525040387722,
        "_P1": 0.6173277359583093,
        "_R1": 0.34435562015503873,
        "F1": 0.44210035247771096,
        "_P2": 0.39885553659271156,
        "_R2": 0.24790964682391115,
        "F2": 0.3057682687497595,
        "_P3": 0.31307374374592123,
        "_R3": 0.20793481087641227,
        "F3": 0.24989581886373105,
        "_BLEU1": 0.7020165933536937,
        "_BLEU2": 0.6217319698491652,
        "_BLEU3": 0.5573085608208992,
        "BLEU4": 0.4977857601746373,
        "loss": 0.01892541069261615
    }
    
    Source code(tar.gz)
    Source code(zip)
    canard.tar.gz(26.42 MB)
  • sparc.turn(Apr 21, 2020)

    {
      "best_epoch": 48,
      "peak_cpu_memory_MB": 0,
      "peak_gpu_0_memory_MB": 1,
      "peak_gpu_1_memory_MB": 3489,
      "peak_gpu_2_memory_MB": 16884,
      "peak_gpu_3_memory_MB": 16857,
      "peak_gpu_4_memory_MB": 16886,
      "peak_gpu_5_memory_MB": 16816,
      "peak_gpu_6_memory_MB": 20163,
      "peak_gpu_7_memory_MB": 19975,
      "training_duration": "8:57:20.749721",
      "training_start_epoch": 0,
      "training_epochs": 57,
      "epoch": 57,
      "training_action_exact_match": 0,
      "training__action_inter_exact_match": 0,
      "training__action_turn_1_exact_match": 0,
      "training__action_turn_2_exact_match": 0,
      "training__action_turn_3_exact_match": 0,
      "training__action_turn_4_exact_match": 0,
      "training_sql_exact_match": 0,
      "training__sql_inter_exact_match": 0,
      "training__sql_turn_1_exact_match": 0,
      "training__sql_turn_2_exact_match": 0,
      "training__sql_turn_3_exact_match": 0,
      "training__sql_turn_4_exact_match": 0,
      "training_loss": 0.03923457075834353,
      "training_cpu_memory_MB": 0.0,
      "training_gpu_0_memory_MB": 1,
      "training_gpu_1_memory_MB": 1751,
      "training_gpu_2_memory_MB": 16884,
      "training_gpu_3_memory_MB": 16857,
      "training_gpu_4_memory_MB": 16886,
      "training_gpu_5_memory_MB": 16816,
      "training_gpu_6_memory_MB": 20163,
      "training_gpu_7_memory_MB": 2,
      "validation_action_exact_match": 0.4056525353283458,
      "validation__action_inter_exact_match": 0.1966824644549763,
      "validation__action_turn_1_exact_match": 0.5521327014218009,
      "validation__action_turn_2_exact_match": 0.3767772511848341,
      "validation__action_turn_3_exact_match": 0.29259259259259257,
      "validation__action_turn_4_exact_match": 0.19101123595505617,
      "validation_sql_exact_match": 0.42726517040731504,
      "validation__sql_inter_exact_match": 0.2132701421800948,
      "validation__sql_turn_1_exact_match": 0.5592417061611374,
      "validation__sql_turn_2_exact_match": 0.3933649289099526,
      "validation__sql_turn_3_exact_match": 0.3296296296296296,
      "validation__sql_turn_4_exact_match": 0.25842696629213485,
      "validation_loss": 0.1820486258486621,
      "best_validation_action_exact_match": 0.4222776392352452,
      "best_validation__action_inter_exact_match": 0.2014218009478673,
      "best_validation__action_turn_1_exact_match": 0.5710900473933649,
      "best_validation__action_turn_2_exact_match": 0.3933649289099526,
      "best_validation__action_turn_3_exact_match": 0.3,
      "best_validation__action_turn_4_exact_match": 0.2247191011235955,
      "best_validation_sql_exact_match": 0.43557772236076475,
      "best_validation__sql_inter_exact_match": 0.2037914691943128,
      "best_validation__sql_turn_1_exact_match": 0.5687203791469194,
      "best_validation__sql_turn_2_exact_match": 0.3981042654028436,
      "best_validation__sql_turn_3_exact_match": 0.3296296296296296,
      "best_validation__sql_turn_4_exact_match": 0.30337078651685395,
      "best_validation_loss": 0.17382763872011436
    }
    
    Source code(tar.gz)
    Source code(zip)
    model.tar.gz(14.89 MB)
  • sparc.token(Apr 21, 2020)

    {
      "best_epoch": 22,
      "peak_cpu_memory_MB": 0,
      "peak_gpu_0_memory_MB": 2796,
      "peak_gpu_1_memory_MB": 16881,
      "peak_gpu_2_memory_MB": 1479,
      "peak_gpu_3_memory_MB": 16586,
      "peak_gpu_4_memory_MB": 496,
      "peak_gpu_5_memory_MB": 1968,
      "peak_gpu_6_memory_MB": 496,
      "peak_gpu_7_memory_MB": 507,
      "training_duration": "5:31:39.494340",
      "training_start_epoch": 0,
      "training_epochs": 31,
      "epoch": 31,
      "training_action_exact_match": 0,
      "training__action_inter_exact_match": 0,
      "training__action_turn_1_exact_match": 0,
      "training__action_turn_2_exact_match": 0,
      "training__action_turn_3_exact_match": 0,
      "training__action_turn_4_exact_match": 0,
      "training_sql_exact_match": 0,
      "training__sql_inter_exact_match": 0,
      "training__sql_turn_1_exact_match": 0,
      "training__sql_turn_2_exact_match": 0,
      "training__sql_turn_3_exact_match": 0,
      "training__sql_turn_4_exact_match": 0,
      "training__copy": 0,
      "training_info": 0,
      "training_loss": 0.06896944765227608,
      "training_cpu_memory_MB": 0.0,
      "training_gpu_0_memory_MB": 2796,
      "training_gpu_1_memory_MB": 16879,
      "training_gpu_2_memory_MB": 1479,
      "training_gpu_3_memory_MB": 16586,
      "training_gpu_4_memory_MB": 496,
      "training_gpu_5_memory_MB": 1968,
      "training_gpu_6_memory_MB": 496,
      "training_gpu_7_memory_MB": 507,
      "validation_action_exact_match": 0.37323358270989193,
      "validation__action_inter_exact_match": 0.17535545023696683,
      "validation__action_turn_1_exact_match": 0.5165876777251185,
      "validation__action_turn_2_exact_match": 0.33886255924170616,
      "validation__action_turn_3_exact_match": 0.2851851851851852,
      "validation__action_turn_4_exact_match": 0.12359550561797752,
      "validation_sql_exact_match": 0.3873649210307564,
      "validation__sql_inter_exact_match": 0.1895734597156398,
      "validation__sql_turn_1_exact_match": 0.518957345971564,
      "validation__sql_turn_2_exact_match": 0.35545023696682465,
      "validation__sql_turn_3_exact_match": 0.3,
      "validation__sql_turn_4_exact_match": 0.1797752808988764,
      "validation__copy": 0,
      "validation_info": 0,
      "validation_loss": 0.13728585836002538,
      "best_validation_action_exact_match": 0.372402327514547,
      "best_validation__action_inter_exact_match": 0.18009478672985782,
      "best_validation__action_turn_1_exact_match": 0.5260663507109005,
      "best_validation__action_turn_2_exact_match": 0.33886255924170616,
      "best_validation__action_turn_3_exact_match": 0.25925925925925924,
      "best_validation__action_turn_4_exact_match": 0.14606741573033707,
      "best_validation_sql_exact_match": 0.38902743142144636,
      "best_validation__sql_inter_exact_match": 0.1966824644549763,
      "best_validation__sql_turn_1_exact_match": 0.5355450236966824,
      "best_validation__sql_turn_2_exact_match": 0.35545023696682465,
      "best_validation__sql_turn_3_exact_match": 0.2740740740740741,
      "best_validation__sql_turn_4_exact_match": 0.20224719101123595,
      "best_validation__copy": 0,
      "best_validation_info": 0,
      "best_validation_loss": 0.13164386158200506
    }
    
    Source code(tar.gz)
    Source code(zip)
    model.tar.gz(10.93 MB)
  • sparc.bert.turn(Apr 23, 2020)

    {
      "best_epoch": 45,
      "peak_cpu_memory_MB": 0,
      "peak_gpu_0_memory_MB": 1,
      "peak_gpu_1_memory_MB": 4185,
      "peak_gpu_2_memory_MB": 16884,
      "peak_gpu_3_memory_MB": 16857,
      "peak_gpu_4_memory_MB": 16886,
      "peak_gpu_5_memory_MB": 16816,
      "peak_gpu_6_memory_MB": 22499,
      "peak_gpu_7_memory_MB": 20477,
      "training_duration": "15:38:25.817253",
      "training_start_epoch": 0,
      "training_epochs": 54,
      "epoch": 54,
      "training_action_exact_match": 0,
      "training__action_inter_exact_match": 0,
      "training__action_turn_1_exact_match": 0,
      "training__action_turn_2_exact_match": 0,
      "training__action_turn_3_exact_match": 0,
      "training__action_turn_4_exact_match": 0,
      "training_sql_exact_match": 0,
      "training__sql_inter_exact_match": 0,
      "training__sql_turn_1_exact_match": 0,
      "training__sql_turn_2_exact_match": 0,
      "training__sql_turn_3_exact_match": 0,
      "training__sql_turn_4_exact_match": 0,
      "training_loss": 0.03840326214035231,
      "training_cpu_memory_MB": 0.0,
      "training_gpu_0_memory_MB": 1,
      "training_gpu_1_memory_MB": 3238,
      "training_gpu_2_memory_MB": 16884,
      "training_gpu_3_memory_MB": 16857,
      "training_gpu_4_memory_MB": 16886,
      "training_gpu_5_memory_MB": 2,
      "training_gpu_6_memory_MB": 22499,
      "training_gpu_7_memory_MB": 20477,
      "validation_action_exact_match": 0.4405652535328346,
      "validation__action_inter_exact_match": 0.23459715639810427,
      "validation__action_turn_1_exact_match": 0.6018957345971564,
      "validation__action_turn_2_exact_match": 0.4218009478672986,
      "validation__action_turn_3_exact_match": 0.2962962962962963,
      "validation__action_turn_4_exact_match": 0.20224719101123595,
      "validation_sql_exact_match": 0.46467165419783873,
      "validation__sql_inter_exact_match": 0.26303317535545023,
      "validation__sql_turn_1_exact_match": 0.6184834123222749,
      "validation__sql_turn_2_exact_match": 0.45260663507109006,
      "validation__sql_turn_3_exact_match": 0.32592592592592595,
      "validation__sql_turn_4_exact_match": 0.21348314606741572,
      "validation_loss": 0.20552104420625325,
      "best_validation_action_exact_match": 0.4513715710723192,
      "best_validation__action_inter_exact_match": 0.23933649289099526,
      "best_validation__action_turn_1_exact_match": 0.6137440758293838,
      "best_validation__action_turn_2_exact_match": 0.43364928909952605,
      "best_validation__action_turn_3_exact_match": 0.28888888888888886,
      "best_validation__action_turn_4_exact_match": 0.25842696629213485,
      "best_validation_sql_exact_match": 0.4696591853699086,
      "best_validation__sql_inter_exact_match": 0.26303317535545023,
      "best_validation__sql_turn_1_exact_match": 0.6255924170616114,
      "best_validation__sql_turn_2_exact_match": 0.45734597156398105,
      "best_validation__sql_turn_3_exact_match": 0.3148148148148148,
      "best_validation__sql_turn_4_exact_match": 0.25842696629213485,
      "best_validation_loss": 0.18703062564368952
    }
    
    Source code(tar.gz)
    Source code(zip)
    model.tar.gz(456.31 MB)
  • sparc.bert.token(Apr 23, 2020)

    {
      "best_epoch": 13,
      "peak_cpu_memory_MB": 0,
      "peak_gpu_0_memory_MB": 1,
      "peak_gpu_1_memory_MB": 22815,
      "peak_gpu_2_memory_MB": 22671,
      "peak_gpu_3_memory_MB": 19426,
      "peak_gpu_4_memory_MB": 22798,
      "peak_gpu_5_memory_MB": 17284,
      "peak_gpu_6_memory_MB": 21951,
      "peak_gpu_7_memory_MB": 1,
      "training_duration": "11:21:16.976743",
      "training_start_epoch": 0,
      "training_epochs": 22,
      "epoch": 22,
      "training_action_exact_match": 0,
      "training__action_inter_exact_match": 0,
      "training__action_turn_1_exact_match": 0,
      "training__action_turn_2_exact_match": 0,
      "training__action_turn_3_exact_match": 0,
      "training__action_turn_4_exact_match": 0,
      "training_sql_exact_match": 0,
      "training__sql_inter_exact_match": 0,
      "training__sql_turn_1_exact_match": 0,
      "training__sql_turn_2_exact_match": 0,
      "training__sql_turn_3_exact_match": 0,
      "training__sql_turn_4_exact_match": 0,
      "training__copy": 0,
      "training_info": 0,
      "training_loss": 0.19148788920165136,
      "training_cpu_memory_MB": 0.0,
      "training_gpu_0_memory_MB": 1,
      "training_gpu_1_memory_MB": 22325,
      "training_gpu_2_memory_MB": 19190,
      "training_gpu_3_memory_MB": 19426,
      "training_gpu_4_memory_MB": 18642,
      "training_gpu_5_memory_MB": 16566,
      "training_gpu_6_memory_MB": 19856,
      "training_gpu_7_memory_MB": 1,
      "validation_action_exact_match": 0.43308395677472983,
      "validation__action_inter_exact_match": 0.22037914691943128,
      "validation__action_turn_1_exact_match": 0.6255924170616114,
      "validation__action_turn_2_exact_match": 0.3909952606635071,
      "validation__action_turn_3_exact_match": 0.2962962962962963,
      "validation__action_turn_4_exact_match": 0.1348314606741573,
      "validation_sql_exact_match": 0.4546965918536991,
      "validation__sql_inter_exact_match": 0.25118483412322273,
      "validation__sql_turn_1_exact_match": 0.6421800947867299,
      "validation__sql_turn_2_exact_match": 0.41706161137440756,
      "validation__sql_turn_3_exact_match": 0.3148148148148148,
      "validation__sql_turn_4_exact_match": 0.16853932584269662,
      "validation__copy": 0,
      "validation_info": 0,
      "validation_loss": 0.3569193813490231,
      "best_validation_action_exact_match": 0.4455527847049044,
      "best_validation__action_inter_exact_match": 0.22274881516587677,
      "best_validation__action_turn_1_exact_match": 0.6255924170616114,
      "best_validation__action_turn_2_exact_match": 0.4052132701421801,
      "best_validation__action_turn_3_exact_match": 0.3111111111111111,
      "best_validation__action_turn_4_exact_match": 0.19101123595505617,
      "best_validation_sql_exact_match": 0.4605153782211139,
      "best_validation__sql_inter_exact_match": 0.24407582938388625,
      "best_validation__sql_turn_1_exact_match": 0.6374407582938388,
      "best_validation__sql_turn_2_exact_match": 0.4218009478672986,
      "best_validation__sql_turn_3_exact_match": 0.32222222222222224,
      "best_validation__sql_turn_4_exact_match": 0.2247191011235955,
      "best_validation__copy": 0,
      "best_validation_info": 0,
      "best_validation_loss": 0.32762717183953
    }
    
    Source code(tar.gz)
    Source code(zip)
    model.tar.gz(416.06 MB)
  • cosql.turn(Apr 23, 2020)

    {
      "best_epoch": 24,
      "peak_cpu_memory_MB": 0,
      "peak_gpu_0_memory_MB": 1,
      "peak_gpu_1_memory_MB": 22815,
      "peak_gpu_2_memory_MB": 22671,
      "peak_gpu_3_memory_MB": 19422,
      "peak_gpu_4_memory_MB": 22974,
      "peak_gpu_5_memory_MB": 17284,
      "peak_gpu_6_memory_MB": 21951,
      "peak_gpu_7_memory_MB": 1,
      "training_duration": "6:32:27.738503",
      "training_start_epoch": 0,
      "training_epochs": 33,
      "epoch": 33,
      "training_action_exact_match": 0,
      "training__action_inter_exact_match": 0,
      "training__action_turn_1_exact_match": 0,
      "training__action_turn_2_exact_match": 0,
      "training__action_turn_3_exact_match": 0,
      "training__action_turn_4_exact_match": 0,
      "training_sql_exact_match": 0,
      "training__sql_inter_exact_match": 0,
      "training__sql_turn_1_exact_match": 0,
      "training__sql_turn_2_exact_match": 0,
      "training__sql_turn_3_exact_match": 0,
      "training__sql_turn_4_exact_match": 0,
      "training_loss": 0.18920585506216245,
      "training_cpu_memory_MB": 0.0,
      "training_gpu_0_memory_MB": 1,
      "training_gpu_1_memory_MB": 22237,
      "training_gpu_2_memory_MB": 19158,
      "training_gpu_3_memory_MB": 19422,
      "training_gpu_4_memory_MB": 20185,
      "training_gpu_5_memory_MB": 17284,
      "training_gpu_6_memory_MB": 21951,
      "training_gpu_7_memory_MB": 1,
      "validation_action_exact_match": 0.29890764647467727,
      "validation__action_inter_exact_match": 0.07167235494880546,
      "validation__action_turn_1_exact_match": 0.36860068259385664,
      "validation__action_turn_2_exact_match": 0.2771929824561403,
      "validation__action_turn_3_exact_match": 0.26229508196721313,
      "validation__action_turn_4_exact_match": 0.2702702702702703,
      "validation_sql_exact_match": 0.29791459781529295,
      "validation__sql_inter_exact_match": 0.08191126279863481,
      "validation__sql_turn_1_exact_match": 0.378839590443686,
      "validation__sql_turn_2_exact_match": 0.2736842105263158,
      "validation__sql_turn_3_exact_match": 0.26229508196721313,
      "validation__sql_turn_4_exact_match": 0.25405405405405407,
      "validation_loss": 0.47665515542030334,
      "best_validation_action_exact_match": 0.32075471698113206,
      "best_validation__action_inter_exact_match": 0.09215017064846416,
      "best_validation__action_turn_1_exact_match": 0.3890784982935154,
      "best_validation__action_turn_2_exact_match": 0.2912280701754386,
      "best_validation__action_turn_3_exact_match": 0.29098360655737704,
      "best_validation__action_turn_4_exact_match": 0.2972972972972973,
      "best_validation_sql_exact_match": 0.3187686196623635,
      "best_validation__sql_inter_exact_match": 0.09897610921501707,
      "best_validation__sql_turn_1_exact_match": 0.3993174061433447,
      "best_validation__sql_turn_2_exact_match": 0.28421052631578947,
      "best_validation__sql_turn_3_exact_match": 0.28688524590163933,
      "best_validation__sql_turn_4_exact_match": 0.2864864864864865,
      "best_validation_loss": 0.45512670278549194
    }
    
    Source code(tar.gz)
    Source code(zip)
    model.tar.gz(15.20 MB)
  • cosql.token(Apr 23, 2020)

    {
      "best_epoch": 35,
      "peak_cpu_memory_MB": 0,
      "peak_gpu_0_memory_MB": 1,
      "peak_gpu_1_memory_MB": 22811,
      "peak_gpu_2_memory_MB": 22671,
      "peak_gpu_3_memory_MB": 19426,
      "peak_gpu_4_memory_MB": 22798,
      "peak_gpu_5_memory_MB": 17286,
      "peak_gpu_6_memory_MB": 21951,
      "peak_gpu_7_memory_MB": 1,
      "training_duration": "12:23:41.284435",
      "training_start_epoch": 0,
      "training_epochs": 44,
      "epoch": 44,
      "training_action_exact_match": 0,
      "training__action_inter_exact_match": 0,
      "training__action_turn_1_exact_match": 0,
      "training__action_turn_2_exact_match": 0,
      "training__action_turn_3_exact_match": 0,
      "training__action_turn_4_exact_match": 0,
      "training_sql_exact_match": 0,
      "training__sql_inter_exact_match": 0,
      "training__sql_turn_1_exact_match": 0,
      "training__sql_turn_2_exact_match": 0,
      "training__sql_turn_3_exact_match": 0,
      "training__sql_turn_4_exact_match": 0,
      "training__copy": 0,
      "training_info": 0,
      "training_loss": 0.17617646926255137,
      "training_cpu_memory_MB": 0.0,
      "training_gpu_0_memory_MB": 1,
      "training_gpu_1_memory_MB": 3,
      "training_gpu_2_memory_MB": 1743,
      "training_gpu_3_memory_MB": 19426,
      "training_gpu_4_memory_MB": 1981,
      "training_gpu_5_memory_MB": 16566,
      "training_gpu_6_memory_MB": 19858,
      "training_gpu_7_memory_MB": 1,
      "validation_action_exact_match": 0.30883813306852037,
      "validation__action_inter_exact_match": 0.07849829351535836,
      "validation__action_turn_1_exact_match": 0.3822525597269625,
      "validation__action_turn_2_exact_match": 0.2771929824561403,
      "validation__action_turn_3_exact_match": 0.28688524590163933,
      "validation__action_turn_4_exact_match": 0.2702702702702703,
      "validation_sql_exact_match": 0.3118172790466733,
      "validation__sql_inter_exact_match": 0.08532423208191127,
      "validation__sql_turn_1_exact_match": 0.3993174061433447,
      "validation__sql_turn_2_exact_match": 0.2807017543859649,
      "validation__sql_turn_3_exact_match": 0.29098360655737704,
      "validation__sql_turn_4_exact_match": 0.24864864864864866,
      "validation__copy": 0,
      "validation_info": 0,
      "validation_loss": 0.42742106318473816,
      "best_validation_action_exact_match": 0.323733862959285,
      "best_validation__action_inter_exact_match": 0.08191126279863481,
      "best_validation__action_turn_1_exact_match": 0.3856655290102389,
      "best_validation__action_turn_2_exact_match": 0.3017543859649123,
      "best_validation__action_turn_3_exact_match": 0.2786885245901639,
      "best_validation__action_turn_4_exact_match": 0.31891891891891894,
      "best_validation_sql_exact_match": 0.3227408142999007,
      "best_validation__sql_inter_exact_match": 0.08532423208191127,
      "best_validation__sql_turn_1_exact_match": 0.40273037542662116,
      "best_validation__sql_turn_2_exact_match": 0.2912280701754386,
      "best_validation__sql_turn_3_exact_match": 0.2786885245901639,
      "best_validation__sql_turn_4_exact_match": 0.3027027027027027,
      "best_validation__copy": 0,
      "best_validation_info": 0,
      "best_validation_loss": 0.40968313813209534
    }
    
    Source code(tar.gz)
    Source code(zip)
    model.tar.gz(15.20 MB)
  • cosql.bert.turn(Apr 23, 2020)

    {
      "best_epoch": 23,
      "peak_cpu_memory_MB": 0,
      "peak_gpu_0_memory_MB": 1,
      "peak_gpu_1_memory_MB": 3489,
      "peak_gpu_2_memory_MB": 16884,
      "peak_gpu_3_memory_MB": 16857,
      "peak_gpu_4_memory_MB": 16886,
      "peak_gpu_5_memory_MB": 16816,
      "peak_gpu_6_memory_MB": 20163,
      "peak_gpu_7_memory_MB": 19975,
      "training_duration": "8:04:29.261273",
      "training_start_epoch": 0,
      "training_epochs": 32,
      "epoch": 32,
      "training_action_exact_match": 0,
      "training__action_inter_exact_match": 0,
      "training__action_turn_1_exact_match": 0,
      "training__action_turn_2_exact_match": 0,
      "training__action_turn_3_exact_match": 0,
      "training__action_turn_4_exact_match": 0,
      "training_sql_exact_match": 0,
      "training__sql_inter_exact_match": 0,
      "training__sql_turn_1_exact_match": 0,
      "training__sql_turn_2_exact_match": 0,
      "training__sql_turn_3_exact_match": 0,
      "training__sql_turn_4_exact_match": 0,
      "training_loss": 0.05288350063624136,
      "training_cpu_memory_MB": 0.0,
      "training_gpu_0_memory_MB": 1,
      "training_gpu_1_memory_MB": 1751,
      "training_gpu_2_memory_MB": 16884,
      "training_gpu_3_memory_MB": 16857,
      "training_gpu_4_memory_MB": 16886,
      "training_gpu_5_memory_MB": 16816,
      "training_gpu_6_memory_MB": 20163,
      "training_gpu_7_memory_MB": 2,
      "validation_action_exact_match": 0.3843098311817279,
      "validation__action_inter_exact_match": 0.12286689419795221,
      "validation__action_turn_1_exact_match": 0.48464163822525597,
      "validation__action_turn_2_exact_match": 0.35789473684210527,
      "validation__action_turn_3_exact_match": 0.3483606557377049,
      "validation__action_turn_4_exact_match": 0.31351351351351353,
      "validation_sql_exact_match": 0.38828202581926513,
      "validation__sql_inter_exact_match": 0.1296928327645051,
      "validation__sql_turn_1_exact_match": 0.49829351535836175,
      "validation__sql_turn_2_exact_match": 0.3473684210526316,
      "validation__sql_turn_3_exact_match": 0.36065573770491804,
      "validation__sql_turn_4_exact_match": 0.31351351351351353,
      "validation_loss": 0.24153371155261993,
      "best_validation_action_exact_match": 0.38927507447864945,
      "best_validation__action_inter_exact_match": 0.13651877133105803,
      "best_validation__action_turn_1_exact_match": 0.47440273037542663,
      "best_validation__action_turn_2_exact_match": 0.38596491228070173,
      "best_validation__action_turn_3_exact_match": 0.3442622950819672,
      "best_validation__action_turn_4_exact_match": 0.31891891891891894,
      "best_validation_sql_exact_match": 0.39225422045680236,
      "best_validation__sql_inter_exact_match": 0.14334470989761092,
      "best_validation__sql_turn_1_exact_match": 0.4948805460750853,
      "best_validation__sql_turn_2_exact_match": 0.3684210526315789,
      "best_validation__sql_turn_3_exact_match": 0.36475409836065575,
      "best_validation__sql_turn_4_exact_match": 0.3027027027027027,
      "best_validation_loss": 0.20682337880134583
    }
    
    Source code(tar.gz)
    Source code(zip)
    model.tar.gz(456.36 MB)
  • cosql.bert.token(Apr 23, 2020)

    {
      "best_epoch": 11,
      "peak_cpu_memory_MB": 0,
      "peak_gpu_0_memory_MB": 1,
      "peak_gpu_1_memory_MB": 3489,
      "peak_gpu_2_memory_MB": 16884,
      "peak_gpu_3_memory_MB": 16857,
      "peak_gpu_4_memory_MB": 16886,
      "peak_gpu_5_memory_MB": 16816,
      "peak_gpu_6_memory_MB": 20163,
      "peak_gpu_7_memory_MB": 19975,
      "training_duration": "6:05:48.267567",
      "training_start_epoch": 0,
      "training_epochs": 20,
      "epoch": 20,
      "training_action_exact_match": 0,
      "training__action_inter_exact_match": 0,
      "training__action_turn_1_exact_match": 0,
      "training__action_turn_2_exact_match": 0,
      "training__action_turn_3_exact_match": 0,
      "training__action_turn_4_exact_match": 0,
      "training_sql_exact_match": 0,
      "training__sql_inter_exact_match": 0,
      "training__sql_turn_1_exact_match": 0,
      "training__sql_turn_2_exact_match": 0,
      "training__sql_turn_3_exact_match": 0,
      "training__sql_turn_4_exact_match": 0,
      "training__copy": 0,
      "training_info": 0,
      "training_loss": 0.07746467799366073,
      "training_cpu_memory_MB": 0.0,
      "training_gpu_0_memory_MB": 1,
      "training_gpu_1_memory_MB": 3489,
      "training_gpu_2_memory_MB": 16884,
      "training_gpu_3_memory_MB": 16857,
      "training_gpu_4_memory_MB": 16886,
      "training_gpu_5_memory_MB": 16816,
      "training_gpu_6_memory_MB": 20163,
      "training_gpu_7_memory_MB": 19975,
      "validation_action_exact_match": 0.407149950347567,
      "validation__action_inter_exact_match": 0.12286689419795221,
      "validation__action_turn_1_exact_match": 0.5085324232081911,
      "validation__action_turn_2_exact_match": 0.3894736842105263,
      "validation__action_turn_3_exact_match": 0.35655737704918034,
      "validation__action_turn_4_exact_match": 0.34054054054054056,
      "validation_sql_exact_match": 0.41211519364448856,
      "validation__sql_inter_exact_match": 0.13993174061433447,
      "validation__sql_turn_1_exact_match": 0.5324232081911263,
      "validation__sql_turn_2_exact_match": 0.37894736842105264,
      "validation__sql_turn_3_exact_match": 0.3770491803278688,
      "validation__sql_turn_4_exact_match": 0.31891891891891894,
      "validation__copy": 0,
      "validation_info": 0,
      "validation_loss": 0.16448146104812622,
      "best_validation_action_exact_match": 0.42204568023833167,
      "best_validation__action_inter_exact_match": 0.13993174061433447,
      "best_validation__action_turn_1_exact_match": 0.515358361774744,
      "best_validation__action_turn_2_exact_match": 0.39649122807017545,
      "best_validation__action_turn_3_exact_match": 0.38524590163934425,
      "best_validation__action_turn_4_exact_match": 0.3621621621621622,
      "best_validation_sql_exact_match": 0.42105263157894735,
      "best_validation__sql_inter_exact_match": 0.15358361774744028,
      "best_validation__sql_turn_1_exact_match": 0.5290102389078498,
      "best_validation__sql_turn_2_exact_match": 0.3824561403508772,
      "best_validation__sql_turn_3_exact_match": 0.3975409836065574,
      "best_validation__sql_turn_4_exact_match": 0.34054054054054056,
      "best_validation__copy": 0,
      "best_validation_info": 0,
      "best_validation_loss": 0.1453363001346588
    }
    
    Source code(tar.gz)
    Source code(zip)
    model.tar.gz(416.10 MB)
Owner
Microsoft
Open source projects and samples from Microsoft