Graph4NLP is a library for the easy use of Graph Neural Networks for NLP

Overview


Graph4NLP

Graph4NLP is an easy-to-use library for R&D at the intersection of Deep Learning on Graphs and Natural Language Processing (i.e., DLG4NLP). It provides full implementations of state-of-the-art models for data scientists, as well as flexible interfaces for researchers and developers to build customized models with whole-pipeline support. Built upon highly-optimized runtime libraries including DGL, Graph4NLP offers both high running efficiency and great extensibility. The architecture of Graph4NLP is shown in the following figure, where boxes with dashed lines represent features under development. Graph4NLP consists of four layers: 1) Data Layer, 2) Module Layer, 3) Model Layer, and 4) Application Layer.

Figure: Graph4NLP Overall Architecture

Graph4NLP news

06/05/2021: The v0.4.1 release. Try it out!

Quick tour

Graph4NLP aims to make it incredibly easy to use GNNs in NLP tasks (check out the Graph4NLP Documentation). Here is an example of how to use the Graph2Seq model, which is widely used in machine translation, question answering, semantic parsing, and various other NLP tasks that can be abstracted as graph-to-sequence problems, and which has shown superior performance.

We also offer other high-level model APIs, such as graph-to-tree models. If you are interested in DLG4NLP-related research problems, you are very welcome to use our library and refer to our graph4nlp survey.

from graph4nlp.pytorch.datasets.jobs import JobsDataset
from graph4nlp.pytorch.modules.graph_construction.dependency_graph_construction import DependencyBasedGraphConstruction
from graph4nlp.pytorch.modules.config import get_basic_args
from graph4nlp.pytorch.models.graph2seq import Graph2Seq
from graph4nlp.pytorch.modules.utils.config_utils import update_values, get_yaml_config

# build dataset
jobs_dataset = JobsDataset(root_dir='graph4nlp/pytorch/test/dataset/jobs',
                           topology_builder=DependencyBasedGraphConstruction,
                           topology_subdir='DependencyGraph')  # a Stanford CoreNLP server must be running in the background
vocab_model = jobs_dataset.vocab_model

# build model
user_args = get_yaml_config("examples/pytorch/semantic_parsing/graph2seq/config/dependency_gcn_bi_sep_demo.yaml")
args = get_basic_args(graph_construction_name="node_emb", graph_embedding_name="gat", decoder_name="stdrnn")
update_values(to_args=args, from_args_list=[user_args])
graph2seq = Graph2Seq.from_args(args, vocab_model)

# calculation
batch_data = JobsDataset.collate_fn(jobs_dataset.train[0:12])

scores = graph2seq(batch_data["graph_data"], batch_data["tgt_seq"])  # [Batch_size, seq_len, Vocab_size]
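
From here, training is ordinary PyTorch. Below is a minimal training-step sketch (not the library's own training loop); it assumes the returned scores are per-token probabilities over the vocabulary and that index 0 is the padding id, both of which are assumptions about the configuration rather than guarantees.

import torch.nn.functional as F

# Sketch only: turn the [Batch_size, seq_len, Vocab_size] scores into a token-level
# NLL loss. Assumptions: `scores` holds probabilities (hence the log) and 0 is the
# padding id.
tgt = batch_data["tgt_seq"]
loss = F.nll_loss(scores.clamp_min(1e-12).log().reshape(-1, scores.size(-1)),
                  tgt.reshape(-1),
                  ignore_index=0)
loss.backward()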

Overview

The Graph4NLP computing flow is shown below.

Figure: Graph4NLP Computing Flow

Graph4NLP Models and Applications

Graph4NLP models

  • Graph2Seq: a general end-to-end neural encoder-decoder model that maps an input graph to a sequence of tokens.
  • Graph2Tree: a general end-to-end neural encoder-decoder model that maps an input graph to a tree structure.
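
As a sketch of the high-level API: a Graph2Tree model can be assembled from a config in the same way as Graph2Seq in the Quick tour above. The module path and from_args signature below are assumed to mirror Graph2Seq rather than taken verbatim from the docs.

from graph4nlp.pytorch.models.graph2tree import Graph2Tree  # assumed module path

# `args` and `vocab_model` are built exactly as in the Quick tour example;
# from_args is assumed to mirror Graph2Seq.from_args.
graph2tree = Graph2Tree.from_args(args, vocab_model)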

Graph4NLP applications

We provide a comprehensive collection of NLP applications, together with detailed examples as follows:

  • Text classification: to assign an appropriate label to a sentence or document.
  • Semantic parsing: to translate natural language into a machine-interpretable formal meaning representation.
  • Neural machine translation: to translate a sentence from a source language into a different target language.
  • Summarization: to generate a shorter version of the input text that preserves its major meaning.
  • KG completion: to predict missing relations between two existing entities in knowledge graphs.
  • Math word problem solving: to automatically solve mathematical exercises that provide background information about a problem in easy-to-understand language.
  • Named entity recognition: to tag entities in input texts with their corresponding types.
  • Question generation: to generate a valid and fluent question based on a given passage and, optionally, a target answer.

Performance

| Task | Dataset | GNN Model | Graph construction | Evaluation | Performance |
|---|---|---|---|---|---|
| Text classification | TREC / CAirline / CNSST | GAT | Dependency | Accuracy | 0.948 / 0.769 / 0.538 |
| Semantic parsing | JOBS | SAGE | Constituency | Execution accuracy | 0.936 |
| Question generation | SQuAD | GGNN | Dependency | BLEU-4 | 0.15175 |
| Machine translation | IWSLT14 | GCN | Dynamic | BLEU-4 | 0.3212 |
| Summarization | CNN (30k) | GCN | Dependency | ROUGE-1 | 26.4 |
| Knowledge graph completion | Kinship | GCN | Dependency | MRR | 82.4 |
| Math word problem | MAWPS / MATHQA | SAGE | Dynamic | Solution accuracy / Exact match | 76.4 / 61.07 |

Installation

Currently, users can install Graph4NLP via pip or from source. Graph4NLP supports the following operating systems:

  • Linux-based systems (tested on Ubuntu 18.04 and later)
  • macOS (only CPU version)
  • Windows 10 (requires PyTorch >= 1.8)

Installation via pip (binaries)

We provide pip wheels for all major OS/PyTorch/CUDA combinations. Note that we highly recommend that Windows users refer to Installation via source code due to compatibility issues.

Ensure that at least PyTorch (>=1.6.0) is installed:

Any version >= 1.6.0 works.

$ python -c "import torch; print(torch.__version__)"
>>> 1.6.0

Find the CUDA version PyTorch was installed with (for GPU users):

$ python -c "import torch; print(torch.version.cuda)"
>>> 10.2

Install the relevant dependencies:

torchtext is needed since Graph4NLP relies on it to implement embeddings. Please pay attention to the PyTorch version requirements before installing torchtext with the following command. For detailed version matching, please refer here.

pip install torchtext # >=0.7.0

Install Graph4NLP

pip install graph4nlp${CUDA}

where ${CUDA} should be replaced by the specific CUDA version: none (CPU version), "-cu92", "-cu101", "-cu102", or "-cu110". The following table shows the concrete commands. For CUDA 11.1 users, please refer to Installation via source code.

| Platform  | Command                     |
|-----------|-----------------------------|
| CPU       | pip install graph4nlp       |
| CUDA 9.2  | pip install graph4nlp-cu92  |
| CUDA 10.1 | pip install graph4nlp-cu101 |
| CUDA 10.2 | pip install graph4nlp-cu102 |
| CUDA 11.0 | pip install graph4nlp-cu110 |
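
If you script your setup, the wheel suffix can be derived from the CUDA version reported by PyTorch. The sketch below encodes only the mapping in the table above; anything else falls back to installation from source.

import subprocess
import sys
import torch

# Map torch's CUDA build (None for CPU-only builds) to the wheel suffix above.
SUFFIX = {None: "", "9.2": "-cu92", "10.1": "-cu101", "10.2": "-cu102", "11.0": "-cu110"}
cuda = torch.version.cuda
if cuda not in SUFFIX:
    raise SystemExit(f"No prebuilt wheel for CUDA {cuda}; please install from source.")
subprocess.check_call([sys.executable, "-m", "pip", "install", "graph4nlp" + SUFFIX[cuda]])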

Installation via source code

Ensure that at least PyTorch (>=1.6.0) is installed:

Any version >= 1.6.0 works.

$ python -c "import torch; print(torch.__version__)"
>>> 1.6.0

Find the CUDA version PyTorch was installed with (for GPU users):

$ python -c "import torch; print(torch.version.cuda)"
>>> 10.2

Install the relevant dependencies:

torchtext is needed since Graph4NLP relies on it to implement embeddings. Please pay attention to the PyTorch version requirements before installing torchtext with the following command. For detailed version matching, please refer here.

pip install torchtext # >=0.7.0

Download the source code of Graph4NLP from GitHub:

git clone https://github.com/graph4ai/graph4nlp.git
cd graph4nlp

Configure the CUDA version

Then run ./configure (or ./configure.bat if you are using Windows 10) to configure your installation. The configuration program will ask you to specify your CUDA version. If you do not have a GPU, please type 'cpu'.

./configure

Finally, install the package:

python setup.py install
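
To sanity-check the installation, importing a core class should succeed (a minimal check, not an official post-install test; GraphData is the library's graph container):

$ python -c "from graph4nlp.pytorch.data.data import GraphData; print('graph4nlp OK')"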

Major Releases

| Release | Date | Features |
|---|---|---|
| v0.4.1 | 2021-06-05 | Support the whole pipeline of Graph4NLP; GraphData and Dataset support |

New to Deep Learning on Graphs for NLP?

If you want to learn more about applying Deep Learning on Graphs techniques to NLP tasks, you can refer to our survey paper, which provides an overview of this research direction. If you want a detailed reference for our library, please refer to our docs.

Contributing

Please let us know if you encounter a bug or have any suggestions by filing an issue.

We welcome all contributions from bug fixes to new features and extensions.

We expect all contributions to be discussed in the issue tracker and to go through PRs.

Citation

If you find this code useful, please consider citing the following papers.

@article{wu2021graph,
  title={Graph Neural Networks for Natural Language Processing: A Survey},
  author={Lingfei Wu and Yu Chen and Kai Shen and Xiaojie Guo and Hanning Gao and Shucheng Li and Jian Pei and Bo Long},
  journal={arXiv preprint arXiv:2106.06090},
  year={2021}
}

@inproceedings{chen2020iterative,
  title={Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings},
  author={Chen, Yu and Wu, Lingfei and Zaki, Mohammed J},
  booktitle={Proceedings of the 34th Conference on Neural Information Processing Systems},
  month={Dec. 6-12,},
  year={2020}
}

@inproceedings{chen2020reinforcement,
  author    = {Chen, Yu and Wu, Lingfei and Zaki, Mohammed J.},
  title     = {Reinforcement Learning Based Graph-to-Sequence Model for Natural Question Generation},
  booktitle = {Proceedings of the 8th International Conference on Learning Representations},
  month = {Apr. 26-30,},
  year      = {2020}
}

@article{xu2018graph2seq,
  title={Graph2seq: Graph to sequence learning with attention-based neural networks},
  author={Xu, Kun and Wu, Lingfei and Wang, Zhiguo and Feng, Yansong and Witbrock, Michael and Sheinin, Vadim},
  journal={arXiv preprint arXiv:1804.00823},
  year={2018}
}

@inproceedings{li-etal-2020-graph-tree,
    title = {Graph-to-Tree Neural Networks for Learning Structured Input-Output Translation with Applications to Semantic Parsing and Math Word Problem},
    author = {Li, Shucheng  and
      Wu, Lingfei  and
      Feng, Shiwei  and
      Xu, Fangli  and
      Xu, Fengyuan  and
      Zhong, Sheng},
    booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},
    month = {Nov},
    year = {2020}
}

@inproceedings{huang-etal-2020-knowledge,
    title = {Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward},
    author = {Huang, Luyang  and
      Wu, Lingfei  and
      Wang, Lu},
    booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
    month = {Jul},
    year = {2020},
    pages = {5094--5107}
}

@inproceedings{wu-etal-2018-word,
    title = {Word Mover{'}s Embedding: From {W}ord2{V}ec to Document Embedding},
    author = {Wu, Lingfei  and
      Yen, Ian En-Hsu  and
      Xu, Kun  and
      Xu, Fangli  and
      Balakrishnan, Avinash  and
      Chen, Pin-Yu  and
      Ravikumar, Pradeep  and
      Witbrock, Michael J.},
    booktitle = {Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing},
    pages = {4524--4534},
    year = {2018},
}

@inproceedings{chen2020graphflow,
  author    = {Yu Chen and
               Lingfei Wu and
               Mohammed J. Zaki},
  title     = {GraphFlow: Exploiting Conversation Flow with Graph Neural Networks
               for Conversational Machine Comprehension},
  booktitle = {Proceedings of the Twenty-Ninth International Joint Conference on
               Artificial Intelligence, {IJCAI} 2020},
  publisher = {International Joint Conferences on Artificial Intelligence Organization},
  pages     = {1230--1236},
  year      = {2020}
}

@inproceedings{shen2020hierarchical,
  title={Hierarchical Attention Based Spatial-Temporal Graph-to-Sequence Learning for Grounded Video Description},
  author={Shen, Kai and Wu, Lingfei and Xu, Fangli and Tang, Siliang and Xiao, Jun and Zhuang, Yueting},
  booktitle = {Proceedings of the Twenty-Ninth International Joint Conference on
               Artificial Intelligence, {IJCAI} 2020},
  publisher = {International Joint Conferences on Artificial Intelligence Organization},
  pages     = {941--947},
  year      = {2020}
}  

@inproceedings{ijcai2020-419,
  title     = {RDF-to-Text Generation with Graph-augmented Structural Neural Encoders},
  author    = {Gao, Hanning and Wu, Lingfei and Hu, Po and Xu, Fangli},
  booktitle = {Proceedings of the Twenty-Ninth International Joint Conference on
               Artificial Intelligence, {IJCAI-20}},
  publisher = {International Joint Conferences on Artificial Intelligence Organization},
  pages     = {3030--3036},
  year      = {2020}
}


Team

Graph4AI Team: Lingfei Wu (team leader), Yu Chen, Kai Shen, Xiaojie Guo, Hanning Gao, Shucheng Li, Saizhuo Wang and Xiao Liu. We are passionate about developing useful open-source libraries that promote the easy use of various Deep Learning on Graphs techniques for Natural Language Processing. Our team consists of research scientists, applied data scientists, and graduate students from a variety of groups, including the JD.COM Silicon Valley Research Center (Lingfei Wu, Xiaojie Guo), JD.COM and Zhejiang University (Kai Shen), Facebook AI (Yu Chen), Tongji University (Hanning Gao), Nanjing University (Shucheng Li), and HKUST (Saizhuo Wang).

Contact

If you have any technical questions, please submit new issues.

If you have any other questions, please contact us: Lingfei Wu [[email protected]] and Xiaojie Guo [[email protected]].

License

Graph4NLP uses Apache License 2.0.

Comments
  • Model diagrams for the GNN examples

    ❓ Questions and Help

    This repo presents a couple of nice examples for GNNs.

    I am particularly interested in the following:

    Do you have the model architecture described somewhere as part of the tutorial or documentation? Alternatively, do you have a canonical architecture described somewhere for these Graph2Seq-based models? Is the model the same as in Graph2Seq: A Generalized Seq2Seq Model for Graph Inputs?

    opened by code-rex1 16
  • Can we create a graph model for all of our contextual data and then use the same graph to train all our downstream models?

    In the examples mentioned in the demo, we saw that graph construction was done while training the specific use case, like classification. Is there a way we can construct a graph on the whole of our contextual data and then use that graph for our downstream applications like classification, knowledge graph completion, etc.?

    opened by dbanka 16
  • How to use the Graph2Seq model with multiple GPUs?

    How to train the Graph2Seq model in a multiple-GPU environment? As an example, there is an NMT example here: https://github.com/graph4ai/graph4nlp/tree/master/examples/pytorch/nmt

    The model is built here: https://github.com/graph4ai/graph4nlp/blob/master/examples/pytorch/nmt/build_model.py

    Could this be extended to be trained in a multiple GPU environment?

    opened by nashid 15
  • How to build a custom dataset for the graph2seq model?

    ❓ Questions and Help

    I want to use the graph2seq model that would encode an input source code AST as a graph and would decode to a target sequence.

    So far, I am building the graph using the GraphData API to represent the source code AST. However, how would I feed the sequence data to the decoder?

    I can't find one complete example showing the whole flow. Can anyone point me in the right direction?
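
    A minimal sketch of the encoder-side graph, assuming GraphData exposes add_nodes alongside add_edge (add_edge takes source/target node indices; add_nodes is an assumed method); the decoder side consumes gold token ids, as tgt_seq does in the Graph2Seq Quick tour:

    from graph4nlp.pytorch.data.data import GraphData

    # Sketch (assumed API): a 3-node AST fragment, e.g. FunctionDef -> (arguments, body).
    g = GraphData()
    g.add_nodes(3)
    g.add_edge(0, 1)
    g.add_edge(0, 2)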

    opened by nashid 13
  • Preprocess data for inference on Graph2Tree

    Hello everyone, I want to ask something related to https://github.com/graph4ai/graph4nlp/issues/316#issue-918894490. I cannot fully understand how I can pass data to the model for inference. In particular, for the evaluation in the Graph2Tree tutorial on the Jobs data, I see that the model retrieves the data from the test data-loader, which is generated from JobsDatasetForTree, which gets data from a text file and preprocesses it. This differs from my goal: after training, I save the model and load it, and then I want to do inference on unseen data. So I want to pass the model a list of strings, preprocess these strings (according to the method I used in training for the graph construction, i.e., Dependency or Constituency or something else), and do inference on this data. Can you explain how I can do that? Thanks

    opened by maurovit 13
  • Error while running NER

    🐛 Bug

    To Reproduce

    Steps to reproduce the behavior:

    1. Run python examples/pytorch/name_entity_recognition/main.py --graph_type dependency_graph --gpu 0 --init_hidden_size 400 --hidden_size 128 --lr 0.01 --batch_size 100 --gnn_type graphsage --direction_option undirected

    2. Getting TypeError: Can't instantiate abstract class ConllDataset with abstract methods download

    Expected behavior

    Environment

    • Graph4NLP Version (e.g., 0.4.1):
    • Backend Library & Version (e.g., PyTorch 1.6.0): 1.8.1
    • OS (e.g., Linux): Windows 10
    • How you installed Graph4NLP (pip, source): source
    • Build command you used (if compiling from source): python setup.py install
    • Python version: 3.6.5
    • CUDA/cuDNN version (if applicable): CPU
    • GPU models and configuration (e.g. 2080Ti):
    • Any other relevant information:

    Additional context

    bug 
    opened by yogeshhk 12
  • How to add an edge with attribute value?

    ❓ Questions and Help

    How to add an edge with attribute value?

        def add_edge(self, src: int, tgt: int):
            """
            Add one edge to the graph.
    
            Parameters
            ----------
            src : int
                Source node index
            tgt : int
                Target node index
    

    Currently, I can only add an edge between two nodes using their node indices. But how do I set the edge_type? I can't find any API like add_edge(self, src: int, tgt: int, edge_type: str).

    Can anyone please provide a pointer or code snippet?
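
    A minimal workaround sketch given the endpoint-only add_edge above: keep edge types in an ad-hoc side table keyed by the endpoints (edge_types is plain Python, not a library API; add_nodes is an assumed GraphData method):

    from graph4nlp.pytorch.data.data import GraphData

    g = GraphData()
    g.add_nodes(2)                  # assumed API
    g.add_edge(0, 1)                # endpoint-only, per the docstring above
    edge_types = {(0, 1): "child"}  # ad-hoc side table for edge attributes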

    opened by nashid 11
  • Using BERT as pretrained model

    ❓ Questions and Help

    Hello, I changed the pretrained model to 'bert' but I got the following error:

    File "/home/hadi/miniconda3/envs/ongpu/lib/python3.9/site-packages/transformers-4.12.5-py3.9.egg/transformers/file_utils.py", line 1818, in get_list_of_files
       model_info = HfApi(endpoint=HUGGINGFACE_CO_RESOLVE_ENDPOINT).model_info(
     File "/home/hadi/miniconda3/envs/ongpu/lib/python3.9/site-packages/huggingface_hub-0.1.2-py3.9.egg/huggingface_hub/hf_api.py", line 585, in model_info
       r.raise_for_status()
     File "/home/hadi/miniconda3/envs/ongpu/lib/python3.9/site-packages/requests/models.py", line 953, in raise_for_status
       raise HTTPError(http_error_msg, response=self)
    requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/None
    
    

    It seems to be due to the Hugging Face servers, but how can I fix it? If I want to download the pretrained model manually, where should I put the file?

    opened by icocoder 11
  • Summarization example

    ❓ Questions and Help

    Hi, in the summarization example, I want to try other graph neural networks (replacing GCN with GraphSAGE, GAT, ...). How can I do it with minimal changes to the original code?
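
    One minimal route, assuming the summarization example wires its GNN through the same config helpers as the Quick tour; the value names "graphsage" and "dependency" are assumptions:

    from graph4nlp.pytorch.modules.config import get_basic_args

    # Regenerate the default args with a different GNN; only graph_embedding_name
    # changes (values are assumed).
    args = get_basic_args(graph_construction_name="dependency",
                          graph_embedding_name="graphsage",
                          decoder_name="stdrnn")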

    documentation 
    opened by mjzohrabi 11
  • using copy mechanism

    ❓ Questions and Help

    Came across this fantastic library just a week back. This library is looking great 🥇

    I will mostly be using the example for machine translation. I will also use the text summarization example at a later point.

    I am currently using OpenNMT for machine translation and use the copy mechanism, as my source and target have the same vocabulary. Now I will be moving to your library for this NMT task so that my input can leverage the graph structure.

    At the same time, I need to use the copy mechanism to handle OOV issues. My questions:

    1. It appears to me that I just need to set use_copy to true in the decoder_args, and that should do for the NMT example. Am I missing something?
    decoder_args:
      rnn_decoder_share:
        rnn_type: "lstm"
        input_size: 300
        hidden_size: 512
        rnn_emb_input_size: 300
        use_copy: true
        use_coverage: true
        graph_pooling_strategy: null
        attention_type: "sep_diff_encoder_type"
        fuse_strategy: "concatenate"
        dropout: 0.3
    
    2. I am using the Graph2Seq model as per the NMT example. In the NMT example, you have used GCN. For the copy mechanism to be more effective, does it matter which GNN I use?

    You have four GNN models: GCN, GAT, GGNN, and GraphSAGE. Do you think the copy mechanism would be more effective with a specific GNN? Or does it not matter?

    3. Would enabling the copy mechanism lower the accuracy/BLEU score of the translation task?
    opened by code2graph 10
  • Running machine translation using different GNNs

    ❓ Questions and Help

    I am running the NMT example on the same dataset with GNN variants:

    • GCN
    • GGNN
    • GraphSage

    While the execution runs with GCN, I get Out-of-Memory (OOM) errors with GGNN and GraphSAGE. Can anyone help me with this?

    opened by smith-co 10
  • Rgcn semantic parsing

    Description

    Checklist

    Please feel free to remove inapplicable items for your PR.

    • [ ] The PR title starts with [$CATEGORY] (such as [Doc], [Feature])
    • [ ] Changes are complete (i.e. I finished coding on this PR)
    • [ ] All changes have test coverage
    • [ ] Code is well-documented
    • [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
    • [ ] Related issue is referred to in this PR
    • [ ] If the PR is for a new model/paper, I've updated the example index here.

    Changes

    opened by schenglee 0
  • [Feature] Add RGCN for text classification

    Description

    Add RGCN for text classification.

    Checklist

    Please feel free to remove inapplicable items for your PR.

    • [x] The PR title starts with [$CATEGORY] (such as [Doc], [Feature])
    • [x] Changes are complete (i.e. I finished coding on this PR)
    • [x] All changes have test coverage
    • [x] Code is well-documented
    • [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
    opened by hugochan 0
  • Could you please provide a simple example to use the NodeEmbeddingBasedRefinedGraphConstruction class?

    ❓ Questions and Help

    I am trying to come up with a simple example of how to use the NodeEmbeddingBasedRefinedGraphConstruction class, but I am getting the following error. Could you please let me know how to configure the class to make it work?

    from graph4nlp.pytorch.modules.graph_construction.node_embedding_based_refined_graph_construction import NodeEmbeddingBasedRefinedGraphConstruction
    raw_data = "James went to the corner-shop. And bought some eggs."
    gl = NodeEmbeddingBasedRefinedGraphConstruction(alpha_fusion= 0.2, input_size=(len(raw_data)))
    graphdata = gl.init_topology(raw_data, lower_case=True)
    graph = gl.dynamic_topology(graphdata)
    

    But here is the error that I get.

        152 def __repr__(self):
    --> 153     return self._graph._get_batch_node_features()
    
    File ~/.conda/envs/graph4nlp/lib/python3.10/site-packages/graph4nlp-0.5.5-py3.10.egg/graph4nlp/pytorch/data/data.py:930, in GraphData._get_batch_node_features(self, item)
        915 """
        916 Get the batched view of node feature tensors, i.e., tensors in (B, N, D) view
        917 
       (...)
        927     batch-view tensors, or just the specified tensor.
        928 """
        929 if not self._is_batch:
    --> 930     raise Exception("Calling batch_node_features() method on a non-batch graph.")
        931 if item is None:
        932     batch_node_features = dict()
    
    Exception: Calling batch_node_features() method on a non-batch graph.
    

    Any working simple example is highly appreciated.

    opened by smousav9 1
  • Attribute Error in running IEBasedGraphConstruction.topology

    ❓ Questions and Help

    I am trying to run the following example from the documentation (https://graph4ai.github.io/graph4nlp/guide/construction/iegraphconstruction.html), but I am getting an attribute error. I was wondering what the correct way is to run IEBasedGraphConstruction to construct a graph?

    raw_data = ('James is on shop. He buys eggs.')
    
    nlp_parser = StanfordCoreNLP('http://localhost', port=9000, timeout=300000)
    
    props_coref = {
                    'annotators': 'tokenize, ssplit, pos, lemma, ner, parse, coref',
                    "tokenize.options":
                        "splitHyphenated=true,normalizeParentheses=true,normalizeOtherBrackets=true",
                    "tokenize.whitespace": False,
                    'ssplit.isOneSentence': False,
                    'outputFormat': 'json'
                }
    
    props_openie = {
        'annotators': 'tokenize, ssplit, pos, ner, parse, openie',
        "tokenize.options":
            "splitHyphenated=true,normalizeParentheses=true,normalizeOtherBrackets=true",
        "tokenize.whitespace": False,
        'ssplit.isOneSentence': False,
        'outputFormat': 'json',
        "openie.triple.strict": "true"
    }
    
    processor_args = [props_coref, props_openie]
    
    graphdata = IEBasedGraphConstruction.topology(raw_data, nlp_parser,
                                                  processor_args=processor_args,
                                                  merge_strategy=None,
                                                  edge_strategy=None)
    print(graphdata)
    

    Error:

    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    Input In [8], in <cell line: 26>()
         14 props_openie = {
         15     'annotators': 'tokenize, ssplit, pos, ner, parse, openie',
         16     "tokenize.options":
       (...)
         21     "openie.triple.strict": "true"
         22 }
         24 processor_args = [props_coref, props_openie]
    ---> 26 graphdata = ConstituencyBasedGraphConstruction.topology(raw_data, nlp_parser,
         27                                               processor_args=processor_args,
         28                                               merge_strategy=None,
         29                                               edge_strategy=None)
         30 print(graphdata)
    
    AttributeError: type object 'ConstituencyBasedGraphConstruction' has no attribute 'topology'
    
    

    Thank you

    opened by smousav9 1
  • [Feature] Rgcn integration

    Description

    Checklist

    Please feel free to remove inapplicable items for your PR.

    • [x] The PR title starts with [$CATEGORY] (such as [Doc], [Feature])
    • [x] Changes are complete (i.e. I finished coding on this PR)
    • [x] All changes have test coverage
    • [x] Code is well-documented
    • [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
    • [x] Related issue is referred to in this PR
    • [ ] If the PR is for a new model/paper, I've updated the example index here.

    Changes

    opened by xiao03 0
  • Add the AMR graph construction and RGCN to the examples

    Description

    Checklist

    Please feel free to remove inapplicable items for your PR.

    • [ ] The PR title starts with [$CATEGORY] (such as [Doc], [Feature])
    • [ ] Changes are complete (i.e. I finished coding on this PR)
    • [ ] All changes have test coverage
    • [ ] Code is well-documented
    • [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
    • [ ] Related issue is referred to in this PR
    • [ ] If the PR is for a new model/paper, I've updated the example index here.

    Changes

    opened by cminusQAQ 0
Releases

  • v0.5.5 (Jan 20, 2022)

    • Support the model.predict API by introducing wrapper functions.
    • Introduce three new inference_wrapper functions: classifier_inference_wrapper, generator_inference_wrapper, generator_inference_wrapper_for_tree.
    • Add inference and inference_advance examples in each application.
    • Separate the graph topology and graph embedding processes.
    • Renew all the graph construction functions.
    • Divide the graph_embedding module into graph_embedding_initialization and graph_embedding_learning.
    • Unify the parameters in Dataset: the ambiguous parameter graph_type is removed in favor of graph_name, which indicates the graph construction method, and static_or_dynamic, which indicates static or dynamic graph construction.
    • New: the dataset can now automatically choose the default methods (e.g., topology_builder) from the single parameter graph_name.
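
    Under the unified parameters, dataset construction reduces to naming the graph type. A minimal sketch against the Jobs dataset from the Quick tour (the graph_name value is an assumption based on the notes above):

    from graph4nlp.pytorch.datasets.jobs import JobsDataset

    # Assumed usage: graph_name picks the construction method and the matching
    # topology_builder is selected automatically, per the v0.5.5 notes above.
    jobs_dataset = JobsDataset(root_dir='graph4nlp/pytorch/test/dataset/jobs',
                               graph_name='dependency')
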
  • v0.5.1-alpha (Sep 30, 2021)

    • Lint the code
    • Support testing with users' own data
    • Fix a bug: the word embedding size was hard-coded in v0.4.1; it now equals the "word_emb_size" parameter.
    • Fix a bug: build_vocab() was called twice in v0.4.1.
    • Fix a bug: the two main files of the knowledge graph completion example missed the optional parameter "kg_graph" in ranking_and_hits() when resuming training.
    • Fix a bug: the preprocessing path error in the KGC readme.
    • Fix a bug: the embedding construction error when setting emb_strategy to 'w2v'.
  • v0.4.1-alpha (Jun 15, 2021)
