GNPy: Optical Route Planning and DWDM Network Optimization

Overview


GNPy is an open-source, community-developed library for building route planning and optimization tools for real-world mesh optical networks. We are a consortium of operators, vendors, and academic researchers sponsored via the Telecom Infra Project's OOPT/PSE working group. Together, we are building a tool for the rapid development of production-grade route planning applications that is easily extensible with custom network elements and performant at the scale of real-world mesh optical networks.

GNPy with an OLS system

Quick Start

Install either via Docker, or as a Python package. Read our documentation, learn from the demos, and get in touch with us.

This example demonstrates how GNPy can be used to check the expected SNR at the end of the line by varying the channel input power:

Running a simple simulation example
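The mechanics behind such a sweep can be sketched without GNPy at all: at the receiver, ASE noise is roughly independent of launch power while Kerr nonlinear interference grows with its cube, so the SNR peaks at an intermediate optimum. The toy model below (illustrative constants, not GNPy's API or numbers) shows the shape of that trade-off:

```python
import numpy as np

# Toy GN-model-style sweep (illustrative only, not the GNPy API):
# SNR = P / (P_ase + P_nli), where the ASE term is constant with
# launch power and the NLI term grows as P^3, so the SNR peaks at
# an intermediate "optimum" launch power.
def snr_db(p_dbm, p_ase_dbm=-30.0, eta=1e-3):
    p = 10 ** (p_dbm / 10)          # channel power, mW
    p_ase = 10 ** (p_ase_dbm / 10)  # amplifier ASE noise, mW
    p_nli = eta * p ** 3            # Kerr nonlinear interference, mW
    return 10 * np.log10(p / (p_ase + p_nli))

powers = np.arange(-10, 11)         # sweep launch power in dBm
best = powers[np.argmax(snr_db(powers))]
print(f"optimum launch power: {best} dBm")
```

With these made-up constants the optimum lands at -1 dBm; GNPy performs the real computation per channel across the whole line.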

GNPy can do much more, including acting as a Path Computation Engine, tracking bandwidth requests, or advising the SDN controller about the best possible path through a large DWDM network. Learn more in the documentation.

Comments
  • testing gnpy after fresh install: error running examples


    2018-04-16: after testing the installation via pip install, I noticed that the timestamps in my local gnpy folder were older than those on master (see separate issues).

    1. I then deleted the older gnpy folder and started afresh, cloning from master.
    2. The file timestamps were now current (as expected).
    3. Running the first example from the README --> no success.
    4. Running the second example from the README --> no success.

    In other words, I wasn't able to run the examples from the README.

    ggrammel-mbp:github ggrammel$ git clone https://github.com/Telecominfraproject/gnpy
    Cloning into 'gnpy'...
    remote: Counting objects: 694, done.
    remote: Compressing objects: 100% (8/8), done.
    remote: Total 694 (delta 1), reused 3 (delta 0), pack-reused 685
    Receiving objects: 100% (694/694), 349.50 KiB | 501.00 KiB/s, done.
    Resolving deltas: 100% (365/365), done.
    
    ggrammel-mbp:github ggrammel$ cd gnpy
    ggrammel-mbp:gnpy ggrammel$ python setup.py install
    /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/distutils/dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type'
      warnings.warn(msg)
    running install
    running bdist_egg
    running egg_info
    creating gnpy.egg-info
    writing gnpy.egg-info/PKG-INFO
    writing dependency_links to gnpy.egg-info/dependency_links.txt
    writing requirements to gnpy.egg-info/requires.txt
    writing top-level names to gnpy.egg-info/top_level.txt
    writing manifest file 'gnpy.egg-info/SOURCES.txt'
    reading manifest file 'gnpy.egg-info/SOURCES.txt'
    writing manifest file 'gnpy.egg-info/SOURCES.txt'
    installing library code to build/bdist.macosx-10.6-intel/egg
    running install_lib
    running build_py
    creating build
    creating build/lib
    creating build/lib/gnpy
    copying gnpy/__init__.py -> build/lib/gnpy
    creating build/lib/gnpy/core
    copying gnpy/core/__init__.py -> build/lib/gnpy/core
    copying gnpy/core/elements.py -> build/lib/gnpy/core
    copying gnpy/core/execute.py -> build/lib/gnpy/core
    copying gnpy/core/info.py -> build/lib/gnpy/core
    copying gnpy/core/network.py -> build/lib/gnpy/core
    copying gnpy/core/node.py -> build/lib/gnpy/core
    copying gnpy/core/units.py -> build/lib/gnpy/core
    copying gnpy/core/utils.py -> build/lib/gnpy/core
    creating build/bdist.macosx-10.6-intel
    creating build/bdist.macosx-10.6-intel/egg
    creating build/bdist.macosx-10.6-intel/egg/gnpy
    copying build/lib/gnpy/__init__.py -> build/bdist.macosx-10.6-intel/egg/gnpy
    creating build/bdist.macosx-10.6-intel/egg/gnpy/core
    copying build/lib/gnpy/core/__init__.py -> build/bdist.macosx-10.6-intel/egg/gnpy/core
    copying build/lib/gnpy/core/elements.py -> build/bdist.macosx-10.6-intel/egg/gnpy/core
    copying build/lib/gnpy/core/execute.py -> build/bdist.macosx-10.6-intel/egg/gnpy/core
    copying build/lib/gnpy/core/info.py -> build/bdist.macosx-10.6-intel/egg/gnpy/core
    copying build/lib/gnpy/core/network.py -> build/bdist.macosx-10.6-intel/egg/gnpy/core
    copying build/lib/gnpy/core/node.py -> build/bdist.macosx-10.6-intel/egg/gnpy/core
    copying build/lib/gnpy/core/units.py -> build/bdist.macosx-10.6-intel/egg/gnpy/core
    copying build/lib/gnpy/core/utils.py -> build/bdist.macosx-10.6-intel/egg/gnpy/core
    byte-compiling build/bdist.macosx-10.6-intel/egg/gnpy/__init__.py to __init__.cpython-36.pyc
    byte-compiling build/bdist.macosx-10.6-intel/egg/gnpy/core/__init__.py to __init__.cpython-36.pyc
    byte-compiling build/bdist.macosx-10.6-intel/egg/gnpy/core/elements.py to elements.cpython-36.pyc
    byte-compiling build/bdist.macosx-10.6-intel/egg/gnpy/core/execute.py to execute.cpython-36.pyc
    byte-compiling build/bdist.macosx-10.6-intel/egg/gnpy/core/info.py to info.cpython-36.pyc
    byte-compiling build/bdist.macosx-10.6-intel/egg/gnpy/core/network.py to network.cpython-36.pyc
    byte-compiling build/bdist.macosx-10.6-intel/egg/gnpy/core/node.py to node.cpython-36.pyc
    byte-compiling build/bdist.macosx-10.6-intel/egg/gnpy/core/units.py to units.cpython-36.pyc
    byte-compiling build/bdist.macosx-10.6-intel/egg/gnpy/core/utils.py to utils.cpython-36.pyc
    creating build/bdist.macosx-10.6-intel/egg/EGG-INFO
    copying gnpy.egg-info/PKG-INFO -> build/bdist.macosx-10.6-intel/egg/EGG-INFO
    copying gnpy.egg-info/SOURCES.txt -> build/bdist.macosx-10.6-intel/egg/EGG-INFO
    copying gnpy.egg-info/dependency_links.txt -> build/bdist.macosx-10.6-intel/egg/EGG-INFO
    copying gnpy.egg-info/requires.txt -> build/bdist.macosx-10.6-intel/egg/EGG-INFO
    copying gnpy.egg-info/top_level.txt -> build/bdist.macosx-10.6-intel/egg/EGG-INFO
    zip_safe flag not set; analyzing archive contents...
    creating dist
    creating 'dist/gnpy-0.1.3-py3.6.egg' and adding 'build/bdist.macosx-10.6-intel/egg' to it
    removing 'build/bdist.macosx-10.6-intel/egg' (and everything under it)
    Processing gnpy-0.1.3-py3.6.egg
    Copying gnpy-0.1.3-py3.6.egg to /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Adding gnpy 0.1.3 to easy-install.pth file
    
    Installed /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/gnpy-0.1.3-py3.6.egg
    Processing dependencies for gnpy==0.1.3
    Searching for xlrd==1.1.0
    Best match: xlrd 1.1.0
    Adding xlrd 1.1.0 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for urllib3==1.22
    Best match: urllib3 1.22
    Adding urllib3 1.22 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for sphinxcontrib-websupport==1.0.1
    Best match: sphinxcontrib-websupport 1.0.1
    Adding sphinxcontrib-websupport 1.0.1 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for sphinxcontrib-bibtex==0.3.6
    Best match: sphinxcontrib-bibtex 0.3.6
    Adding sphinxcontrib-bibtex 0.3.6 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for Sphinx==1.6.6
    Best match: Sphinx 1.6.6
    Adding Sphinx 1.6.6 to easy-install.pth file
    Installing sphinx-apidoc script to /Library/Frameworks/Python.framework/Versions/3.6/bin
    Installing sphinx-autogen script to /Library/Frameworks/Python.framework/Versions/3.6/bin
    Installing sphinx-build script to /Library/Frameworks/Python.framework/Versions/3.6/bin
    Installing sphinx-quickstart script to /Library/Frameworks/Python.framework/Versions/3.6/bin
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for snowballstemmer==1.2.1
    Best match: snowballstemmer 1.2.1
    Adding snowballstemmer 1.2.1 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for six==1.11.0
    Best match: six 1.11.0
    Adding six 1.11.0 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for scipy==1.0.0
    Best match: scipy 1.0.0
    Adding scipy 1.0.0 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for requests==2.18.4
    Best match: requests 2.18.4
    Adding requests 2.18.4 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for PyYAML==3.12
    Best match: PyYAML 3.12
    Adding PyYAML 3.12 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for pytz==2017.3
    Best match: pytz 2017.3
    Adding pytz 2017.3 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for python-dateutil==2.6.1
    Best match: python-dateutil 2.6.1
    Adding python-dateutil 2.6.1 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for pytest==3.3.2
    Best match: pytest 3.3.2
    Adding pytest 3.3.2 to easy-install.pth file
    Installing py.test script to /Library/Frameworks/Python.framework/Versions/3.6/bin
    Installing pytest script to /Library/Frameworks/Python.framework/Versions/3.6/bin
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for pyparsing==2.2.0
    Best match: pyparsing 2.2.0
    Adding pyparsing 2.2.0 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for Pygments==2.2.0
    Best match: Pygments 2.2.0
    Adding Pygments 2.2.0 to easy-install.pth file
    Installing pygmentize script to /Library/Frameworks/Python.framework/Versions/3.6/bin
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for pybtex-docutils==0.2.1
    Best match: pybtex-docutils 0.2.1
    Adding pybtex-docutils 0.2.1 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for pybtex==0.21
    Best match: pybtex 0.21
    Adding pybtex 0.21 to easy-install.pth file
    Installing pybtex script to /Library/Frameworks/Python.framework/Versions/3.6/bin
    Installing pybtex-convert script to /Library/Frameworks/Python.framework/Versions/3.6/bin
    Installing pybtex-format script to /Library/Frameworks/Python.framework/Versions/3.6/bin
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for py==1.5.2
    Best match: py 1.5.2
    Adding py 1.5.2 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for pluggy==0.6.0
    Best match: pluggy 0.6.0
    Adding pluggy 0.6.0 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for oset==0.1.3
    Best match: oset 0.1.3
    Adding oset 0.1.3 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for numpy==1.13.3
    Best match: numpy 1.13.3
    Adding numpy 1.13.3 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for networkx==2.0
    Best match: networkx 2.0
    Adding networkx 2.0 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for matplotlib==2.1.0
    Best match: matplotlib 2.1.0
    Adding matplotlib 2.1.0 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for MarkupSafe==1.0
    Best match: MarkupSafe 1.0
    Adding MarkupSafe 1.0 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for latexcodec==1.0.5
    Best match: latexcodec 1.0.5
    Adding latexcodec 1.0.5 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for Jinja2==2.10
    Best match: Jinja2 2.10
    Adding Jinja2 2.10 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for imagesize==0.7.1
    Best match: imagesize 0.7.1
    Adding imagesize 0.7.1 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for idna==2.6
    Best match: idna 2.6
    Adding idna 2.6 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for docutils==0.14
    Best match: docutils 0.14
    Adding docutils 0.14 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for decorator==4.1.2
    Best match: decorator 4.1.2
    Adding decorator 4.1.2 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for cycler==0.10.0
    Best match: cycler 0.10.0
    Adding cycler 0.10.0 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for chardet==3.0.4
    Best match: chardet 3.0.4
    Adding chardet 3.0.4 to easy-install.pth file
    Installing chardetect script to /Library/Frameworks/Python.framework/Versions/3.6/bin
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for certifi==2017.11.5
    Best match: certifi 2017.11.5
    Adding certifi 2017.11.5 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for Babel==2.5.3
    Best match: Babel 2.5.3
    Adding Babel 2.5.3 to easy-install.pth file
    Installing pybabel script to /Library/Frameworks/Python.framework/Versions/3.6/bin
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for attrs==17.4.0
    Best match: attrs 17.4.0
    Adding attrs 17.4.0 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for alabaster==0.7.10
    Best match: alabaster 0.7.10
    Adding alabaster 0.7.10 to easy-install.pth file
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Searching for setuptools==38.5.1
    Best match: setuptools 38.5.1
    Adding setuptools 38.5.1 to easy-install.pth file
    Installing easy_install script to /Library/Frameworks/Python.framework/Versions/3.6/bin
    Installing easy_install-3.6 script to /Library/Frameworks/Python.framework/Versions/3.6/bin
    
    Using /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
    Finished processing dependencies for gnpy==0.1.3
    
    ggrammel-mbp:gnpy ggrammel$ python examples/transmission_main_example.py
    Traceback (most recent call last):
      File "examples/transmission_main_example.py", line 90, in <module>
        exit(main(args))
      File "examples/transmission_main_example.py", line 41, in main
        network = network_from_json(json_data)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/gnpy/core/network.py", line 29, in network_from_json
        g.add_node(getattr(elements, el_config['type'])(el_config))
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/gnpy/core/elements.py", line 222, in __init__
        super().__init__(config)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/gnpy/core/node.py", line 48, in __init__
        self.config = ConfigStruct(**config)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/gnpy/core/node.py", line 31, in __init__
        json_config = load_json(config['config_from_json'])
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/gnpy/core/utils.py", line 20, in load_json
        with open(filename, 'r') as f:
    FileNotFoundError: [Errno 2] No such file or directory: 'edfa_config.json'
    
    ggrammel-mbp:gnpy ggrammel$ python examples/transmission_main_example.py examples/coronet_conus_example.json
    Traceback (most recent call last):
      File "examples/transmission_main_example.py", line 90, in <module>
        exit(main(args))
      File "examples/transmission_main_example.py", line 42, in main
        build_network(network)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/gnpy/core/network.py", line 119, in build_network
        network = split_fiber(network, fiber)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/gnpy/core/network.py", line 86, in split_fiber
        network = add_egress_amplifier(network, prev_node)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/gnpy/core/network.py", line 108, in add_egress_amplifier
        new_edfa = Edfa(config)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/gnpy/core/elements.py", line 222, in __init__
        super().__init__(config)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/gnpy/core/node.py", line 48, in __init__
        self.config = ConfigStruct(**config)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/gnpy/core/node.py", line 31, in __init__
        json_config = load_json(config['config_from_json'])
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/gnpy/core/utils.py", line 20, in load_json
        with open(filename, 'r') as f:
    FileNotFoundError: [Errno 2] No such file or directory: 'edfa_config.json'
    ggrammel-mbp:gnpy ggrammel$ 
    ggrammel-mbp:gnpy ggrammel$ ls -l examples/
    total 296
    -rw-r--r--   1 ggrammel  wheel  132604 Apr 16 19:19 coronet_conus_example.json
    drwxr-xr-x   7 ggrammel  wheel     238 Apr 16 19:19 edfa
    -rw-r--r--   1 ggrammel  wheel    9973 Apr 16 19:19 edfa_config.json
    drwxr-xr-x  12 ggrammel  wheel     408 Apr 16 19:19 edfa_model
    -rw-r--r--   1 ggrammel  wheel    3241 Apr 16 19:19 transmission_main_example.py
    ggrammel-mbp:gnpy ggrammel$ 
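    The tracebacks above show load_json('edfa_config.json') failing because the file is opened via a path relative to the current working directory, so the examples only find it when run from the directory that actually contains edfa_config.json. One conventional fix (a sketch of the general technique, not the project's actual patch) is to resolve relative config paths against the script's own location:

```python
from pathlib import Path

def resolve_config(filename, base=None):
    """Return an absolute path for `filename`: relative names are
    anchored at `base` (defaulting to this file's directory) instead
    of at whatever the current working directory happens to be."""
    p = Path(filename)
    if p.is_absolute():
        return p
    base = Path(base) if base is not None else Path(__file__).parent
    return base / p
```

    Alternatively, running the scripts from inside the examples/ directory works around the error.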
    
    bug 
    opened by ggrammel 19
  • Variable gain amp model documentation and modification


    The current VGA model for estimating NF at intermediate points is based on an assumption that is overly aggressive for many amplifiers. In addition, there is no documentation of the existing model's derivation. The attached slide deck outlines the derivation of the existing model, plus suggested modifications to better fit actual amplifiers, with an example. It also explains why the intermediate variable delta_p is not required and has no impact on the resulting NF fit. (It has an impact on practical amp design, but not on the model.)
    GNPY_variable_gain_amp_model.pptx

    Additional context: an updated slide deck corrects an equation and illustrates GNPy advanced amplifier model fits: GNPY_variable_gain_amp_model_v2.pptx
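    For readers without the slide deck: the physics underneath any two-coil VGA NF model is the Friis cascade formula, which is why the total NF degrades as the first stage's gain is reduced. A minimal sketch (standard textbook formula; the dB values used below are illustrative, not GNPy's fitted coefficients):

```python
import math

def db_to_lin(x_db):
    return 10 ** (x_db / 10)

def lin_to_db(x):
    return 10 * math.log10(x)

def cascade_nf_db(nf1_db, nf2_db, gain1_db):
    """Friis formula for two cascaded stages:
    F_total = F1 + (F2 - 1) / G1, in linear units."""
    f1, f2, g1 = db_to_lin(nf1_db), db_to_lin(nf2_db), db_to_lin(gain1_db)
    return lin_to_db(f1 + (f2 - 1) / g1)
```

    As the first coil's gain drops (e.g. when a variable-gain amp backs off from maximum gain), the second stage's noise is suppressed less and the total NF rises; that NF-versus-gain behaviour is exactly what the min/max-NF fit has to capture.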

    opened by cgkelly 15
  • Build no amp in roadm and add roadm restrictions on amps


    This PR adds amplifier restrictions (booster and preamp) to ROADMs.

    To use it with an XLS input file:

    • The user may enter (valid) restrictions in eqpt_config.json; all ROADMs will then share the same restriction.
    • The user may add a per-ROADM restriction in the Nodes sheet (columns 9 and 10 for booster and preamp, respectively); this overrides the general eqpt_config input for that particular node.

    To use it with JSON:

    • The user may enter (valid) restrictions in eqpt_config.json; all ROADMs will then share the same restriction.
    • The user must add a valid restriction field to the JSON roadm element; this overrides the general input.

    Note that restrictions apply to the entire ROADM, not per degree.

    A special case without a booster is enabled via the XLS Eqpt sheet: if a ROADM amp is marked fused, a Fused element is added instead of an amplifier, enabling special designs of ROADMs without a booster. This can be applied per degree.
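    Based on the description above, the equipment-library entry might carry the default restriction lists like this (a sketch only: the field names follow what the PR describes, and the amplifier variety names are made up):

```json
{
  "Roadm": [
    {
      "restrictions": {
        "booster_variety_list": ["std_booster"],
        "preamp_variety_list": ["std_low_gain", "std_medium_gain"]
      }
    }
  ]
}
```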

    This PR refers to the discussion in #211. @jeanluc-auge, @ojnas, @jktjkt, could you please have a look and tell me how to improve it? Thanks!

    opened by EstherLerouzic 14
  • YANG models for inventory management and topology description


    Right now, the documentation and code disagree on which values are mandatory, what exactly they mean, etc. Let's try to use a set of YANG models for verification.

    FIXME: What is missing

    • [x] ROADMs
    • [ ] EDFA parameters
      • [x] TODO: implement DGT (et al) indirection
      • [x] check that this set of EDFA parameters makes sense for each possible simulation parameter set
      • [x] rework for #227
      • [x] EDFA maximal gain -- this one doesn't appear to be used by the code anywhere, which confuses me
      • [x] Raman support (preliminary, on-par with what we already have now in devel)
    • [x] Fibers
      • [ ] Fiber.dispersion and Fiber.gamma range validation. Also, how do we describe Fiber type's gamma?
      • [x] Fiber operational data
    • [x] Transponders
      • [x] Transceiver's tx roll-off; how exactly is this defined? I only understand it "intuitively".
      • [x] minimal required OSNR (#298)
    • [x] topology
    • [x] replace SpectralInfo
      • [x] default transponder for transmission_main_example
      • [x] unclear how to reconcile the f_min/f_max of the SI and TX
      • [x] default spectrum allocation (autodesign assumes full allocation -> cannot use transponders)
    • [ ] CI coverage:
      • [x] YANG validation & linting
      • [ ] positive and negative JSON samples for constraint testing
    • [x] input validation
      • this one should be in Python, not via yanglint for dependency reasons
    • [ ] conversion scripts
    • [ ] use this as an input format
    • [ ] generate docs from YANG descriptions

    Equipment Library

    The first step in adding a YANG description for GNPy's input is the equipment library (tip-photonic-equipment). It contains data about all defined EDFA and Fiber types, and is supposed to be functionally equivalent to eqpt_config.json. The JSON structure has changed because I value YANG readability much more than direct compatibility with our previous format (there will be a conversion script anyway).

    The core idea of this model is to describe capabilities of the simulation engine as it exists, which means that the individual choice/case statements mirror our different "simulation input parameters". The user is not expected to do any augmentations -- just describe the amplifiers, fiber, etc, with data.

    The pre-YANG code actually split stuff from eqpt_config.json into additional JSON files for the "fancy bits", such as the DGT LUT. IMHO, that no longer makes sense once we're willing to ship machine validation of the complete input set. So instead of deferring to another JSON file for the NF ripple, gain ripple, and DGT, let's move everything in-line into the input data. This has one obvious downside: it makes the amplifier data rather verbose. There were several options:

    • Ignore the human-friendliness and push everything into the amplifier description. This is nice and self-contained, but the data are going to be very, very long, and the majority of the WG was worried that it would make human editing too difficult.

    • Move everything to a side-loaded JSON file. This option separates out some numerical parameters from the equipment library, and therefore splits the configuration into two places. One of these places would be exempt from the YANG validation, and loaded via unspecified means. That's a no-go.

    • Put stuff into a YANG model, but use one level of indirection between the amplifier description and the numerical data.

    As a compromise, we've chosen the last option.

    In the real world, some "common fiber types" are well-defined by ITU, such as the SSMF. Esther tried to model this via a set of identities and YANG identityrefs. I think that there's no disadvantage in shipping these data as a default content of the YANG-formatted datastore, similarly to how the equipment library used to be structured prior to this patch. Once again, I'm following the pattern where the user can change any data without augmenting the YANG model. The only reason for editing/augmenting the (equipment) YANG model should be changes in our simulation engine, such as when adding different input parameters for NF calculations, or adding Raman amplification, etc.

    The amplifier model has been reworked a bit (only in the YANG model for now, though). I've reduced the number of available "simulation parameters" to a reasonable minimum as suggested by Jean-Luc (cf. issue #227):

    • a polynomial NF model
    • a polynomial OSNR model ("OpenROADM")
    • a simplified model for operators with NF_min and NF_max
    • a dual-stage amplifier using any of the above

    Topology Model

    The topology model (tip-photonic-topology) uses leafrefs for "instantiating" actual nodes from models defined in the equipment library. The topology is unidirectional.

    At first, I used an ad-hoc, custom topology for simplicity. This was changed in response to Jonas' comment; there's clearly no need to reinvent this particular wheel. Now the model builds on top of RFC8345.

    Both ietf-network-topology and the associated ietf-network are needed. The augmentations make it a bit harder to see the YANG rendering of the resulting modules, but that's just how the IETF model was designed. There's nothing to fix on our side.

    Unlike the current JSON, Fibers are not nodes anymore. They are implemented as nt:link augmentations.
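    For instance, a fiber span between two sites might then be encoded directly on the link, along these lines (an illustrative instance-data fragment; the link-id, fiber type, and length value are made up, and decimal64 leafs are encoded as strings per RFC 7951):

```json
{
  "ietf-network-topology:link": [
    {
      "link-id": "siteA,siteB",
      "tip-photonic-topology:fiber": {
        "type": "SSMF",
        "length": "80.0"
      }
    }
  ]
}
```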

    Sample Data

    To show what the complete input data to GNPy might look like, I'm including a simple JSON file which defines some amplifiers, fibers, transponders, etc., and uses them to build a very simple path. Plenty of things are still missing, but it demonstrates an input that passes YANG validation.

    YANG trees

    Equipment Inventory

    module: tip-photonic-equipment
      +--rw amplifier* [type]
      |  +--rw type              string
      |  +--rw (simulation)
      |  |  +--:(polynomial-NF)
      |  |  |  +--rw polynomial-NF
      |  |  |     +--rw gain-min                      gain
      |  |  |     +--rw gain-flatmax                  power
      |  |  |     +--rw gain-max-extended             power
      |  |  |     +--rw max-power-out                 power
      |  |  |     +--rw gain-ripple*                  db-ratio
      |  |  |     +--rw nf-ripple*                    db-ratio
      |  |  |     +--rw dynamic-gain-tilt*            db-ratio
      |  |  |     +--rw nf-polynomial-coefficients
      |  |  |        +--rw a    polynomial-coefficient
      |  |  |        +--rw b    polynomial-coefficient
      |  |  |        +--rw c    polynomial-coefficient
      |  |  |        +--rw d    polynomial-coefficient
      |  |  +--:(polynomial-OSNR)
      |  |  |  +--rw polynomial-OSNR
      |  |  |     +--rw gain-min                        gain
      |  |  |     +--rw gain-flatmax                    power
      |  |  |     +--rw gain-max-extended               power
      |  |  |     +--rw max-power-out                   power
      |  |  |     +--rw gain-ripple*                    db-ratio
      |  |  |     +--rw nf-ripple*                      db-ratio
      |  |  |     +--rw dynamic-gain-tilt*              db-ratio
      |  |  |     +--rw osnr-polynomial-coefficients
      |  |  |        +--rw a    polynomial-coefficient
      |  |  |        +--rw b    polynomial-coefficient
      |  |  |        +--rw c    polynomial-coefficient
      |  |  |        +--rw d    polynomial-coefficient
      |  |  +--:(min-max-NF)
      |  |  |  +--rw min-max-NF
      |  |  |     +--rw gain-min             gain
      |  |  |     +--rw gain-flatmax         power
      |  |  |     +--rw gain-max-extended    power
      |  |  |     +--rw max-power-out        power
      |  |  |     +--rw gain-ripple*         db-ratio
      |  |  |     +--rw nf-ripple*           db-ratio
      |  |  |     +--rw dynamic-gain-tilt*   db-ratio
      |  |  |     +--rw nf-min               noise-figure
      |  |  |     +--rw nf-max               noise-figure
      |  |  +--:(raman)
      |  |  |  +--rw raman
      |  |  |     +--rw gain    gain
      |  |  |     +--rw nf      noise-figure
      |  |  +--:(dual-stage)
      |  |     +--rw dual-stage!
      |  |        +--rw preamp      -> /tip-photonic-equipment:amplifier/type
      |  |        +--rw booster     -> /tip-photonic-equipment:amplifier/type
      |  |        +--rw gain-min    power
      |  +--rw frequency-min?    frequency <191.35>
      |  +--rw frequency-max?    frequency <196.1>
      |  +--rw has-output-voa?   boolean <false>
      +--rw fiber* [type]
      |  +--rw type          string
      |  +--rw dispersion    decimal64
      |  +--rw gamma         decimal64
      +--rw transceiver* [type]
         +--rw type             string
         +--rw frequency-min?   frequency <191.35>
         +--rw frequency-max?   frequency <196.1>
         +--rw mode* [name]
            +--rw name             string
            +--rw baud-rate        uint64
            +--rw required-osnr    db-ratio
            +--rw tx-osnr          db-ratio
            +--rw grid-spacing     frequency
            +--rw tx-roll-off?     decimal64
            +--rw cost?            uint32 <1>
    
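    As an illustration of how this tree maps onto YANG-modelled JSON instance data, a min-max-NF amplifier entry might look like this (all values are made up; numeric leafs are shown string-encoded on the assumption that the typedefs are decimal64-based, per RFC 7951):

```json
{
  "tip-photonic-equipment:amplifier": [
    {
      "type": "std_medium_gain",
      "min-max-NF": {
        "gain-min": "15.0",
        "gain-flatmax": "21.0",
        "gain-max-extended": "25.0",
        "max-power-out": "21.0",
        "nf-min": "6.0",
        "nf-max": "10.0"
      }
    }
  ]
}
```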

    Topology

    module: tip-photonic-topology
    
      augment /ietf-network:networks/ietf-network:network/ietf-network:network-types:
        +--rw photonic-topology!
      augment /ietf-network:networks/ietf-network:network/ietf-network-topology:link:
        +--rw fiber
           +--rw type      -> /tip-photonic-equipment:fiber/tip-photonic-equipment:type
           +--rw length    decimal64
      augment /ietf-network:networks/ietf-network:network/ietf-network:node:
        +--rw (element)
        |  +--:(amplifier)
        |  |  +--rw amplifier
        |  |     +--rw model          -> /tip-photonic-equipment:amplifier/tip-photonic-equipment:type
        |  |     +--rw gain-target    tip-photonic-equipment:gain
        |  |     +--rw tilt-target    tip-photonic-equipment:db-ratio
        |  +--:(concentrated-loss)
        |  |  +--rw concentrated-loss
        |  |     +--rw attenuation?   tip-photonic-equipment:db-ratio <0.0>
        |  +--:(transceiver)
        |     +--rw transceiver
        |        +--rw model    -> /tip-photonic-equipment:transceiver/tip-photonic-equipment:type
        +--rw geo-location
        |  +--rw reference-frame
        |  |  +--rw alternate-system?    string {ietf-geo-location:alternate-systems}?
        |  |  +--rw astronomical-body?   string <earth>
        |  |  +--rw geodetic-system
        |  |     +--rw geodetic-datum?    string <wgs-84>
        |  |     +--rw coord-accuracy?    decimal64
        |  |     +--rw height-accuracy?   decimal64
        |  +--rw (location)?
        |  |  +--:(ellipsoid)
        |  |  |  +--rw latitude     ietf-geo-location:degrees
        |  |  |  +--rw longitude    ietf-geo-location:degrees
        |  |  |  +--rw height?      decimal64
        |  |  +--:(cartesian)
        |  |     +--rw x    decimal64
        |  |     +--rw y    decimal64
        |  |     +--rw z?   decimal64
        |  +--rw velocity
        |  |  +--rw v-north?   decimal64
        |  |  +--rw v-east?    decimal64
        |  |  +--rw v-up?      decimal64
        |  +--rw timestamp?         ietf-yang-types:date-and-time
    
    WIP 
    opened by jktjkt 12
  • Advanced amp model .dgt vector reversal

    Advanced amp model .dgt vector reversal

    The example .dgt vector in the Juniper std_medium_gain_advanced_config.json file appears to be reversed. The documentation on how to use it is sparse, and the code does not appear to account for possible optimization of the amp design for minimum ripple at a tilt value other than zero.

    1. Current usage: the .dgt vector appears to define the gain ripple as a function of tilt: ripple = .dgt x tilt. To accommodate amplifiers that are designed for minimum ripple at a non-zero tilt, this should be modified to ripple = .dgt x (tilt - flat-gain tilt).

    2. The sign convention for amplifier tilt needs to be defined/documented. My assumption is fiber tilt due to SRS is positive (lower loss or positive "gain" at higher wavelengths, thus, to counteract this, amplifiers require a negative tilt).

    3. In trying to understand how to use the .dgt vector, it became apparent that the example vector is reversed. The attached slides illustrate this, and also help clarify how this vector is derived and its usage.
      It also suggests that if unknown, a default set of vector coefficients can be derived, as the expected shape from simple fixed gain C band EDFA simulations match the (reversed) Juniper .dgt vector quite well, once any intrinsic tilt within the .dgt vector itself is removed.

    GNPY_dgt_v1.pptx
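    The usage in point 1 can be sketched numerically (a toy illustration with made-up .dgt values and tilt numbers, not GNPy code):

```python
import numpy as np

# Hypothetical normalized dynamic-gain-tilt vector, one value per channel,
# low frequency first (made up for illustration; real vectors come from
# the amplifier characterization -- and the issue is about their ordering).
dgt = np.linspace(1.0, 0.0, 5)

tilt_target = -3.0     # dB, requested tilt (made-up value)
flat_gain_tilt = -1.0  # dB, tilt at which this amp design is ripple-free (made-up value)

# Point 1: current usage vs the proposed correction
ripple_current = dgt * tilt_target                       # ripple = .dgt x tilt
ripple_proposed = dgt * (tilt_target - flat_gain_tilt)   # ripple = .dgt x (tilt - flat-gain tilt)

print(ripple_current)
print(ripple_proposed)
```

    With a flat-gain tilt of zero the two formulas coincide, which is why the difference only shows up for amps optimized at a non-zero tilt.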

    bug documentation 
    opened by cgkelly 10
  • power mode and set_roadm_loss

    power mode and set_roadm_loss

    I'm very confused by the set_roadm_loss function in network.py, which makes the power out of a ROADM independent of its input power (in power mode). So if power_dbm (in eqpt_config.json, for example) is increased, the ROADM loss is also increased, which means the OSNR contribution from the ROADM is constant. What is the point of that?
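    The behaviour being questioned can be sketched in a few lines (a simplification with made-up numbers, not the actual network.py implementation):

```python
def set_roadm_loss(p_in_dbm: float, target_pch_out_db: float) -> float:
    """Loss (dB) the ROADM must insert so that its per-channel output
    power lands exactly on target, whatever the input power is."""
    return p_in_dbm - target_pch_out_db

# Raising power_dbm by 2 dB raises the ROADM loss by the same 2 dB, so the
# ROADM's output power -- and its OSNR contribution -- stay constant:
print(set_roadm_loss(0.0, -20.0))  # 20.0
print(set_roadm_loss(2.0, -20.0))  # 22.0
```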

    enhancement question backlog 
    opened by ojnas 10
  • .dgt vector not properly read or interpolated

    .dgt vector not properly read or interpolated

    I have defined .dgt vectors for the advanced EDFA models, and also assigned non-zero gain tilts. I expected the gain ripple to translate into OSNR ripple at the output of a path, but did not see any ripple.

    With some help from Esther, I added some lines to cli-examples.py to attempt to debug this (is there a better way to get more information about gain/OSNR evolution along a path, for such debugging purposes?)

    Around line 238, after the for elem in path: loop begins, I added:

        if hasattr(elem, 'effective_gain'):
            print(vars(elem))

    I used hasattr to check whether the element is an amp, to avoid an excessive data dump. The result shows that the .dgt vector used by GNPy starts with a value very similar (but not identical) to the first one defined in my .dgt vector, but the remaining values did not vary much and did not reflect my .dgt vector.

    Going through the closed issues, it appears there was an issue with interpolation when the frequency was defined in THz instead of Hz (or vice versa); my amp models have start/stop frequencies in Hz, and so does the SI data in my eqpt_config file, so this appears to be correct. Comparison graph attached.

    defined vs GNPY used .dgt vectors.pptx

    opened by cgkelly 9
  • ROADM: input channel power for boosters on different degrees of a ROADM

    ROADM: input channel power for boosters on different degrees of a ROADM

    Following up from today's coders call with @EstherLerouzic, @ojnas and @ggrammel. The problem is that we want to have different input powers for different boosters within a single ROADM. This is important for integration with SDN controllers and for modelling of disaggregated ROADMs, and has also been seen in practical day-to-day operation by Esther's colleagues. Right now they are implementing manual workarounds, including manually tweaking the ROADMs' output power once per degree, etc.

    Existing code

    Today, we're using this topology: image

    Unfortunately it is not possible to have different power at the wide colored links because in our current code, the propagate method and the __call__ operator operate on one node only. There's a proposal to extend __call__ with another option, next_node, but only for ROADMs.

    Per-degree "output stage"

    One proposal (which I mistakenly attributed to Jonas, but in the end it was my own misunderstanding) was to do it like this:

    image

    That way we can control the input power to each booster. The cost of this solution is that it does not reflect the way ROADMs are built. Suddenly there are multiple links from a preamp. That does not reflect reality; it implies that there is, maybe, a splitter, but the power is not actually being split. Also, for the route-and-select ROADM design, it moves all of the route-side WSSes into the select-side box, which is once again rather confusing.

    Per-degree "ROADM degree"

    An abstraction like this won't work, unfortunately: image If this needs, say, -20 dBm per channel at point A, then there will also be exactly -20 dBm per channel at points B, C and D (because there's no next_node for Element.__call__). If "Booster 2" needs, say, -15 dBm per channel at point E, then we have a discrepancy with reality, because "Degree 2" suddenly has to "create power". That would be rather confusing when shown to the user.

    Complex topology with "ROADM stage" per ingress and per egress of every degree

    We could have one explicit GNPy-level Element for the ingress and egress stages of the express path, like this: image This way per-degree output power is made explicit by setting it at the egress part of each ROADM degree. The drawback is that this model requires 2×N ROADM elements per N-degree real-world ROADM node, and plenty of internal links.

    There's a question of whether we can effectively hide this from the user, so that they do not have to deal with the complex topology and the amount of explicit boilerplate is reduced. One option is splitting the internal representation within GNPy (the Python code) from the input JSON files (the existing "legacy" ones, as I'm calling them, and also the new, YANG-based inputs). From my point of view that's special-purpose code. The other option is using cross-layer adaptations from the ietf-network-topology YANG model. Essentially, this is about defining "higher-level views" of the low-level network by aggregating certain network elements together, and offering the user a choice of seeing either the raw data or a higher-level view.

    Needs decision by the coders team

    We have to decide what to do:

    • We can take Esther's patch which adds per-degree per-channel TX power as a solution for this problem.
    • We can say that the official solution is to use one ROADM GNPy element per degree. If we do that, then:
      • We can ignore users' concern about complexity, and expose all of the low-level elements and links to the users.
      • Or we can implement some element aggregation:
        • Perhaps at the YANG level via the ietf-network-topology cross-layer tie-ins. That could work for collapsing OMS, and presenting a high-level map in the UI showing "just the PoPs".
        • We could combine all ROADMs together "as a rule". That way, the text UI can (with a flag?) silently combine all "ROADM building blocks" that are propagated through into one virtual entity.
        • The configuration can be, possibly, decoupled from the in-memory data structures. That way, the configuration could contain one "ROADM" super-node with per-degree channel TX power, and this can be translated into the complex topology (one node per ingress and one node per egress per each degree) over which the Python code calls propagate.
    opened by jktjkt 9
  • What's the meaning of delta_p?

    What's the meaning of delta_p?

    After reading the source code, I am confused by delta_p. At first, I thought that delta_p was related to the initial channel power pref_span0. However, in some files I found that the value of a ROADM's target_pch_out_db is directly assigned to its dp. In my opinion, target_pch_out_db equals dp only when the initial channel power is 0 dBm. Therefore, the meaning of delta_p (dp) confuses me a lot. Can anyone explain the meaning of dp and the benefit of introducing it? I would appreciate it if someone could help :)
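    One way to read it (hedged -- the source code remains the authority) is that dp is an offset relative to the reference power, which makes the two numbers coincide exactly when that reference is 0 dBm, matching the observation above:

```python
pref_span0 = 2.0           # dBm, nominal per-channel reference power (made-up value)
target_pch_out_db = -18.0  # dBm, ROADM per-channel output target (made-up value)

# delta_p expressed as an offset with respect to the reference power:
delta_p = target_pch_out_db - pref_span0
print(delta_p)  # -20.0

# With a 0 dBm reference, the offset and the absolute target are
# numerically equal:
print(target_pch_out_db - 0.0)  # -18.0
```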

    opened by Tobelightbeam 9
  • Refactor printing

    Refactor printing

    (This replaces #261 by me and #293 by Jonas.)

    Remove debug printing from propagate()

    This is inspired by #293. The original issue was that the transponder OSNR was not accounted for. Rather than making the propagate() and propagate2() more complex, let's move the actual path printing to the demo code. There's no secret sauce in there, it's just a simple for-each-print thingy, so let's remove it from the library code.

    fixes #262

    doc: fiber length summary in km, not meters

    Reading 80000m is a bit more complex than just 80 km. Also, let's add a space between the number and the unit for better readability.

    examples: color highlighting of the "most interesting output"

    I think that this SNR value represents the most important output of this example. There's plenty of debugging info on display already, so let's make this one more prominent.

    I was thinking about moving the highlighting to elements' each __str__() function, but that felt a bit like layering violation to me. I could have also changed just the transponder's __str__(). In the end, I think that repeating just the final GSNR at the link-end transponder makes most sense here. This is aligned with what we talked about at yesterday's (on 2019-09-18 -- note that this is a backport from #261) demo for Microsoft, after all.

    bug enhancement 
    opened by jktjkt 9
  • Roadm equalization

    Roadm equalization

    major refactor:

    • auto-design improvement for amplifier selection
    • new dual stage amplis: replace hybrid raman implem
    • read operational power settings; gain settings are still read in gain mode (power_mode = False in eqpt_config.json)
    • roadm architecture change: target_pch_out_db sets the roadm output power (before booster/egress amp) in gain mode and power mode. roadm_loss is suppressed.
    • power_mode reads operational power settings or calculates them. Operational power settings define a delta power with respect to the nominal eqpt_config.json['SI'] power. New delta_p column in the Eqpt sheet for the XLS parser.
    • in gain mode the power can no longer be set: it is the result of amplifier gain configuration. Gain mode should not be used in auto-design
    • all carrier powers (ase + nli + signal) are equalized to target_pch_out_db: tilt and ripple are removed after ROADMs. Because the equalization is made on signal + noise, there is a small signal depletion, which is thought to reflect real-world system implementations.
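    The last point -- equalizing on signal + noise, and the resulting small signal depletion -- can be illustrated with a toy sketch (made-up powers in mW, not the actual implementation):

```python
import numpy as np

def equalize(signal, ase, nli, target):
    """Scale each carrier so that its total power (signal + ase + nli)
    hits the target; the noise is included in the measured power."""
    scale = target / (signal + ase + nli)
    return signal * scale, ase * scale, nli * scale

sig, ase, nli = equalize(np.array([1.00, 1.20]),    # signal, mW (made up)
                         np.array([0.01, 0.02]),    # ASE, mW (made up)
                         np.array([0.005, 0.010]),  # NLI, mW (made up)
                         target=1.0)                # mW per carrier

# Every carrier's *total* power is now flat at the target, but the signal
# component alone sits slightly below it -- the "small signal depletion".
print(sig + ase + nli)
print(sig)
```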
    opened by jeanluc-auge 9
  • Customized branch for use in LYNX (leave PR open, need not merge)

    Customized branch for use in LYNX (leave PR open, need not merge)

    Some changes are as follows:

    • a different format for response generation
    • comment out console prints
    • another json function for elements, to view metrics as the signal is propagated.

    This is an opinionated customization specifically for LYNX and will probably include more changes as we need them. Any suggestions are welcome.

    opened by mohammadsaadraza 0
  • Add CodeQL workflow for GitHub code scanning

    Add CodeQL workflow for GitHub code scanning

    Hi Telecominfraproject/oopt-gnpy!

    This is a one-off automatically generated pull request from LGTM.com :robot:. You might have heard that we’ve integrated LGTM’s underlying CodeQL analysis engine natively into GitHub. The result is GitHub code scanning!

    With LGTM fully integrated into code scanning, we are focused on improving CodeQL within the native GitHub code scanning experience. In order to take advantage of current and future improvements to our analysis capabilities, we suggest you enable code scanning on your repository. Please take a look at our blog post for more information.

    This pull request enables code scanning by adding an auto-generated codeql.yml workflow file for GitHub Actions to your repository — take a look! We tested it before opening this pull request, so all should be working :heavy_check_mark:. In fact, you might already have seen some alerts appear on this pull request!

    Where needed and if possible, we’ve adjusted the configuration to the needs of your particular repository. But of course, you should feel free to tweak it further! Check this page for detailed documentation.

    Questions? Check out the FAQ below!

    FAQ


    How often will the code scanning analysis run?

    By default, code scanning will trigger a scan with the CodeQL engine on the following events:

    • On every pull request — to flag up potential security problems for you to investigate before merging a PR.
    • On every push to your default branch and other protected branches — this keeps the analysis results on your repository’s Security tab up to date.
    • Once a week at a fixed time — to make sure you benefit from the latest updated security analysis even when no code was committed or PRs were opened.

    What will this cost?

    Nothing! The CodeQL engine will run inside GitHub Actions, making use of your unlimited free compute minutes for public repositories.

    What types of problems does CodeQL find?

    The CodeQL engine that powers GitHub code scanning is the exact same engine that powers LGTM.com. The exact set of rules has been tweaked slightly, but you should see almost exactly the same types of alerts as you were used to on LGTM.com: we’ve enabled the security-and-quality query suite for you.

    How do I upgrade my CodeQL engine?

    No need! New versions of the CodeQL analysis are constantly deployed on GitHub.com; your repository will automatically benefit from the most recently released version.

    The analysis doesn’t seem to be working

    If you get an error in GitHub Actions that indicates that CodeQL wasn’t able to analyze your code, please follow the instructions here to debug the analysis.

    How do I disable LGTM.com?

    If you have LGTM’s automatic pull request analysis enabled, then you can follow these steps to disable the LGTM pull request analysis. You don’t actually need to remove your repository from LGTM.com; it will automatically be removed in the next few months as part of the deprecation of LGTM.com (more info here).

    Which source code hosting platforms does code scanning support?

    GitHub code scanning is deeply integrated within GitHub itself. If you’d like to scan source code that is hosted elsewhere, we suggest that you create a mirror of that code on GitHub.

    How do I know this PR is legitimate?

    This PR is filed by the official LGTM.com GitHub App, in line with the deprecation timeline that was announced on the official GitHub Blog. The proposed GitHub Action workflow uses the official open source GitHub CodeQL Action. If you have any other questions or concerns, please join the discussion here in the official GitHub community!

    I have another question / how do I get in touch?

    Please join the discussion here to ask further questions and send us suggestions!

    opened by lgtm-com[bot] 0
  • Reduce size by removing scipy

    Reduce size by removing scipy

    Adding gnpy (using pip install) when, e.g., building an application as Docker containers increases the size of the application by a few hundred MB, which can sometimes be a bit problematic. The main reason is the requirements, which include some very large packages, the largest being scipy. Searching through the code, I can only find two uses of scipy: 1) constants (which could easily be replaced by internal definitions) and 2) interp1d from scipy.interpolate (for which there is a practically equivalent function in numpy). Could removing the dependency on scipy be an option?
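    For the interp1d case, numpy's built-in linear interpolation does give the same results for the common kind='linear' usage (a quick check with made-up sample points; interp1d's other kinds, such as 'cubic', have no one-line numpy equivalent):

```python
import numpy as np

# A gain profile sampled at a few frequencies (made-up values)
x = np.array([191.35e12, 193.1e12, 196.1e12])  # Hz
y = np.array([0.5, -0.2, 0.8])                 # dB

xq = np.array([192.0e12, 195.0e12])            # query frequencies

# scipy.interpolate.interp1d(x, y)(xq) with the default kind='linear'
# returns the same values as numpy's built-in:
print(np.interp(xq, x, y))
```

    Note that np.interp clamps queries outside [x[0], x[-1]] to the edge values, while interp1d raises by default; that difference would need handling if the dependency were dropped.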

    Another large dependency is pandas, which only seems to be used in a couple of tests.

    opened by ojnas 4
  • Meaning of `f_min` in the `SpectralInformation`

    Meaning of `f_min` in the `SpectralInformation`

    Right now in the JSON format, the first carrier is placed at f_min + spacing. That's not intuitive, because users would typically expect the carriers to start at f_min as the first central frequency (or, possibly, as the lowest passband edge frequency, as the docs mention the edge in some places). However, doing that in the existing legacy-JSON would lead to a backward-incompatible change.
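    A sketch of the behaviour described above (made-up spacing; the f_min default matches the equipment-library excerpt earlier on this page):

```python
import numpy as np

f_min = 191.35e12  # Hz
f_max = 196.1e12   # Hz
spacing = 50e9     # Hz (made-up grid spacing)

# Legacy-JSON behaviour: the first carrier sits at f_min + spacing,
# not at f_min itself, which is what surprises users.
n = round((f_max - f_min) / spacing)
carriers = f_min + spacing * np.arange(1, n + 1)
print(carriers[0])  # f_min + spacing, not f_min
```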

    References:

    • https://review.gerrithub.io/c/Telecominfraproject/oopt-gnpy/+/507044/comment/ec702729_30517a01/
    • https://review.gerrithub.io/c/Telecominfraproject/oopt-gnpy/+/535245/comments/c81cf92d_ab0feba9
    opened by jktjkt 0
  • RFC: YANG: taming the autodesign

    RFC: YANG: taming the autodesign

    Today, it might be difficult to represent a network as-is in GNPy, that is, to completely switch off any autodesign-driven transformations. Here's a YANG proposal which (I hope) will make this easier:

    module: ietf-network
      +--rw networks
         +--rw network* [network-id]
            + ...
            +--rw node* [node-id]
            |  + ...
            |  +--rw (tip-topo:element)
            |     +--:(tip-topo:amplifier-placeholder)
            |     |  +--rw tip-topo:amplifier-placeholder    empty
            |     +--:(tip-topo:amplifier)
            |     |  +--rw tip-topo:amplifier
            |     |     +--rw tip-topo:model             -> /tip-pe:amplifier/tip-pe:type
            |     |     +--rw tip-topo:gain-target?      tip-pe:gain
            |     |     +--rw tip-topo:out-voa-target?   tip-pe:db-ratio
            |     |     +--rw tip-topo:tilt-target?      tip-pe:db-ratio
            |     |     +--rw tip-topo:delta-p?          tip-pe:db-ratio
            |     +--:(tip-topo:attenuator)
            |     |  +--rw tip-topo:attenuator
            |     |     +--rw tip-topo:attenuation?   tip-pe:db-ratio
            |     +--:(tip-topo:transceiver)
            |     |  +--rw tip-topo:transceiver
            |     |     +--rw tip-topo:model    -> /tip-pe:transceiver/tip-pe:type
            |     +--:(tip-topo:roadm)
            |        +--rw tip-topo:roadm
            |           +--rw tip-topo:model                              -> /tip-pe:roadm/tip-pe:type
            |           +--rw tip-topo:target-egress-per-channel-power?   tip-pe:power
            +--rw nt:link* [link-id]
               + ...
               +--rw nt:source
               |  +--rw nt:source-node?   -> ../../../nw:node/nw:node-id
               |  + ...
               +--rw nt:destination
               |  +--rw nt:dest-node?   -> ../../../nw:node/nw:node-id
               |  + ...
               +--rw (tip-topo:link-type)?
                  +--:(tip-topo:tentative-link)
                  |  +--rw tip-topo:tentative-link
                  |     +--rw tip-topo:type      -> /tip-pe:fiber/tip-pe:type
                  |     +--rw tip-topo:length    decimal64
                  +--:(tip-topo:fiber)
                  |  +--rw tip-topo:fiber
                  |     +--rw tip-topo:type              -> /tip-pe:fiber/tip-pe:type
                  |     +--rw tip-topo:length            decimal64
                  |     +--rw tip-topo:loss-per-km?      decimal64
                  |     +--rw tip-topo:attenuation-in?   tip-pe:db-ratio
                  |     +--rw tip-topo:conn-att-in?      tip-pe:db-ratio
                  |     +--rw tip-topo:conn-att-out?     tip-pe:db-ratio
                  |     +--rw tip-topo:raman!
                  |        +--rw tip-topo:temperature    uint16
                  |        +--rw tip-topo:pump* [frequency]
                  |           +--rw tip-topo:frequency    tip-pe:frequency-raman-pump
                  |           +--rw tip-topo:power        tip-pe:power
                  |           +--rw tip-topo:direction    enumeration
                  +--:(tip-topo:patch)
                     +--rw tip-topo:patch
                        +--rw tip-topo:roadm-target-egress-per-channel-power?   tip-pe:power
    

    As usual, the network consists of nodes and links. Unlike today's GNPy, though, some interesting "GNPy elements" are no longer represented as nodes, but they become links. A prime example is the Fiber element.

    Some constructs are worth special attention. The fiber element now corresponds to a real-world fiber, i.e., something which is already known to exist. Its length and attenuation are already known, and GNPy won't be allowed to split this fiber into smaller chunks. If automatic splitting is desirable, then there are two options:

    • The tentative-link, where only the length is known, and amplifiers can be auto-inserted at will. Think of this as a "link intention": I know I want to connect these cities which are 5000km apart, but I have no clue how many amps I need for that. GNPy, please help me design my network.
    • The amplifier-placeholder, on the other hand, connects two physical fiber instances together (or a fiber and tentative-link, or even two tentative-links; there's probably no need for an artificial limitation). Think of it as a hut along the fiber path; GNPy can either replace that by a specific amplifier, or it can use this location to splice the two fibers. That's done via the attenuator node.

    Another difference compared to today's GNPy-JSON is fiber vs. patch. The features currently supported by the Fused GNPy element (which is a node in GNPy's internal network representation) will be handled either by the patch (as a graph edge, for joining, e.g., two ROADMs together) or by the attenuator node.

    This proposal does not require ports, but there are some hacks (such as the ROADM's per-degree options, which are presently implemented inside a patch, which looks a bit ugly) which might warrant using ports. If we do that, though, we might as well introduce "bidirectional nodes" (i.e., a node which has both east-to-west and west-to-east EDFAs).

    Also, this will need a set of heuristics when opening a legacy-style JSON file. I'm trying to solve some ambiguities in the existing format, and that might make it rather hard to support a full legacy-JSON → YANG-JSON → legacy-JSON round trip.

    Thoughts, opinions?

    Cc: @sme791, @AndreaDAmico

    opened by jktjkt 0
Releases(v2.6)
  • v2.6(Sep 19, 2022)

    GNPy 2.6 — ECOC 2022

    Greetings from a sunny day in Basel, Switzerland. This is a general bugfix release with some preparations for the upcoming features (mixed-rate simulations and the YANG interface). Please stay tuned while we stabilize these, and try out our patches under review:

    https://review.gerrithub.io/q/project:Telecominfraproject/oopt-gnpy

    Source code(tar.gz)
    Source code(zip)
  • v2.5(Mar 8, 2022)

  • v2.4(Sep 15, 2021)

    Hailing from sunny Bordeaux, France, where ECOC 2021 is taking place, here's a new version of GNPy, an optical route planning library.

    Released just three months after v2.3, this version improves support for OpenROADM networks, fixes bugs, and extends our test suite. As previously announced, this is also the first release to require a more recent Python: 3.8.

    If you're interested in what's coming next, be sure to check the patches and changes that we are currently working on.

    Source code(tar.gz)
    Source code(zip)
  • v2.3(Jun 6, 2021)

    Hello from the virtual OFC'2021. A fresh release of GNPy, a transmission quality estimator for DWDM optical networks, is here.

    In this release, we added support for modeling of OpenROADM networks. Example:

    gnpy-transmission-example \
      -e gnpy/example-data/eqpt_config_openroadm.json \
      gnpy/example-data/Sweden_OpenROADM_example_network.json
    

    We have improved our documentation, so that it is hopefully easier to navigate and covers more advanced topics. We've also extended the docs with information targeted at vendors (or others) who are willing to contribute equipment datasheets to GNPy.

    ROADMs can now specify different target per-channel launch powers for different directions (degrees). We've also fixed various bugs; please refer to the git changelog for a detailed list.

    Internally, we've extended the test coverage of our code base, improved the CI infrastructure, and addressed some technical debt. However, we've grown limited by supporting Python 3.6, so this is the last release which works with that version. The next release will require Python 3.8.

    There's much more brewing in our development branches, so I'm already looking forward to the next release. Take a look behind the curtain:

    https://review.gerrithub.io/q/project:Telecominfraproject/oopt-gnpy

    Source code(tar.gz)
    Source code(zip)
  • v2.2(Jun 18, 2020)

    There are many user-facing changes in this release. We've changed the way examples and default data are shipped:

    • examples/transmission_main_example.py is replaced by gnpy-transmission-example
    • examples/path_requests_run.py is replaced by gnpy-path-request
    • Default example data, such as the equipment library, the CORONET sample topology, etc, are now shipped as a part of the Python library. You can request their location by running gnpy-example-data.
    • There's a new tool for converting XLS data into JSON: gnpy-convert-xls.

    The installation process got much easier: it should be enough to just run pip install gnpy once you have Python 3.6 (or newer) installed.

    For those who use GNPy as a Python library, there were many changes as we have moved different parts of code to better places. We no longer put everything into the gnpy.core.* Python module hierarchy, now there is:

    • gnpy.core which implements signal propagation,
    • gnpy.topology tracks requests for spectrum,
    • gnpy.tools provides miscellaneous tools for, e.g., dealing with JSON I/O, XLS parsing, as well as the example frontends.

    GNPy now also tracks chromatic dispersion (CD) and polarization mode dispersion (PMD) of a signal as it propagates through fiber and all active nodes.

    Under the hood, we've adjusted our development process a bit. We are using Gerrit (via GerritHub.io) for code review, and Zuul (via VexxHost) for Continuous Integration (CI).

    Source code(tar.gz)
    Source code(zip)
  • v2.1(Jan 14, 2020)

  • v2.0(Nov 13, 2019)

    Released not that long after ECOC'19, this release brings in bugfixes and usability improvements. It also paves the way towards enabling using GNPy as a backend for path feasibility computation.

    You can see where we are going at the ONF booth at the conference venue.

    Source code(tar.gz)
    Source code(zip)
  • v1.8(Sep 23, 2019)

  • v1.2(May 17, 2019)

  • v1.1(Jan 30, 2019)

  • v1.0(Oct 17, 2018)

    Version 1.0 Release.

    • first "production"-ready release
    • open network element model (EDFA, GN-model)
    • auto-design functionality
    • path request functionality
    Source code(tar.gz)
    Source code(zip)