
Programmable, Protocol-Independent Software Switch (paper reading notes)

2022-06-23 19:07:00 Bachuan Xiaoxiaosheng

PISCES: A Programmable, Protocol-Independent Software Switch

Abstract

Hypervisors use software switches to steer packets to and from virtual machines (VMs). These switches frequently need to be upgraded and customized: to support new protocol headers or encapsulations for tunneling and overlays, to improve measurement and debugging features, and even to add middlebox-like functionality. Software switches are built on large codebases, including kernel code, so changing them is a formidable undertaking that requires expertise in network protocol design and in developing, testing, and maintaining a large, complex codebase. Changing how a software switch forwards packets should not require such a deep understanding of its implementation. Instead, it should be possible to specify how packets are processed and forwarded in a high-level domain-specific language (DSL) such as P4, and to compile that specification to run on the software switch. We present PISCES, a software switch derived from Open vSwitch (OVS), a hard-wired hypervisor switch, whose behavior is customized using P4. PISCES is not hard-wired to specific protocols; this independence makes it easy to add new features. We also show how the compiler can analyze the high-level specification to optimize forwarding performance. Our evaluation shows that PISCES performs comparably to OVS, and that a PISCES program is roughly 1/40 the length of the equivalent OVS source code.

Introduction

Software switches such as Open vSwitch (OVS) play a key role in modern data centers: with few exceptions, every packet entering or leaving a virtual machine (VM) passes through a software switch. Moreover, in these environments the number of servers far exceeds the number of physical switches, so a data center full of servers running hypervisors contains many more software switches than hardware switches. Likewise, because each hypervisor hosts multiple VMs, such data centers have many more virtual Ethernet ports than physical ones.

One of the main advantages of a software hypervisor switch is that it is far easier to upgrade than a hardware switch. As a result, hypervisor switches have added support for new encapsulation headers, improved troubleshooting and debugging features, and middlebox-like functions such as load balancing, address virtualization, and encryption. In the future, as data center owners customize and optimize their infrastructure, they will keep adding features to the hypervisor switch.

Each new feature requires customizing the hypervisor switch, and making these customizations is much harder than it may appear. First, most of the machinery that supports fast packet forwarding lives in the kernel. Writing kernel code requires domain expertise that most network operators lack, creating a significant barrier to developing and deploying new features. Newer technologies accelerate packet forwarding in user space, but they still demand considerable software development expertise and familiarity with a large, intricate codebase. Moreover, customization involves more than getting changes into the switch code: the customizations must also be maintained as the underlying software evolves, which can consume substantial resources.

Changing how a software switch forwards packets should not require knowing how the switch is implemented. Instead, it should be possible to specify custom network protocols in a domain-specific language (DSL) such as P4 and compile that specification into custom code for the hypervisor switch. A DSL of this kind would allow the switch's forwarding behavior to be customized without changing the underlying switch implementation. Decoupling custom protocol implementations from the underlying switch code also makes those customizations easier to maintain, because they remain independent of the switch implementation. With a standardized DSL, the customizations could even be ported to other hardware or software switches that support the same language.

A key insight, borrowed from a similar trend in hardware switches, is that the underlying switch should be a substrate: good at processing packets at high speed, but not tied to any particular protocol. In the extreme, the switch is protocol independent, meaning it does not know what a protocol is until it is told (via the DSL) how to process packets. Put another way, a protocol is represented by a program that the protocol's author writes in the DSL.

We apply the same idea to software switches. We assume a program written in the DSL specifies the structure of the packet headers to be parsed and a set of match-action tables (for example, which header fields to match and what actions to perform on matching headers). The underlying software substrate is a generic engine optimized to parse, match, and act on packet headers in exactly the form the program specifies.

However, expressing these customizations in a DSL requires compiling them from the DSL down to the code that runs in the switch. Compared with modifying the switch by hand, this compilation step can make the underlying implementation less efficient and thus reduce performance. The compilation problem also differs from that of a hardware switch, where the goal is to optimize metrics such as area, latency, and power while staying within fixed resource constraints. Our goals in this paper are (1) to quantify the additional cost of expressing custom protocols in such a DSL, and (2) to design and evaluate domain-specific compiler optimizations that minimize that overhead. In the end, we show that with suitable compiler optimizations, the performance of a protocol-independent software switch, that is, one whose custom protocols are specified in a high-level DSL without directly modifying the low-level source code, approaches that of the native hypervisor software switch. These results are encouraging, especially given that our base code, OVS, was never designed to support protocol independence. They suggest that in a hypervisor switch the cost of programmability can be negligible, and we hope they inspire the design of new protocol-independent software switches that run even faster.

Our contributions are as follows:

  • The design and implementation of PISCES, the first software switch that allows custom protocol specification in a high-level DSL without requiring direct changes to the switch source code.
  • A publicly available open-source implementation of PISCES on GitHub. The implementation is a protocol-independent software switch derived from OVS and programmed in a high-level DSL (namely P4).
  • Domain-specific optimizations and a back-end optimizer that reduce the performance overhead of an OVS customized with P4. We also introduce two new annotations to P4 to aid these optimizations.
  • An evaluation of the code complexity of PISCES programs and of their forwarding performance. Our evaluation shows that PISCES programs are on average about 40 times shorter than the equivalent OVS source code changes, with a forwarding performance (throughput) overhead of only about 2%.

We first motivate the need for a customizable hypervisor software switch with real use cases from operational networks, and then give background on P4 and OVS.

Requirements for a protocol-independent switch

We call PISCES a protocol-independent software switch because it does not know what a protocol is, or how to process packets on behalf of a protocol, until the programmer specifies it. For example, if we want PISCES to process IPv4 packets, we describe in a P4 program (e.g., IPv4.p4) how IPv4 packets are handled: the format and fields of the IPv4 header, including the IP addresses, protocol ID, TTL, checksum, flags, and so on. We also specify a lookup table of IPv4 prefixes and that we search it for the longest matching prefix, and we describe how to decrement the TTL, how to update the checksum, and so forth. The P4 program captures the entire packet-processing pipeline and is compiled to OVS source code, which determines the switch's match, action, and parse capabilities.
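As a concrete illustration, the P4-14 fragment below sketches roughly what such an IPv4 program contains: a header definition, a longest-prefix-match table on the destination address, and an action that forwards the packet and decrements the TTL. It is a minimal sketch written from the description above, not code from the PISCES paper; names such as ipv4_lpm and set_next_hop are illustrative, and the parser and checksum handling are shown later.

    header_type ipv4_t {
        fields {
            version : 4;
            ihl : 4;
            diffserv : 8;
            totalLen : 16;
            identification : 16;
            flags : 3;
            fragOffset : 13;
            ttl : 8;
            protocol : 8;
            hdrChecksum : 16;
            srcAddr : 32;
            dstAddr : 32;
        }
    }
    header ipv4_t ipv4;

    // Forward out of a port and decrement the TTL.
    action set_next_hop(port) {
        modify_field(standard_metadata.egress_spec, port);
        add_to_field(ipv4.ttl, -1);
    }

    action drop_packet() {
        drop();
    }

    // Longest-prefix match on the IPv4 destination address.
    table ipv4_lpm {
        reads   { ipv4.dstAddr : lpm; }
        actions { set_next_hop; drop_packet; }
        size : 1024;
    }

    control ingress {
        if (valid(ipv4)) {
            apply(ipv4_lpm);
        }
    }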

A protocol-independent switch has many benefits:

Adding new standard or proprietary protocol headers

Vendors keep proposing new protocol headers, especially for data centers. Private, proprietary protocols are also added to gain a competitive advantage, for example by creating better isolation between applications or by introducing new congestion markings. In many cases, before a new protocol can be deployed, all hardware and software switches must be upgraded to recognize the header and process it correctly. For hardware switches, data center owners must present their requirements to chip vendors and then, if the vendor agrees to add the feature, wait three to four years. For software switches, they must wait for the next major release, test, and deployment cycle. Even modifying an open-source software switch is no panacea: once data center owners modify it directly to add their own custom protocols, those changes must be maintained and kept in sync with the mainline codebase as the original open-source switch keeps evolving, which introduces significant code-maintenance overhead. Owners who can instead add a new protocol to a P4 program can compile and deploy it much faster.

Removing standard protocol headers

Data centers typically run fewer protocols than traditional campus and enterprise networks, partly because most traffic is machine-to-machine and many legacy protocols are not needed (multicast, for example). Amazon Web Services (AWS), for instance, reportedly forwards only packets with IPv4 headers. Removing unused protocols therefore clearly benefits data center owners, since it eliminates any concern about interactions with dormant implementations of old protocols. Having to support many protocols is bad enough; having to understand the interactions and semantics of protocols the operator never intends to use is worse. Data center owners therefore often want to eliminate unused protocols from their switches, NICs, and operating systems. Removing protocols from conventional switches is hard: for hardware it means waiting for a new chip, and for software switches it means extracting a specific protocol from a large codebase. In a PISCES network, removing an unused protocol is as simple as deleting the unused portions of the protocol specification and recompiling the switch source code.

Adding better visibility

As data centers grow and host more applications, understanding the network's behavior and operation becomes ever more important. Failures can cause large revenue losses, and as the network gets bigger and more complex, long debugging times compound those losses. There is growing interest in making it easier to see what the network is doing. Improving visibility may require supporting new statistics, generating new probe packets, or adding new protocols and actions that collect switch state. Users will want to see how queues are evolving, how latency is changing, whether tunnels are terminated correctly, and whether links are still healthy. Often, in an emergency, users want to add a visibility feature quickly. Having such features ready to deploy, or being able to modify the forwarding and monitoring logic quickly, can shorten the time needed to diagnose and repair network failures.

Adding new features

If users and network owners can modify forwarding behavior, they can add entirely new features. Over time, for example, we can expect more sophisticated routing, new congestion control mechanisms, traceroute-like tools, new load-balancing algorithms, new ways to mitigate DDoS attacks, and new "virtual-to-physical" gateway functions. Network owners know best how they would upgrade their infrastructure to gain more utilization or more control. If new features can be added to the switch by writing a program in a high-level DSL such as P4, we can expect network owners to improve their networks much faster.

Background

PISCES is a software switch whose forwarding behavior is specified in a domain-specific language. PISCES is based on the Open vSwitch (OVS) software switch and uses the P4 domain-specific language. We describe P4 and OVS below.

Domain-specific language: P4

P4 is a domain-specific language that expresses how the pipeline of a network forwarding element should process packets, using the abstract forwarding model shown in the figure below. In this model, each packet first passes through a programmable parser that extracts headers. The P4 program specifies the structure of every possible header, as well as a parse graph that expresses ordering and dependencies. The packet then passes through a series of match-action tables (MATs). The P4 program specifies the fields each MAT may match on, the control flow among the tables, and the set of actions each table may perform. At run time (i.e., while the switch is forwarding packets), controller software can add, remove, and modify table entries with particular match-action rules that conform to the P4 program's specification. Finally, a deparser writes the header fields back into the packet before it is sent out the appropriate port.

[Figure: The P4 abstract forwarding model]
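The parse graph itself is expressed as parser states. The hedged P4-14 fragment below (again a sketch, not the paper's code) shows one branch of such a graph: each state extracts a header and then selects the next state based on a field value, here the Ethernet EtherType. The ipv4 instance is the one declared in the earlier sketch.

    header_type ethernet_t {
        fields {
            dstAddr   : 48;
            srcAddr   : 48;
            etherType : 16;
        }
    }
    header ethernet_t ethernet;

    parser start {
        return parse_ethernet;
    }

    parser parse_ethernet {
        extract(ethernet);
        // IPv4 follows Ethernet when the EtherType is 0x0800.
        return select(latest.etherType) {
            0x0800  : parse_ipv4;
            default : ingress;
        }
    }

    parser parse_ipv4 {
        extract(ipv4);
        return ingress;
    }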

We chose P4 because its abstract switch model is close to the model of OpenFlow, OVS's built-in language, which lets us make a straightforward comparison between OVS with and without a P4 front end. We considered other candidate substrates, but for our purposes P4 is sufficient to make the intended comparison. Having a common way to express forwarding across all the pipelined switches in a network, with code that can be ported from one switch to another, is valuable in itself, so it makes sense to run these experiments in the same language.

Software switch: Open vSwitch

Open vSwitch (OVS) is widely used in data centers as a software switch running inside the hypervisor. In that setting, OVS switches packets between virtual and physical interfaces. OVS implements common protocols such as Ethernet, GRE, and IPv4, as well as newer data center protocols.

An Open vSwitch virtual switch has two important parts, called the slow path and the fast path (datapath), as shown in the figure below. The slow path is a userspace program; it supplies most of OVS's intelligence. The fast path acts as a caching layer and contains only the code needed for maximum performance. Notably, the fast path must pass any packet that misses its cache up to the slow path for instructions on further processing. OVS includes a single portable slow path and multiple fast-path implementations for different environments: one based on a Linux kernel module, another based on a Windows kernel module, and another based on Intel DPDK userspace forwarding. The DPDK fast path yields the highest performance, so we use it in this work; with additional effort, our work could be extended to the other fast paths.

[Figure: The OVS forwarding model]

As an SDN switch, OVS relies on instructions from a controller to determine its behavior, specifically via the OpenFlow protocol. OpenFlow specifies behavior as a set of match-action tables, each containing a number of entries called flows. A flow consists of a match (on headers and metadata), actions that tell the switch what to do with a matching packet, and a numerical priority. When a packet arrives at a particular match-action table, the switch finds a matching flow and executes its actions; if multiple flows match the packet, the one with the highest priority takes precedence.

A software switch that implements this behavior literally cannot achieve high performance, because in OpenFlow a packet often traverses several match-action tables, each of which requires general packet classification. OVS therefore relies on caching to achieve good forwarding performance. The main OVS cache is its megaflow cache, whose structure closely resembles a single OpenFlow table. The idea behind the megaflow cache is that, in principle, all the OpenFlow match-action tables a packet visits as it traverses the pipeline could be merged into a single table by computing their cross-product. That is infeasible, however, because the cross-product of k tables with n1, ..., nk rules may contain up to n1 × n2 × ... × nk rules. The megaflow cache behaves somewhat like a lazily computed cross-product: when a packet arrives and matches no existing megaflow cache entry, the slow path computes a new entry, corresponding to one row of the theoretical cross-product, and inserts it into the cache. OVS uses many techniques to improve the performance and hit rate of the megaflow cache.

When a packet hits in the megaflow cache, the switch processes it much faster than on a cache miss, which requires a round trip from the fast path to the slow path. However, a megaflow cache lookup is still a general packet classification step with a significant cost. The Open vSwitch fast path therefore also includes a microflow cache, a hash table that maps a packet's five-tuple to a megaflow cache entry. A microflow cache lookup yields only a hint, because megaflows typically match more fields than just the five-tuple, so at best the microflow entry points at the most likely match. The fast path must therefore verify that the megaflow cache entry actually matches the packet. If it does, the lookup cost is just that of a single hash-table lookup, which is usually far cheaper than general packet classification, making this an important optimization for traffic patterns with stable packet flows. If it does not match, the packet falls back to the usual megaflow cache lookup, skipping the entry it has already checked.

PISCES Prototype

Our PISCES prototype is a modified version of OVS in which the parse, match, and action code is replaced by C code generated by our P4 compiler. The workflow is as follows. First, the programmer writes a P4 program and uses the PISCES version of the P4 compiler to generate new parse, match, and action code for OVS. Second, OVS is compiled with a regular C compiler to create a switch that processes packets as described by the P4 program. To change the protocols, the user modifies the P4 program and compiles it into a new hypervisor switch binary.

We use OVS as the basis for PISCES because it is widely used and already includes some of the basic machinery of a programmable switch, letting us focus on just the parts that need to be customizable (namely parsing, matching, and actions). Its code is well structured and easy to modify, and a test environment already exists. It also enables comparisons: we can compare the number of lines in an unmodified OVS against the equivalent P4 program, and we can compare their performance.

The PISCES P4-to-OVS compiler

The P4 compiler has two parts: the front end translates P4 into a target-independent intermediate representation (IR), and the back end maps the IR to the target. In our case, the back end operates on the IR to optimize for CPU time, latency, or other objectives, and then generates C code that replaces the parse, match, and action code in OVS, as shown in the figure below. The P4-to-OVS compiler emits C source code that implements everything needed to compile the corresponding switch executable. The compilation process also generates an independent type checker, which the executable uses to ensure that any runtime configuration directives from the controller (for example, flow rule insertions) conform to the protocols specified in the P4 program.

[Figure: The PISCES P4-to-OVS compiler]

Parsing

The generated C code replaces the original OVS parser. It extends the C struct that OVS uses to track protocol header fields (struct flow) with a member for each field specified in the P4 program, and it generates code that extracts the header fields from the packet into that struct.

Match

OVS uses a generic classifier data structure that implements matching based on tuple space search. To support custom matches, we did not need to modify this data structure or the code that manages it. Instead, the control plane can simply populate the classifier with the new packet header fields at run time, which automatically makes those fields available for packet matching.

Actions

The compiler back end supports custom actions by automatically generating code that we statically compile into the OVS binary. Custom actions can execute in either the OVS slow path or the fast path; the compiler decides where to run a particular action so that the switch executes it efficiently. Some actions can run in either component, and programmers can give the compiler hints about whether the slow-path or fast-path implementation of an action is most appropriate.

Control flow

In a switch, a packet's control flow is the sequence of match-action tables it traverses. In P4, the control flow is fixed when the program is compiled, whereas in OVS it is specified at run time through flow entries, which is strictly more flexible. Our compiler back end can therefore implement P4's control-flow semantics without modifying OVS.

Optimizing the IR

The compiler back end includes an optimizer that inspects and rewrites the IR to produce high-performance C code. For example, a P4 program may specify a full IP checksum, but the optimizer can turn that operation into an incremental IP checksum, which is faster. The compiler also performs dataflow analysis on the IR, allowing it to coalesce and specialize the generated C code. The optimizer also decides when and where in the packet-processing pipeline to edit packet headers: some hardware switches defer editing until the end of the pipeline, while software switches typically edit headers at each stage, and when appropriate the optimizer transforms the IR to use inline editing.

Like other P4 compilers, the P4-to-OVS compiler also generates APIs for the match-action tables and extends the OVS command-line tools to handle the new fields.

Modifications to OVS

We made three modifications to OVS so that it can realize the forwarding behavior described by an arbitrary P4 program.

Arbitrary encapsulation and decapsulation

OVS does not support the arbitrary encapsulation and decapsulation that a P4 program may require. Each OVS fast path provides customized support for a fixed set of encapsulations: the Linux kernel fast path and the DPDK fast path implement GRE, VXLAN, STT, and similar formats. The metadata needed to encapsulate and decapsulate packets for a tunnel is configured statically; the switch uses a packet's ingress port to map it to the appropriate tunnel, and when the packet leaves on an interface it is wrapped in the corresponding tunnel IP header according to that static configuration. We therefore added two new primitives to OVS, add_header() and remove_header(), which perform encapsulation and decapsulation respectively and execute in the fast path.
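As an illustration of what these primitives enable, the hedged P4-14 fragment below pushes and pops a made-up tunnel tag; the header layout and action names are assumptions for the example. add_header() and remove_header() are existing P4-14 primitives, which PISCES maps onto the new fast-path operations.

    header_type tunnel_t {
        fields {
            tunnel_id : 32;
        }
    }
    header tunnel_t tunnel;

    // Encapsulation: push the tunnel tag and set its id.
    action push_tunnel(id) {
        add_header(tunnel);
        modify_field(tunnel.tunnel_id, id);
    }

    // Decapsulation: strip the tunnel tag again.
    action pop_tunnel() {
        remove_header(tunnel);
    }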

Conditionals based on comparing header fields

OpenFlow directly supports only bitwise equality tests against header fields. A relational test, such as < or >, comparing a k-bit field against a constant can be expressed with at most k rules that use bitwise equality matches, and a test of a relation between two k-bit fields, e.g., x < y, requires k(k+1)/2 such rules. Testing two conditions that separately require n1 and n2 rules takes n1 × n2 rules. P4 supports such tests directly, but implementing them this way in OpenFlow is too expensive, so we added direct support for them to OVS in the form of a conditional action, a kind of if statement over OpenFlow actions. For example, our extension lets the P4 compiler emit an action of the form: if x < y, go to table 2, otherwise go to table 3.
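As an illustration, the hedged P4-14 fragment below shows the kind of conditional this supports; the table names are placeholders, and it assumes a tcp header instance with srcPort and dstPort fields has been declared and parsed.

    control ingress {
        // Relational test between two header fields (the "x < y" case above).
        if (tcp.srcPort < tcp.dstPort) {
            apply(table_2);
        } else {
            apply(table_3);
        }
    }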

Generic checksum verify/update

An IP router is supposed to verify the checksum on ingress and recompute it on egress, and most hardware switches do so. Software routers often skip checksum verification on ingress to save CPU cycles; instead, if they change any field (e.g., the TTL), they simply update the checksum incrementally. OVS supports only incremental checksums, but we want to support whatever use of checksums the programmer intends, so we added an incremental-checksum optimization. Whether this optimization is valid depends on whether the P4 switch acts as a forwarding element or as an end host for a given packet (an end host must verify the checksum), so it requires an annotation from the P4 programmer.

The compiler's back-end optimizer

Two factors ultimately determine a software switch's forwarding performance: (1) the per-packet cost of fast-path processing (an extra 100 cycles per packet reduces the switch's throughput by roughly 500 Mbps), and (2) the number of packets sent to the slow path, which takes 50 or more times as many cycles as the fast path to process a packet. The table below lists the optimizations we implemented and whether each one reduces trips to the slow path, fast-path CPU cycles, or both. The rest of this section describes these optimizations in detail.

[Table: Back-end optimizations and how they improve performance]

Inline editing vs. post-pipeline editing

The OVS fast path performs inline editing, applying packet modifications immediately (with some simple optimizations in the slow path to avoid redundant or unnecessary modifications). If many header fields are modified, removed, or inserted, repeatedly moving and resizing packet data can become expensive. It can instead be more efficient to defer the edits until all headers have been processed (as hardware switches usually do). The optimizer analyzes the IR to estimate how many times a packet may be modified in the pipeline. If this value is below a threshold, the optimizer uses inline editing; otherwise it uses post-pipeline editing. We let programmers override this heuristic with a pragma directive.

Incremental checksum

Describing checksum operations in a high-level program such as P4 lets the programmer give the compiler the context it needs to implement checksums more efficiently. For example, the programmer can tell the compiler, through an annotation, that the checksum of each packet may be computed incrementally; the optimizer can then perform dataflow analysis to determine which header fields have changed and recompute the checksum more cheaply.
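In P4-14 the checksum itself is declared as a calculated field over a field list, as in the hedged sketch below, which reuses the ipv4 header from the earlier sketch. The @pragma line only illustrates the kind of annotation described here; its name and syntax are hypothetical, since this summary does not give the exact form PISCES uses.

    field_list ipv4_checksum_list {
        ipv4.version; ipv4.ihl; ipv4.diffserv; ipv4.totalLen;
        ipv4.identification; ipv4.flags; ipv4.fragOffset;
        ipv4.ttl; ipv4.protocol; ipv4.srcAddr; ipv4.dstAddr;
    }

    field_list_calculation ipv4_checksum {
        input { ipv4_checksum_list; }
        algorithm : csum16;
        output_width : 16;
    }

    // Hypothetical annotation: the checksum may be maintained incrementally
    // instead of being recomputed from scratch on every change.
    @pragma checksum incremental
    calculated_field ipv4.hdrChecksum {
        update ipv4_checksum;
    }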

Parser specialization

A protocol-independent software switch can optimize the implementation of its packet parser, because the custom packet-processing pipeline (specified in a high-level language such as P4) provides precise information about which packet fields are modified or used to make forwarding decisions. For example, a layer-2 switch that makes no forwarding decisions based on information from other layers can avoid parsing the header fields of those layers. Specifying forwarding behavior in a high-level language gives the compiler exactly the information it needs to specialize the parser this way.

Action specialization

OVS's inline-editing fast path combines related fields that are often set together. For example, OVS implements a single fast-path action that sets the IPv4 source address, destination address, type of service, and TTL. This approach is efficient when several of these fields are updated at once, and the marginal cost is small if only one of them is updated. IPv4 has many other fields, but the fast path cannot set any of them.

This aspect of OVS's design required domain expertise: its designers knew which fields it was important to be able to change quickly. The P4 compiler does not have that expert knowledge of which fields to group together, which could lead it to combine too few or too many fields into a single action. Fortunately, the high-level P4 description of the match-action control flow lets the optimizer apply techniques such as dead-code elimination to identify and remove redundant checks in the fast-path set operations, so the optimizer checks only the fields that the match-action control flow can actually set.

Action coalescing

By analyzing the control flow and match-action processing in the P4 program, the compiler can determine which fields are actually modified and generate a single, efficient action that updates those fields directly. So if a rule modifies two fields, the optimizer installs only one action in OVS.

Cached field modifications

Network protocol data planes rarely need to perform arithmetic on header fields. TTL decrement is the most prominent exception; the checksums discussed above are another. Accordingly, the OVS fast path includes no general arithmetic operations; in fact, it does not even include a special-purpose TTL-decrement operation. Instead, to implement the dedicated OpenFlow action that decrements the TTL, the slow path relies on the fact that most packets from a given source carry the same TTL. It emits a cache entry that matches the TTL value of the packet being forwarded and overwrites it with a value one smaller than the observed one, an approach called match-and-set. For TTL decrement this is acceptable, because the OVS designers knew that this form of caching achieves a high hit rate in practice.

Match-and-set is not always appropriate, however. Consider updating an IPv4 or IPv6 checksum after some other IP field changes. With match-and-set, the cache entry would have to match every field that contributes to the checksum, that is, every IP field, which would drive the hit rate of the cache entry to nearly zero. P4 also supports other simple arithmetic, such as adding to or subtracting from a field value. Finally, PISCES cannot know whether match-and-set is appropriate in any given situation.

The solution PISCES adopts is to automatically generate fast-path operations that implement the specific arithmetic the P4 program requires, avoiding match-and-set where possible. For example, if the program adds to a specific field, PISCES generates a fast-path operation that adds to that field. This is effective when the P4 program performs the arithmetic "blindly", that is, without also matching on the value of the modified field. If the program does match on it, the usual caching rules require the cache entry to match the field anyway, so the match-and-set approach is needed.
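For example, a "blind" TTL decrement in P4-14 looks like the hedged fragment below (action name illustrative, ipv4 as in the earlier sketch); because no table matches on ipv4.ttl, PISCES can compile it to a fast-path operation that performs the subtraction directly instead of emitting a match-and-set cache entry.

    // Decrement the TTL without matching on its current value.
    action decrement_ttl() {
        add_to_field(ipv4.ttl, -1);
    }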

Stage assignment

OVS implements staged lookup to reduce the number of trips to the slow path. Staged lookup divides the match fields into an ordered list of groups, called stages. The stages are cumulative: each stage after the first contains all the fields of the previous stage plus additional ones, and the final stage contains every field. OVS implements each stage in its tuple space search classifier as a separate hash table, and a classifier lookup searches the stages in order. If any stage fails to match, the whole lookup terminates, and only the fields included in the last stage it searched must be matched by the resulting cache entry.

OVS uses four stages: the first covers metadata fields (such as the packet's ingress port), the second covers metadata plus layer-2 fields, the third adds layer-3 fields, and the fourth contains all fields (metadata, layer 2, layer 3, and layer 4). This ordering follows the principle that, for networking, staging works best when it corresponds to increasing entropy of the observed field values. For example, a cache entry that matches only metadata will generally have a higher hit rate than one that also matches layer-4 fields, so metadata appears in an early stage (the first), whereas the layer-4 fields appear only in the last stage.

Staged lookup can be extended to arbitrary P4 programs, but a good ordering cannot be inferred from the P4 program itself, so PISCES needs help choosing suitable stages. We extended the P4 language so that the user can annotate each header with a stage number; the number of stages is then the same as the number of headers.
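The sketch below illustrates the idea; the @pragma name and syntax are hypothetical, since this summary does not give the exact annotation PISCES defines. Each header instance (types as in the earlier sketches) is tagged with the stage in which its fields should first be considered, roughly mirroring OVS's metadata, L2, L3, L4 ordering.

    // Hypothetical stage annotations: lower-numbered stages are searched first.
    @pragma staged_lookup 1
    header ethernet_t ethernet;

    @pragma staged_lookup 2
    header ipv4_t ipv4;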
