Building Packet Processing Graph
There are two main steps in developing a network function with NFF-GO:
- Packet processing graph construction - covered by this chapter
- Implementing user-defined functions inside the processing graph - covered by the next chapter, Helper API
Construction of the packet processing graph (and all other NFF-GO setup actions) should be done between the SystemInit and SystemStart function calls. All construction functions are defined in the main "flow" package - "github.com/intel-go/nff-go/flow".
SystemInit - func SystemInit(args *Config) error - gets a configuration structure as a parameter. This structure can be nil or filled with the required specification; unset parameters get default values. SystemInit initializes all NFF-GO internal structures as well as the underlying DPDK framework. SystemInit returns any errors that occur at the init stage. If an error happens inside DPDK initialization, first check that the application is launched with sudo rights.
The following example initializes the NFF-GO system to use ten cores:
config := flow.Config{
    CPUList: "0-9",
}
flow.SystemInit(&config)
SystemStart - func SystemStart() error - checks graph correctness and starts packet processing. SystemStart runs an infinite loop; it should not be stopped by any means other than the SystemStop or SystemReset functions, because otherwise all packet processing functions keep working. SystemStart can be called inside a new goroutine after graph construction.
The following is the smallest possible NFF-GO application:
package main
import "github.com/intel-go/nff-go/flow"
func main() {
    flow.SystemInit(nil)
    flow.SystemStart()
}
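Because SystemStart blocks, it can also be launched in a separate goroutine; a minimal sketch (the sleep is only a placeholder for real control logic):
package main

import (
    "time"

    "github.com/intel-go/nff-go/flow"
)

func main() {
    flow.SystemInit(nil)
    // SystemStart blocks, so run it in a separate goroutine.
    go flow.SystemStart()
    // Placeholder for other work in the main goroutine.
    time.Sleep(10 * time.Second)
    // Stop packet processing from the main goroutine.
    flow.SystemStop()
}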
All NFF-GO errors are returned as the NFError type, which contains a message and an error code. The developer can implement any error handler or use the default one:
CheckFatal - func CheckFatal(err error) - checks the input error and, if it is set, prints the error message and terminates the whole network function.
The previous example can be extended to:
package main
import "github.com/intel-go/nff-go/flow"
func main() {
    flow.CheckFatal(flow.SystemInit(nil))
    flow.CheckFatal(flow.SystemStart())
}
The packet processing graph is a directed graph without cycles, where vertices are flow functions (FF) and edges are abstract packet flows. A packet flow is a path which connects two or more FFs. All meaningful NFF-GO applications contain a packet processing graph as their main processing engine. Construction of this graph is done between the SystemInit and SystemStart functions.
FFs are added to the packet processing graph with "Set" functions. Note that graph construction is not thread safe, so "Set" functions must be called strictly sequentially. Also note that each flow function uses at least one CPU core from the set of cores given to SystemInit; exceptions are explicitly indicated.
Input flow functions have no incoming packet flow. Instead, they open a new outgoing packet flow, get packets from external resources and place them into this flow. Conversely, output flow functions have no outgoing packet flows. Instead, they close an incoming packet flow, take packets from it and send them outside the network function. We will consider these functions in pairs, based on the source of packets. (See below for specific capabilities - IP reassembly, statistics counters, jumbo frames.)
Using DPDK poll mode drivers is the fastest way to send and receive packets. Beforehand, the user should attach some NIC ports to DPDK drivers (see the Bare metal deployment chapter (TODO empty)). After this, DPDK numbers these ports sequentially, and the operating system and its tools no longer see them. Due to DPDK limitations, only one application can use a DPDK port at a time.
SetReceiver - func SetReceiver(portId uint16) (OUT *Flow, err error) - adds a receive FF to the graph being built. It gets a DPDK port as a parameter. SetReceiver adds several receive queues to the specified port based on NIC capabilities and scaling tasks, and switches RSS (Receive Side Scaling) on. At least one send queue is also added to every receive port in order to be able to answer ARP requests. Assigning multiple receives to one port is forbidden and leads to an error.
SetSender - func SetSender(IN *Flow, portId uint16) error - adds a send FF to the graph being built. It gets an incoming flow and a DPDK port number as parameters. SetSender adds several send queues to the specified port. It is allowed to assign several sends to one port; in this case they are merged into one send and share one CPU core.
The following example forwards packets from DPDK port 0 to DPDK port 1:
flow.SystemInit(nil)
inputFlow, _ := flow.SetReceiver(0)
flow.SetSender(inputFlow, 1)
flow.SystemStart()
DPDK ports are the fastest means of receiving and sending packets. However, in some situations DPDK drivers cannot be used (no DPDK on the system, binding external drivers is forbidden, ports must be shared among multiple applications). In these circumstances NFF-GO provides functions that receive and send packets through standard OS interfaces.
SetReceiverOS - func SetReceiverOS(device string) (*Flow, error) - adds a receiveOS FF to the graph being built. It gets an OS device name as a parameter. SetReceiverOS tries to create a raw socket bound to the given interface. Assigning two receives to one device is undefined behavior.
SetSenderOS - func SetSenderOS(IN *Flow, device string) error - adds a sendOS FF to the graph being built. It gets an incoming flow and an OS device name as parameters. SetSenderOS tries to create a raw socket bound to the given interface. Assigning two sends to one device is undefined behavior.
The following example forwards packets from the eth5 interface to the eth7 interface:
flow.SystemInit(nil)
inputFlow, _ := flow.SetReceiverOS("eth5")
flow.SetSenderOS(inputFlow, "eth7")
flow.SystemStart()
Currently this functionality is not supported; it will be supported after the release of Linux 5.1.
NFF-GO provides capabilities to dump packets to PCAP trace files and to read packets from PCAP files. These functions are not optimized for performance and should be used for debugging purposes.
SetReceiverFile - func SetReceiverFile(filename string, repcount int32) (OUT *Flow) - adds a read FF to the graph being built. It gets the filename of a PCAP trace file and the number of reads from it as parameters. The PCAP file is read repcount times (or infinitely if repcount = -1). SetReceiverFile does not check for file existence at the building stage; instead, a fatal error is raised after the system starts.
SetSenderFile - func SetSenderFile(IN *Flow, filename string) error - adds a write FF to the graph being built. It gets an incoming flow and the filename of the output PCAP trace file as parameters. Write dumps each packet from the incoming flow to the PCAP file.
"Second.pcap" file will contain four copies of "First.pcap" file after following example. (Note, that example will not finished after copying)
flow.SystemInit(nil)
inputFlow, _ := flow.SetReceiverFile("First.pcap", 4)
flow.SetSenderFile(inputFlow, "Second.pcap")
flow.SystemStart()
NFF-GO can send packets to (and receive them from) a virtual OS device via the DPDK KNI feature - Kernel Network Interface. It is useful, for example, for shadowing routing tables, because the OS does not see packets (including ARP and ICMP requests) going through DPDK interfaces. First, the developer needs to enable KNI by setting the NeedKNI option of the SystemInit config structure to true. Second, the developer needs to create a virtual device:
CreateKniDevice - func CreateKniDevice(portId uint16, name string) (*Kni, error) - creates and returns a KNI device. It gets a virtual port number (which can be the same as a real port) and the device name as parameters. The KNI device name appears in the ifconfig list of devices.
Then the developer can use the KNI FFs:
SetReceiverKNI - func SetReceiverKNI(kni *Kni) (OUT *Flow) - adds a receive KNI FF to the graph being built. It gets a KNI device created by the CreateKniDevice function as a parameter.
SetSenderKNI - func SetSenderKNI(IN *Flow, kni *Kni) error - adds a send KNI FF to the graph being built. It gets a KNI device created by the CreateKniDevice function and an incoming flow as parameters.
Using the above functions takes three CPU cores: one for the KNI device (OS handling), one for send and one for receive. The developer can reduce core usage with the following function:
SetSenderReceiverKNI - func SetSenderReceiverKNI(IN *Flow, kni *Kni, linuxCore bool) (OUT *Flow, err error) - adds a send-receive KNI FF to the graph being built. It gets an incoming flow for the send part, a KNI device created by the CreateKniDevice function and the linuxCore Boolean flag as parameters. It returns a new opened flow from the receive part (the incoming flow is closed). This function combines send and receive KNI capabilities on one core. If linuxCore is true, the KNI device itself is also handled on the same core (however, in this case performance drops dramatically). With this function the developer can use one or two cores for KNI processing.
config := flow.Config{
    NeedKNI: true,
}
flow.SystemInit(&config)
kni, _ := flow.CreateKniDevice(uint16(*kniport), "myKNI")
inputFlow, _ := flow.SetReceiver(0)
flow.SetSenderKNI(inputFlow, kni)
fromKNIFlow := flow.SetReceiverKNI(kni)
//OR
fromKNIFlow, _ := flow.SetSenderReceiverKNI(inputFlow, kni, false)
flow.SetSender(fromKNIFlow, 1)
flow.SystemStart()
There are four functions for unconditionally manipulating packet flows:
SetPartitioner - func SetPartitioner(IN *Flow, N uint64, M uint64) (OUT *Flow, err error) - adds a partition FF to the graph being built. It gets an input flow and two integer numbers "N" and "M" as parameters. Partition opens a new outgoing flow; the incoming flow remains open. The first "N" packets which arrive through the incoming flow remain in it, the following "M" packets are passed to the new opened flow, the following "N" packets remain, and so on. If exact numbers are not important, it is better to use big values of "N" and "M" instead of small ones for performance reasons. For example, it is better to use SetPartitioner(input, 300, 300) instead of SetPartitioner(input, 1, 1) to divide an input flow into two halves.
The following example dumps roughly every thousandth packet to a file instead of sending it further:
flow.SystemInit(nil)
inputFlow, _ := flow.SetReceiver(0)
dumpFlow, _ := flow.SetPartitioner(inputFlow, 1000, 1)
flow.SetSenderFile(dumpFlow, "sample.pcap")
flow.SetSender(inputFlow, 1)
flow.SystemStart()
SetCopier - func SetCopier(IN *Flow) (OUT *Flow, err error) - adds a copy FF to the graph being built. It gets an incoming flow as a parameter. SetCopier opens a new outgoing flow; the incoming flow remains open. Each incoming packet is copied: the original remains in the incoming flow, the copy is passed to the new opened flow.
In the following example the first DPDK port and the eth7 OS device send identical packets outside:
flow.SystemInit(nil)
inputFlow, _ := flow.SetReceiver(0)
OSFlow, _ := flow.SetCopier(inputFlow)
flow.SetSenderOS(OSFlow, "eth7")
flow.SetSender(inputFlow, 1)
flow.SystemStart()
SetMerger - func SetMerger(InArray ...*Flow) (OUT *Flow, err error) - adds a merge FF to the graph being built. It gets a variable number of incoming flows as parameters. All incoming flows are closed and one new outgoing flow is opened. Each packet which arrives through one of the incoming flows is passed to the output flow. Internally, merge is implemented by changing the output buffers of the previous flow functions. This means it requires no computation - neither extra cores nor performance penalties.
In the following example DPDK port 0 sends every received packet twice:
flow.SystemInit(nil)
inputFlow, _ := flow.SetReceiver(0)
copyFlow, _ := flow.SetCopier(inputFlow)
outputFlow, _ := flow.SetMerger(inputFlow, copyFlow)
flow.SetSender(outputFlow, 0)
flow.SystemStart()
Note that the same effect can be achieved with two sends and no merge, because two sends to one DPDK port are merged implicitly.
SetStopper - func SetStopper(IN *Flow) error - adds a stop FF to the graph being built. It gets an incoming flow as a parameter. Stop releases the memory of each packet which arrives through the incoming flow. Stop is internally implemented as a merge into a trash buffer, so any number of stops requires only one CPU core for freeing the trash buffer.
In the following example half of the incoming packets (though not every second packet) are removed:
flow.SystemInit(nil)
inputFlow, _ := flow.SetReceiver(0)
halfFlow, _ := flow.SetPartitioner(inputFlow, 300, 300)
flow.SetStopper(halfFlow)
flow.SetSender(inputFlow, 1)
flow.SystemStart()
Instead of unconditionally receiving, sending and shuffling packets, the user can define their own packet processing algorithms, called user-defined functions (UDF). UDFs are functions which deal with one packet or a vector of packets, as opposed to FFs, which deal with packet flows. UDFs are defined by the developer and are inserted into the packet processing graph with the help of special FFs. These FFs implicitly extract a single packet or a vector of packets from a flow and pass a pointer to it to the UDF as an argument. The UDF type is strictly controlled by the FF that adds it to a graph.
A developer can define any number of UDFs and pass them to the corresponding FFs. A UDF which is passed to an FF is applied to all packets which flow through this FF. Each piece of raw data is automatically extracted from the incoming flow, transformed into a packet and passed to the UDF. After the UDF this piece is automatically passed on to the specified flow; the developer should not worry about this. The developer can initialize, read and modify packet data inside a UDF, but not delete packets. SetStopper or SetHandlerDrop should be used for that purpose.
All UDFs can process packets in vector mode. This means that the UDF receives a slice of packet pointers for SIMD processing, as well as a mask that marks the required packets. See the vector variants below for details.
All UDFs take an additional context parameter (which can be nil). This parameter carries some global environment and is passed to the UDF along with the packet data. The context is an interface defined by the developer which must have Copy and Delete methods. The Copy method is called before any FF cloning to make sure that all clones have separate copies of the context. The Delete method is called on a clone's context after that clone is stopped.
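For illustration, a minimal sketch of such a context, assuming the UserContext interface consists of a Copy method returning interface{} and a Delete method (check the flow package sources for the exact definition):
// counterContext is an example user context carrying a per-clone counter.
type counterContext struct {
    counter uint64
}

// Copy is called before an FF is cloned, so every clone starts
// with its own independent counter.
func (c counterContext) Copy() interface{} {
    return counterContext{counter: 0}
}

// Delete is called on a clone's context after the clone is stopped;
// nothing needs to be released here.
func (c counterContext) Delete() {
}
A value of this type can then be passed as the context argument of the "Set" functions described below.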
All UDFs (and also some functions described above) can be cloned. See the chapter about scaling (TODO empty).
The packet representation which UDFs work with is defined in the "packet" package - "github.com/intel-go/nff-go/packet". The examples below use pseudocode versions of UDFs. For detailed information about creating UDFs, see the next chapter - Helper API. There you will find the packet representation, packet parsing functions, access control list rules, the multiple supported protocols and other information with corresponding examples.
Besides the unconditional partition FF, NFF-GO provides two FFs that divide a flow conditionally, based on a UDF decision.
SetSeparator - func SetSeparator(IN *Flow, separateFunction SeparateFunction, context UserContext) (OUT *Flow, err error) - adds a separate FF to the graph being built. It gets an input flow, a UDF which returns a Boolean value, and a context. SetSeparator opens a new outgoing flow for rejected packets (the incoming flow remains open). Each packet which arrives through the incoming flow remains in it if it is accepted (the UDF returns true) and is sent to the new opened flow if it is rejected (the UDF returns false).
In the following example all IPv6 packets are sent to the second DPDK port:
package main
import "github.com/intel-go/nff-go/flow"
import "github.com/intel-go/nff-go/packet"
func main() {
    flow.SystemInit(nil)
    inputFlow, _ := flow.SetReceiver(0)
    IPv6Flow, _ := flow.SetSeparator(inputFlow, checkIPv6Packets, nil)
    flow.SetSender(inputFlow, 1)
    flow.SetSender(IPv6Flow, 2)
    flow.SystemStart()
}
func checkIPv6Packets(current *packet.Packet, context flow.UserContext) bool {
    if current packet is IPv6 {
        return false
    }
    return true
}
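For comparison, a possible real implementation of this UDF, as a sketch (ParseL3 and GetIPv6 are parsing helpers from the packet package; see the Helper API chapter for their exact semantics):
func checkIPv6Packets(current *packet.Packet, context flow.UserContext) bool {
    // Fill the L3 pointers so the typed accessors below work.
    current.ParseL3()
    if current.GetIPv6() != nil {
        // IPv6 packet: reject it, so it goes to the new opened flow.
        return false
    }
    // All other packets stay in the incoming flow.
    return true
}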
The vector version of the separate FF is discussed later.
SetSplitter - func SetSplitter(IN *Flow, splitFunction SplitFunction, flowNumber uint, context UserContext) (OutArray []*Flow, err error) - adds a split FF to the graph being built. It gets an input flow, a UDF which returns an unsigned integer value, the number of output flows, and a context. SetSplitter creates the requested number of outgoing flows (the incoming flow is closed). Each packet which arrives through the incoming flow is sent to one of the created flows based on the UDF return value, which is treated as the index of the next flow. Returning a value that does not correspond to a created flow is forbidden, but is unchecked for performance reasons.
The following example splits input packets based on their IPv4 addresses:
func main() {
    flow.SystemInit(nil)
    inputFlow, _ := flow.SetReceiver(0)
    outputFlows, _ := flow.SetSplitter(inputFlow, checkIPv4, 4, nil)
    flow.SetStopper(outputFlows[0])
    for i := 1; i < 4; i++ {
        flow.SetSender(outputFlows[i], uint16(i))
    }
    flow.SystemStart()
}
func checkIPv4(current *packet.Packet, context flow.UserContext) uint {
    if current packet is not IPv4 {
        return 0
    } else if current packet ipv4 > 222.0.0.0 {
        return 2
    } else if current packet ipv4 > 111.0.0.0 {
        return 1
    }
    return 3
}
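A hedged sketch of the same splitter with real accessors; the first-octet comparison of DstAddr stands in for a full address comparison and assumes a little-endian host, since addresses are kept in network byte order:
func checkIPv4(current *packet.Packet, context flow.UserContext) uint {
    current.ParseL3()
    ipv4 := current.GetIPv4()
    if ipv4 == nil {
        return 0 // non-IPv4 packets go to flow 0 (the stopper)
    }
    // DstAddr is kept in network byte order; on a little-endian host
    // the first address octet is the lowest byte.
    firstOctet := uint8(ipv4.DstAddr)
    switch {
    case firstOctet > 222:
        return 2
    case firstOctet > 111:
        return 1
    }
    return 3
}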
The vector version of the split FF is discussed later.
Two FFs are provided for handling packets without changing packet flows.
SetHandler - func SetHandler(IN *Flow, handleFunction HandleFunction, context UserContext) error - adds a handle FF to the graph being built. It gets an incoming flow, a UDF which returns nothing and simply handles a packet, and a context. Each packet which arrives through the incoming flow is handled inside the UDF and then passed on through the same (incoming) flow. If the developer needs to drop some packets while processing, the SetHandlerDrop function should be used instead.
The following example encapsulates each arriving packet:
func main() {
    flow.SystemInit(nil)
    inputFlow, _ := flow.SetReceiver(0)
    flow.SetHandler(inputFlow, encap, nil)
    flow.SetSender(inputFlow, 1)
    flow.SystemStart()
}
func encap(current *packet.Packet, context flow.UserContext) {
    encapsulate current packet
}
The vector version of the handle FF is discussed later.
SetHandlerDrop - func SetHandlerDrop(IN *Flow, separateFunction SeparateFunction, context UserContext) error - adds a handle-drop FF to the graph being built. It gets an incoming flow, a UDF which returns a Boolean value, and a context. Each packet which arrives through the incoming flow is handled inside the UDF and then either dropped or passed on through the same (incoming) flow. The UDF should return false for packets to drop and true for packets to keep.
The following example decreases the time-to-live counter of each arriving packet:
func main() {
    flow.SystemInit(nil)
    inputFlow, _ := flow.SetReceiver(0)
    flow.SetHandlerDrop(inputFlow, ttl, nil)
    flow.SetSender(inputFlow, 1)
    flow.SystemStart()
}
func ttl(current *packet.Packet, context flow.UserContext) bool {
    if current packet ttl == 0 {
        return false
    } else {
        decrease current packet ttl
        return true
    }
}
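A sketch of the same UDF with real accessors, handling only IPv4 for brevity (TimeToLive is the TTL field of the packet package's IPv4 header):
func ttl(current *packet.Packet, context flow.UserContext) bool {
    current.ParseL3()
    ipv4 := current.GetIPv4()
    if ipv4 == nil {
        // This sketch keeps non-IPv4 packets untouched.
        return true
    }
    if ipv4.TimeToLive == 0 {
        return false // drop expired packets
    }
    ipv4.TimeToLive--
    return true
}
A production version would also update the IPv4 header checksum; the helpers for that are covered in the next chapter.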
The vector version of the handle-drop FF is discussed later.
The user can generate packets with the help of the following two functions.
SetGenerator - func SetGenerator(f GenerateFunction, context UserContext) (OUT *Flow) - adds a generate FF to the graph being built. It gets a UDF, which receives an empty packet at each iteration and fills it according to the developer's requirements, and a user context. It can be used for pings, chats and other non-performance-critical tasks. It is never cloned and may contain sleeps inside.
SetFastGenerator - func SetFastGenerator(f GenerateFunction, targetSpeed uint64, context UserContext) (OUT *Flow, tc chan uint64, err error) - adds a fast generate FF to the graph being built. It gets a UDF, which receives an empty packet at each iteration and fills it according to the developer's requirements, a target speed measured in packets per second, and a user context. At any time the scheduler tries to match the requested speed from both sides - not slower, not faster. The UDF should not contain any "wait" functionality, because it will be cloned. SetFastGenerator returns a uint64 channel which can be used to change the current target speed. Since the speed is changed iteratively, expect some delay. Also expect that at first the function works at the fastest possible speed, so if you request a slow speed, it will exceed expectations for the first second.
The following example combines a high performance packet generator with a slow sporadic generator for ping information:
func main() {
    flow.SystemInit(nil)
    fastFlow, _, _ := flow.SetFastGenerator(perf, 10000, nil)
    slowFlow := flow.SetGenerator(gen, nil)
    flow.SetSender(fastFlow, 0)
    flow.SetSenderOS(slowFlow, "eth7")
    flow.SystemStart()
}
func perf(current *packet.Packet, context flow.UserContext) {
    current = random data
}
func gen(current *packet.Packet, context flow.UserContext) {
    sleep(5 seconds)
    current = "Everything successful"
}
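A sketch of a real perf UDF together with the speed channel in action; InitEmptyIPv4UDPPacket is a packet-construction helper from the packet package, and the 64-byte payload and five-second delay are arbitrary choices:
func main() {
    flow.SystemInit(nil)
    fastFlow, speedChannel, _ := flow.SetFastGenerator(perf, 10000, nil)
    flow.SetSender(fastFlow, 0)
    go func() {
        // The returned channel retargets the generator at run time.
        time.Sleep(5 * time.Second)
        speedChannel <- 5000
    }()
    flow.SystemStart()
}
func perf(current *packet.Packet, context flow.UserContext) {
    // Build a minimal IPv4/UDP packet with a 64-byte payload;
    // a real generator would also fill addresses, ports and payload.
    packet.InitEmptyIPv4UDPPacket(current, 64)
}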
The vector version of the fast generate FF is discussed later.
Besides these functions, the developer can generate packets inside all other UDFs and send them directly to a DPDK port to answer ARP or ICMP requests. These functions are not connected with the graph and are covered in the next chapter - Helper API.
The developer can profit from SIMD instructions by using vector versions of UDFs. These versions get a vector of packets at a time. For example, the developer can encrypt multiple packets at once using vector CBC. Two main points about vector versions:
- A vector contains at most 32 packets. This number is regulated by the vBurstSize variable, but it cannot be changed. At each particular call the user function can get any number of packets from 1 to 32 inclusive, depending on how many packets are in the buffer before the corresponding FF. If it contains fewer than 32 packets, the user function gets fewer than 32. The developer is expected to handle these cases, for example by using a scalar processing function for unexpected packet numbers.
- Active packets are regulated by a mask (a vector of Booleans) provided at each call of the user function. If the mask value for a packet is false, it means either that there are fewer than 32 packets in the vector (it is guaranteed that all existing packets are at the beginning of the packet vector) or that this packet was switched off by a previous vector separate or other flow division function. If the developer needs the representation as a continuous vector, it can be built with the FillSliceFromMask function. The input slice of packets is not expected to be changed, only the packets themselves.
SetVectorHandler - func SetVectorHandler(IN *Flow, vectorHandleFunction VectorHandleFunction, context UserContext) error - adds a vector handle FF to the graph being built. Packets should not be dropped inside the UDF - use the SetVectorHandlerDrop function for that.
The user function should have the following signature: func vectorHandler(currentPackets []*packet.Packet, mask *[32]bool, context flow.UserContext).
SetVectorHandlerDrop - func SetVectorHandlerDrop(IN *Flow, vectorSeparateFunction VectorSeparateFunction, context UserContext) error - adds a vector handle-drop FF to the graph being built.
The user function should have the following signature: func vectorHandlerDrop(currentPackets []*packet.Packet, mask *[32]bool, notDrop *[32]bool, context flow.UserContext). The notDrop mask should be set to true for packets that should not be automatically dropped after this function.
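A sketch of a vector handle-drop UDF with this signature; the TTL logic mirrors the scalar example above:
func vectorTTL(currentPackets []*packet.Packet, mask *[32]bool, notDrop *[32]bool, context flow.UserContext) {
    for i := range currentPackets {
        if !mask[i] {
            // Inactive slot: fewer than 32 packets arrived,
            // or this packet was switched off earlier.
            continue
        }
        currentPackets[i].ParseL3()
        ipv4 := currentPackets[i].GetIPv4()
        if ipv4 == nil {
            notDrop[i] = true // keep non-IPv4 packets in this sketch
            continue
        }
        if ipv4.TimeToLive == 0 {
            notDrop[i] = false // drop expired packets
            continue
        }
        ipv4.TimeToLive--
        notDrop[i] = true
    }
}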
SetVectorSeparator - func SetVectorSeparator(IN *Flow, vectorSeparateFunction VectorSeparateFunction, context UserContext) (OUT *Flow, err error) - adds a vector separate FF to the graph being built.
The user function should have the following signature: func vectorSeparator(currentPackets []*packet.Packet, mask *[32]bool, answer *[32]bool, context flow.UserContext). The answer mask should be set to true for packets that should go to the new opened flow and to false for packets that should remain in the existing flow.
SetVectorSplitter - func SetVectorSplitter(IN *Flow, vectorSplitFunction VectorSplitFunction, flowNumber uint, context UserContext) (OutArray []*Flow, err error) - adds a vector split FF to the graph being built.
The user function should have the following signature: func vectorSplitter(currentPackets []*packet.Packet, mask *[32]bool, answer *[32]uint, context flow.UserContext). The answer values should be set to flow indices between 0 and flowNumber; packets are passed to the corresponding new opened flows.
flow.SetVectorHandler(inputFlow, vectorEncrypt, nil)
func vectorEncrypt(currentPackets []*packet.Packet, mask *[32]bool, context flow.UserContext) {
    // Make chunks of 8 packets from the incoming vector.
    // Process the packets in a chunk simultaneously with a vector version of CBC.
}
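A compilable skeleton of the same pattern; encryptChunk here is a hypothetical stand-in for a real SIMD CBC routine:
func vectorEncrypt(currentPackets []*packet.Packet, mask *[32]bool, context flow.UserContext) {
    // Gather pointers to the active packets into a contiguous slice
    // (the FillSliceFromMask helper serves a similar purpose).
    active := make([]*packet.Packet, 0, 32)
    for i := range currentPackets {
        if mask[i] {
            active = append(active, currentPackets[i])
        }
    }
    // Hand the packets to the cipher in chunks of up to 8; a SIMD
    // implementation consumes full chunks, the tail is processed scalar.
    for len(active) > 0 {
        n := len(active)
        if n > 8 {
            n = 8
        }
        encryptChunk(active[:n])
        active = active[n:]
    }
}

// encryptChunk is a hypothetical placeholder for a vector CBC routine.
func encryptChunk(pkts []*packet.Packet) {}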
SetVectorFastGenerator - func SetVectorFastGenerator(f VectorGenerateFunction, targetSpeed uint64, context UserContext) (OUT *Flow, tc chan uint64, err error) - adds a vector fast generate FF to the graph being built. The vector generate UDF takes no mask, because the input slice always contains 32 packets.
fastFlow, _, _ := flow.SetVectorFastGenerator(vectorPerf, 10000, nil)
func vectorPerf(currentPackets []*packet.Packet, context flow.UserContext) {
    for i := range currentPackets {
        currentPackets[i] = random data
    }
}
The next step is implementing the chosen UDFs. This should be done according to your particular task with the help of the provided helper functions - see the next chapter, Helper API.