Brno, Czech Republic – May 7, 2026 – Next week at DPDK Summit 2026 in Stockholm, Pavlina Patova will present the results of a direct hardware comparison focused on a question that does not appear on most benchmark dashboards: how do three different NIC platforms actually behave when you push rte_flow rules into edge cases that standard tests do not cover?
The platforms:
- Silicom N6010 running DYNANIC firmware (nfb PMD)
- Intel E810-CQDA2 (ice PMD)
- NVIDIA ConnectX-5 (mlx5 PMD)
All testing was conducted under DPDK 22.11 in Q3–Q4 2025. This blog post walks through the key findings with the actual testpmd traces from the session.
Why edge cases matter more than peak throughput
Synthetic benchmarks are useful for establishing ceilings. They are not useful for predicting what happens when a developer writes an rte_flow rule with an implicit header assumption, inserts a duplicate pattern into a flow table, or wires up a COUNT action across multiple rules using the same identifier.
These are not contrived scenarios. They occur during integration, during debugging, and in production when traffic patterns shift. The question is whether you discover the behavior from documentation or from an unexpected queue assignment at 3 a.m.
The comparison below focuses on that second category.
Pattern parser strictness
The first thing tested was how strictly each PMD requires flow patterns to be specified: what happens when headers are omitted or provided out of order?
Can you omit the Ethernet header?
ConnectX-5 accepts IPv4 rules both with and without the explicit eth header:
flow create 0 ingress group 0 pattern eth / ipv4 dst is 192.168.1.1 / end actions queue index 4 / end
flow create 0 ingress group 0 pattern ipv4 dst is 192.168.1.1 / end actions queue index 4 / end
Both succeed. However, if you attempt to match on L4 without specifying L3, the driver returns an error:
flow create 0 ingress group 0 pattern udp src is 1111 / end actions queue index 0 / end
port_flow_complain(): Caught PMD error type 13 (specific pattern item): L3 is mandatory to filter on L4: Invalid argument
Adding the L3 layer resolves it:
flow create 0 ingress group 0 pattern ipv4 / udp src is 1111 / end actions queue index 0 / end
DYNANIC applies no such constraint. A bare L4 match is accepted without any L3 declaration:
flow create 0 ingress group 0 pattern udp src is 1111 / end actions queue index 0 / end
The same flexibility applies to ordering. If layers are listed out of sequence — UDP before IPv4 — the parser reorders them automatically and constructs the correct match. When the same layer appears twice in a rule, the last occurrence takes effect:
flow create 0 ingress pattern ipv4 dst is 2.2.2.2 / ipv4 src is 1.1.1.1 / end actions queue index 5 / end
Only packets with source address 1.1.1.1 are matched.
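The reorder-and-last-wins behavior can be captured in a few lines. The following is a minimal Python sketch of the semantics observed above, not DYNANIC driver code; the layer-order table and the dictionary rule encoding are assumptions made for illustration:

```python
# Model of the permissive parser: out-of-order layers are reordered and,
# when a layer appears twice, the last occurrence takes effect.
LAYER_ORDER = {"eth": 0, "ipv4": 1, "ipv6": 1, "udp": 2, "tcp": 2}

def normalize(pattern):
    last = {}
    for item in pattern:                  # later duplicates overwrite earlier
        last[item["layer"]] = item
    return sorted(last.values(), key=lambda i: LAYER_ORDER[i["layer"]])

# UDP listed before IPv4, and ipv4 given twice: after normalization the
# layers are in protocol order and only the second ipv4 spec survives.
rule = [
    {"layer": "udp", "src": 1111},
    {"layer": "ipv4", "dst": "2.2.2.2"},
    {"layer": "ipv4", "src": "1.1.1.1"},
]
print(normalize(rule))
```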
Intel E810 is the most restrictive of the three. Generic protocol matches without at least one concrete field value are rejected:
flow create 0 ingress pattern eth / ipv4 / udp / end actions queue index 0 / end
ICE_DRIVER: ice_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 10 (item specification): Invalid input set: Invalid argument
You must specify at least one field:
flow create 0 ingress pattern eth / ipv4 / udp dst is 1111 / end actions queue index 0 / end
This requirement applies across protocols; it is not specific to UDP. Additionally, only a defined set of pattern combinations is supported, distributed across the driver’s internal engines (FDIR, SWITCH, ACL).
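The "at least one concrete field" constraint amounts to a simple input-set validation. Here is a Python sketch of that check as a model of the observed behavior, not the ice driver's actual validation logic; the pattern encoding is an assumption:

```python
# Model of the strict-parser constraint: a pattern made only of generic
# protocol items (no concrete field values) is rejected.
def validate_input_set(pattern):
    has_field = any(key != "layer" for item in pattern for key in item)
    if not has_field:
        raise ValueError("Invalid input set")
    return True

validate_input_set([{"layer": "eth"}, {"layer": "ipv4"},
                    {"layer": "udp", "dst": 1111}])   # accepted
try:
    validate_input_set([{"layer": "eth"}, {"layer": "ipv4"}, {"layer": "udp"}])
except ValueError as err:
    print(err)   # generic-only pattern is rejected
```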
Identical and overlapping rules
What happens when the same pattern is inserted twice? Or when one packet matches two different rules?
ConnectX-5: all matching rules execute
With identical patterns and different FATE actions, ConnectX-5 executes all of them:
flow create 0 ingress group 0 pattern ipv4 dst is 192.168.1.1 / end actions jump group 1 / end
flow create 0 ingress group 1 pattern ipv4 dst is 192.168.1.1 / end actions queue index 4 / end
flow create 0 ingress pattern ipv4 dst is 192.168.1.1 / end actions rss func toeplitz types ipv4 end queues 0 1 2 end / end
flow create 0 ingress group 0 pattern ipv4 dst is 192.168.1.1 / end actions queue index 4 / end
flow create 0 ingress group 0 pattern ipv4 dst is 192.168.1.1 / end actions queue index 5 / end
flow create 0 ingress pattern ipv4 dst is 192.168.1.1 / end actions rss func toeplitz types ipv4 end queues 9 end / end
Forward Stats for RX Port= 0/Queue= 1 → RX-packets: 1
Forward Stats for RX Port= 0/Queue= 4 → RX-packets: 2
Forward Stats for RX Port= 0/Queue= 5 → RX-packets: 1
Forward Stats for RX Port= 0/Queue= 9 → RX-packets: 1
The packet hits queues 1, 4 (twice – once from the direct rule, once via the jump into group 1), 5, and 9. However, once a QUEUE or RSS action has been confirmed for a given pattern, adding a rule with a different action type for the same pattern will be rejected:
flow create 0 ingress group 0 pattern ipv4 dst is 192.168.1.1 / end actions queue index 4 / end
Flow rule #0 created
flow create 0 ingress group 0 pattern ipv4 dst is 192.168.1.1 / end actions jump group 1 / end
port_flow_complain(): hardware refuses to create flow: Invalid argument
DYNANIC: last rule wins
DYNANIC uses a straightforward last-inserted-rule-wins model: the most recently created rule matching a given packet determines the outcome, regardless of how many earlier rules would also match. The default action in each group is RSS.
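As a model, last-inserted-rule-wins dispatch looks like this (illustrative Python, not firmware code; the rule and packet encodings are assumptions):

```python
# Model of last-inserted-rule-wins: rules are scanned newest-first and
# the first (i.e. most recently created) match decides the outcome.
def dispatch(rules, packet, default="rss"):
    for match, action in reversed(rules):
        if match(packet):
            return action
    return default            # the default action in each group is RSS

rules = [
    (lambda p: p["dst"] == "1.1.1.1", "queue 0"),
    (lambda p: p["dst"] == "1.1.1.1", "queue 7"),   # created later
]
print(dispatch(rules, {"dst": "1.1.1.1"}))   # the later rule decides
print(dispatch(rules, {"dst": "9.9.9.9"}))   # no match: fall back to RSS
```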
Intel E810: engine-dependent behavior
For the ACL engine, any number of identical patterns can be inserted and the most recently added rule applies. For the SWITCH engine, the driver does not store a duplicate; it attempts to offload it to the next available engine:
flow create 0 ingress pattern eth / ipv4 dst is 192.168.1.9 / end actions queue index 8 / end
Flow rule #0 created
flow create 0 ingress pattern eth / ipv4 dst is 192.168.1.9 / end actions queue index 8 / end
Flow rule #1 created
flow create 0 ingress pattern eth / ipv4 dst is 192.168.1.9 / end actions queue index 8 / end
ICE_DRIVER: ice_flow_create(): Failed to create flow
port_flow_complain(): Rule already exists!: File exists
Two identical SWITCH rules can coexist because SWITCH and FDIR share pattern compatibility. A third attempt fails when no compatible engine remains. Within FDIR, duplicate patterns are rejected outright.
Count action behavior
ConnectX-5: one COUNT per pattern in the root table
In the root table, only a single rule with a COUNT action is permitted per pattern. Any attempt to add a second — regardless of whether the identifier or the FATE action changes — is rejected:
flow create 0 ingress pattern ipv4 dst is 192.168.1.1 / end actions count identifier 5 / queue index 5 / end
Flow rule #0 created
flow create 0 ingress pattern ipv4 dst is 192.168.1.1 / end actions count identifier 5 / queue index 5 / end
port_flow_complain(): hardware refuses to create flow: Invalid argument
flow create 0 ingress pattern ipv4 dst is 192.168.1.1 / end actions count identifier 4 / queue index 4 / end
port_flow_complain(): hardware refuses to create flow: Invalid argument
When the same COUNT identifier is shared across rules in a non-root group, only the first rule matched increments the counter. The identifier does not aggregate.
DYNANIC: counters keyed by identifier
DYNANIC allows the same COUNT identifier to appear in multiple rules, and the counter is tracked per identifier rather than per rule:
flow create 0 ingress group 0 pattern ipv4 dst is 1.1.1.1 / end actions count identifier 1 / queue index 0 / end
flow create 0 ingress group 0 pattern ipv4 dst is 2.2.2.2 / end actions count identifier 1 / queue index 1 / end
flow query 0 0 count → hits: 1
flow query 0 1 count → hits: 1
Both rules report a match even though only one packet was sent. Using different identifiers produces the expected exclusive behavior: each rule counts only its own matched traffic.
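The shared-identifier result is what you would get from counters keyed by identifier rather than by rule. A small Python model of that bookkeeping (assumed encoding, not driver code):

```python
from collections import defaultdict

# Counters keyed by COUNT identifier, not by rule: rules that share an
# identifier therefore report the same aggregated value when queried.
counters = defaultdict(int)
rules = [
    {"dst": "1.1.1.1", "count_id": 1, "queue": 0},
    {"dst": "2.2.2.2", "count_id": 1, "queue": 1},   # same identifier
]

def receive(packet):
    for rule in rules:
        if rule["dst"] == packet["dst"]:
            counters[rule["count_id"]] += 1
            return rule["queue"]

receive({"dst": "1.1.1.1"})       # a single packet, matching rule 0 only
# A query on either rule resolves to identifier 1, so both report the hit.
print(counters[rules[0]["count_id"]], counters[rules[1]["count_id"]])
```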
Intel E810: up to 256 COUNT rules
E810 supports up to 256 rules with COUNT actions. Identifier behavior mirrors ConnectX-5: shared identifiers reflect only the first matched rule’s result.
Jump action and loop behavior
ConnectX-5: loops are possible and produce asymmetric counter behavior
ConnectX-5 permits jumps to any non-root group, including backward jumps that create loops. We tested this:
flow create 0 ingress group 0 pattern void / end actions jump group 1 / end
flow create 0 ingress group 0 pattern void / end actions jump group 2 / end
flow create 0 ingress group 1 pattern void / end actions jump group 1 / count identifier 1 / end
flow create 0 ingress group 2 pattern void / end actions jump group 2 / count identifier 2 / end
flow query 0 2 count → hits: 0
flow query 0 3 count → hits: 987,283,996 (and climbing)
Rule 3’s counter increments at wire speed while Rule 2’s stays at zero. When we reversed the group insertion order, the opposite occurred: Rule 2’s counter climbed and Rule 3’s stayed at zero. This implies that rules are evaluated beginning with the most recently inserted one, not in group-number order. The asymmetry is reproducible and consistent.
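The asymmetry is consistent with a model in which, within each group, the most recently inserted rule is evaluated first. A toy Python simulation of the four rules above (bounded to a fixed hop count, since the real hardware loops indefinitely; the encoding is an assumption):

```python
# Toy simulation: within a group the newest rule is evaluated first,
# so the packet takes the loop installed by the later jump rule.
def route(groups, max_hops=10):
    hits, group = {}, 0
    for _ in range(max_hops):          # the hardware loops forever
        rule = groups[group][-1]       # most recently inserted rule wins
        hits[rule["id"]] = hits.get(rule["id"], 0) + 1
        group = rule["jump"]
    return hits

groups = {
    0: [{"id": 0, "jump": 1}, {"id": 1, "jump": 2}],   # rule 1 created last
    1: [{"id": 2, "jump": 1}],   # self-loop in group 1 (rule 2's counter)
    2: [{"id": 3, "jump": 2}],   # self-loop in group 2 (rule 3's counter)
}
hits = route(groups)
print(hits)   # rule 3 accumulates hits; rule 2 never fires
```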
DYNANIC: forward-only jumps
DYNANIC enforces forward-only jumps at the hardware level. A rule cannot direct a packet to a group with an equal or lower index. This eliminates loop formation entirely and makes group traversal paths straightforward to reason about.
Intel E810: no explicit JUMP action
E810 does not expose JUMP as an rte_flow action. Traffic is steered across internal engines (SWITCH, FDIR, ACL) automatically based on pattern compatibility and engine priority. The interaction between engines can produce behavior that appears inconsistent: a packet matched by both a SWITCH rule and an FDIR rule simultaneously may increment the FDIR counter while the SWITCH engine’s queue index is used. This suggests engine evaluation order does not follow insertion order from the application’s perspective.
Mark action
ConnectX-5: MARK persists across group traversals
A MARK set in group 0 remains attached to the packet as it passes through subsequent groups. When multiple MARK actions are encountered during traversal, the last one encountered wins:
flow create 0 ingress group 0 pattern void / end actions jump group 1 / mark id 4 / end
flow create 0 ingress group 1 pattern void / end actions queue index 2 / mark id 2 / end
# MetadataViewer output:
Packet: 0 — QUEUE: 2 — MARK ID: 2
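This traversal can be modeled as each MARK overwriting the previous one as the packet moves through the groups (illustrative Python, not mlx5 code; the group encoding is an assumption):

```python
# Model of mark persistence across group traversal: every MARK
# encountered overwrites the previous one, so the last MARK wins.
def traverse(groups, packet):
    group = 0
    while group is not None:
        rule = groups[group]
        if "mark" in rule:
            packet["mark"] = rule["mark"]
        if "queue" in rule:
            packet["queue"] = rule["queue"]
        group = rule.get("jump")       # no jump ends the traversal
    return packet

groups = {
    0: {"jump": 1, "mark": 4},         # group 0 sets mark 4, jumps on
    1: {"queue": 2, "mark": 2},        # group 1 overwrites with mark 2
}
print(traverse(groups, {}))   # queue 2, mark 2, matching the trace above
```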
DYNANIC: MARK is scoped to the matched rule
DYNANIC scopes MARK to the rule in which it is defined. If the last-matched rule does not set a MARK, the packet carries no mark, even if an earlier matching rule would have set one:
flow create 0 ingress pattern void / end actions queue index 2 / mark id 2 / end
flow create 0 ingress pattern void / end actions queue index 3 / end
# MetadataViewer output:
Packet: 0 — QUEUE: 3 — (no mark)
Intel E810: MARK is FDIR-scoped, but persists even when FDIR is not the final FATE
E810 restricts MARK to the FDIR engine. A notable behavior: a packet matched by both an FDIR rule (with MARK) and a SWITCH rule (with a different queue assignment) ends up in the SWITCH rule’s queue. But it still carries the FDIR rule’s mark value:
flow create 0 ingress pattern eth / ipv4 dst is 198.168.1.0 / end actions queue index 1 / count identifier 1 / mark id 1 / end
flow create 0 ingress pattern eth / ipv4 dst is 198.168.1.0 / end actions queue index 2 / end
# MetadataViewer output:
Packet: 0 — QUEUE: 2 — MARK ID: 1
The packet lands in queue 2 (the SWITCH rule’s assignment), but carries mark ID 1 from the FDIR match.
VLAN header depth
We sent packets carrying 0 to 100 VLAN headers and checked how many were correctly matched against an IPv4 source address rule.
ConnectX-5: All 100 packets matched correctly, regardless of stack depth. VLAN-specific pattern matching is available for the outermost header.
DYNANIC: Correctly matched packets carrying up to 5 VLAN headers. Packets with deeper stacks defaulted to the RSS queue rather than the matched rule’s queue. VLAN-specific pattern matching is not available in the current trial design but can be requested as a configurable capability.
Intel E810: Correctly matched only packets with a single VLAN header. VLAN matching is available but requires a specific pattern sequence and is confined to the SWITCH engine.
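The per-platform results fit a simple depth-limit model: the matcher can look past at most N VLAN tags, and deeper packets fall back to the default RSS queue instead of the rule’s queue. A sketch of that model (illustrative only, not driver behavior beyond what the trial showed):

```python
# Depth-limit model: the matcher can look past at most `depth_limit`
# VLAN tags; deeper packets fall back to the default RSS queue.
def classify(vlan_depth, rule_queue, depth_limit, rss_queue=0):
    if vlan_depth <= depth_limit:
        return rule_queue      # inner IPv4 header reached: rule applies
    return rss_queue           # stack too deep: default RSS handling

# With the limit of 5 observed for DYNANIC in this trial:
print([classify(d, rule_queue=3, depth_limit=5) for d in (0, 5, 6)])
```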
Other features: priorities and rule persistence
| Feature | Intel E810 | NVIDIA ConnectX-5 | DYNANIC |
|---|---|---|---|
| Supported priority values | 0 and 1 | Non-root: up to 21,844 / Root: up to 4 | No restriction |
| Rule persistence | Explicit delete required | Deleted on port stop | Explicit delete required |
DYNANIC’s unrestricted priority range is particularly useful in complex filtering pipelines where fine-grained ordering of rules across multiple groups matters. The explicit-delete persistence model (shared with E810) means flow tables survive port restarts, which is relevant in environments where ports cycle during maintenance windows.
Key takeaways
Each platform reflects a distinct design philosophy, and none is universally better. The right choice depends on the use case and integration context.
Intel E810 enforces the most explicit contract: patterns must be precise, engine constraints are well-defined, and behavior is predictable within those boundaries. The tradeoff is higher rule-authoring friction and per-engine limits on duplicate handling.
NVIDIA ConnectX-5 offers the most permissive multi-rule execution model — useful when multiple overlapping rules need to fire simultaneously. But the behavior of loops, shared COUNT identifiers, and the interaction between groups requires careful mapping before deployment.
DYNANIC prioritizes determinism at the rule level: last-wins semantics, forward-only jumps, and flexible parsing reduce ambiguity in rule outcomes. The VLAN depth limit in the current trial design is a known constraint, and extended capabilities are available on request.
The full testpmd traces, counter behavior analysis, and the test environment configuration from the DPDK Summit session are available on request. If you are evaluating SmartNIC platforms for network acceleration, packet filtering, or DDoS mitigation workloads, reach out to the DYNANIC team:

