[Bug 773] New: iptables performance limits on # of rules using ipset

bugzilla-daemon at bugzilla.netfilter.org bugzilla-daemon at bugzilla.netfilter.org
Tue Feb 28 23:15:10 CET 2012


http://bugzilla.netfilter.org/show_bug.cgi?id=773

           Summary: iptables performance limits on # of rules using ipset
           Product: ipset
           Version: unspecified
          Platform: All
        OS/Version: All
            Status: NEW
          Severity: enhancement
          Priority: P5
         Component: default
        AssignedTo: netfilter-buglog at lists.netfilter.org
        ReportedBy: aas029 at yahoo.com
   Estimated Hours: 0.0


Observing significant latency degradation and packet loss for pass-through
traffic (FORWARD chain) when the number of iptables rules that use ipportiphash
ipset matching exceeds 24 rules. This happens even when the ipsets themselves
are empty or contain just a few entries each.

The following is a stripped-down example to demonstrate the potential issue (a
scripted version of this setup is sketched below):
- create X ipportiphash ipsets:
ipset -N UDP-x ipportiphash --network 129.129.0.0/22
where x runs from 1 to X
- add X iptables rules, each of which matches one of the ipsets created above:
iptables -I FORWARD 1 -m set --match-set UDP-x src,src,dst -j ACCEPT
where x runs from 1 to X
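
For reference, a minimal shell sketch of the above setup (a sketch only,
assuming the ipset 4.x syntax shown above; X=29 is just an example value):

#!/bin/sh
# Sketch: create X empty ipportiphash sets and one FORWARD rule per set.
X=29   # number of sets/rules; vary this to probe the ~24-rule threshold

for i in $(seq 1 "$X"); do
    # Create an (empty) ipportiphash set covering the test network.
    ipset -N "UDP-$i" ipportiphash --network 129.129.0.0/22
    # Add a FORWARD rule matching src IP, src port, dst IP against the set.
    iptables -I FORWARD 1 -m set --match-set "UDP-$i" src,src,dst -j ACCEPT
done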

When X is up to 24 (i.e. 24 ipsets and 24 corresponding iptables rules), the
average latency of packets forwarded through the system is on the order of
100us (microseconds) and no packets are dropped. This result is close to the
case where there are no iptables rules at all.

When X is around 29 or above, the latency of the very same system (no other
changes) is about 5ms, i.e. 50 times larger, and 4% or more of the packets are
dropped by the system.

The above holds even if the ipsets contain only a few entries each or are
completely empty. Somehow, once the number of iptables rules using ipportiphash
ipsets exceeds a given threshold (around 24 rules on my system), performance
degrades enormously. On the other hand, for X up to 24 the results don't seem
to depend on X, and the number of entries in the ipsets doesn't seem to matter
much either. For example, average latency stays below 150us even when the
ipsets are full (64k entries each), as long as no more than 24 iptables rules
refer to these full-size ipsets.

Note that the traffic forwarded through the system in the above tests does NOT
match any of the rules (when all ipsets are empty, this is obviously true
regardless of the src and dst IPs in the packets). This 'dummy' configuration
is intentional: it forces the system to evaluate all X rules for every packet,
so that the latency cost of matching against X rules can be measured.
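
One way to confirm that the test traffic really traverses all X rules without
matching any of them (an assumption about how to verify the setup, not
something stated in the measurements above) is to check the per-rule packet
counters:

# With the 'dummy' setup, the per-rule counters should stay at zero
# while the chain's default-policy counter keeps growing.
iptables -v -n -L FORWARD
# Optionally zero the counters between test runs.
iptables -Z FORWARD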

For comparison, the same system works fine with 1000 iptables rules that do
NOT use ipsets, such as the following (a loading sketch follows below):
iptables -I FORWARD 1 -s 129.129.0.0/24 -d 29.30.0.0/16 -j ACCEPT
These rules were likewise chosen so that the destination IP of the test
traffic never matches any rule's specified destination.
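
A minimal sketch of how such a batch of plain (non-ipset) rules could be
loaded for the comparison test; for simplicity it just repeats the same
never-matching rule 1000 times, which is an assumption rather than the exact
rule set used for the measurement:

#!/bin/sh
# Sketch: insert 1000 FORWARD rules that match on -s/-d only
# and never match the test traffic.
for i in $(seq 1 1000); do
    iptables -I FORWARD 1 -s 129.129.0.0/24 -d 29.30.0.0/16 -j ACCEPT
done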

So to summarize, the system is capable of processing 1000 'normal' iptables
rules (using -s and -d matching) like the one above for each packet, yet it
suffers significant performance degradation when processing just 29 rules that
use ipportiphash ipset matching, even when the ipsets are empty.


Version information is below:
ipset v4.5, protocol version 4.
Kernel module protocol version 4.
iptables v1.4.7
Linux 2.6.32
