[Bug 751] New: IPv6 bridging bug
bugzilla-daemon at bugzilla.netfilter.org
Tue Sep 27 09:37:43 CEST 2011
http://bugzilla.netfilter.org/show_bug.cgi?id=751
Summary: IPv6 bridging bug
Product: iptables
Version: unspecified
Platform: x86_64
OS/Version: Gentoo
Status: NEW
Severity: normal
Priority: P3
Component: ip6tables
AssignedTo: netfilter-buglog at lists.netfilter.org
ReportedBy: david at commroom.net
Estimated Hours: 0.0
First and foremost, many kind thanks to all of the developers and maintainers
of IPtables and netfilter. Your work is much appreciated and I thank you for
it.
I am really hoping that somebody can shed some light and help me discover
what I am doing wrong. I am stumped and cannot figure out why I am unable to
get this working. I do have another slightly older box sitting here that is
running a very similar setup from a few kernels back (a SUSE box, with
2.6.34) and everything works great.
I really want to apologize in advance if this bug is a duplicate; I have
searched this Bugzilla and many other sources thoroughly, but I cannot come
up with an answer. Finally, please accept my apologies if this bug report
should be destined for iptables-kernel [or bridging] instead of this section.
I am trying to write some IPv6 firewall rules for some HVM virtual machines I
am running under a Xen kernel. I am using ip6tables v1.4.12.1, compiled from
the Gentoo ebuild iptables-1.4.12.1-r1.ebuild.
Output of uname -srvmpio is as follows:
Linux 2.6.38-xen #1 SMP Mon Sep 26 09:46:29 PDT 2011 x86_64 Pentium(R)
Dual-Core CPU E6700 @ 3.20GHz GenuineIntel GNU/Linux
What I am doing is creating a bridge, br0, and allowing virtual HVM clients
hosted under a Xen machine to use this bridge. By "HVM" I mean that these
virtual machines are not paravirtualized; they are fully emulated x86 hosts.
This is achieved using the vif-bridge script that accompanies xen-tools (I
have emerged the xen-tools-3.4.2-r1.ebuild provided by Gentoo Portage). Each
HVM client creates a new tapx.x interface, and also a corresponding vifx.x
interface. I am not using the standard xenbr0 configuration that everybody
else is talking about and using. My bridge looks something like the
following:
    (cloud)
       |
[wired-eth0]-----|
                 |---------[br0 on XEN DOM0 HOST]
[wired-eth1]-----|                    |
                                      |---[vifx.x/tapx.x]----[eth0-HVM-virtmach.]
wired-eth0 is the only real interface connected to a real wire. wired-eth1 is
part of the bridge, but not connected to anything.
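For reference, the topology as brctl would show it looks something like this
(a sketch; the guest interface names and the bridge id are illustrative):

# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.001122334455       no              eth0
                                                        eth1
                                                        tap1.0
                                                        vif1.0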
I have configured my ip6tables policies with a default ACCEPT policy, and then
explicitly specified ACCEPT targets for the INPUT, OUTPUT, and FORWARD chains.
I have tried the following:
# ip6tables -L
Chain INPUT (policy ACCEPT)
target     prot opt source       destination
ACCEPT     all      anywhere     anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source       destination
ACCEPT     all      anywhere     anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source       destination
ACCEPT     all      anywhere     anywhere
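For reference, a minimal sketch of the commands that would produce this
configuration (assuming an otherwise empty filter table):

# ip6tables -P INPUT ACCEPT
# ip6tables -P FORWARD ACCEPT
# ip6tables -P OUTPUT ACCEPT
# ip6tables -A INPUT -j ACCEPT
# ip6tables -A FORWARD -j ACCEPT
# ip6tables -A OUTPUT -j ACCEPT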
Because the vifx.x and tapx.x interfaces are used by the Xen host to add the
HVM hosts to the network bridge, I also tried the following configuration:
# ip6tables -L
Chain INPUT (policy ACCEPT)
target     prot opt source       destination
ACCEPT     all      anywhere     anywhere     PHYSDEV match --physdev-is-in

Chain FORWARD (policy ACCEPT)
target     prot opt source       destination
ACCEPT     all      anywhere     anywhere     PHYSDEV match --physdev-is-bridged

Chain OUTPUT (policy ACCEPT)
target     prot opt source       destination
ACCEPT     all      anywhere     anywhere     PHYSDEV match --physdev-is-out
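For reference, a sketch of the commands behind that listing (assuming the
physdev match extension is available):

# ip6tables -A INPUT -m physdev --physdev-is-in -j ACCEPT
# ip6tables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT
# ip6tables -A OUTPUT -m physdev --physdev-is-out -j ACCEPT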
Finally, I have tried them both combined, this way:
# ip6tables -L
Chain INPUT (policy ACCEPT)
target     prot opt source       destination
ACCEPT     all      anywhere     anywhere
ACCEPT     all      anywhere     anywhere     PHYSDEV match --physdev-is-in
ACCEPT     all      anywhere     anywhere     PHYSDEV match --physdev-is-out

Chain FORWARD (policy ACCEPT)
target     prot opt source       destination
ACCEPT     all      anywhere     anywhere
ACCEPT     all      anywhere     anywhere     PHYSDEV match --physdev-is-bridged

Chain OUTPUT (policy ACCEPT)
target     prot opt source       destination
ACCEPT     all      anywhere     anywhere
ACCEPT     all      anywhere     anywhere     PHYSDEV match --physdev-is-out
ACCEPT     all      anywhere     anywhere     PHYSDEV match --physdev-is-in
Finally, since IPv6 has some multicast dependencies, I also tried allowing
ff00::/8 as a destination [-d] in my policy rules, but that didn't make a
difference or change anything. I also tried allowing fe80::/64, since
link-local addressing is part of IPv6. I didn't expect either to change
anything, because the blanket ACCEPT targets in my chains should have matched
first regardless (ff00::/8 and fe80::/64 or not), but I still tried it.
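For reference, a sketch of the kind of rules I tried (redundant, given the
blanket ACCEPT rules above):

# ip6tables -A INPUT -d ff00::/8 -j ACCEPT
# ip6tables -A FORWARD -d ff00::/8 -j ACCEPT
# ip6tables -A INPUT -s fe80::/64 -j ACCEPT
# ip6tables -A FORWARD -s fe80::/64 -j ACCEPT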
When I set /proc/sys/net/bridge/bridge-nf-call-ip6tables to 1 (i.e. echo 1 >
/proc/sys/net/bridge/bridge-nf-call-ip6tables), the HVM clients connected to
the br0 bridge cannot send or receive any IPv6 at all. No IPv6 router
advertisements are seen by the HVM clients; no IPv6 multicast is seen by the
HVM clients. The HVM clients cannot send or receive any ICMPv6, not even to
link-local addresses in the same broadcast domain.
Wireshark and tcpdump confirm that no IPv6 packets are passing across either
the tapx.x or the vifx.x bridged interfaces for the HVM virtual machines.
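For reference, the sort of captures I ran (a sketch; vif1.0 stands in for
whichever vifx.x or tapx.x interface belongs to a guest):

# tcpdump -n -i vif1.0 ip6
# tcpdump -n -i br0 icmp6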
The DOM0 machine's own IPv6 configuration works perfectly: radvd router
advertisements are seen, and IPv6 stateless autoconfiguration works. IPv6
layer-7 connectivity works with local destinations as well as with the global
unicast address of the DOM0 (the Xen host system). The Xen host has perfectly
good layer-3 and layer-7 connectivity, even when subject to more stringent
policies, but the HVM guests on that machine have no IPv6 communication
passing at all.
The other odd thing to me is that IPv4 works perfectly with this
configuration. On this machine I have also set
/proc/sys/net/bridge/bridge-nf-call-iptables (the IPv4 counterpart) to "1".
My understanding is that bridge-nf-call-iptables must be "1" for iptables to
see bridged traffic, and that bridge-nf-call-ip6tables must likewise be "1"
for ip6tables to see bridged IPv6 traffic, so I have set both. For reference,
these are both set to "1":
# for i in /proc/sys/net/bridge/*; do echo $i && cat $i; done
/proc/sys/net/bridge/bridge-nf-call-arptables
0
/proc/sys/net/bridge/bridge-nf-call-ip6tables
1
/proc/sys/net/bridge/bridge-nf-call-iptables
1
/proc/sys/net/bridge/bridge-nf-filter-pppoe-tagged
0
/proc/sys/net/bridge/bridge-nf-filter-vlan-tagged
0
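Equivalently, via sysctl (the sysctl names simply mirror the /proc paths):

# sysctl -w net.bridge.bridge-nf-call-iptables=1
# sysctl -w net.bridge.bridge-nf-call-ip6tables=1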
Also, IPv6 forwarding is not enabled (enabling it didn't seem to make a
difference either; for giggles, I tried it both ways).
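For reference, the forwarding knob in question (currently 0; I also tried 1):

# cat /proc/sys/net/ipv6/conf/all/forwarding
0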
The IPv4 configuration for this machine and the VM guests works great! It
does exactly what I want, perfectly as I expected; I even have some
restrictions on IPv4 protocols in some of my IPv4 policies, and everything
still works beautifully. VM guests can reach layer 3 and layer 7 with no
problems.
But absolutely NO IPv6 packets will cross the same bridge, no matter what I
do. I have also used shorewall6 to configure my ip6tables policies, similarly
allowing everything (all all ACCEPT), and I end up with the same result: no
IPv6 crossing the bridge over to the VMs, and vice-versa. This really has me
stumped, because all of these machines should be in the same broadcast
domain, and these hosts should be able to communicate over the bridge.
The very instant I echo a "0" to /proc/sys/net/bridge/bridge-nf-call-ip6tables,
IPv6 communication with the guests resumes perfectly. This is the only way I
am able to get IPv6 working for the HVM guests on the bridge, but it isn't
good for me, because I would rather apply ip6tables policies to the Xen host
and to the VMs it hosts for better security.
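For reference, the workaround that restores guest IPv6 (at the cost of
ip6tables not seeing bridged traffic at all):

# echo 0 > /proc/sys/net/bridge/bridge-nf-call-ip6tables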
Like I mentioned above, I have another box sitting close by that is a little
older and has the same configuration (a slightly older iptables and a
slightly older kernel, around 2.6.34), and ip6tables works perfectly with
IPv6 there. All of the configuration on that machine is the same
(bridge-nf-call-ip6tables=1, same vif-bridge script, xen-3.4.1 instead of
3.4.2, ip6tables policy installed and working, VM guests subject to policy
through the bridge, RA and SLAAC working great for the clients, and so on).
The thing that has me a little nervous is that I have compiled in most of the
netfilter options, the xtables options, and the ebtables options. I am
wondering if somebody knows of a newer, more recent netfilter/iptables/xtables
kernel option that might be suspect here and hampering my ability to use
ip6tables with this configuration. Does anyone know of a module or option
from iptables/netfilter/xtables that, if compiled in, would break things like
this? I can post my kernel configuration if somebody wants to have a look at
it.
If not, is there anything else I should look for specifically when debugging?
If anybody has any insight or ideas, I would really appreciate the input and
feedback. Thank you for your attention and for thinking about this.
Once again, you have my gratitude for the iptables and netfilter projects.
Life wouldn't be the same without them. Many kind thanks to those working on
and supporting these projects.