[Bug 751] IPv6 bridging bug
bugzilla-daemon at bugzilla.netfilter.org
Fri Sep 30 06:34:21 CEST 2011
http://bugzilla.netfilter.org/show_bug.cgi?id=751
David Davidson <david at commroom.net> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |david at commroom.net
--- Comment #2 from David Davidson <david at commroom.net> 2011-09-30 06:34:21 ---
Update:
I did discover something I hadn't noticed before: the DOM0 CAN reach the DOMU
guests, and the guests can reach the DOM0, over IPv6 when the ip6tables
firewall is enabled (i.e. /proc/sys/net/bridge/bridge-nf-call-ip6tables = 1).
All of this internal communication still works through the bridge with that
setting. What remains very strange to me is that real hosts external to the
DOM0's bridge, on the same broadcast segment, cannot reach the VM guests
through the bridge. To recap:
if /proc/sys/net/bridge/bridge-nf-call-ip6tables = 1, then:
[xen-guest] can reach [xen-dom0] through br0 via tapx.x/vifx.x using IPv6.
[xen-dom0] can reach [xen-guest] through br0 using IPv6.
[realhosts-same-broadcast-segment] can reach [xen-dom0] through br0 via eth0/eth1 using IPv6.
[xen-dom0] can reach [realhosts-same-broadcast-segment] through br0 via eth0/eth1 using IPv6.
[realhosts-same-broadcast-segment] CANNOT reach [xen-guest] (through DOM0's br0 interface).
[xen-guest] CANNOT reach [realhosts-same-broadcast-segment] (through DOM0's br0 interface).
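For clarity, the only knob I am toggling between the broken and working cases is that bridge netfilter proc entry; in the broken case it reads:

# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1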
With /proc/sys/net/bridge/bridge-nf-call-iptables = 1 (the IPv4 equivalent),
all of the above is still true except for the last two lines: the realhosts
CAN reach the VMs and the VMs CAN reach the realhosts, so over IPv4 everything
works. The rulesets are very similar, if not identical, for the IPv6 and IPv4
tables. Here again, even if I set all of the chain policies to ACCEPT, the VMs
are still not reachable over IPv6.
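By "all of the chain policies to ACCEPT" I mean essentially this, so nothing should be dropped by a default policy:

# ip6tables -P INPUT ACCEPT
# ip6tables -P FORWARD ACCEPT
# ip6tables -P OUTPUT ACCEPT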
Because communication between the DOM0 and the DOMU works in both directions,
I tend to believe that the ip6tables code itself is working, but perhaps the
traffic is traversing the wrong chain, or the code decides that this
communication should be "routed" or "brouted" even though it is supposed to
stay within the same network segment.
The other odd thing is that the /etc/xen/scripts/vif-bridge script installs 2
iptables rules in the FORWARD chain for IPv4 each time a VM is started. These
2 rules end up being:
# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif2.0
ACCEPT     all  --  anywhere             anywhere             PHYSDEV match --physdev-in vif2.0

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
This script does not produce any corresponding rules using ip6tables. If I add
the same types of rules manually to the ip6tables chains, it still doesn't fix
the communication issues I described above.
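The rules I add by hand look roughly like this (a sketch; vif2.0 is just the backend interface of the guest that happens to be running, and the exact ordering of the matches may not be identical to what vif-bridge generates for IPv4):

# ip6tables -A FORWARD -m state --state RELATED,ESTABLISHED -m physdev --physdev-out vif2.0 -j ACCEPT
# ip6tables -A FORWARD -m physdev --physdev-in vif2.0 -j ACCEPT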
When /etc/xen/scripts/vif-bridge installs those rules for IPv4, the following
message appears in syslog:

xt_physdev: using --physdev-out in the OUTPUT, FORWARD and POSTROUTING chains
for non-bridged traffic is not supported anymore.
I did find another report on the mailing list documenting that this feature
was removed in kernel 2.6.20 because of a layering violation
(http://lists.netfilter.org/pipermail/netfilter/2007-September/069659.html).
Does this mean the script is defective and should add --physdev-is-bridged to
these 2 rules? I haven't adjusted them, because IPv4 works great, so I have
left the script alone. But it makes me wonder whether this is why the
ip6tables communication between the DOMU and the real hosts isn't working.
For the record, this traffic should be "bridged":
# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000854b12d04       no              eth0
                                                        eth1
                                                        tap2.0
                                                        vif2.0
Again, if I add the same rules to the FORWARD chain with ip6tables, the result
is still the same as above: DOMU guests still cannot reach realhosts, and
realhosts still cannot reach DOMU guests using IPv6. It doesn't matter whether
I add "--physdev-is-bridged" to the 2 rules or not; the result is the same (I
have tried it both ways).
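For completeness, the --physdev-is-bridged variant I tried is just the same outbound rule with the extra flag appended, roughly:

# ip6tables -A FORWARD -m state --state RELATED,ESTABLISHED -m physdev --physdev-out vif2.0 --physdev-is-bridged -j ACCEPT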
The only way to get IPv6 reachability between the realhosts and the VMs is to
set /proc/sys/net/bridge/bridge-nf-call-ip6tables to 0. Then everything works
exactly the way IPv4 does, but obviously I lose the ability to apply stateful
rules/restrictions to the IPv6 hosts. As soon as I set this to 0, neighbor
discovery kicks off on the VM and communication immediately succeeds.
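In other words, the moment I run:

# echo 0 > /proc/sys/net/bridge/bridge-nf-call-ip6tables

the bridged IPv6 traffic between the realhosts and the guests starts flowing.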
I hope that describing these items in detail helps to better identify the
symptoms and topology; maybe somebody can explain what I have overlooked, or
whether I am dealing with a strange bug. Again, I have tested this on another
box here running openSUSE, and it works great; that box is just a little bit
older. The configuration is almost identical, though. Of course the Gentoo box
is a little different because of the "Gentoo" way of doing things, but for the
most part the configuration and topology are just about the same.
Or perhaps some kernel option is conflicting with this: I compiled most
everything into this kernel, with a lot of code built in rather than built as
modules, and I still fear that something in there might be breaking this. I
attached my kernel configuration to the last post in case anyone knows of an
option that would cause this odd behavior.
Many thanks again for taking the time to consider this and think it over.