[OpenIndiana-discuss] CIFS performance issues

Robin Axelsson gu99roax at student.chalmers.se
Wed Jan 25 20:22:39 UTC 2012


On 2012-01-25 19:03, James Carlson wrote:
> Robin Axelsson wrote:
>> On 2012-01-24 21:59, James Carlson wrote:
>>> Well, unless you get into playing tricks with IP Filter.  And if you do
>>> that, then you're in a much deeper world of hurt, at least in terms of
>>> performance.
>> Here's what the VirtualBox manual says about bridged networking:
>>
>> "*Bridged networking*:
>>
>> This is for more advanced networking needs such as network simulations
>> and running servers in a guest. When enabled, VirtualBox connects to one
>> of your installed network cards and exchanges network packets directly,
>> circumventing your host operating system's network stack.
> Note what it says above.  It says nothing about plumbing that interface
> for IP on the host operating system.
>
> I'm suggesting that you should _not_ do that, because you (apparently)
> want to have separate interfaces for both host and the VirtualBox guests.
>
> If that's not what you want, then I think you should clarify.
>
> Perhaps the right answer is to put the host and guests on different
> subnets, so that you have two interfaces with different subnets
> configured on the same physical network.  That can have some risks with
> respect to multicast, but at least it works far better than duplicating
> a subnet.
>
>>> I suspect that the right answer is to plumb only *ONE* of them in the
>>> zone, and then use the other by name inside the VM when creating the
>>> virtual hub.  That second interface should not be plumbed or configured
>>> to use IP inside the regular OpenIndiana environment.  That way, you'll
>>> have two independent paths to the network.
>> Perhaps the way to do it is to create a dedicated jail/zone for
>> VirtualBox to run in and "plumb the e1000g2" to that zone. I'm a little
>> curious as to how this would affect performance. I'm not sure if you
>> have to split up the CPU cores etc. between zones or if that is taken
>> care of automatically, as the zones pretty much share the same kernel
>> (and its task scheduler).
> I'm confused.  If VirtualBox is just going to talk to the physical
> interface itself, why is plumbing IP necessary at all?  It shouldn't be
> needed.

Maybe I'm the one being confused here. I just believed that the IP must 
be visible to the host for VirtualBox to be able to find the interface 
in the first place, but maybe that is not the case. When choosing an 
adapter for bridged networking on my system, the drop-down menu gives me 
the options e1000g1, e1000g2 and rge0. So I'm not sure how, or what part 
of the system, gives the physical interfaces those names. I mean, if the 
host can't see those interfaces, how will VirtualBox be able to see them? 
At least that was my reasoning behind it.
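
If I understand it right, the datalinks exist independently of IP 
plumbing, so VirtualBox can presumably enumerate the NICs at the datalink 
level even when nothing is plumbed on them. I guess something like this 
would list them either way (I haven't checked the exact output columns on 
my box):

	dladm show-phys    # physical datalinks (e1000g1, e1000g2, rge0)
	dladm show-link    # all datalinks, plumbed or not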

>>> It's possible to have DHCP generate multiple addresses per interface.
>>> And it's possible to use IPMP with just one IP address per interface (in
>>> fact, you can use it with as little as one IP address per *group*).  And
>>> it's possible to configure an IPMP group with some static addresses and
>>> some DHCP.
>> In order to make DHCP generate more IP addresses I guess I have to
>> generate a few (virtual) MAC addresses. Maybe ifconfig handles this
>> internally.
> You don't have to work that hard.  You can configure individual IPv4
> interfaces to use DHCP, and the system will automatically generate a
> random DHCPv4 "ClientID" value for those interfaces.
>
> For example, you can do this:
>
> 	ifconfig e1000g0:1 plumb
> 	ifconfig e1000g0:2 plumb
> 	ifconfig e1000g0:1 dhcp
> 	ifconfig e1000g0:2 dhcp
>
> Using the old-style configuration interfaces, you can do "touch
> /etc/dhcp.e1000g0:1" to set the system to plumb up and run DHCP on
> e1000g0:1.
>
> There's probably a way to do this with ipadm, but I'm too lazy to read
> the man page for it.  I suggest it, though, as a worthwhile thing to do
> on a lazy Sunday afternoon.

I'll look into it if "all else fails". I see that the manual entry for 
ipadm is missing in OI. I will also see if there is more up-to-date 
documentation on IPMP. I assume that when a "ClientID" value is 
generated, a MAC address also comes with it, at least when it negotiates 
with the DHCP server.
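
If I do end up trying ipadm, I'm guessing the equivalent of the ifconfig 
example above would be roughly along these lines (untested, and the 
address-object names after the slash are just ones I made up):

	ipadm create-if e1000g0
	ipadm create-addr -T dhcp e1000g0/v4a
	ipadm create-addr -T dhcp e1000g0/v4b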

>
>>> But those are just two small ways in which multiple interfaces
>>> configured in this manner are a Bad Thing.  A more fundamental issue is
>>> that it was just never designed to be used that way, and if you do so,
>>> you're a test pilot.
>> This was very interesting and insightful. I've always wondered how
>> Windows tells the difference between two network connections in a
>> machine; now I see that it doesn't. Sometimes this can get corrupted in
>> Windows and sever the internet connection completely. If I understand
>> correctly, the TCP stack in Windows is borrowed from Sun. I guess this
>> is a little OT, it's just a reflection.
> No, I don't think they're related in any significant way.  The TCP/IP
> stack that Sun acquired long, long ago came from Mentat, and has been
> greatly modified since then.  I suspect that Windows derives from the
> BSD code, but I don't have access to the Windows internals to make sure.
>
> In any event, they all come from the basic constraints of the protocol
> design itself, particularly RFC 791, and the "weak ES" model.

I'm sure a good book on computer networking would give proper insight 
into that...

>>>> I will follow these instructions if I choose to configure IPMP:
>>>> http://www.sunsolarisadmin.com/networking/configure-ipmp-load-balancing-resilience-in-sun-solaris/
>>>>
>>> Wow, that's old.  You might want to dig up something a little more
>>> modern.  Before OpenIndiana branched off of OpenSolaris (or before
>>> Oracle slammed the door shut), a lot of work went into IPMP to make it
>>> much more flexible.
>> I'll see if there is something more up-to-date. There are no man entries
>> for 'ipmp' in OI and 'apropos' doesn't work for me.
> Try "man ifconfig", "man ipmpstat", "man if_mpadm".  Those should be
> reasonable starting points.

Thanks, these man pages exist. I saw in the ifconfig man page that there 
is some info about IPMP, although it is brief.
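
From skimming those pages, the rough shape of an IPMP setup seems to be 
something like the following (I haven't actually tried it, so the exact 
syntax may well be off):

	ifconfig ipmp0 ipmp              # create the IPMP group interface
	ifconfig e1000g1 group ipmp0     # add the underlying NICs to the group
	ifconfig e1000g2 group ipmp0
	ipmpstat -g                      # show group status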

>
>>> In terms of getting the kernel's IRE entries correct, it doesn't matter
>>> so much where the physical wires go.  It matters a whole lot what you do
>>> with "ifconfig."
>> Ok, but when it is not connected it has no IP address (as it is
>> configured over DHCP) that can interfere with the multicast and the IP
>> setup. Maybe this is a problem when the address is static.
> Or if you allow it to get the address first and then yank the cable.  I
> had no idea what your actual test case looked like, so I had to guess.
>
>>>> I looked at the time stamps
>>>> of the entries in the /var/adm/messages and they do not match the
>>>> freeze-ups by the minute.
>>> I assume that refers to the NWAM messages previously reported.  No, I
>>> don't think those are the proximate cause of your problem.
>>>
>> I've been playing around with 'ifconfig <interface> unplumb' (as a
>> superuser of course) but it doesn't appear to do anything on e1000g2. As
>> a memory refresher, here's the setup:
>>
>> e1000g1: 10.40.137.185, DHCP (the computer name is associated with this
>> address in the /etc/hosts file)
>> e1000g2: 10.40.137.171, DHCP (bridged network of the VM is attached to
>> this port)
>> rge0:<no IP address>, DHCP (no cable attached)
>>
>> No error message comes after that command and when issuing 'ifconfig -a'
>> everything looks the same,
> It's unclear what that means.  My first guess would be that the command
> was somehow misused, but I don't understand how.  Perhaps you've
> unplumbed IPv4 and were looking at IPv6 in the "ifconfig -a" output.
>
> Or perhaps you're running NWAM, and that daemon is undoing your work
> behind your back.  You probably don't want to use NWAM with a reasonably
> complex system configuration like this.

I think it is a bit strange that the changes only apply to the IPv4 
settings, but maybe it doesn't matter as the router only uses IPv4 (I 
think). Hmm, I'm starting to wonder how netmasks and subnets work in 
IPv6, as none appears to be specified in 'ifconfig -a'... I'm also 
starting to realize that you don't need NWAM for DHCP.
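
If I decide to drop NWAM, I believe the classic way is to switch the 
physical network service over to the static profile and use the 
per-interface DHCP files you mentioned, roughly (not yet tried on this 
box):

	svcadm disable svc:/network/physical:nwam
	svcadm enable svc:/network/physical:default
	touch /etc/dhcp.e1000g1    # plumb e1000g1 and run DHCP on it at boot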

>> and '/sbin/route monitor' doesn't yield
>> any messages.
> That makes no sense.  You should at least get a flurry of messages when
> you unplumb a working interface.
>
> Something's deeply amiss with your system, but I don't know what it
> might be.
>
>> If I do the same on rge0 it will disappear in the IPv4
>> section of the 'ifconfig -a' output but remain in the IPv6 section. I
>> also see messages coming out of the route monitor; RTM_IFINFO,
>> RTM_DELETE and RTM_DELADDR... as a result of the unplumb command on rge0
>> which I think should be expected.
> Yes; exactly.
>
>> I can see that e1000g2 is operating in promiscuous state (whatever that
>> means), which the other ethernet connections are not.
> That's driven by applications.  If you have an application (such as,
> probably, VirtualBox) that sets the interface into "promiscuous mode"
> (receive all unicast messages), then that's what you'll see.
>
> Configuring IEEE 802 bridging will set the flag if that's used.  (I
> don't think you're using that here, though.  It's configured using
> "dladm create-bridge".)
>
> Snoop/ethereal/wireshark will also put the interface into promiscuous
> mode when capturing traffic.
>
> IP itself will not.

"dladm" you say. I trust that VirtualBox does what it needs in that 
regard and that I have to worry about it, for now.
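
Just to rule it out, I suppose I could confirm that no 802 bridge is 
configured with something like the following (assuming I'm reading the 
dladm man page right):

	dladm show-bridge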

>> I tried with 'ifconfig e1000g2 down' and the "UP" flag disappeared from
>> the port in 'ifconfig -a'.
> This is probably confusing here, but those commands probably don't do
> what you may think they do.
>
> The "down" and "up" commands control the IP-layer IFF_UP flag.  This is
> an indication to IP itself and to higher-level applications that the IP
> address itself is administratively available for use.  It says nothing
> about whether the interface is usable or connected, but rather that the
> administrator wants applications to be able to use the address.
>
> That's quite independent from plumbing -- which is the connection
> between IP (or any network layer) and the datalink layer (Ethernet
> driver).  And, for that matter, it's also quite independent from the
> IFF_RUNNING flag, which is set when the datalink layer tells the network
> layer about status changes (such as a physical cable
> connection/disconnection).
>
> ifconfig(1M), unfortunately, comes from BSD which blurred the lines
> between datalink and network layers.  They're really quite distinct in
> OpenIndiana (and Solaris before it), but the confused user interface in
> ifconfig sadly leads users astray.

I see... I read on a forum that when unplumb fails it sometimes helps to 
issue the 'ifconfig <IFP> down' command before unplumbing the 
connection. But that was a BSD (don't know which flavor) forum.
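
On my setup that sequence would amount to:

	ifconfig e1000g2 down
	ifconfig e1000g2 unplumb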

>> Once again this only applies to IPv4, IPv6
>> remains unaffected. The route monitor yielded messages this time. The IP
>> address 10.40.137.171 is still there however and the bridged network
>> connection of the VM seems to be unaffected by this command (which would
>> be desired if that command really severed the IFP from the host).
> It may well be going through the other interface in this case.  I don't
> think I have a good grasp on what you're actually trying to accomplish
> here.  It seems overly complex ...
>
>> If I try to unplumb it again (ifconfig e1000g2 unplumb) I get the error:
>> "ifconfig: cannot unplumb e1000g2: Invalid argument provided"
>> which is a bit strange.
> Indeed.
>
>> The route monitor yields the message (in
>> digested form):
>> RTM_LOSING (Kernel Suspects partitioning)<DST,GATEWAY,NETMASK,IFA>
>> default 10.40.137.1 default default
>> right after this command.
> Yikes!  That's a good symptom that the IREs are broken due to the
> misconfiguration.  Nothing good can come of that message.
>
>> As a comparison; if I do the same on the rge0 which is already unplumbed
>> I get:
>> "ifconfig: cannot unplumb rge0: Interface does not exist"
>> and no messages on the route monitor.
>>
>> Maybe I'm using a bad monkey wrench...
> That seems likely.  I think the whole thing was set in doubt when two
> separate interfaces were configured up on the same subnet.  Everything
> past that point sounds like the experience of a test pilot.  Some of the
> control surfaces departed the craft in flight, but, hey, if that weren't
> one of the possibilities, then we wouldn't need these brave souls.
>

I tried for a change to look into the graphical Network Administration 
tool in Gnome (/usr/lib/nwam-manager-properties) and the e1000g2 
connection was disabled there even though 'ifconfig -a' showed that it 
wasn't. I also verified with ping that this connection was not responding 
(ping 10.40.137.171), but that could be because I have done 'ifconfig ... 
down' on it.

When I shut down the virtual machine, the e1000g2 connection disappeared 
from 'ifconfig -a'. So it seems that the VM somehow blocked/prevented 
ifconfig from applying the unplumbing, and only once the VM was shut down 
did the unplumbing finally kick in.



I have now managed to disable the e1000g2 and the rge0 interfaces 
permanently with the nwam-manager-properties program (by 'Edit'ing the 
Network Profile). Both connections have now disappeared from 'ifconfig 
-a' (the IPv6 part is gone as well), even after a reboot, so I consider 
them permanently unplumbed. It looks like this when issuing 
'ifconfig -a':

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 
8232 index 1
         inet 127.0.0.1 netmask ff000000
e1000g1: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 
1500 index 2
         inet 10.40.137.185 netmask ffffff00 broadcast 10.40.137.255
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 
8252 index 1
         inet6 ::1/128
e1000g1: flags=20002004841<UP,RUNNING,MULTICAST,DHCP,IPv6> mtu 1500 index 2
         inet6 fe80::6a05:caff:fe01:da8e/10

The VirtualBox Qt GUI still recognizes all physical interfaces (e1000g1, 
e1000g2 and rge0) even though they are disabled on the host, which is good.
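
For what it's worth, I believe the list of interfaces VirtualBox sees can 
also be checked from the command line with something like this (going by 
the VBoxManage documentation; I haven't verified it here):

	VBoxManage list bridgedifs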

So I guess that once again the bets are on and all is good then...
Robin.



