[OpenIndiana-discuss] CIFS performance issues

Robin Axelsson gu99roax at student.chalmers.se
Tue Jan 24 19:41:52 UTC 2012


On 2012-01-24 19:14, James Carlson wrote:
> Robin Axelsson wrote:
>> On 2012-01-24 16:52, Gary Mills wrote:
>>> On Tue, Jan 24, 2012 at 04:39:42PM +0100, Robin Axelsson wrote:
>>>>>> ifconfig -a returns:
>>>>>> ...
>>>>>> e1000g1: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4>    mtu
>>>>>> 1500 index 2
>>>>>>           inet 10.40.137.185 netmask ffffff00 broadcast 10.40.137.255
>>>>>> e1000g2: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4>    mtu
>>>>>> 1500 index 3
>>>>>>           inet 10.40.137.196 netmask ffffff00 broadcast 10.40.137.255
>>>>>> rge0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4>    mtu
>>>>>> 1500
>>>>>> index
>>>>>> 4
>>> Do you really have two ethernet ports on the same network?  You can't
>>> do that without some sort of link aggregation on both ends of the
>>> connection.
>> I don't see why not. I've done this before and it used to work just
>> fine. These are two different controllers that work independently and I
>> do it so that the VM(s) could have its own NIC to work with as I believe
>> the virtual network bridge interferes with other network activity.
> It's never worked quite "right" (whatever "right" might mean here) on
> Solaris.
>
> If you have two interfaces inside the same zone that have the same IP
> prefix, then you have to have IPMP configured, or all bets are off.
> Maybe it'll work.  But probably not.  And was never been supported that
> way by Sun.
The idea behind using two NICs is to create a separation between the
virtual machine(s) and the host system, so that the network activity of
the virtual machine(s) won't interfere with the network activity of the
physical host machine.

The virtual hub that bridges the VM network ports to the physical port
taps into the network stack of the host machine, and I suspect that this
configuration is not entirely seamless. I think the virtual bridge
interferes with the host's network stack, so letting the virtual bridge
have its own network port to play around with has turned out to be a
good idea, at least when I was running OSOL b134 - OI148a.
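For what it's worth, a minimal sketch of that setup, assuming VirtualBox
is the hypervisor and that e1000g2 is the port set aside for the guest
(the VM name "myguest" is just a placeholder):

    # bind the guest's first virtual NIC to the dedicated physical port;
    # the host keeps e1000g1 for its own traffic
    VBoxManage modifyvm "myguest" --nic1 bridged --bridgeadapter1 e1000g2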

I suppose I could try to configure IPMP. I guess I will have to throw
away the DHCP configuration and go for fixed IP addresses all the way,
as DHCP only gives me two IP addresses and I will need four of them. But
then we still have the problem of the VMs and how to separate them from
the network stack of the host.

I will follow these instructions if I choose to configure IPMP:
http://www.sunsolarisadmin.com/networking/configure-ipmp-load-balancing-resilience-in-sun-solaris/
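As a rough sketch, with link-based IPMP (no probe/test addresses) and
static addresses I expect it to boil down to something like this, reusing
my current DHCP leases as placeholders:

    # /etc/hostname.e1000g1
    10.40.137.185 netmask 255.255.255.0 broadcast + group ipmp0 up

    # /etc/hostname.e1000g2
    10.40.137.196 netmask 255.255.255.0 broadcast + group ipmp0 up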

>
>> If we assume that both ports give rise to problems because they run
>> without teaming/link aggregation (which I think not) then there wouldn't
>> be any issues if I only used one network port. I have tried with only
>> one port and the issues are considerably worse in that configuration.
> That's an interesting observation.  When running with one port, do you
> unplumb the other?  Or is "one port" just an application configuration
> issue?
>
> If you run "/sbin/route monitor" when the system is working fine and
> leave it running until a problem happens, do you see any output produced?
>
> If so, then this could fairly readily point the way to the problem.
>
By "one port" I mean that only one port is physically connected to the
switch; all the other ports are disconnected. So I guess ifconfig
<port_id> unplumb would have no effect on such ports.
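If it makes a difference, I could also unplumb the idle ports explicitly
instead of just leaving the cables out, something like (e1000g2 here is
only an example of an unused port):

    # take the idle port down and remove it from the IP stack entirely
    ifconfig e1000g2 down
    ifconfig e1000g2 unplumb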

I managed to reproduce a few short freezes while "/sbin/route monitor"
was running over ssh, but it didn't spit out any messages; perhaps I
should run it on a local terminal instead. I also looked at the time
stamps of the entries in /var/adm/messages, and they do not match the
freeze-ups, not even to the minute.
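Next time I'll try something along these lines on the local console, so
the output survives even if the ssh session stalls (the log path is just
an example):

    # prepend a timestamp to each routing-socket message so it can be
    # compared with /var/adm/messages afterwards
    /sbin/route monitor 2>&1 | while read line; do
        echo "`date '+%b %e %H:%M:%S'` $line"
    done > /var/tmp/route-monitor.log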




