[OpenIndiana-discuss] Shell to use?

Bob Friesenhahn bfriesen at simple.dallas.tx.us
Wed Jan 20 16:37:25 UTC 2021


On Wed, 20 Jan 2021, Hung Nguyen Gia via openindiana-discuss wrote:

> Regardless of whether this is good behavior or not, it does give Linux a huge advantage over us.
> The difference is significant.
> If we want to continue to keep our Solaris heritage and continue to ridicule Linux, then OK, it's fine.

I did not see anyone here "ridiculing" Linux.  Different decisions 
were made based on the target market.  Solaris made decisions with a 
priority on robustness and Linux made decisions with a priority to run 
on cheap hardware.

I use Linux on tiny hardware where there is tremendous memory 
over-commit (as much as 120x) and it is a wonder that apps run at all 
(sometimes they run exceedingly slowly).  It is nice that this is 
possible to do.

It is possible to disable over-commit in Linux, but then even many 
desktop systems would fail to initialize at all.
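For reference, the Linux knob involved is the vm.overcommit_memory 
sysctl.  A sketch of a persistent setting (the values shown are 
illustrative, not a recommendation):

```
# /etc/sysctl.d/99-overcommit.conf -- Linux overcommit policy (illustrative)
vm.overcommit_memory = 2    # 2 = "never" over-commit; allocations fail instead
vm.overcommit_ratio = 80    # commit limit = swap + 80% of physical RAM
```

With mode 2, malloc()/mmap() return failure once the commit limit is 
reached, instead of succeeding and risking the OOM killer later.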

Memory allocation via mmap() is useful, but there is a decision point 
as to whether to allocate backing storage in swap space or not. 
By default allocated pages are zero-filled and no physical memory is 
used until something writes data to a page (at which point there is a 
"page fault" and the kernel allocates a real memory page and 
initializes it with zeroed bytes).  Likewise, memory "duplicated" by 
fork() under its copy-on-write (COW) principle is not actually copied 
until it has been modified.  So Linux (by default) is very optimistic: 
it assumes that the app will not actually use the memory it requested, 
or might never modify memory inherited by the forked process.

If one is running a large database server or other critical large 
apps, then relying on over-commit is not helpful, since once the 
system runs even slightly short of resources, either an app or the 
whole system needs to die.

IBM's AIX was the earliest system I recall where over-commit was 
common and processes were scored based on memory usage.  When the 
system ran short of memory it would usually kill the largest process.

Linux has followed this same strategy and computes an OOM score for 
each process.  When the system runs out of already-allocated memory, 
then a process has to die, the system needs to panic and reboot, or 
new activity must be disallowed.

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
Public Key,     http://www.simplesystems.org/users/bfriesen/public-key.txt


