[OpenIndiana-discuss] what do pkg install phases mean?

Richard L. Hamilton rlhamil at smart.net
Sun Mar 5 17:42:10 UTC 2023



> On Mar 5, 2023, at 12:12, Peter Tribble <peter.tribble at gmail.com> wrote:
> 
> On Sun, Mar 5, 2023 at 4:19 PM Till Wegmüller <toasterson at gmail.com> wrote:
> 
>> Hi
>> 
>> IPS works on images of the OS. And it does so in an Atomic way. Speed is
>> not the main goal. Stability is.
>> 
> 
> As a historical interlude, speed was very much a key focus of creating IPS
> to replace
> SVR4 packaging. One of the key criticisms of the old packaging system was
> that it
> was considered inordinately slow. One manifestation of this was that the
> development workflow for Solaris involved installing updated bits; SVR4 was
> too slow for impatient engineers, hence bfu as a hack. That rendered the
> system unsupportable, but it is what led to onu.
> 
> The emphasis on performance can be seen in several areas of the design:
> downloading and updating just the files you need rather than whole packages;
> eliminating the overhead of maintaining the shared contents file.
> 
> It's unfortunate that the original aim of improving performance got lost
> along the way.
> 
> -- 
> -Peter Tribble
> http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/


Yes...I wasn't badmouthing IPS, just noting that it's slow compared to, e.g., well-supported and well-maintained Linux distros (Kali Linux has impressed me recently).

But it is a HUGE improvement on SVR4 packages (and patches!), which were not only extremely slow but also required some degree of manual sorting out. I recall that someone (Casper?) had created a set of tools for deploying patches on a large scale; I used that to make Y2K updates doable at scale, back in the day of lots of Sun workstations (although servers tended to be done by hand when downtime was permissible).

The biggest improvement is that, in a sense, speed doesn't matter when a new boot environment is being created, provided the system isn't hurting for performance. One can run the pkg update at a convenient hour and do the reboot when it can be scheduled to be non-disruptive, with the assurance of a trivial and quick recovery plan if there's a problem (boot again into the previous BE). That satisfies everyone while minimizing lost sleep and/or overtime. :-)
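For anyone who hasn't done it, that workflow is just a few stock IPS/beadm commands (command names per OpenIndiana's pkg(1) and beadm(8); the BE name in the recovery step is illustrative):

```shell
# Update at a convenient hour; pkg clones the current boot
# environment and applies the update to the clone, not the live BE.
pfexec pkg update

# List boot environments; the "R" flag marks the one that will be
# active on the next reboot.
beadm list

# Reboot whenever it's non-disruptive; the new BE comes up.
pfexec reboot

# Recovery plan if the new BE misbehaves: reactivate the previous BE
# (or just pick it from the boot loader menu) and reboot again.
pfexec beadm activate previous-be-name    # name is illustrative
pfexec reboot
```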

My problem with all the new-fangled stuff is simply that I go back to mainframes and punched cards and Unix v7 on a PDP-11...and still have emulators for those and others (like Apollo workstations, or Multics, or even CP/M), in case I get nostalgic. With all that very slowly fading out of my head, getting something new in can be an uphill battle, especially now that I'm retired and usually just do it if it's fun. :-)

For instance, although I never used them, I wish there were something in C or C++ like PL/I area variables: one could create a multi-megabyte (nowadays multi-gigabyte) variable WITHIN which other variables could be allocated and referenced via offsets rather than pointers, such that the entire area, its contents, its internal allocation information, and its dynamic data structures could be written out to disk and later read back in again (although the program reading it would of course have to have all the appropriate declarations). Think of that if one wanted to make predigesting a complex human-readable file into a cacheable internal form relatively easy...or, these days, saving state in something like a game or a long-running modeling program. And mainframes had checkpoint/restart support, which mostly one has to roll one's own in a limited (user-space) way on Unix/Linux systems.
