Yet more thoughts about TMSR OS: OS-Making Exam-Taking

Tuesday, November 26th, 2019

It seems that the implications of the theory that I inadvertently came up with force me to write an article. I could have just declared that, other than the hardware as a foundation, the OS must have real business/organization/world-changing usage (even if it is only used to attract people to TMSR), not just some engineers working on self-perpetuation of the OS; and that the prospect of such usage is a necessary part of the motivation to start developing an OS; and that I was not suggesting to start exam-taking OS development by implementing only the functionality for one use case, ignoring all the principles. However, now the theory no longer sounds that deep, and anyway it's more interesting to answer the comments as they are:

Re first question: I could see a BSD-style partitioning of the system: all programs separated into core1 and applications; application ebuilds2, with all their dependencies, are moved into a separate repository. OS people take responsibility for the core, perhaps making Linux ABI level of guarantees3, while others are free to do with the rest of the software whatever they want. The problem here is that saying that all deployment happens through static binaries built elsewhere is equivalent to saying 'fuck you' to the user. OTOH, if the user is supposed to build his software on the core OS using OS-provided tools, would, say, ftjam be part of the core or not? I would not expect it to be necessary for bootstrapping... In general, the line between core and applications is rather blurry, and the fact that Linux package managers handle both core and non-core software the same way muddies the water, so to speak. Or to put it differently, the definition is ambiguous: it does not specify whether this general-purpose collection of software is minimal or not; and development tools are the first software set that may or may not get included depending on the answer; OTOH, minimal environments can be too harsh for efficient usage.
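To make the core/applications split concrete in Gentoo/Portage terms (the post already speaks of ebuilds), here is a minimal sketch; the repository names and paths are hypothetical, not an actual TMSR OS layout:

```ini
# /etc/portage/repos.conf/tmsr.conf -- hypothetical repository layout
[DEFAULT]
main-repo = tmsr-core

[tmsr-core]
# the core tree: toolchain, libc, kernel sources, init;
# maintained by OS people, with whatever ABI guarantees they commit to
location = /var/db/repos/tmsr-core
auto-sync = no

[tmsr-apps]
# application ebuilds and their dependencies, maintained separately;
# can be added, replaced, or dropped without touching the core
location = /var/db/repos/tmsr-apps
auto-sync = no
```

With such a layout the blurry-line question becomes concrete and unavoidable: does, say, the ftjam ebuild live in tmsr-core or in tmsr-apps?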

Why is input from users required? There is a practically infinite amount of work available for the core, which forces the following question: what things would have to be implemented before people would start switching from whatever they currently use to TMSR OS? IMO it has the following implications:

  • How would the work be prioritized? I do not suggest that only the tasks that are necessary for users get worked on, but given that the workforce is limited4, setting the scope would be useful for getting users.
  • Suppose one day someone asks 'Can I run my stuff on your OS?' - what would there be to say, 'Uhmm, I suppose?' And if it doesn't work immediately, what would be the answer to 'WTF have you been doing all this time?' Having real users and knowing what users would want helps a lot by providing a reference point 'OS - works or not?', but again, yes, this creates the opportunity for OS-making exam-taking. Yet real usage will be a test that the OS should pass.
  • Working on the OS while not having anyone running something on it sounds wrong to me5, and the more intensive/longer/etc. the work without users, the wronger it sounds6.

As far as causes and purposes are concerned, let's try to sort out what the possible causes for development of the core could be (stated in purpose-like form here):

  • Have the OS under TMSR control, not under some out-of-WoT entity. This one may be a hard one - when can developers honestly declare to themselves that the OS is now under control?
  • Stop importing broken software versions at any stage of system deployment.
  • Apply sane security, operation, and development measures that are already in use in TMSR.
  • Standardize the runtime environment for servers, instead of having cuntoo/proto-cuntoo/centos mix.

The hardware is already forcing a hard split between the server OS and the graphics-capable OS. Further splits can be enforced by different architectures (ARM64, etc.): as mentioned by spyked, this will enforce a lower bound on the versions of the compiler and kernel. This split won't be as big as the one caused by graphics. Also, I do not see hardware as a significant cause: if there existed a sane and usable graphics stack that didn't come in the same strand with a whole spittoon of ???, there would be no problem with running it on the same OS. However, it neither exists nor is currently within the reach of engineering. IMO it is just a constraint that sometimes (but not always) forces the creation of a second artifact, intended to contain the complexity and brokenness of e.g. the Linux graphics stack.

  1. Just enough to be infectious? I would speculate that bootstrapping/package management/development tools are a significant source of bloat; trinque would be in a much better position to judge this after finishing his Buildroot experiment. []
  2. Or their equivalents. []
  3. There was an attempt to standardize the Linux ABI (LSB/Carrier Grade Linux), which IIRC took ABI as present in some minimal-to-default installation of RHEL. From the practical point of view, in modern Linux it is mostly irrelevant. []
  4. Though most likely adding more people would not help at all; it's increasing the personhood-percentage of involved parties that would help. []
  5. After there is something that can be called an OS in the first place, of course. []
  6. A few years ago, while I was still studying, people from a local embedded OS development company gave a guest lecture, mentioning that they did ~PHP development for a few years to feed themselves while developing their own OS, until they got the necessary density of contacts to actually live off it. Getting those contacts often meant 'Will work after hours to put our OS into your HW for free.' []