The Thirty-Million Line Problem

Andrew Bromage
Now that Casey has given his lecture on "The Thirty-Million Line Problem" again, I'd like to talk about some of the things that came up from an operating system perspective.

x86-64 vs x86

Having written bare-metal boot code both 20 years ago and a few years ago, these are the main differences between booting the two:

  • Going from 32-bit mode to 64-bit mode is about 10-20 lines of assembly. However, that's not the hard part. The hard part is that there is no such thing as 64-bit "real mode", so you also need to set up page tables. This isn't difficult, and it's reasonably inexpensive now that we have 1 GiB pages; see the sketch after this list. (But also see the discussion about APs below.) Difficulty level: Hey, not too rough.

  • More complicated than this is the move from single core to multicore. I'm going to talk about this in a bit more detail in a moment. Difficulty level: Hurt me plenty.

  • 20 years ago, we had 32-bit plug-and-play BIOS calls to handle most hardware discovery. Today, this has been replaced by ACPI. ACPI is a bunch of data structures written into the BIOS ROM or RAM, which the operating system reads. No operating system that I'm aware of does this using code rolled by itself. Instead, they use the client library ACPICA, which clocks in at over 125,000 lines of code. Integrating ACPICA is one of the reasons why Homebrew OS is on hiatus. Difficulty level: Nightmare.
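To give a sense of scale for that first item, here is a minimal sketch of the page-table setup in C. It assumes the CPU advertises 1 GiB page support (CPUID leaf 80000001h, EDX bit 26); the names are mine, and the final mode switch still happens in assembly.

#include <stdint.h>

#define PTE_PRESENT (1ULL << 0)
#define PTE_WRITE   (1ULL << 1)
#define PTE_HUGE    (1ULL << 7)   /* PS bit: entry maps a 1 GiB page */

/* One PML4 and one PDPT, both 4 KiB-aligned as the hardware requires. */
static uint64_t pml4[512] __attribute__((aligned(4096)));
static uint64_t pdpt[512] __attribute__((aligned(4096)));

void
setup_long_mode_paging(void)
{
    /* Identity-map the first 4 GiB using four 1 GiB entries. */
    for (int i = 0; i < 4; i++)
        pdpt[i] = ((uint64_t)i << 30) | PTE_PRESENT | PTE_WRITE | PTE_HUGE;

    pml4[0] = (uint64_t)(uintptr_t)pdpt | PTE_PRESENT | PTE_WRITE;

    /* What's left is the 10-20 lines of assembly: load CR3 with the
       address of pml4, set CR4.PAE, set EFER.LME (MSR 0xC0000080),
       enable CR0.PG, and far-jump into a 64-bit code segment. */
}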

Multicore is complicated, but then the underlying problem is complicated. There are basically two issues that you need to solve:

  1. Booting a multicore system is tricky, but not as tricky as it could have been. Intel took the decision that all the good hardware should be dedicated to running multicore systems efficiently, at the cost of making them difficult to boot. At the time your kernel starts running, you are in 64-bit mode on one core, known as the bootstrap processor, or BSP for short. The other cores are known as application processors (APs), and are still in 16-bit mode. So you need to set up enough of the system for the APs to be able to run, and then get them into 64-bit mode. This requires placing their boot code in 16-bit addressable space... yeah. It's tricky. (There's a sketch of this after the list.)
  2. Interrupt delivery is more complex. The central problem is that a piece of hardware needs to be serviced (e.g. a network packet comes in, a disk has finished reading a block, or something), but which CPU gets the job of servicing it? There is a programmable chip, the IOAPIC, whose job it is to decide that. On server-class hardware, there's more than one of them, because CPUs are clustered. Discovering the topology of the system and then doing something appropriate is a little tricky, but more or less straightforward for consumer-class hardware. (Likewise sketched below.)
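To make the first item concrete, here is the core of the INIT-SIPI-SIPI dance, as a sketch only: it assumes the xAPIC at its default 0xFEE00000 base, a 16-bit trampoline already copied below 1 MiB, and a delay_us() helper; the function names are mine.

#include <stdint.h>

#define LAPIC_BASE   0xFEE00000u
#define LAPIC_ICR_LO 0x300
#define LAPIC_ICR_HI 0x310

extern void delay_us(unsigned int us);   /* assumed to exist */

static volatile uint32_t *const lapic = (volatile uint32_t *)LAPIC_BASE;

static void
lapic_write(uint32_t reg, uint32_t value)
{
    lapic[reg / 4] = value;
}

void
start_ap(uint8_t apic_id, uint32_t trampoline_phys)
{
    /* The SIPI vector is the trampoline's physical page number, which
       is why the AP boot code has to live in 16-bit addressable space. */
    uint8_t vector = (uint8_t)(trampoline_phys >> 12);

    lapic_write(LAPIC_ICR_HI, (uint32_t)apic_id << 24);
    lapic_write(LAPIC_ICR_LO, 0x00004500);               /* INIT, assert */
    delay_us(10000);

    for (int i = 0; i < 2; i++) {                        /* two STARTUPs */
        lapic_write(LAPIC_ICR_HI, (uint32_t)apic_id << 24);
        lapic_write(LAPIC_ICR_LO, 0x00004600u | vector);
        delay_us(200);
    }
}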
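And for the second item, the consumer-class case is a single IOAPIC at the conventional 0xFEC00000 address, programmed through a select/data register pair. Again a sketch: the constants match the IOAPIC datasheet, the names are mine.

#include <stdint.h>

#define IOAPIC_BASE   0xFEC00000u
#define IOREGSEL      0x00          /* register select */
#define IOWIN         0x10          /* register data */
#define IOAPIC_REDTBL 0x10          /* first redirection table register */

static volatile uint32_t *const ioapic = (volatile uint32_t *)IOAPIC_BASE;

static void
ioapic_write(uint8_t reg, uint32_t value)
{
    ioapic[IOREGSEL / 4] = reg;
    ioapic[IOWIN / 4]    = value;
}

/* Deliver the given IRQ line as `vector` to the CPU with `apic_id`,
   using fixed delivery and physical destination mode, unmasked. */
void
ioapic_route_irq(uint8_t irq, uint8_t vector, uint8_t apic_id)
{
    uint8_t reg = IOAPIC_REDTBL + 2 * irq;   /* each entry is 64 bits */

    ioapic_write(reg + 1, (uint32_t)apic_id << 24);   /* high half: destination */
    ioapic_write(reg, vector);                        /* low half */
}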

There are also a bunch of things that a running operating system needs to get right once the system is booted (e.g. TLB coherency), but that's a separate topic.

Deployment

The principle of every program coming with its own operating system is sort of already happening in the data centre (and, to a lesser extent, in the cloud). If you have a data centre that you built last year, you probably have a bunch of boxes that all run hypervisors, on which you install one virtual machine per service.

Even if you don't do it that way, and install Linux on each machine, your services are probably implemented as containers. A container is essentially its own deployable operating system, sharing only the kernel with its host. Nesting operating systems on top of operating systems is arguably a step backwards as far as source code line count goes, but it illustrates that deploying an operating system to run a program is feasible, because it already happens on the server side.
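To see how thin that "sharing only the kernel" layer is, here is a container in its rawest form: a handful of namespace flags passed to clone(). This is a minimal Linux-specific sketch (it needs root), not what a real runtime does; a real runtime would also pivot into its own filesystem image, and that image is the operating system being shipped.

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int
child(void *arg)
{
    (void)arg;
    /* We have our own hostname, PID space and mount table now. */
    sethostname("container", 9);
    execl("/bin/sh", "sh", (char *)NULL);
    perror("execl");
    return 1;
}

int
main(void)
{
    pid_t pid = clone(child, child_stack + sizeof child_stack,
                      CLONE_NEWPID | CLONE_NEWNS | CLONE_NEWUTS | SIGCHLD,
                      NULL);
    if (pid < 0) { perror("clone"); return 1; }
    waitpid(pid, NULL, 0);
    return 0;
}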

Research

One thing that industry may not be aware of is that the early-to-mid 1990s, before we ended up with essentially three operating systems in the world, was a fertile time for operating systems research, too. That research into new basic ideas for operating systems essentially halted 15 years ago.

Many of these research operating systems were made possible due to OSKit, which was a toolkit for building your own kernels. It had its problems, but it made it easy to experiment and got out of the way if you wanted it to.

A lot of it was about anticipated advances in hardware: the move to multicore and network distribution, for example, and especially the need to run untrusted code downloaded over a network. The Flask design, for one, had the idea of nesting virtual machines for security without sacrificing straight-line code performance.

But here's the one I'd like to highlight: Exokernels. Here's the blurb:

Since its inception, the field of operating systems has been attempting to identify an appropriate structure: previous attempts include the familiar monolithic and micro-kernel operating systems as well as more exotic language-based and virtual machine operating systems. Exokernels dramatically depart from this previous work. An exokernel eliminates the notion that an operating system should provide abstractions on which applications are built. Instead, it concentrates solely on securely multiplexing the raw hardware: from basic hardware primitives, application-level libraries and servers can directly implement traditional operating system abstractions, specialized for appropriateness and speed.

This is from 1995! But it sounds familiar, no?

The frightening thing is that this "research", into new ways to think about operating systems, has now largely been taken up by hobbyists. All due respect to hobbyists, but we can't bet our industry on this.

The flip side

Finally, I want to briefly mention the flip side. There are essentially two benefits from using a shared operating system as an application platform instead of raw hardware: security and balance.

(Note that I am not claiming that you get either from a modern operating system! Merely that they are much more difficult in a "just multiplex the hardware" environment.)

Security is an obvious one, and Casey touched on this in the Q&A. The job of memory protection is to abstract RAM, so that an application can't read or scribble on memory that it's not supposed to. But the same argument applies to other parts of the computer: If an application has access to the raw disk, it has access to everything, so that clearly needs to be abstracted in some sense. If an application has access to the raw network card, you essentially have no firewall, so that needs to be abstracted in some sense. The more you think along these lines, the more you essentially end up with a modern operating system. So are we really talking about "raw hardware", or are we really talking about "getting the abstraction level right"?

The other benefit is that an operating system which acts as an application platform can balance the needs of competing applications. The Windows NT kernel, for example, goes to a lot of trouble to determine exactly how much RAM each application is really using, and adjusts the virtual memory settings for the system as a whole to suit. Similarly, the Unix timesharing algorithm dynamically decides whether a given program is an "interactive" or a "background" task; interactive jobs suspend often (waiting for input/disk/whatever), and should get priority when they wake, to make the system seem more responsive. Background tasks don't need to run as often, but should get a big timeslice when they do.
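The heuristic at the heart of that is small enough to sketch. This is a toy version with illustrative names and numbers, not any real kernel's code:

typedef struct {
    int priority;   /* higher runs sooner */
    int timeslice;  /* ticks before preemption */
} sched_params;

/* Called when a task wakes after blocking on I/O: it looks
   "interactive", so boost it and give it a short slice. */
void
on_wakeup(sched_params *p)
{
    if (p->priority < 10) p->priority++;
    p->timeslice = 2;
}

/* Called when a task burns through its whole slice: it looks like a
   "background" job, so lower its priority but lengthen its slice. */
void
on_timeslice_expiry(sched_params *p)
{
    if (p->priority > 0) p->priority--;
    if (p->timeslice < 16) p->timeslice *= 2;
}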

The flip side of that, of course, is something that my postgrad supervisor (who taught me most of what I know about modern operating systems) pointed out: the Unix timesharing system makes sure that everyone gets equal service, by which we mean equally shitty service.

One thing that needs to be mentioned is that operating system vendors have a vested interest in making applications on the same platform have a similar "look and feel", and commercial reality is that this matters. If you've ever worked in a business environment, you know that non-technically-savvy people need to be trained in how to use the software they rely on for their job, and following platform conventions saves time and money there. But this is a matter of conventions. The existence of Qt (which of course has many problems of its own) proves that you don't need to use the operating system's toolkits to maintain look and feel.

The disadvantage is the advantage

My final thought is an underlying principle: the key disadvantage of some technology is often also its key advantage. Take C++. Its main disadvantage, the thing that makes the higher-level stuff utterly painful to use, is that all the good syntax is already taken by the C subset. But that is also the main advantage of C++: it includes C.

One key difference between GPUs and x86 is that GPUs run well-written code extremely fast, but x86 runs moderately crappy code quite well under the circumstances. Modern operating systems also run moderately crappy code quite well under the circumstances. That's a disadvantage, but it's also an advantage. We don't want that crappy code at all! But the crappy code out there does reasonably well under the circumstances.

It's hard out there.

Comments

Floppies != 1000s of OS

The first major error was the assertion that "custom operating systems numbering in the 1000s came on floppy disks" (paraphrase).

That's just not true. By and large, if they were for IBM PC clones, they had copies of fragments of the MS-DOS operating system. It was only later that we saw things like DR DOS, and alternatives like OS/2.

In the boot sectors of removable disks (floppy or otherwise), you can put machine code. The same thing happens today, though abstractions in hard drive technologies have changed the nature of this code.

USB != evil

USB can actually be seen as a repackaging of serial RX/TX, 5V and GND, though there is a way to co-transfer audio as well, if you fully follow the spec, by doing some electronics trickery. Depending on the flavor, you have different options: USB-A, USB-B, USB-C, and newer standards are being released all the time.

The problem is when you have competing standards from the snooty Apple Corporation. Even that is nowhere near as bad as the VIDEO CABLE PEOPLE, who have reinvented the same thing over and over again just to make you frustrated and to fill the oceans with more plastic pollution and strip-mining for rare earth metals. The ever-rotating door of DVI-D, DVI-M, HDMI 1, 2, 3, NEXT, SMART connectors, VGA, XVGA, SVGA, COMPOSITE, COAX, 5-BNC... and then you're just getting silly, since they all function almost identically to VGA.

Diversity = Good for Capitalism, and Good for Innovation

The argument that "fewer options = faster computer" might be true from a technical standpoint, from a handful of angles, but the innovation index plummets when industries are overly regulated. Without such innovation, computers would not get faster, because there would be no funding for that innovation.

Standards = Yep, Still Got 'em

The industry does have a sort-of ISA-type standard moving things forward, which is limited to two companies, NVIDIA and ATI, and two major standards (with their own sub-standards): OpenGL (Khronos) and DirectX (Microsuck).

The industry actually has a lot more hardware and other technical standards these days than ever before. The specs are longer, the options greater, and the number of required standards for manufacturers larger, now that computers are also more complicated.

Lines of Code != Executing Instructions

The number of lines of code is not relevant to size or complexity in a 1:1 ratio. The perceived bloat is mainly a Windows thing. Code that is not executing doesn't take anything up, except maybe disk space. Fast indexing of DLLs is NOT A PROBLEM, as many large projects have upwards of hundreds of DLLs. There are equally hundreds of static libs on Linux, and again, IT'S NO BIG DEAL.

What is annoying is wishy-washy interfaces that always leave you hanging from a programming standpoint, the constant recycling of yesterday through a variety of means, and the relatively poor innovation occurring in programming languages.
1. Those were not PC floppies, they were Amiga floppies. That was a very important detail that you may have missed, because Casey was right about that. At the time this was happening on the Amiga, PC games (and demos, too) were largely using memory managers and DOS extenders such as EMM386 and DOS/4G. You didn't need to boot games off a boot disk on the PC because DOS would get out of the way. (Even Linux was originally booted using LOADLIN, a DOS program.)

2. Nobody said USB is evil. (Certainly compared to Bluetooth, the less said about which, the better.) All Casey said is that USB (and PnP) was one thing that left computers at the mercy of hardware manufacturers, the other being incompatible video cards. Remember all those "blue screens of death" on Windows 95/98? Approximately 99.99% of the time that happened, a third-party driver caused it.

By the way, smart consumer hardware isn't new. The Commodore 64 had a 6510 as its CPU. What most people who never owned one don't realise is that the 1541 disk drive ran on a 6502 (same as the Apple II), just to run the disk operating system. The 1541 communicated with the C64 via a generic and (theoretically) chainable serial bus. So what Casey is talking about isn't fundamentally incompatible with generic busses like USB any more than it's incompatible with ISA or PCI or even S-100 (choice lyric from that song: "where everything is standardised, but nothing works the same").

3. I'm not going to address the "good for capitalism" argument, because it essentially contradicts the second paragraph of your earlier point about Apple and video cable people. I will note, however, that it's worth taking a trip through the Linux source code some time just to see how many hardware bugs it works around.

4. Actually, there are three "standards"; if you want iOS or macOS, you realistically need to target Metal. But that's not really the problem. The problem is that these "standards" are almost completely opaque. Any conforming OpenGL implementation, for example, must compile shaders from source. Or, to put it another way, almost every video card ships with a customised copy of LLVM in the driver.

5. You're right that a driver that isn't used is not relied upon. I thought that too when Casey was talking about 30 million or 55 million lines of code. The 17 million lines of code in Linux includes a lot of code for hardware and platforms I don't use. (I always wonder if anyone has a floppy tape any more.)
On the topic of USB....

The problem with USB was/is that for every device that talked a standard protocol like USB-HID, there was/is another which talked its own gobbledegook altogether. Webcams used to be particularly notorious for this.

Things have improved. It was in no way "regulation" or "restriction" that led to this, merely education, experimentation, and the desire of engineers to follow it. People actively "push" the USB standard on hardware because it's a good standard. Those early webcams were made before the industry had sufficient experience, and I bet there are still wacky devices out there that don't conform. You can build your own USB device using an Arduino or Raspi. You should try it. If you think this through long enough, you have to ask yourself how compliant you would be.

Or how about the fact that Windows does not work very well with USB-HID? From a device developer's perspective, we could not get simple things working predictably on Windows and had to bail for Linux. There's nothing wrong with USB; it's the OS that sucks.

I once worked on a PC-based POS system that had a Star Micronics receipt printer. Every now and then you would discover that the device name had changed, due to reboots, or updating, or just the receipt printer being rebooted, or the fact that Windows would lose track of the devices that were connected to it through normal operation. So when referencing the device, your software would be looking for "Receipt Printer", but Windows had renamed it "Receipt Printer (Copy 2)"... and this process would happen over and over again, all the time. Over the course of just a few days you would start to see "Receipt Printer (Copy 27)", even though only one printer was ever attached.

I had to write code that would search for "the latest named copy" of what I needed every time I needed to talk to the device. Stupid.
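The workaround looked something like this. This is a reconstruction, not the original code: EnumPrintersA is the real Win32 call, but the name parsing and the find_latest_copy name are illustrative.

#include <windows.h>   /* link with winspool.lib */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Find the enumerated printer whose name matches `base`, preferring
   the highest "(Copy N)" suffix Windows has tacked on. Caller frees. */
char *
find_latest_copy(const char *base)
{
    DWORD needed = 0, count = 0;
    EnumPrintersA(PRINTER_ENUM_LOCAL, NULL, 4, NULL, 0, &needed, &count);
    BYTE *buf = malloc(needed);
    if (!buf || !EnumPrintersA(PRINTER_ENUM_LOCAL, NULL, 4,
                               buf, needed, &needed, &count)) {
        free(buf);
        return NULL;
    }

    PRINTER_INFO_4A *info = (PRINTER_INFO_4A *)buf;
    char *best = NULL;
    int best_copy = -1;

    for (DWORD i = 0; i < count; i++) {
        const char *name = info[i].pPrinterName;
        if (strncmp(name, base, strlen(base)) != 0)
            continue;
        int copy = 0;   /* a bare "Receipt Printer" counts as copy 0 */
        sscanf(name + strlen(base), " (Copy %d)", &copy);
        if (copy > best_copy) {
            free(best);
            best = _strdup(name);
            best_copy = copy;
        }
    }
    free(buf);
    return best;
}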

This explains why most POS cashier systems are made in Java (JPOS) and run on Linux, which does not wrap the device in dumbly-made software.


Those were not PC floppies, they were Amiga floppies. That was a very important detail that you may have missed, because Casey was right about that. At the time this was happening on the Amiga, PC games (and demos, too) were largely using memory managers and DOS extenders such as EMM386 and DOS/4G. You didn't need to boot games off a boot disk on the PC because DOS would get out of the way. (Even Linux was originally booted using LOADLIN, a DOS program.)

I'm really not seeing how this equates to "1000s of operating systems"


I'm not going to address the "good for capitalism" argument, because it essentially contradicts the second paragraph of your earlier point about Apple and video cable people. I will note, however, that it's worth taking a trip through the Linux source code some time just to see how many hardware bugs it works around.

I can complain about something and yet still find inherent goodness in it. We would not have the diversity of options if we didn't live in a free society that encourages diversity of opinion. We do not live in a "scientifically managed socio-meritocratic communism utopia", if that's what you are getting at, but at least we have active innovation, and prices have by and large gone down.

Regulation is not completely bad either, and there are reasons for the "Video Connector People" doing what they are doing, for whatever reason they are doing it, that apparently are necessary to them. Luckily HDMI has slowed that churn, but Apple is also responsible for continuing the "let's invent a new connector" trend. I mean, Apple was so opinionated that they co-opted the DisplayPort connector for Thunderbolt and ruined many a day by making their hardware proprietary. They just don't play nice with the rest of the industry, so you need two landfills: one for the industry and one for Apple.


Actually, there are three "standards";

There's Vulkan. But besides the world of GPUs, there are just a bajillion other things.


You're right that a driver that isn't used is not relied upon. I thought that too when Casey was talking about 30 million or 55 million lines of code. The 17 million lines of code in Linux includes a lot of code for hardware and platforms I don't use. (I always wonder if anyone has a floppy tape any more.)

* wry smile, tip hat *


"20 years ago, we had 32-bit plug-and-play BIOS calls to handle most hardware discovery. Today, this has been replaced by ACPI. ACPI is a bunch of data structures written into the BIOS ROM or RAM which the operating syste, reads. No operating system that I'm aware of does this using code rolled by itself. Instead, they use the client library ACPICA, which clocks in at over 125,000 lines of code. Integrating ACPICA is one of the reasons why Homebrew OS is on hiatus. Difficulty level: Nightmare."

What difficulty did you have with integrating ACPICA?
The "capitalism = diversity = competition = innovation = good, standardization = regulation = bad" angle is, at the very least, an oversimplification. Unfettered capitalism is exactly what landed us in a situation where there are only three major players in the PC hardware space (intel, AMD, nvidia) and only three major players in the OS space (windows, linux, BSD). This is no surprise, it happens to every industry. Capitalism, deregulation, competition, and progress are perhaps not all as inextricably synonymous as you make them out to be. Of course, it seems you already understand this on some level, seeing as you berate apple for doing exactly the things you say are beneficial (innovating, diversifying, competing).

The kinds of standards (not regulations) that Casey proposes likely won't be able to bring back diversity/competition/innovation to the hardware market, but it will be a big step toward bringing those things back to the OS market, as it will make it feasible for smaller players to develop OSes and OS-like things once again.
lost
On the topic of USB....
The problem with USB was/is that for every device that talked a standard protocol like USB-HID, there was/is another which talked its own gobbledegook altogether. Webcams used to be particularly notorious for this.

Things have improved. It was in no way "regulation" or "restriction" that led to this, merely education, experimentation, and the desire of engineers to follow it. People actively "push" the USB standard on hardware because it's a good standard. Those early webcams were made before the industry had sufficient experience, and I bet there are still wacky devices out there that don't conform. You can build your own USB device using an Arduino or Raspi. You should try it. If you think this through long enough, you have to ask yourself how compliant you would be.

I have done embedded development, including on Arduino + RPi. Fun for the whole family. If anything, it makes me appreciate standardisation.

lost
I once worked on a PC-based POS system that had a Star Micronics receipt printer. Every now and then you would discover that the device name had changed, due to reboots, or updating, or just the receipt printer being rebooted, or the fact that Windows would lose track of the devices that were connected to it through normal operation. So when referencing the device, your software would be looking for "Receipt Printer", but Windows had renamed it "Receipt Printer (Copy 2)"... and this process would happen over and over again, all the time. Over the course of just a few days you would start to see "Receipt Printer (Copy 27)", even though only one printer was ever attached.

Some of the embedded development work I did was developing a POS system. This was 20 years ago. Talking to the hardware directly, including driving the printer and firing a solenoid to open the cash register, was a piece of cake, because it all ran on top of DOS.

lost
I had to write code that would search for "the latest named copy" of what I needed every time I needed to talk to the device. Stupid.

This explains why most POS cashier systems are made in Java (JPOS) and run on Linux, which does not wrap the device in dumbly-made software.

Oh, for sure. Windows' printer support is an absolute mess. Printer drivers will show up twice for no good reason or an update will cause a printer to stop working.

lost

Those were not PC floppies, they were Amiga floppies. That was a very important detail that you may have missed, because Casey was right about that. At the time this was happening on the Amiga, PC games (and demos, too) were largely using memory managers and DOS extenders such as EMM386 and DOS/4G. You didn't need to boot games off a boot disk on the PC because DOS would get out of the way. (Even Linux was originally booted using LOADLIN, a DOS program.)
I'm really not seeing how this equates to "1000s of operating systems"

Because all those 1000s of games couldn't just copy the bottom part of Kickstart and use it for their OS. A DOS game would typically use io.sys + command.com, and then use a DOS extender to set up a flat memory model and get DOS out of the way. On the Amiga, game boot disks often contained their own HAL.
nakst
What difficulty did you have with integrating ACPICA?

I'm tempted to reply along the lines of: It's 125,000+ lines; do you really need to ask?

Nonetheless, here's the relevant code. Like most OSes, there is a two-stage init because I use ACPI to find the APICs. And yes, whoever came up with those easily-confused abbreviations is fired.

void
acpi_init_early()
{
    ACPI_STATUS status;

    status = AcpiInitializeTables(acpi_initial_tables, ACPI_MAX_INIT_TABLES, 0);
    if (ACPI_FAILURE(status)) {
        kprintf("AcpiInitializeTables failed (%s)\n", AcpiFormatException(status));
        return;
    }
}

void
acpi_init()
{
    ACPI_STATUS status;

    status = AcpiInitializeSubsystem();
    if (status != AE_OK) {
        kprintf("WARNING: could not initialize ACPI\n");
        // TODO: Do something better here
        return;
    }

    status = AcpiReallocateRootTable();
    if (status != AE_OK) {
        kprintf("WARNING: could not reallocate ACPI root table\n");
        // TODO: Do something better here
        return;
    }

    status = AcpiLoadTables();
    if (status != AE_OK) {
        kprintf("WARNING: could not load ACPI tables\n");
        // TODO: Do something better here
        return;
    }

    // TODO: Install local handlers here

    status = AcpiEnableSubsystem(ACPI_FULL_INITIALIZATION);
    if (status != AE_OK) {
        kprintf("WARNING: could not enable ACPI\n");
        // TODO: Do something better here
        return;
    }

    status = AcpiInitializeObjects(ACPI_FULL_INITIALIZATION);
    if (status != AE_OK) {
        kprintf("WARNING: could not initialize ACPI objects\n");
        // TODO: Do something better here
        return;
    }

    kprintf("ACPI initialized\n");
}
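The part that actually finds the APICs then walks the MADT once the tables are in. Here's a rough sketch of the shape of that walk, using ACPICA's public table API; the structure names are from ACPICA's headers, but I'm reconstructing the rest from memory, so treat it as a sketch.

void
acpi_find_apics(void)
{
    ACPI_TABLE_MADT *madt;
    ACPI_STATUS status;

    status = AcpiGetTable(ACPI_SIG_MADT, 0, (ACPI_TABLE_HEADER **)&madt);
    if (ACPI_FAILURE(status)) {
        kprintf("no MADT found (%s)\n", AcpiFormatException(status));
        return;
    }

    /* Sub-tables are packed after the fixed header. */
    char *p   = (char *)madt + sizeof(*madt);
    char *end = (char *)madt + madt->Header.Length;

    while (p < end) {
        ACPI_SUBTABLE_HEADER *sub = (ACPI_SUBTABLE_HEADER *)p;
        if (sub->Type == ACPI_MADT_TYPE_LOCAL_APIC) {
            ACPI_MADT_LOCAL_APIC *cpu = (ACPI_MADT_LOCAL_APIC *)sub;
            if (cpu->LapicFlags & 1)          /* enabled processor */
                kprintf("CPU: APIC id %d\n", cpu->Id);
        }
        p += sub->Length;
    }
}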


When I run the init code, there is an infinite loop inside one of the Acpi calls. I think it's in AcpiInitializeObjects, but don't quote me on that; it's been a while.
This is on VirtualBox and Bochs, by the way, so it should be well-behaved for esoteric OSes.

And ACPI tables are sometimes broken for non-Windows OSes on laptops. I need to use special Linux kernel parameters for my laptop not to freeze once it boots and launches X11: acpi_osi=! acpi_osi="Windows 2009". My understanding is that this makes Linux pretend to be Windows to the ACPI table interpreter.

Having a HAL != OS
notnullnotvoid
The "capitalism = diversity = competition = innovation = good, standardization = regulation = bad" angle is, at the very least, an oversimplification. Unfettered capitalism is exactly what landed us in a situation where there are only three major players in the PC hardware space (intel, AMD, nvidia) and only three major players in the OS space (windows, linux, BSD). This is no surprise, it happens to every industry. Capitalism, deregulation, competition, and progress are perhaps not all as inextricably synonymous as you make them out to be. Of course, it seems you already understand this on some level, seeing as you berate apple for doing exactly the things you say are beneficial (innovating, diversifying, competing).

The kinds of standards (not regulations) that Casey proposes likely won't be able to bring back diversity/competition/innovation to the hardware market, but it will be a big step toward bringing those things back to the OS market, as it will make it feasible for smaller players to develop OSes and OS-like things once again.


I just would like to add something about that. Actually, there is no such thing as "unfettered capitalism", because let's not forget about patents and intellectual property, which impose rules on the small companies that try to enter the market. Whether or not there are regulations for this market, the patents are regulations themselves. So, no, capitalism and the free market did not land us in this situation; that's not what a free market means. There are only three major players in the OS space because patents make it impossible for new competitors to enter the market properly: they are obligated to respect the patents, and the patents alter the methods a new entrepreneur can use from the very first moment. That creates interventionism, which is basically price control and control of the means of production; in this case, the patents control the means of production. And yes, I agree, it happens in every industry, because every industry has governmental intervention, and government is the entity that creates and regulates patent law.

So capitalism happens on a free market, with competition and diversity, and innovation is just a consequence of entrepreneurship. Regulation and standardization do not need to be related: in the short term a lot of standards would exist, and in the long term one standard would be established without the need for regulation. I'm in favour of what Casey proposed; a standard is always better. But even if a standard does not exist right now, that does not mean the free market is the cause. On the contrary, the patents are the reason we have three major players in PC hardware and in the OS space, as you said, and can't reach a good standard.
inb4 linux is open source

Yeah, it is. However, the license is the GPL, which means you cannot release a closed-source adaptation of Linux. You have to open-source your adaptation under the GPL as well. That kinda kills quite a few business opportunities there.
Well, on the other hand, it's quite a stretch to act as if patent laws were imposed from the outside by a regulatory state. Patents are simply the intellectual counterpart of the idea of private property over the means of production, which is a core idea of capitalism. Whether this is regulated by a state or by monopolies is another question.

But the idea of a "free market" is in itself a bit of a paradox. To create a true free market you would have to enforce the fact that some resources and techniques remain free to use for every new competitor... hence you would have to regulate. Otherwise you would quickly see the constitution of big actors trying to prevent new ones from entering the competition. And there are multiple stories of innovation not happening because of that.

But maybe we're getting a bit off-topic...
Intellectual property is something intangible; private property is for tangible and scarce resources. That's why it's private: if it were infinite, there would be no reason to have a market for it.

"To create a true free market you would have to enforce the fact that some resources and techniques remain free to use for every new competitor." That's already not a free market; the statement contradicts itself, because a free market means that no force will be used on individuals to make them obey laws, in this case patent laws, made for "techniques" (ideas are infinite resources) and intangible things, which prohibit them from using their OWN tangible and scarce resources. That's what's wrong with patents. When you steal a pen from its owner, the owner has no pen any more; when you use the same method or idea as the owner, the owner still retains the method or idea.

Yeah, it is a bit off-topic, but it's kind of related; not the debate itself, but these things have an influence on this industry, that's a fact.
khofez
"To create a true free market you would have to enforce the fact that some resources and techniques remain free to use for every new competitor." That's already not a free market, this statement is contraditory itself because free market means that no force will be used on the individuals to obey laws, in this case related to patents, made for "techniques" (ideas are infinite resources) and intangible things that prohibit them of using their OWN tangible and scarce resources, that's what's wrong with patents.


Yes, you're right, and that's kind of my point: what I'm saying is that the idea of a purely free market is contradictory in itself, because if you put several competitors with diverging interests in a market and leave them free to go about their business, the first to gain an advantage over the others will use it to prevent them from challenging him.
As you said, ideas are infinite resources, but that won't prevent anyone who has the power to do so from trying to exert control over them, as with any other, tangible resource.
So it will endogenously install a "law of the strongest", suitable to his own interests, and the market will not remain "free". From the point of view of the other competitors, that's no different from a constraint imposed by a law. In fact, one can say that laws are a written acknowledgement of this kind of balance of power.


Yeah, it is a bit off-topic, but it's kind of related; not the debate itself, but these things have an influence on this industry, that's a fact.

Yes, that's not completely unrelated, and maybe that's the kind of discussion that is beneficial to have in order to avoid looking at the problem from a purely technical point of view and missing the broader picture. I can't remember who wrote it, but I saw a tweet one day that said something like, "Maybe instead of trying to get everyone interested in coding, it would be more beneficial to get programmers interested in sociology, philosophy and politics". I think there's some truth there.
I suspect there's a kind of Godwin's Law going on here. As the length of a discussion about the state of the tech industry increases, the probability of it turning into a generic economics discussion approaches 1.