
What's a Computer Operating System?

What is an operating system?

In very simple terms, it is a software package that hosts whatever other software that one might want to run.

It's possible to program for a computer that has no OS, but one has to make one's programs do what OSes usually do, so it's usually done only on small embedded systems -- "embedded" meaning a computer system inside another device.

An OS can be decomposed as follows:
  • Kernel: processes (what's running), memory spaces, interrupts, and inter-process communication
  • Application-support libraries: device drivers, file systems, network stacks, more typical app libraries (runtimes), GUI libraries -- these are called by running programs or the kernel
  • Utilities: programs that are run by other utilities or by the OS's user
Note: the kernel here is a "microkernel". Many OS kernels include device drivers, file systems, network stacks, and the like, making them "megakernels", for lack of a better word (the usual term is "monolithic kernels").

The Kernel

A process is a program loaded into a memory space. The CPU switches between executing the processes. In some OSes, processes can have multiple threads of execution that the CPU can switch between.

Processes can share a memory space or can have separate memory spaces. If they have separate ones, they can nevertheless share some memory.
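As a concrete illustration, here's a minimal C sketch of how a program on a Unix-flavored OS asks the kernel for a new process and a new thread; fork() and pthread_create() are the standard POSIX calls, though the details differ from OS to OS.

#include <stdio.h>
#include <unistd.h>      /* fork() */
#include <pthread.h>     /* pthread_create(); build with -pthread */
#include <sys/wait.h>

static void *thread_body(void *arg)
{
    printf("thread running in the same memory space as main()\n");
    return NULL;
}

int main(void)
{
    pid_t pid = fork();               /* new process: separate memory space */
    if (pid == 0) {
        printf("child process, pid %d\n", (int)getpid());
        return 0;
    }
    waitpid(pid, NULL, 0);

    pthread_t tid;                    /* new thread: shares this memory space */
    pthread_create(&tid, NULL, thread_body, NULL);
    pthread_join(tid, NULL);
    return 0;
}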

An interrupt can come from a device, from a timer, or from a bad instruction -- one that is undefined or that needs more privilege than the program has.

I/O devices like keyboards often work by doing interrupts. You press a key and it interrupts the CPU, transferring control to the interrupt-handler part of the kernel. It then transfers control to the keyboard driver, which picks up which key you pressed. It then forwards your key press to whichever app had been active.

An alternative approach to doing I/O is polling, but it usually requires running a timer that does timer interrupts, like having the keyboard driver check a keyboard every 10 milliseconds or whatever.
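Here's what that polling loop looks like in rough C. The two keyboard functions are made-up stubs standing in for whatever register reads a real driver would do; the 10-millisecond pause stands in for the timer tick.

#include <stdbool.h>
#include <unistd.h>    /* usleep() */

/* Hypothetical device checks -- a real driver would read hardware
   registers here; these stubs just say "nothing pressed". */
static bool keyboard_has_key(void)  { return false; }
static int  keyboard_read_key(void) { return 0; }

void poll_keyboard_forever(void)
{
    for (;;) {
        if (keyboard_has_key()) {
            int key = keyboard_read_key();
            (void)key;             /* hand the key press to the active app */
        }
        usleep(10 * 1000);         /* check again in about 10 milliseconds */
    }
}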

Switching between processes is multitasking, and involves transferring control to the kernel and then to another process. If a process does so by executing some appropriate instructions, it is cooperative multitasking. If the kernel uses timer interrupts, it is preemptive multitasking.

Interrupts are also useful for switching from executing programs ("user space") to kernels or other privileged software ("kernel space" or "supervisor space"). This sort of interrupt is usually created on purpose, by executing a special trap or system-call instruction -- deliberately "bad" from the CPU's point of view.
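On Linux, the ordinary way a user-space program deliberately traps into kernel space is a system call; the C library's syscall() wrapper executes the trap instruction for you. A minimal example:

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    /* syscall() executes the CPU's trap instruction, switching to
       kernel space; the kernel runs its write handler and returns. */
    syscall(SYS_write, 1, "hello from user space\n", 22);
    return 0;
}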

Application Support

At the lowest level are device drivers. CPU's, RAM chips, and I/O devices are attached to buses, the hardware for communicating between them. I/O-device buses usually make the devices look like memory in the computer. So in the keyboard example, the keyboard driver looks at the memory location corresponding to the currently-pressed key.
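In code, "looks like memory" means the driver reads and writes through a pointer at a fixed address. A sketch, with register addresses invented for illustration (real ones come from the bus or the hardware documentation), so this only makes sense inside a driver:

#include <stdint.h>

/* Hypothetical memory-mapped keyboard registers; these addresses
   are made up for illustration. */
#define KBD_STATUS ((volatile uint8_t *)0x60000000u)
#define KBD_DATA   ((volatile uint8_t *)0x60000004u)

/* Returns the currently-pressed key code, or -1 if none. */
int kbd_read(void)
{
    if (*KBD_STATUS & 0x01)     /* "data ready" bit */
        return *KBD_DATA;       /* reading the "memory" reads the device */
    return -1;
}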

Next is file-system software. It creates abstractions of file systems, with directories, files, and associated info. It translates requests to read files into requests to read locations on a disk, and likewise for writing.
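A toy illustration of that translation step, assuming a made-up file system whose files are stored in fixed-size blocks listed in a table. Real file systems (FAT, ext4, HFS+, ...) are far more elaborate, but the offset-to-block arithmetic is the same idea.

#include <stdint.h>

#define BLOCK_SIZE 4096

/* Toy on-disk bookkeeping for one file: the disk blocks that hold
   its contents, in order. Entirely hypothetical. */
struct toy_file {
    uint32_t block_count;
    uint32_t blocks[64];    /* blocks[i] = disk block holding bytes
                               [i*BLOCK_SIZE, (i+1)*BLOCK_SIZE) */
};

/* Translate "byte offset within the file" into "disk block number
   plus offset within that block". */
int file_offset_to_disk(const struct toy_file *f, uint64_t offset,
                        uint32_t *disk_block, uint32_t *block_offset)
{
    uint64_t index = offset / BLOCK_SIZE;
    if (index >= f->block_count)
        return -1;                       /* past end of file */
    *disk_block   = f->blocks[index];
    *block_offset = (uint32_t)(offset % BLOCK_SIZE);
    return 0;
}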

Alongside them is network-stack software. It creates abstractions for network connections, handling sending and receiving of network-data packets.
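From an app's point of view, that abstraction shows up as the sockets API: ask the stack for a connection, then read and write it much like a file. A minimal TCP client sketch in C (error handling mostly omitted; the hard-coded IP address is just a placeholder -- a real program would look the name up with getaddrinfo()):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* ask the stack for a TCP socket */

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(80);
    inet_pton(AF_INET, "93.184.216.34", &addr.sin_addr);  /* placeholder address */

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0) {
        const char *req = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
        write(fd, req, strlen(req));            /* the stack turns this into packets */
        char buf[512];
        ssize_t n = read(fd, buf, sizeof buf);  /* ...and reassembles the reply */
        if (n > 0) fwrite(buf, 1, (size_t)n, stdout);
    }
    close(fd);
    return 0;
}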

App runtimes do various tasks like math functions, text-string manipulation, getting dates/times, and so forth. In GUIfied systems, they will paint GUI widgets on the screen and determine which widgets get which user inputs. Thus, if you wish to type in a text field, you may have to click on it to give it "focus", making it the widget that receives keyboard and mouse inputs. Once you do so, your keyboard and mouse inputs are directed to it, and its software draws little pictures (font glyphs) of the characters that you type.
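A sketch of that focus-and-dispatch logic; every type and function here is invented for illustration, not any real toolkit's API.

/* Sketch of GUI event routing -- all names are hypothetical. */
struct widget {
    void (*on_key)(struct widget *self, int key);
    void (*on_click)(struct widget *self, int x, int y);
};

static struct widget *focused;           /* set when the user clicks a widget */

void dispatch_key(int key)
{
    if (focused && focused->on_key)
        focused->on_key(focused, key);   /* key presses go to the focused widget */
}

void dispatch_click(struct widget *hit, int x, int y)
{
    focused = hit;                       /* clicking a widget gives it focus */
    if (hit && hit->on_click)
        hit->on_click(hit, x, y);
}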

GUI widgets are objects like windows, menus, buttons, sliders, text fields, canvases, etc.

Utilities

These are programs that users or other utilities can run. In some OSes, some utilities continually run, being server or daemon processes. Some GUIfied OSes have utilities that host the GUI.

Most OSes have utilities for managing files, and many also have text editors, image viewers, system-configuration tools, and other such conveniences.
 
Some illustrations from what's familiar to me.

Multitasking

MacOS Classic had essentially a megakernel, with much of its Toolbox (app support) accessed by doing bad instructions called A-traps.

It had a single memory space, and was originally single-tasking for GUI apps. Cooperative multitasking was later added on. A Classic app used the OS's GUI software by calling a function that requests the next I/O event, like a key press or a mouse click. The app then waits until the Toolbox composes one for that app, then the Toolbox returns control to the app with the event. The Toolbox did multitasking by switching to some other app when it had an event for that app.

This is cooperative, because an app has to request an event before switching between apps can happen. Needless to say, if an app does not request an event, it will hog the system.
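From memory, the heart of a Classic Mac app looked roughly like the loop below; WaitNextEvent is the Toolbox call that both fetches the next event and, while the app waits, gives the Toolbox its chance to run some other app. It's a sketch only -- HandleEvent is a stand-in name for the app's own dispatcher, and the Toolbox types won't compile outside the old Mac headers.

/* Rough shape of a MacOS Classic main event loop (sketch). */
EventRecord event;
Boolean done = false;

while (!done) {
    /* WaitNextEvent is the cooperative-multitasking yield point:
       while this app waits, the Toolbox can run other apps. */
    if (WaitNextEvent(everyEvent, &event, 15 /* sleep ticks */, NULL)) {
        HandleEvent(&event);   /* the app's own dispatcher -- hypothetical name */
    }
}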

It was done that way because of the design of the GUI software -- it involved apps directly accessing certain data structures, something that made it hard to do preemptive multitasking. That's because GUI apps could too easily step on each other's toes (metaphorically, of course).

MacOS Classic handled multiple apps' memory by giving each app its own memory partition with some specified size.


The two PM's of OSes are preemptive multitasking and protected memory: multiple memory spaces. They have been common in mainframe and minicomputer OSes since the 1970's; most Unix flavors have supported both PM's. All three mainstream desktop OSes, OSX, Linux, and Windows, have supported both PM's for over a decade now. It's harder to tell for smartphone and tablet OSes, though iOS and Android are related to OSX and Linux.


Kernel design

Linus Torvalds recalled that when he was starting out, microkernels were in vogue in academia. Those are kernels which only do what I'd described above for a kernel -- processes and threads and memory spaces and interrupts. Instead, he decided on a megakernel, one that includes device drivers and file and network stuff.

Thus, for a long time, to install new drivers, one had to rebuild one's Linux kernel. Seems like extreme computer-geek territory.

More recently, the Linux kernel includes the ability to load and unload device drivers and the like, something that the OSX kernel can also do. Even so, one can still quickly find instructions for customizing one's Linux kernel that involve rebuilding it.


The OSX kernel is a microkernel, but a lot of the lower-level app-support stuff runs in its memory space, making it much like a megakernel.


Megakernels like the Windows one and quasi-megakernels like the OSX one are vulnerable to device-driver misbehavior. While that's rare in OSX, that has been much more of a problem in Windows. So you can see where a lot of Windows problems have come from.


BTW, I'm oversimplifying about Windows. Windows 3.x was essentially a DOS app, while what I described of Windows is true of Windows NT and its descendants, 2000, XP, Vista, 7, 8, and 10. Windows 95, 98, and ME seem to have been some sort of hybrid.

DOS itself was a tiny OS -- it had some interrupt services and some utilities, but not much more.
 
GUI shells

All these OSes have had "standard" GUI shells or software layers built into them: MacOS Classic, MacOS X, AmigaOS, NeXTStep, the BeOS, and Windows. So one might think that a GUI shell is intimately connected with the lower-level parts of an OS.

But that is not the case for several Unix flavors, including Linux. Unix developers have created several of them over the decades.

Since the late 1980's, most of them have been based on the X Window System. That is a low-level GUI shell. It maintains windows, draws images, text, and simple geometric shapes, and dispatches user-input events, but not much more. It is thus much like MacOS Classic Quickdraw, MacOS X Quartz, and Windows GDI. The official rationale was to implement "mechanism, not policy", however those are defined here.

X-windows does not do GUI widgets, and Unix developers have developed several GUI-widget sets over the years ( List of widget toolkits). One of the first was the Athena set, developed at MIT. It is rather primitive-looking. A later and fancier one was Motif, but it was proprietary and was open-sourced in 2012 after some legal entanglements were resolved. Another commonly-used proprietary one is Qt, but that's been made sort-of open-source over the years. In response to this, the GTK widget set was developed as open-source from the beginning, and it's also been widely used.

I note that Qt and GTK have been overlaid on the OSX and Windows GUI shells, making it easy to port apps written in them. So if you write an app with Qt widgets, you'll be able to port it to all three major desktop platforms and some smartphone/tablet ones without much extra work.


OS GUI shells also include what's sometimes called a desktop environment. These are some GUI widgets for accessing apps, listing open windows, etc. DE's often come with various utilities, like a GUI file manager (MacOS Finder, Windows Explorer, etc.), but I'll mainly be concerned with their screen widgets.

The Windows Taskbar is arguably the most familiar DE widget set. It includes a "Start" button that brings up a menu for accessing various things, and icons for open windows. More recent versions include icons for apps, something like the NeXT/OSX Dock.

The next most familiar is likely the MacOS menubar, in both Classic and X. On the left, it has the Apple menu for OS stuff and recently-opened items, and on the right, it has various displays, like the time. OSX adds a NeXTStep feature: the Dock, a list of apps, folders, and minimized windows in icon form.

NeXTStep, the BeOS, AmigaOS, etc. also have/had their own DE's.


Other Unix systems have had various DE's. In the 1990's, commercial-Unix developers settled on one that they called the Common Desktop Environment or CDE. It has a "Front Panel", a bar that looks like a cross between the OSX Dock and the Windows Taskbar.

Since CDE uses Motif, it was proprietary for a long time, a rather annoying situation.


In the late 1990's, some developers built an open-source DE for Linux: KDE. But it used Qt widgets, something that kept it from being fully open-source for some time. Other developers responded by creating a DE called GNOME that used GTK widgets. That one was fully open-source from the beginning.

KDE looks more Windows-ish, while GNOME looks more OSX-ish.
 
Linux Distributions

Strictly speaking, Linux is a kernel, but many Linux users have put together numerous "Linux distributions" with various app-support libraries and utilities, thus including all three OS parts. A Linux distribution or "distro" is thus comparable to Windows or OSX. Many of them are designed with various goals in mind, like small size, support of old systems, complete configurability, server duty, or ease of use for non-hardcore users. Many of them are also derivative of other distros, it must be noted.

Another open-source Unix flavor, BSD, also has several distributions, but not nearly as many as Linux. It has also split up into flavors like FreeBSD and OpenBSD, something that Linux has avoided.

A common feature of Linux distros is a package manager, something for downloading additional software from some repository of software. Package managers handle software dependencies, to avoid redundant downloads. Thus, if one has downloaded a paint program called Gimp, the package manager will detect that it needs a GUI-widget toolkit called GTK, and it will also download GTK if one does not already have it. If one then downloads a draw program called Inkscape, another GTK user, the package manager will note that GTK is already present and only download Inkscape proper.
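A toy sketch of that dependency bookkeeping, with a hard-coded package table standing in for a real repository; actual package managers (apt, dnf, pacman, ...) add versioning, conflicts, signatures, and much more.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Toy repository: each package lists what it depends on. */
struct pkg { const char *name; const char *deps[4]; bool installed; };

static struct pkg repo[] = {
    { "gtk",      { NULL },        false },
    { "gimp",     { "gtk", NULL }, false },
    { "inkscape", { "gtk", NULL }, false },
};

static struct pkg *find(const char *name)
{
    for (size_t i = 0; i < sizeof repo / sizeof repo[0]; i++)
        if (strcmp(repo[i].name, name) == 0) return &repo[i];
    return NULL;
}

/* Install a package, first installing any dependency that isn't
   already present -- the "avoid redundant downloads" logic. */
static void install(const char *name)
{
    struct pkg *p = find(name);
    if (!p || p->installed) return;          /* unknown or already there */
    for (int i = 0; p->deps[i]; i++)
        install(p->deps[i]);
    printf("downloading and installing %s\n", name);
    p->installed = true;
}

int main(void)
{
    install("gimp");       /* pulls in gtk as well */
    install("inkscape");   /* gtk already present, only inkscape downloads */
    return 0;
}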

Virtual Machines

A virtual machine is for running an OS (guest) and its apps inside another OS (the host).

One can do that by emulating the CPU that the guest OS runs on: the host computer runs an app that imitates the guest CPU so that the guest OS cannot tell the difference. A common application of emulation has been earlier game consoles and game-arcade machines. Their software was often OS-less or close to it, but the principle is the same.

Emulating a CPU gives a performance hit. One can reduce it with Just-In-Time (JIT) compilation, making translations of the guest CPU's code into host-CPU instructions. But even that is not quite the ideal: running at full speed on the host CPU.
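The core of an emulator is a fetch-decode-execute loop. Here's a toy version for a made-up two-instruction machine (invented for illustration); a JIT would instead translate runs of these instructions into native host code ahead of time.

#include <stdint.h>
#include <stdio.h>

/* Toy guest machine: one accumulator, two instruction types (invented). */
enum { OP_HALT = 0, OP_ADD_IMM = 1 };

void emulate(const uint8_t *code)
{
    uint32_t acc = 0, pc = 0;
    for (;;) {
        uint8_t op = code[pc++];                     /* fetch */
        switch (op) {                                /* decode */
        case OP_ADD_IMM: acc += code[pc++]; break;   /* execute */
        case OP_HALT:    printf("acc = %u\n", acc); return;
        }
    }
}

int main(void)
{
    const uint8_t program[] = { OP_ADD_IMM, 2, OP_ADD_IMM, 3, OP_HALT };
    emulate(program);   /* prints acc = 5 */
    return 0;
}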

That is possible if the guest OS and its apps were written for the host's CPU. But that has challenges of its own.

A problem is access to low-level functions usually handled by a kernel and device drivers. A simple approach would be to turn all CPU-management, memory-management, and I/O instructions into privileged ones, and then run the guest OS in user-level, underprivileged fashion. Then every time the guest OS tries to execute a privileged instruction, the virtual-machine software catches it and translates it into some form appropriate for the host system. When it's finished, it returns control to the guest OS.
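In pseudo-real C, the virtual-machine monitor's side of that "trap and emulate" approach looks roughly like the loop below. Every name here is hypothetical -- real hypervisors (z/VM, KVM, and so on) differ in detail -- but the shape is the standard one: run the guest until it faults on a privileged operation, emulate that operation, resume the guest.

/* Sketch of a trap-and-emulate dispatch loop; all names hypothetical. */
struct trap_info {
    int  kind;           /* which privileged thing the guest attempted */
    long guest_pc;       /* where in the guest it happened */
};

enum { TRAP_IO_PORT, TRAP_MMU_UPDATE, TRAP_HALT };

extern struct trap_info run_guest_until_trap(void);   /* hypothetical */
extern void emulate_io_access(long guest_pc);         /* hypothetical */
extern void emulate_mmu_update(long guest_pc);        /* hypothetical */

void vmm_main_loop(void)
{
    for (;;) {
        struct trap_info t = run_guest_until_trap();  /* guest runs at full speed */
        switch (t.kind) {
        case TRAP_IO_PORT:    emulate_io_access(t.guest_pc);  break;
        case TRAP_MMU_UPDATE: emulate_mmu_update(t.guest_pc); break;
        case TRAP_HALT:       return;                 /* guest shut itself down */
        }
        /* control then returns to the guest just past the trapped instruction */
    }
}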

That's been done since the 1970's on IBM mainframes: the System/370's, the S/390's, the z-Series systems, and related ones. ( Hypervisor, Virtualization - z9VM-MS-080109.pdf). But while it is straightforward on those Intimidating Big Machines, it has awkward complications on Intel-x86 CPU's. But it's been possible to virtualize those CPU's also.

In other words, the host OS makes some of the guest OS's CPU instructions bad instructions. This approach also works with memory-mapped I/O, where I/O devices are assigned memory addresses that one reads from and writes to. One then makes those addresses bad memory addresses and catches accesses to them.

One can help the guest OS by installing special drivers in it, drivers that transfer control to the host OS. Thus, a guest-OS keyboard driver can get key-press events straight from a host-OS keyboard driver without the virtualizer having to trap and interpret the guest's hardware accesses. These drivers can also make it easier to integrate with host-OS facilities, like copying files between the host OS and the guest OS.


Also, virtualization is how Apple handled MacOS Classic in MacOS X -- a "Blue Box" virtualizer that runs Classic inside of it. It provides a single memory space, which is fine for Classic. I also have VMware Fusion, a virtualizer that I run Windows inside of when I have to run something that only runs in Windows. So I've experienced virtualization on both IBM and Apple computers.
 
GUI window management

A rather obvious problem for GUI shells is window management, including deciding which windows and parts of windows to display. What happens when they overlap? One can keep windows from overlapping, and some window managers indeed do that. But it has not been a very popular solution, for rather obvious reasons.

If windows can overlap, then they are almost universally arranged in top-to-bottom order, so it's easy to tell what gets painted over. But what does one do when one wants to update a window's contents? Common solutions are to update only the visible parts of a window, or else to update all that one wants to update, then show only the visible parts of that update.
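The "update only the visible parts" step boils down to rectangle clipping: intersect the region you want to repaint with whatever isn't covered by windows above it. A minimal intersection helper, as a sketch:

#include <stdbool.h>

struct rect { int left, top, right, bottom; };

/* Intersect two rectangles; returns false if they don't overlap.
   A window manager clips each window's repaint to rectangles like this. */
bool rect_intersect(struct rect a, struct rect b, struct rect *out)
{
    struct rect r = {
        a.left   > b.left   ? a.left   : b.left,
        a.top    > b.top    ? a.top    : b.top,
        a.right  < b.right  ? a.right  : b.right,
        a.bottom < b.bottom ? a.bottom : b.bottom,
    };
    if (r.left >= r.right || r.top >= r.bottom)
        return false;
    *out = r;
    return true;
}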


MacOS Classic had a window manager in its Toolbox, effectively part of its megakernel.

However, as far as I can tell, Unix flavors, Unix-like OSes, and Windows all have some continuously-running window-manager utility, a window-manager daemon or server process. The ones with included GUI shells (NeXTStep, OSX, the BeOS, Windows) all have included window managers. But other ones, like Linux, have had numerous window managers written for them, in part because X-windows needs to be run with some window manager ( Comparison of X window managers).

Of Linux GUI shells, KDE has its own window manager, KWin, and GNOME has had at least two: Metacity for GNOME 2 and Mutter for GNOME 3.

My favorite name for a window manager is Ratpoison, because it's designed to enable users to manage windows without using their mice.


Unix command names

Unix command-line utilities often have very cryptic names ( List of Unix commands):
ls, cp, mv, rm, fsck, nroff, awk, sed, grep, vi, ar, df, gcc, ...

To some critics, some of these names seem like digestive noises. Fortunately, a good-enough GUI shell enables one to avoid using such commands. Like OSX's.

Also in Unix-land, one can choose between several command-line shells/interpreters. OSX comes with these:
bash, csh, ksh, sh, tcsh, zsh

Bourne-Again Shell, C Shell, Korn Shell, Bourne shell, Tenex C Shell, Z Shell


Server Processes

Now another note about OSes: background or server or daemon processes. These are programs that continuously run, typically out of sight unless one looks for them. Some of them have more visible effects, however, like window managers and desktop-widget makers. Like the OSX Dock, which is maintained by a Dock program that continuously runs.
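On Unix-flavored systems, a program typically turns itself into one of those background daemons with a short, standard dance: fork, let the parent exit, and detach from the terminal. A sketch (modern systems often let a service manager handle this instead):

#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>

/* Classic Unix daemonization sketch. */
void become_daemon(void)
{
    pid_t pid = fork();
    if (pid < 0) exit(1);
    if (pid > 0) exit(0);       /* parent exits; child keeps running */

    setsid();                   /* new session: no controlling terminal */
    chdir("/");                 /* don't pin any directory in place */

    close(STDIN_FILENO);        /* run "out of sight", as daemons do */
    close(STDOUT_FILENO);
    close(STDERR_FILENO);
}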

The maintainers of BSD Unix even have a cutesy daemon as their mascot (The BSD Daemon).

I remember some of my early years with Unix. When I checked on what was running, I'd get a big list of background programs, and I'd get very unnerved. It seemed superfluous -- why did they need to run? I had used VMS systems about then, and they didn't have nearly as many.

But later, I found that the BeOS had a lot of server processes, and also OSX and Windows.
 
Booting

I should mention how operating-system software gets loaded. At first sight, it seems like a paradox, because one presumably needs an OS to load an OS. That's why the process is called booting, short for bootstrapping, from the phrase "pulling oneself up by one's bootstraps".

 Booting - Wikipedia's contributors have written an article on that also.

The usual solution nowadays is to have a boot-loader program in a boot ROM for doing so. It often loads a second boot loader from a disk to continue the booting process. Next up is the kernel, which gets loaded and initialized. After that are whichever daemon / server processes will be active, including user-interface ones.

The boot ROM of many PeeCees is often called the BIOS, the Basic Input-Output System, dating back to DOS days.


References

Wikipedia's contributors have compiled these lists:
 History of operating systems
 Timeline of operating systems
 List of operating systems
 Comparison of operating systems
 Comparison of open-source operating systems
 Comparison of operating system kernels
 Comparison of file systems
 
Thanks for this. Always good to see stuff about what an operating system actually is.

A couple of random things: the Commodore Amiga didn't call it "booting", but "kickstarting." These are two lovely metaphors. The first one means "bootstrapping", and refers back to the story of Baron Von Munchausen who escaped a swamp by pulling on the straps of his boots. The second refers to starting a motorcycle. Both are trying to get to the point that somehow, you've got to get something up and running that kind of already assumes it is up-and-running.

There's some interest in a concept called "unikernels" now. These are programs, such as web-servers, that run on a microkernel such as L4 or Xen, but which only bring in as much of an operating system as they need, which can be extremely minimal. They are sometimes called "library operating systems", after some forgotten calls by computer scientists that operating systems shouldn't really exist; we should just have other code that we depend on to do basic stuff, which is what a "library" is in computing.
 
A long-time bastion of OS-less programming has been game consoles ( Home video game console). Or at least close to OS-less programming. I'll order them by generations.

Gen #1

One of the first ever was the Magnavox Odyssey. It was not even a full-scale computer with a CPU that ran stored instructions. It had discrete components connected with switches that could be set to make different games. Its game cartridges contained the switches with settings appropriate to different games.

Gen #2

The first game consoles to be full-scale computers, CPU chips and all. Many of them also had some system software on ROM. Rudimentary OSes? Their games were loaded from ROM chips in cartridges.
  • Fairchild Video Entertainment System (VES) / Channel F (1976)
  • RCA Studio II (1977)
  • Atari 2600 (1977)
  • Bally Astrocade (1977)
  • Magnavox Odyssey-2 (1978)

Gen #3

  • Nintendo Entertainment System (NES) (1983)
  • Sega Master System (SMS) (1985)
  • Atari 7800 (1986)
  • Atari XEGS (1987)

Gen #4

Some of these systems started getting CD-ROM drives and nonvolatile RAM, the latter usually in the game cartridges. Nonvolatile RAM was for saving games. The console would write a savegame file into a cartridge, and a player could start later from where he/she had left off.

  • NEC TurboGrafx-16 (1987)
  • Sega Mega Drive / Genesis (1988)
  • SNK Neo Geo (1990)
  • Super Nintendo Entertainment System (Super NES) (1990)
  • Pioneer LaserActive (1993)

Gen #5

The Sony Playstation was the most successful one, and arguably one of the most advanced, doing 3D graphics. It and some others of this generation used CD-ROM's, though Nintendo continued to use cartridges. It also had some nice app-support libraries that programmers could link into their games. But it and most other consoles were still essentially OS-less.

  • Fujitsu FM Towns Marty (1993)
  • Atari Jaguar (1993)
  • 3DO Interactive Multiplayer (1993)
  • NEC PC-FX (1994)
  • Sega 32X (1994)
  • Sega Saturn (1994)
  • Sony Playstation (1994)
  • Nintendo 64 (1996)

Gen #6

The Microsoft Xbox is notable for having its own operating system, a modification of Windows. Its name is short for DirectXBox, in honor of Microsoft's DirectX game-support software. For 3D graphics, programmers use its version of Windows Direct3D, the 3D-graphics part of DirectX. The Xbox also has a 10-gigabyte hard disk in it.

I recall Bungie's programmers stating that the Xbox is the first game console that didn't suck to program for. One could program for it pretty much as for a PeeCee, instead of having one's game app include a video-card driver.

OS-lessness continues in the others.

  • Sega Dreamcast (1998)
  • Sony PlayStation 2 (2000)
  • Nintendo GameCube (2001)
  • Microsoft Xbox (2001)

Gen #7

The Xbox 360 continues with having its own OS, and the PS3 and Wii join it. The PS3's, CellOS, is apparently based on the Unix flavor FreeBSD.

Among other things, their OSes are for downloading and managing games and various other software.

  • Microsoft Xbox 360 (2005)
  • Sony Playstation 3 (2006)
  • Nintendo Wii (2006)

Gen #8

All of them have OSes. Xbox One continues with Windows and the PS4 with FreeBSD (Orbis OS).

  • Nintendo Wii U (2012)
  • Sony Playstation 4 (2013)
  • Microsoft Xbox One (2013)

Around this time came the rise of Microconsoles -- all with OSes, mostly Android.

Smartphones and tablets also became competition -- all with OSes like iOS (an OSX variant), Android, and Windows.

Cellphones / mobile phones themselves had progressed from OS-lessness to becoming smartphones and having full-scale OSes.
 

To your point about library operating systems... another thing that an Operating System is: a TRUSTED set of libraries and security subsystems.
 
The main unikernel going is written in Ocaml, a memory safe language with a type system designed around further safety, that I trust implicitly more than C.
 
I recall Bungie's programmers stating that the Xbox is the first game console that didn't suck to program for..

Did you ever see the leaked source code for the original Halo? It is full of commented code.. some very funny, some very telling about just how happy those developers were...

some excerpts I recall (from a decade ago)...

"Keep your grubby fingers out of my texture pool!!!!!"
"fucking unhandled fucking exception here"

and my favorite:

"Bink corrupted your shit again. have fun rebuilding.".

I think it was called "Bink"... the 3D graphics engine they were using. Not the happiest sounding bunch of folks always.
 
The main unikernel going is written in Ocaml, a memory safe language with a type system designed around further safety, that I trust implicitly more than C.

The language may be safe (from breaking out of a sandbox, or violating memory bounds, etc..), but the safest language in the world will still do whatever malicious activity it is programmed to do. If Explorer.exe was written by every software developer for their own application, you cannot trust that "save file" isn't, " copy, upload, and encrypt local versions of all files"... silly example, I know... totally different level of the environment, but I hope you get what I mean.
 
I don't, in the context of unikernels. Unikernels are written in memory safe languages and, following a library model, only bring in the dependencies that the server requires, rather than the complete OS. They are not only safer by choice of language, but safer in presenting a much smaller attack surface. It's much easier to trust this code, not just because of the language, but because there's less code to trust.
 
Did you ever see the leaked source code for the original Halo? It is full of commented code.. some very funny, some very telling about just how happy those developers were...

some excerpts I recall (from a decade ago)...

"Keep your grubby fingers out of my texture pool!!!!!"
"fucking unhandled fucking exception here"
Has any of that survived Bungie's and M$'s copyright lawyers?

and my favorite:

"Bink corrupted your shit again. have fun rebuilding.".

I think it was called "Bink"... the 3D graphics engine they were using. Not the happiest sounding bunch of folks always.
 Bink Video -- it's a video format, often used for prerendered cutscenes.
 
Has any of that survived Bungie's and M$'s copyright lawyers?
apparently not.... can't find it again... wanted to link it.
 
I don't, in the context of unikernels. Unikernels are written in memory safe languages and, following a library model, only bring in the dependencies that the server requires, rather than the complete OS. They are not only safer by choice of language, but safer in presenting a much smaller attack surface. It's much easier to trust this code, not just because of the language, but because there's less code to trust.

Perhaps it's the "library model" you are referring to that provides the security I am referencing... What control is in place to certify the integrity of the imported library, if not from a single, trusted source (such as M$ is, for Windows libraries)?
 
What about that MS-DOG, amirite?!?

Google "unikernel". Or don't. Just don't expect me to answer ignorant rhetorical questions.
 

-1

If you don't understand what Ring 0 is and the principle of least privilege, then I understand why you are unwilling to answer the question. Personally, I don't want application developers interfacing directly with my hardware on a multipurpose system. Maybe it's good for handheld special-purpose devices that traditionally have been just running an embedded OS and Java... I mean, who cares if your hand scanner suddenly blows out because a software update was able to disable the CPU fan... But on an end-user multipurpose system, such a model is potentially a security nightmare....

Unless, of course, there is something about how shared libraries are packaged and deployed that provides trust.... the question you were unable / unwilling to answer (and then got all insulting and shit - for no apparent reason).
 
The term Disk Operating System, or DOS, goes back to the early days of magnetic computer storage.

Original operating systems were essentially file-maintenance and input-output functions. The first large-scale IBM and Digital Equipment commercial systems are examples.

The original MSDOS was such a system. Very simple.

Windows evolved and came to be called the Windows OS, obviously for its GUI windows.

As it exists today it is very complicated. It is an asynchronous OS that responds to unpredictable interrupts from many sources like the net or keyboards. Asynchronous systems are difficult to debug.

You can buy small operating systems for embedded computing that manage file systems in solid-state memory.

Linux, Unix, and Windows are all variations on a theme.

Any processor-based software that manages a file system is an operating system.

On a PC, on a reset or power-up, solid-state memory contains enough code to load the start of the OS into RAM from disk. It is commonly called the 'boot ROM' or 'boot loader'. The initial OS code loaded into RAM then brings the entire OS into RAM. A common process in computer systems.
 
https://en.wikipedia.org/wiki/History_of_Unix

I'd forgotten some of it, the OS wars. The best aspect of free-market competition.

I always associate Unix with the DEC VAX system; it was a common system used before the PC. In the 80s at Lockheed there were several VAX clusters on the campus. Users had terminals in offices. To support the system, departments were charged by the CPU-second of run time. If you wanted fast numerical processing you had to pay extra to use a hardware math processor. Today the math coprocessor is built into the processor. On the first PCs you had to buy a hardware math accelerator chip that plugged into the motherboard.

The VAX system was not unlike the first IBM PC. It had an open architecture, and companies made plug-in boards. Floating Point Systems made a math coprocessor for the VAX.
 