

LIAM ON LINUX


LIAM PROVEN'S THOUGHTS ON IT, ESPECIALLY FOSS


RECENT ENTRIES




OUTLINER NOTES

Oct. 12th, 2024 11:39 am

Word is a nightmare.

«

RT ColiegeStudent on Twitter 
 
using microsoft word
 
*moves an image 1 mm to the left*
 
all text and images shift. 4 new pages appear. in the distance, sirens.
»



But there's still a lot of power in that festering ball of 1980s code.

In 6 weeks in 2016, I drafted, wrote, illustrated, laid out and submitted a ~330
page technical maintenance manual for a 3D printer, solo, entirely in MS Word
from start to finish. I began in Word 97 & finished it in Word 2003, 95% of the
time running under WINE on Linux... and 90% of the time, using it in Outline
Mode, a *vastly* powerful writer's tool to which the FOSS world has nothing
even vaguely comparable.

But as a novice... Yeah, what the tweet said. It's a timeless classic IMHO.

Some Emacs folks told me Org-mode is just as good as an outliner. I've tried it.
This was my response.



Org mode compared to Word 2003 Outline View is roughly MS-DOS Edlin compared to
Emacs. It's a tiny fragmentary partial implementation of 1% of the
functionality, done badly, with a terrible *terrible* UI.

No exaggeration, no hyperbole, and there's a reason I specifically said 2003 and
nothing later.



 



I've been building and running xNix boxes since 1988. I have tried both Vi and
Emacs many times over nearly 4 decades. I am unusual among old Unix hands: I
cordially detest both of them.

The reason I cite Word 2003 is that that's the last version with the old menu
and toolbar UI. Everything later has a "ribbon" and I find it unusable.

Today, the web-app/Android/iOS versions of Word do not have Outline View, no.
Only the rich local app versions do.

But no, org-mode is not a better richer alternative; it is vastly inferior, to
the point of being almost a parody.

It's really not. I tried it, and I found it a slightly sad crippled little thing
that might be OK for managing my to-do list.

Hidden behind Emacs' *awful* 1970s UI which I would personally burn in a fire
rather than ever use.

So, no, I don't think it's a very useful or capable outliner from what I have
seen. Logseq has a better one.

To extend my earlier comparison:

Org-mode to Word's Outline View is Edlin to Emacs.

Logseq to Outline View is MS-DOS 5 EDIT to Emacs: it's a capable full-screen
text editor that I know and like, and which works fine. It's not very powerful,
but what it does, it does well.

Is Org-mode aimed at something else? Maybe, yes. I don't know who or what it's
aimed at, so I can't really say.
 

Word Outline Mode is the last surviving 1980s outliner, an entire category of
app that's disappeared.

http://outliners.com/default.html

It's a good one but it was once one among many. It is, for me, *THE* killer
feature of MS Word, and the only thing I keep WINE on my computers for.

It's a prose writer's tool, for writing long-form documents in a human language.

Emacs is a programmer's editor for writing program code in programming
languages.

So, no, they are not the same thing, but the superficial similarity confuses
people.
 

I must pick a fairly small example as I'm not very familiar with Emacs.

In Outline Mode, a paragraph's level in the hierarchy is tied to its paragraph
style. Most people don't know how to use Word's style sheets, but think of HTML.
Word has nine heading levels -- like H1...H6 on the Web, but going up to Heading
9 -- plus Body Text, which is always the lowest level.

As you promote or demote a paragraph, its style automatically changes to match.

(This has the side effect that you can see the level from the style. If that
bothered you, in old versions you could turn off showing the formatting.)

As you move a block of hierarchical text around the outline all its levels
automatically adopt the correct styles for their current location.

This means that when I wrote a manual in it, I did *no formatting by hand* at
all. The text of the entire document is *automatically* formatted according to
whether it's a chapter heading, or section, or subsection, or subsubsection,
etc.

When you're done Word can automatically generate a table of contents, or an
index, or both, that picks up all those section headings. Both assign page
numbers "live", so if you move, add or delete any section, the ToC and index
update immediately with the new positions and page numbers.
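
Purely to illustrate that mapping from code -- and this is just my own sketch,
using the third-party python-docx library, not anything from the original
workflow -- add_heading(level=1..9) maps straight onto the built-in Heading 1-9
styles (level 0 gives the Title style), and plain paragraphs are body text, the
lowest outline level. The file name is made up.

```python
# Illustrative sketch only: Word's heading level <-> paragraph style mapping.
# Requires the third-party python-docx package (pip install python-docx).
from docx import Document

doc = Document()
doc.add_heading("3D Printer Maintenance Manual", level=0)  # "Title" style
doc.add_heading("Maintenance", level=1)                    # "Heading 1" -- a chapter
doc.add_heading("The extruder", level=2)                   # "Heading 2" -- a section
doc.add_paragraph("Body text always sits at the lowest outline level.")
doc.save("manual.docx")  # hypothetical output file name
```

Outline View is essentially a live view over those same styles: promote or
demote a paragraph and Word just swaps which Heading style it carries, which is
why the ToC and the index (both of which are Word fields) can be rebuilt from
the headings at any time.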
 

I say a small example as most professional writers don't deal with the
formatting at all. That's the job of someone else in a different department.

Or, in technical writing, this is the job of some program. It's the sort of
thing that gets Linux folks very excited about LaTeX and LyX, or for which
documentarians praise DocBook or DITA, but I've used both of those and they need
a *vast* amount of manual labour -- and *very* complex tooling.

XML etc are also *extremely* fragile. One punctuation mark in the wrong place
and 50 pages of formatting is broken or goes haywire. I've spent days
troubleshooting one misplaced `:`. It's horrible.

Word can do all this automatically, and most people *don't even know the
function is there.* It's like driving an articulated lorry as a personal car and
never noticing that it can carry 40 tonnes of cargo! Worse still, people attach
a trailer and roof rack and load them up with stuff... *because they don't know
their vehicle can carry 10 cars already* as a built-in feature.

I could take a sub-subsection of a chapter and promote it to a chapter in its
own right, and adjust the formatting of 100 pages, in about 6 or 8 keystrokes.
That will also rebuild the index and redo the table of contents, automatically,
for me.
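
To make that concrete, here's a toy model in plain Python -- my own
illustration, nothing to do with Word's real internals -- in which each
paragraph stores only its outline level, the style name is derived from the
level, and promoting a whole block is a single level decrement. The
"formatting" follows automatically, which is the whole point.

```python
# Toy outline model: levels 1-9 are Heading 1-9, level 10 is Body Text.
# Styles are derived from levels, never stored, so promotion is trivial.
def style_for(level: int) -> str:
    return f"Heading {level}" if level <= 9 else "Body Text"

def promote(outline, start, end, by=1):
    """Promote the headings in paragraphs start..end (inclusive) by `by` levels.
    Body Text stays Body Text; headings are clamped at level 1."""
    def new_level(lvl):
        return lvl if lvl > 9 else max(1, lvl - by)
    return [(new_level(lvl), txt) if start <= i <= end else (lvl, txt)
            for i, (lvl, txt) in enumerate(outline)]

outline = [
    (1, "Maintenance"),                                     # chapter
    (2, "The extruder"),                                    # section
    (3, "Replacing the nozzle"),                            # subsection
    (10, "Heat the hot end before unscrewing the nozzle."),  # body text
]

# Promote the subsection (and its body text) one level:
for lvl, txt in promote(outline, start=2, end=3):
    print(f"{style_for(lvl):<10} {txt}")
```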
 

All this can be entirely keyboard driven, or entirely mouse driven, according to
the user's preference. Or any mixture of both, of course. I'm a keyboard warrior
myself. I can live entirely without a pointing device and it barely slows me
down.

You can with a couple of clicks collapse the whole book to just chapter
headings, or just those and subheadings, or just all the headings and no body
text... Any of 9 levels, as you choose. You can hide all the lower levels,
restructure the whole thing, and then show them again. You can adjust formatting
by adjusting indents in the overview, and then expand it again to see what
happened and if it's what you want.

You could go crazy... zoom out to the top level, add a few new headings, indent
under the new headings, and suddenly in a few clicks, your 1 big book is now 2
or 3 or 4 smaller books, each with its own set of chapters, headings, sub
headings, sub sub headings etc. Each can have its own table of contents and
index, all automatically generated and updated and formatted.
 

I'm an xNix guy, mainly. I try to avoid Windows as much as possible, but the
early years of my career were supporting DOS and then Windows. There is good
stuff there, and credit where it's due.

(MS Office on macOS also does this, but the keyboard UI is much clunkier.)

Outliners were just an everyday tool once. MS just built a good one into Word,
way back in the DOS era. Word for DOS can do all this stuff too and it did it in
like 200kB of RAM in 1988!

Integrating it into a word processor makes sense, but originally they were standalone apps.

It's not radical tech. This is really old, basic stuff. But somehow in the
switch to GUIs on the PC, they got lost in the transition.

And no, LibreOffice, AbiWord and Calligra Words have nothing even resembling this.
 

There are 2 types of outliner: intrinsic and extrinsic, also known as 1-pane or
2-pane.

https://en.wikipedia.org/wiki/Outliner#Layout

There are multiple 2-pane outliners that are FOSS.

But they are tools for organising info, and are almost totally useless for
writers.

There are almost no intrinsic outliners in the FOSS world. I've been looking for
years. The only one I know of is Logseq, but it is just for note-taking and it
does none of the formatting/indexing/ToC stuff I mentioned. It does handle
Markdown, but with zero integration with the outline structure.

So it's like going from Emacs to Notepad. All the clever stuff is gone, but you
can still edit plain text.

 






INFERNO NOTES

Oct. 12th, 2024 10:44 am
Plan 9 is Unix but more so. You write code in C and compile it to a native
binary and run it as a process. All processes are in containers all the time,
and nothing is outside the containers. Everything is virtualised, even the
filesystem, and everything really is a file. Windows on screen are files.
Computers are files. Disks are files. Any computer on the network can load a
program from any other computer on the network (subject to permissions of
course), run it on another computer, and display it on a third. The whole
network is one giant computer.
 
You could use a slower workstation and farm out rendering complicated web pages
to nearby faster machines, but see it on your screen.
 
But it's Unix. A binary is still a binary. So if you have a slow Arm64 machine,
like a Raspberry Pi 3 (Plan 9 runs great on Raspberry Pis), you can't run your
browser on a nearby workstation PC because that's x86-64. Arm binaries can't run
on x86, and x86 binaries can't run on Arm.
 
Wasm (**W**eb **AS**se**M**bly) is a low-level bytecode that can run on any OS
on any processor, so long as it has a Wasm runtime. Wasm is derived from asm.js,
an earlier effort to write compilers that could target the JavaScript runtime
inside web browsers while saving the time it takes to put JavaScript through a
just-in-time compiler.
 
https://en.wikipedia.org/wiki/WebAssembly
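
To show what "same bytecode everywhere" feels like in practice, here's a small
sketch using the wasmtime Python bindings -- my choice of runtime, purely for
illustration; any conformant Wasm runtime would do, and the API shown is the one
in recent wasmtime-py releases. The module itself is identical whether the host
is x86-64 or Arm; only the runtime underneath is architecture-specific.

```python
# Run a hand-written WebAssembly module via the wasmtime Python bindings
# (pip install wasmtime). The module is portable bytecode; only the runtime
# that compiles it at load time is specific to the host CPU.
from wasmtime import Store, Module, Instance

WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

store = Store()
module = Module(store.engine, WAT)      # compile the portable module
instance = Instance(store, module, [])  # no imports needed
add = instance.exports(store)["add"]
print(add(store, 2, 3))                 # -> 5 on any host architecture
```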
 
eBPF (extended Berkeley Packet Filter) is a language that began as a way of
writing packet-filter and firewall rules and has been extended into a
general-purpose programming facility. It runs inside the Linux kernel: you write
programs that run _as part of the kernel_ (not as apps in userspace) and can
change how the kernel works on the fly. The same eBPF code runs inside any Linux
kernel on any architecture.
 
https://en.wikipedia.org/wiki/EBPF
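
For a taste of it, here's a minimal sketch using the BCC Python front end --
assuming BCC is installed and you're running as root; none of this comes from
the post itself. The little C fragment is compiled at load time, checked by the
kernel's verifier, and then runs inside the kernel every time clone() is
entered, on whatever architecture that kernel happens to be.

```python
# Minimal eBPF sketch via BCC (github.com/iovisor/bcc). Needs root and a
# kernel with eBPF support; the C code below runs *inside* the kernel.
from bcc import BPF

prog = r"""
int hello(void *ctx) {
    bpf_trace_printk("clone() entered\n");
    return 0;
}
"""

b = BPF(text=prog)  # compile and load; the kernel verifier checks the program
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
b.trace_print()     # stream the kernel-side trace output until interrupted
```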
 
Going back 30 years, Java runs compiled binary code on any CPU because code is
compiled to JVM bytecode instead of CPU machine code... But you need a JVM on
your OS to run it.
 
https://en.wikipedia.org/wiki/List_of_Java_virtual_machines
 
All these are bolted on to another OS, usually Linux.
 
But the concept works better if integrated right into the OS. That's what Taos
did.
 
https://wiki.c2.com/?TaoIntentOs
 
Programs are compiled for a virtual CPU that never existed, called VP.
 
https://en.wikipedia.org/wiki/Virtual_Processor
 
They are translated from that to whatever processor you're running on as they're
loaded from disk into RAM. So *the same binaries* run natively on any CPU:
x86-32, x86-64, Arm, Risc-V, it doesn't matter.
 
Very powerful. It was nearly the basis of the next-gen Amiga.
 
http://www.amigahistory.plus.com/deplayer/august2001.html
 
But it was a whole new OS and a quite weird OS at that. Taos 1 was very skeletal
and limited. Taos 2, renamed Int**e**nt (yes, with the bold), was much more
complete but didn't get far before the company went under.
 
Inferno was a rival to Java and the JVM, around the time Java appeared.
 
It's Plan 9, but with a virtual processor runtime built right into the kernel.
All processes are written in a safer descendant of C called Limbo (it's a direct
ancestor of GoLang) and compiled to bytecode that executes in the kernel's VM,
which is called Dis.
 
Any and all binaries run on all types of CPU. There is no "native code" any
more. The same compiled program runs on x86, on Risc-V, on Arm. It no longer
matters. Run all of them together on a single computer. 
 
Running on a RasPi, but want all your bookmarks and settings there? No worries:
run Firefox on the headless 32-core EPYC box in the next building, display it on
your Retina tablet, but save on the Pi. Or save on your Risc-V laptop's SSD next
to your bed. So long as they're all running Inferno, it's all the same. One
giant filesystem, and all computers run the same binaries.
 
By the way, it's like 1% of the size of Linux with Wasm, and simpler too.
 





CHRIS DA KIWI'S PERSONAL HISTORY OF COMPUTERS

Sep. 30th, 2024 09:35 pm

This is Chris's "Some thoughts on Computers" – the final, edited form.

 

The basic design of computers hasn't changed much since the mechanical one, the
Difference Engine, invented by Charles Babbage in 1822 – but not built until
1991.  

Ada Lovelace was the mathematical genius who saw the value in Babbage’s work,
but it was Alan Turing who invented computer science, and the ENIAC in 1945
was arguably the first electronic general-purpose digital computer. It filled a
room. The Micral N was the world's first “personal computer,” in 1973.



Since then, the basic design has changed little, other than to become smaller,
faster, and on occasions, less useful.



The current trend to lighter, smaller gadget-style toys – like cell phones,
watches, headsets of various types, and other consumer toys – is an indication
that the industry has fallen into the clutches of mainstream profiteering, with
very little real innovation now at all.

 

I was recently looking for a new computer for my wife and headed into one of the
main laptop suppliers, only to be met with row upon row of identical machines, at
various price points arrived at by that mysterious breed known as "marketers".
In fact, the only differences in the plastic on display were how much drive space
the engineers had fitted in, and how much RAM the machines had. Was the case a
pretty colour that appealed to the latest 10-year-old girl, or to the rugged
he-man hoping to make the school whatever-team? In other words, rows of blah.

 

Where was the excitement of the early Radio Shack "do-it-yourself" range: the
Sinclair ZX80, the Commodore 8-bits (PET and VIC-20), later followed by the C64?
(And CP/M, one of my favorites – more on that below.) What has happened to all
the excitement and innovation? My answer is simple: the great big clobbering
machine known as "Big Tech".

 

Intel released its 8080 processor in 1974 and later followed up with variations
on a theme, eventually leading to the 80286, the 80386, the 80486 (getting
useful), and so on. All of these variations needed an operating system, which
was basically some variation of MS-DOS – believed to have been based on QDOS,
the "Quick and Dirty Operating System", the work of developer Tim Paterson at a
company called Seattle Computer Products (SCP). It was later renamed 86-DOS,
after the Intel 8086 processor, and this was the version that Microsoft licensed
and eventually purchased. (Or, these days, there is the newer, FOSS FreeDOS.)

Games started to appear, and some of them were quite good. But the main driver
of the computer was software.


In particular, word-processors and spreadsheets. 


At the time, my lost computer soul had found a niche in CP/M, which on looking
back was a lovely little operating system – but quietly disappeared into the
badlands of marketing. 


Lost and lonely I wandered the computerverse until I hooked up with Sanyo –
itself now long gone the way of the velociraptor and other lost prehistoric
species.
 

The Sanyo brought build quality, the so-called "lotus card" to make it fully
compatible with the IBM PC, and later, an RGB colour monitor and a 10 meg hard
drive. The basic model still had two 5¼" floppy drives, which they pushed up to
720kB, and later 3½" 1.25MB floppy drives. Ahead of its time, it too went
the way of the dinosaur.


These led to the Sanyo AT-286, which became a mainstay, along with the Commodore
64. A pharmaceutical company had developed a software system for pharmacies that
included stock control, ordering, and sales systems. I vaguely remember that
machine and software bundle was about NZ$15,000, which was far too rich for
most – although I sold many of them over my time.


Then the computer landscape began to level out, as the component manufacturers
began to settle on the IBM PC-AT as a compatible, open-market model of computer
that met the Intel and DOS standards. Thus began the gradual slide into 10,000
versions of mediocrity.


The consumer demand was for bigger and more powerful machines, whereas the
industry wanted to make more profits. A conflict to which the basic computer
scientists hardly seemed to give a thought.

I was reminded of Carl Jung's dictum that “greed would destroy the West.” 


A thousand firms sprang up, all selling the same little boxes, whilst the
marketing voices kept trumpeting the bigger/better/greater theme… and the costs
kept coming down, as businesses became able to afford these machines, and head
offices began to control their outlying branches through the mighty computer. 


I headed overseas, to escape the bedlam, and found a spot in New Guinea – only
to be overrun by a mainframe which was to be administered from Australia, and
which was going to run my branch – for which I was responsible, but over which I
had no control.


Which side of the fence was I going to land on? The question was soon answered
by the Tropical Diseases Institute in Darwin, which diagnosed dengue fever… and
so I returned to NZ.


For months I battled this recurring malady, until I was strong enough to attend
a few hardware and programming courses at the local Polytechnic, eventually
setting up my own small computer business, building up 386 machines for resale,
followed by 486s, and eventually a Texas Instruments laptop agency – which was
about 1992, from my now fragile memory. I also dabbled with the Kaypro as a
personal beast; it was fun, but not as flexible as the Sanyo AT I was using.



The Texas Instruments laptop ran well enough, and I remember playing Doom on it,
but it had little battery life, and although the batteries were rechargeable,
they needed to be charged every two or three hours. At least the WiFi worked
pretty consistently and, for the road warrior, gave a point of distinction.



Then the famous 686 arrived, and by the use of various technologies, RAM began
to climb up to 256MB, and in some machines 512MB.




Was innovation happening? No – just more marketing changes. As in, some machines
came bundled with software or peripherals, such as modems, scanners, or even
dot-matrix printers.



As we ended the 20th century, we bought bigger and more powerful machines. The
desktop was being chased by the laptop, until I stood in my favorite computer
wholesaler staring at a long row of shiny boxes that were basically all the
same, wondering which one my wife would like… knowing that it would have to
connect to the so-called "internet", and in doing so make all sorts of
decisions inevitable – such as how to secure a basically insecure system, which
would require third-party programs of dubious quality and cost.


Eventually I chose a smaller Asus, with 16GB of main RAM and an NVIDIA card, and
retreating to my cottage, collapsed in despair. Fifty years of computing and
wasted innovation left her with a black box that, when she opened it, said
“HELLO” against a big blue background that promised the world – but only offered
more of the same. As in, a constant trickle of hackers, viruses, Trojans and
barely anything useful – but now including several new perversions called
chat-bots, or “AI”.


I retired to my room in defeat.

 

We have had incremental developments, until we have today's latest chips from
Intel and AMD based on the 64-bit architecture first introduced around April
2003.

 

So where is the 128-bit architecture – or the 256 or the 512-bit?

 

What would happen if we got really innovative? I still remember Bill Gates
saying "Nobody will ever need more than 640k of RAM." And yet, it is common now
to buy machines with 8 or 16 or 32GB of RAM, because the poor quality of
operating systems fills the memory with badly coded garbage that causes memory
leaks, stack-overflow errors and other memory issues.

 

Then there is Unix, which I started using on my courses at Christchurch
Polytechnic – a DEC 10, from memory, which also introduced me to the famous, or
infamous, BOFH.
 

I spent many happy hours chuckling over the BOFH's exploits. Then came awareness
of the twin geniuses, Richard Stallman and Linus Torvalds, and of GNU/Linux: a
solid, basic series of operating systems and programs, from various vendors,
that simply do what they are asked, and do it well.

  

I wonder where all this could head, if computer manufacturers climbed onboard
and developed, for example, a laptop with an HDMI screen, a rugged case with a
removable battery, a decent sound system, and a good-quality keyboard, backlit
with per-key colour selection. Enough RAM slots to boost the main memory up to,
say, 256GB, and video RAM to 64GB, allowing high-speed draws to the screen
output.

 

Throw away the useless touch pads, and gimmicks like second mini screens built
into the chassis. With the advent of Bluetooth mice, they are no longer needed.
Instead, include an 8TB NVMe drive, then include a decent set of controllable
fans and heat pipes that actually keep the internal temperatures down, so as not
to stress the RAM and processors.


I am sure this could be done, given that some manufacturers, such as Tuxedo, are
already showing some innovation in this area. 


Will it happen? I doubt it. The clobbering machine will strike again.

- - - - -

Having found that I could not purchase a suitable machine for my needs, I
wandered throughout the computerverse until I discovered, in a friend's small
computer business, an Asus ROG Windows 7 model, in about 2004. It was able to
have a RAM upgrade, which I duly carried out, with 2 × 8GB SO-DIMM RAM plus 4GB
of SDDR2 video RAM, and 2 × 500GB WD 7200RPM spinning-rust hard drives. This was
beginning to look more like a computer. Over the time I used it, I was able to
replace the spinning-rust drives with 500GB Samsung SSDs, and as larger sticks
of RAM became available, increased that to the limit as well. I ran that
machine, which was Linux-compatible, throwing away the BSOD [Blue Screen Of
Death – Ed.] of Microsoft Windows, and putting one of the earliest versions of
Ubuntu with GNOME on it. It was computing heaven: everything just worked, and I
dragged that poor beast around the world with me.


While in San Diego, I attended Scripps University and lectured on cot death for
three months as a guest lecturer. 

Scripps at the time was involved with IBM in developing a line-of-sight optical
network, which worked brilliantly on campus. It was confined to a couple of
experimental computer labs, but you had to keep your fingers off the mouse or
keyboard, or your machine would overload with web pages if browsing. I believe
it never made it into the world of computers for ordinary users, as the machines
of the day could not keep up.


There was also talk around the labs of so-called quantum computing, which had
been talked about since the 1960s on and off, but some developments appeared in
1968.

The whole idea sounds great – if it could be made to work at a practicable user
level.  But in the back of my mind, I had a suspicion that these ideas would
just hinder investment and development of what was now a standard of
motherboards and BIOS-based systems. Meanwhile, my Tux machine just did what was
asked of it.


Thank you, Ian Murdock (and Debra, the "Deb" in "Debian"), who developed the
Debian version of Linux – on which Ubuntu was based.

I dragged that poor Asus around the Americas, both North and South, refurbishing
it as I went. I found Fry's, the major technology shop in San Diego, where I
could purchase portable hard drives and so on at a fraction of the cost of
elsewhere in the world, as well as just about any computer peripheral you could
dream of. This shop was a tech's heaven, so to speak – and totally addictive to
someone like me.


Eventually, I arrived in Canada, where I had a speaking engagement at Calgary
University – which also had a strong Tux club – and I spent some time happily
looking at a few other distros. Distrowatch had been founded about 2001, which
made it easy to keep up with Linux news, new versions of Tux, and what system
they were based on. Gentoo seemed to be the distro for those with the knowledge
to compile and tweak every little aspect of their software.


Arch attracted me at times. But eventually, I always went back to Ubuntu – 
until I learned of Ubuntu MATE. The University had a pre-release copy of Ubuntu
MATE 14.10, along with a podcast from Alan Pope and Martin Wimpress, and before
I could turn around I had it on my Asus. It was simple, everything worked, and
it removed the horrors of GNOME 3.


I flew happily back to New Zealand and my little country cottage.


Late in 2015, my wife became very unwell after a shopping trip. Getting in touch
with some medical friends, they were concerned she’d had a heart attack. This
was near the mark: she had contracted a virus which had destroyed a third of her
heart muscle. It took her a few years to die, and a miserable time it was for
her and for us both. After the funeral, I rented out my house and bought a
Toyota motorhome, and began traveling around the country. I ran my Asus
through a solar panel hooked up to an inverter, a system which worked well and
kept the beast going.


After a couple of years, I decided to have a look around Australia. My
grandfather on my father's side was Australian, and had fascinated us with tales
of the outback, where he worked as a drover in the 1930s and ’40s.


And so, I moved to Perth, where my brother had been living since the 1950s. 


There, I discovered an amazing thing: a configurable laptop based on a Clevo
motherboard – and not only that, the factory of its manufacturer, Metabox, was
just up the road in Fremantle.


Hastily, I logged on to their website, and in a state of disbelief, browsed
happily for hours at all the combinations I could put together. These were all
variations on a theme by Windows 7 (to misquote Paganini), and there was no
listing of ACPI records or other BIOS information to help make a decision.


I looked at my battered old faithful, my many-times-rebuilt Asus, and decided
the time had come. I started building: maximum RAM and video RAM, the latest
NVIDIA card, two SSDs, their top-of-the-line WiFi and Bluetooth chipsets, sound
cards, etc. Then, as my time in Perth was at an end, I had it sent to New
Zealand, as I was due to fly back the next day.


That was the first of four Metabox machines I have built, and is still running
flawlessly using Ubuntu MATE. I gave it to a friend some years ago and he is
delighted with it still.


I had decided to go to the Philippines and Southeast Asia to help set up
clinics for distressed children, something I had already done in South America,
and the NZ winter was fast approaching. Hastily I arranged with a church group
in North Luzon to be met at Manila airport. I had already contacted an
interpreter who was fluent in Versaya and Tagalog, and who was an English
teacher, so we arranged to meet at Manila airport and go on from there.

Packing my trusty Metabox, I flew out of Christchurch into a brand new world.

The so-called job soon showed up as a scam, and after spending a week or so in
Manila, I suggested that, rather than waste the visa, we have a look over some
of the country. Dimp pointed out that her home was on the next island over and
would make a good base to move on from.

So we ended up in Cagayan de Oro – the city of the river of gold! After some
months of traveling around, we decided to get married, and so I began the
process of getting a visa for Dimp to live in NZ. This was a very difficult
process, but with the help of a brilliant immigration lawyer, and many friends,
we managed it, and next year Dimp becomes a NZ citizen.

My next Metabox was described as a Windows 10 machine, but I knew that it would
run Linux beautifully – and so it did. A few tweaks around the ACPI subsystem
and it computed away merrily, with not a BSOD in sight. A friend of mine who had
popped in for a visit was so impressed with it that he ordered one too, and that
arrived about three months later. A quick wipe of the hard drive (thank you,
GParted!) later, both these machines are still running happily, with not a cloud on
the horizon.

One, I gave to my stepson about three months back: a Win 10 machine, and he has
taken it back with him to the Philippines, where he reports it is running fine
in the tropical heat.

My new Metabox arrived about six weeks ago, and I decided – just out of
curiosity – to leave Windows 11 on it. A most stupid decision, but as my wife
was running Windows 11 and had already blown it up once, needing a full reset
(which, to my surprise, worked), I proceeded to charge it for the recommended 24
hours, and next day, switched it on. “Hello” it said, in big white letters, and
then the nonsense began… a torrent of unwanted software proceeded to fill up one
of my 8TB NVMe drives, culminating after many reboots with a Chatbot, an AI
“assistant”, and something called “Co-pilot”. 

“No!” I cried, “not in a million years!” – and hastily plugging in my Ventoy
stick, I rebooted it into Gparted, and partitioned my hard drive as ext4 for
Ubuntu MATE.


So far, the beast seems most appreciative, and it hums along with just a gentle
puff of warm air out of the ports. I needed to do a little tweaking, as the
latest NVIDIA cards don't seem to like Wayland as a display server, but with
that and the addition of acpi=off to GRUB, another flawless computer is on the
road.


Now, if only I could persuade Metabox to move to a 128-bit system, and can get
delivery of that on the other side of the great divide, my future will be in
computer heaven.


Oh, if you’re wondering what happened to the Asus? It is still on the kitchen
table in our house in the Philippines, in pieces, where I have no doubt it is
waiting for another rebuild! Maybe my stepson Bimbo will do it and give it to
his niece. Old computers never die; they just get recycled.


— Chris Thomas

In Requiem 

03/05/1942 — 02/10/2024 



 
 * Current Location: Douglas IoM
 * Current Music: 6music

Tags:
 * guest post






THE SECOND AND FINAL PART OF CHRIS' PERSONAL HISTORY WITH LINUX

Sep. 23rd, 2024 06:23 pm

This is the second, and I very much fear the last, part of my friend Chris "da
Kiwi" Thomas' recollections about PCs, Linux, and more. I shared the first part
a few days ago.

Having found that I could not purchase a suitable machine for my needs, I
discovered the Asus ROG Windows 7 model, in about 2004. It was able to have a
RAM upgrade, which I duly carried out, with 2 × 8GB SO-DIMMs, plus 4GB of SDDR2
video RAM, and 2×500GB WD 7200RPM hard drives. This was beginning to look more
like a computer. Over the time I used it, I was able to replace the
spinning-rust drives with 500GB Samsung SSDs, and as larger sticks of RAM became
available, increased that to the limit as well. I ran that machine, which was
Tux-compatible [“Tux” being Chris’s nickname for Linux. – Ed.], throwing away
the BSOD [Blue Screen Of Death – that is, Microsoft Windows. – Ed.] and putting
one of the earliest versions of Ubuntu with GNOME on it. It was computing
heaven: everything just worked, and I dragged that poor beast around the world
with me.


While in San Diego, I attended Scripps and lectured on cot death for three
months as a guest. Scripps at the time was involved with IBM in developing a
line-of-sight optical network, which worked brilliantly on campus. It was
confined to a couple of experimental computer labs, but you had to keep your
fingers off the mouse or keyboard, or your machine would overload with web pages
if browsing. I believe it never made it into the world of computers for ordinary
users, as the machines of the day could not keep up.


There was also talk around the labs of so-called quantum computing, which had
been talked about since the 1960s on and off, but some developments appeared in
1968.

The whole idea sounds great – if it could be made to work at a practicable user
level.  But in the back of my mind, I had a suspicion that these ideas would
just hinder investment and development of what was now a standard of
motherboards and BIOS-based systems. Meanwhile, my Tux machine just did what was
asked of it.


Thank you, Ian Murdock (and Debra, the "Deb" in "Debian"), who developed the
Debian version of Tux – on which Ubuntu was based.

I dragged that poor Asus around the Americas, both North and South, refurbishing
it as I went. I found Fry's, the major technology shop in San Diego, where I
could purchase portable hard drives and so on at a fraction of the cost of
elsewhere in the world.


Eventually, I arrived in Canada, where I had a speaking engagement at Calgary
University – which also had a strong Tux club – and I spent some time happily
looking at a few other distros. Distrowatch had been founded about 2001, which
made it easy to keep up with Linux news, new versions of Tux, and what system
they were based on. Gentoo seemed to be the distro for those with the knowledge
to compile and tweak every little aspect of their software.


Arch attracted me at times. But eventually, I always went back to Ubuntu – 
until I learned of Ubuntu MATE. The University had a pre-release copy of Ubuntu
MATE 14.10, along with a podcast from Alan Pope and Martin Wimpress, and before
I could turn around I had it on my Asus. It was simple, everything worked, and
it removed the horrors of GNOME 3.


I flew happily back to New Zealand and my little country cottage.


Late in 2015, my wife became very unwell after a shopping trip. Getting in touch
with some medical friends, they were concerned she’d had a heart attack. This
was near the mark: she had contracted a virus which had destroyed a third of her
heart muscle. It took her a few years to die, and a miserable time it was for
her and for us both. After the funeral, I rented out my house and bought a
Toyota motorhome, and began traveling around the country. I ran my Asus
through a solar panel hooked up to an inverter, a system which worked well and
kept the beast going.


After a couple of years, I decided to have a look around Australia. My
grandfather on my father's side was Australian, and had fascinated us with tales
of the outback, where he worked as a drover in the 1930s and ’40s.


And so, I moved to Perth, where my brother had been living since the 1950s. 


There, I discovered an amazing thing: a configurable laptop based on a Clevo
motherboard – and not only that, their factory was just up the road in
Fremantle.



Hastily, I logged on to their website, and in a state of disbelief, browsed
happily for hours at all the combinations I could put together. These were all
variations on a theme by Windows 7, and there was no listing of ACPI records or
other BIOS information.


I looked at my battered old faithful, my many-times-rebuilt Asus, and decided
the time had come. I started building. Maximum RAM and video RAM, latest nVidia
card, two SSDs, their top-of-the-line WiFi and Bluetooth chipsets, sound cards,
etc. Then, I got it sent to New Zealand, as I was due back the next day.


That was the first of four Metabox machines I have built, and is still running
flawlessly using Ubuntu MATE. 


My next Metabox was described as a Windows 10 machine, but I knew that it would
run Tux beautifully – and so it did. A few tweaks around the ACPI subsystem and
it computed away merrily, with not a BSOD in sight. A friend of mine who had
popped in for a visit was so impressed with it that he ordered one too, and that
arrived about three months later. A quick wipe of the hard drive (thank you,
GParted!) later, both these machines are still running happily, with not a cloud on
the horizon.


One, I gave to my stepson about three months back, and he has taken it back with
him to the Philippines, where he reports it is running fine in the tropical
heat.


My new Metabox arrived about six weeks ago, and I decided – just out of
curiosity – to leave Windows 11 on it. A most stupid decision, but as my wife
was running Windows 11 and had already blown it up once, needing a full reset
(which, to my surprise, worked), I proceeded to charge it for the recommended 24
hours, and next day, switched it on. “Hello” it said, in big white letters, and
then the nonsense began… a torrent of unwanted software proceeded to fill up one
of my 8TB NVMe drives, culminating after many reboots with a Chatbot, an AI
“assistant”, and something called “Co-pilot”. 


“No!” I cried, “not in a million years!” – and hastily plugging in my Ventoy
stick, I rebooted it into Gparted, and partitioned my hard drive for Ubuntu
MATE.


So far, the beast seems most appreciative, and it hums along with just a gentle
puff of warm air out of the ports. I needed to do a little tweaking, as the
latest nVidia cards don't seem to like Wayland as a display server, but with
that and the addition of acpi=off to GRUB, another flawless computer is on the
road.


Now, if only I could persuade Metabox to move to a 128-bit system, and can get
delivery of that on the other side of the great divide, my future will be in
computer heaven.



Oh, if you’re wondering what happened to the Asus? It is still on the kitchen
table in our house in the Philippines, in pieces, where I have no doubt it is
waiting for another rebuild! 


Chris Thomas

In Requiem 

03/05/1942 — 02/10/2024 

 





GUEST POST: "SOME THOUGHTS ON COMPUTERS", BY CHRIS DA KIWI

Sep. 21st, 2024 12:14 am
A friend of mine via the Ubuntu mailing list for the last couple of decades,
Chris is bedbound now and tells me he's in his final weeks of life. He shared
with me a piece he's written. I've lightly edited it before sharing it, and if
he's feeling up to it, there is some more he wants to say. We would welcome
thoughts and comments on it.



Some thoughts on Computers

 

The basic design of computers hasn't changed much since the mechanical one, the
Difference Engine, invented by Charles Babbage in 1822 – but not built until
1991. Alan Turing invented computer science, and the ENIAC in 1945 was arguably
the first electronic general-purpose digital computer. It filled a room. The
Micral N was the world's first “personal computer,” in 1973.

 

Since then, the basic design has changed little, other than to become smaller,
faster, and on occasions, less useful.

 

The current trend to lighter, smaller gadget-style toys – like cell phones,
watches, headsets of various types, and other consumer toys – is an indication
that the industry has fallen into the clutches of mainstream profiteering, with
very little real innovation now at all.

 

I was recently looking for a new computer for my wife and headed into one of the
main laptop suppliers, only to be met with row upon row of identical machines, at
various price points arrived at by that mysterious breed known as "marketers".
In fact, the only differences in the plastic on display were how much drive space
the engineers had fitted in, and how much RAM the machines had. Was the case a
pretty colour that appealed to the latest 10-year-old girl, or to the rugged
he-man hoping to make the school whatever-team? In other words, rows of blah.

 

Where was the excitement of the early Radio Shack "do-it-yourself" range: the
Sinclair ZX80, the Commodore 8-bits (PET and VIC-20), later followed by the C64?
What has happened to all the excitement and innovation? My answer is simple: the
great big clobbering machine known as "Big Tech".

 

Intel released its 8080 processor in 1974 and later followed up with variations
on a theme [PDF], eventually leading to the 80286, the 80386, the 80486 (getting
useful), and so on. All of these variations needed an operating system, which
was basically a variation of MS-DOS or, more flexibly, PC DOS. Games started to
appear, and some of them were quite good. But the main driver of the computer
was software.


In particular, word-processors and spreadsheets. 


At the time, my lost computer soul had found a niche in CP/M, which on looking
back was a lovely little operating system – but quietly disappeared into the
badlands of marketing. 


Lost and lonely I wandered the computerverse until I hooked up with Sanyo –
itself now long gone the way of the velociraptor and other lost prehistoric
species.
 

The Sanyo brought build quality, the so-called "lotus card" to make it fully
compatible with the IBM PC, and later, an RGB colour monitor and a 10 meg hard
drive. The basic model still had two 5¼" floppy drives, which they pushed up to
720kB, and later 3½" 1.25MB floppy drives. Ahead of its time, it too went
the way of the dinosaur.


These led to the Sanyo AT-286, which became a mainstay, along with the Commodore
64. A pharmaceutical company had developed a software system for pharmacies that
included stock control, ordering, and sales systems. I vaguely remember that
machine and software bundle was about NZ$ 15,000, which was far too rich for
most.


Then the computer landscape began to level out, as the component manufacturers
began to settle on the IBM PC-AT as a compatible, open-market model of computer
that met the Intel and DOS standards. Thus, the gradual slide into 100 versions
of mediocrity.


The consumer demand was for bigger and more powerful machines, whereas the
industry wanted to make more profits. A conflict to which the basic computer
scientists hardly seemed to give a thought.



I was reminded of Carl Jung's dictum: that “greed would destroy the West.” 


A thousand firms sprang up, all selling the same little boxes, whilst the
marketing voices kept trumpeting the bigger/better/greater theme… and the costs
kept coming down, as businesses became able to afford these machines, and head
offices began to control their outlying branches through the mighty computer. 


I headed overseas, to escape the bedlam, and found a spot in New Guinea – only
to be overrun by a mainframe run from Australia, which was going to run my
branch – for which I was responsible, but without any control.


Which side of the fence was I going to land on? The question was soon answered
by the Tropical Diseases Institute in Darwin, which diagnosed dengue fever… and
so I returned to NZ.


For months I battled this recurring malady, until I was strong enough to attend
a few hardware and programming courses at the local Polytechnic, eventually
setting up my own small computer business, building up 386 machines for resale,
followed by 486 and eventually a Texas Instrument laptop agency.


These ran well enough, but had little battery life, and although they were
rechargeable, they needed to be charged every two or three hours. At least the
WiFi worked pretty consistently, and for the road warrior, gave a point of
distinction.


[I think Chris is getting his time periods mixed up here. —Ed.]


Then the famous 686 arrived, and by the use of various technologies, RAM began
to climb up to 256MB, and in some machines 512MB.


Was innovation happening? No – just more marketing changes. As in, some machines
came bundled with software, printers or other peripherals, such as modems.



As we ended the 20th century, we bought bigger and more powerful machines. The
desktop was being chased by the laptop, until I stood at a long row of shiny
boxes that were basically all the same, wondering which one my wife would like…
knowing that it would have to connect to the so-called "internet", and in doing
so, make all sorts of decisions inevitable.


Eventually I chose a smaller Asus, with 16GB of main RAM and an nVidia card, and
retreating to my cottage, collapsed in despair. Fifty years of computing and
wasted innovation left her with a black box that, when she opened it, said
“HELLO” against a big blue background that promised the world – but only offered
more of the same. As in, a constant trickle of hackers, viruses, Trojans and
barely anything useful – but now including a new perversion called a chat-bot,
or “AI”.


I retired to my room in defeat.

 

We have had incremental developments, until we have today's latest chips from
Intel and AMD based on the 64-bit architecture first introduced around April
2003.

 

So where is the 128-bit architecture – or the 256 or the 512-bit?

 

What would happen if we got really innovative? I still remember Bill Gates
saying "Nobody will ever need more than 640k of RAM." And yet, it is common now
to buy machines with 8 or 16 or 32GB of RAM, because the poor quality of
operating systems fills the memory with poorly-written garbage that causes
memory leaks, stack-overflow errors and other memory issues.

 

Then there is Unix – or since the advent of Richard Stallman and Linus Torvalds,
GNU/Linux. A solid, basic series of operating systems, by various vendors, that
simply do what they are asked. 

 

I wonder where all this could head, if computer manufacturers climbed onboard
and developed, for example, a laptop with an HDMI screen, a rugged case with a
removable battery, a decent sound system, with a good-quality keyboard, backlit
with per-key colour selection. Enough RAM slots to boost the main memory up to
say 256GB, and video RAM to 64GB, allowing high speed draws to the screen
output.

 

Throw away the useless touch pads. With the advent of Bluetooth mice, they are
no longer needed. Instead, include an 8TB NVMe drive, then include a decent set
of controllable fans and heatpipes that actually keep the internal temperatures
down, so as not to stress the RAM and processors.


I am sure this could be done, given that some manufacturers, such as Tuxedo, are
already showing some innovation in this area. 


Will it happen? I doubt it. The clobbering machine will strike again.



Friday September 20th 2024 

 * Current Location: Dublin
 * Current Music: Hips don't Lie (feat. Wyclef Jean)

Tags:
 * guest post






TO A TILING WM USER, APPARENTLY OTHER GUIS ARE LIKE WEARING HANDCUFFS

Aug. 29th, 2024 09:56 am
 This is interesting to me. I am on the other side, and ISTM that the tiling WM
folks are the camp you describe.

Windows (2.01) was the 3rd GUI I learned. First was classic MacOS (System 6 and
early System 7.0), then Acorn RISC OS on my own home computer, then Windows.

Both MacOS and RISC OS have beautiful, very mouse-centric GUIs where you must
use the mouse for most things. Windows was fascinating because it has rich,
well-thought-out, rational and consistent keyboard controls, and they work
everywhere. In all graphical apps, in the window manager itself, and on the
command line.

-- Ctrl + a letter is a discrete action: do this thing now.

-- Alt + a letter opens a menu

-- Shift selects in a continuous range: Shift+cursors selects text, or files in
a file manager. Shift+mouse selects multiple icons in a block in a file
manager.

-- Ctrl + mouse selects discontinuously: pick disconnected icons.

-- These can be combined: shift-select a block, then press ctrl as well to add
some discontinuous entries.

-- Ctrl + cursor keys moves a word at a time (discontinuous cursor movement).

-- Shift + ctrl selects a word at a time.

In the mid-'90s, Linux made Unix affordable and I got to know it, and I switched
to it in the early '00s.

But it lacks that overall cohesive keyboard UI. Some desktops implement most of
Windows' keyboard UI (Xfce, LXDE, GNOME 2.x), some invent their own (KDE), many
don't have one.

The shell and editors don't have any consistency. Each editor has its own set of
keyboard controls, and some environments honour some of them -- but not many
because the keyboard controls for an editor make little sense in a window
manager. What does "insert mode" mean in a file manager?

They are keyboard-driven windowing environments built by people who live in
terminals and only know the extremely limited keyboard controls of the most
primitive extant shell environment, one that doesn't honour GUI keyboard UI
because it predates it and so in which every app invents its own.

Whereas Windows co-evolved with IBM CUA and deeply embeds it.

The result is that all the Linux tiling WMs I've tried annoy me, because they
don't respect the existing Windows-based keystrokes for manipulating windows.
GNOME >=3 mostly doesn't either: keystrokes for menu manipulation make little
sense when you've tried to eliminate menus from your UI.

Even the growing-in-trendiness MiracleWM annoys me, because the developer
doesn't use plain Ubuntu: he uses Kubuntu, and Kubuntu doesn't respect basic
Ubuntu keystrokes like Ctrl+Alt+T for a terminal, so neither does MiracleWM.

They are multiple non-overlapping, non-cohesive, non-uniform keyboard UIs
designed by and for people who never knew how to use a keyboard-driven whole-OS
UI because they didn't know there was one. So they all built their own ones
without knowing that there's 30+ years of prior art for this.

All these little half-thought-out attempts to build something that already
existed but its creators didn't know about it.

To extend the prisoners-escaping-jail theme:

Each only extends the one prisoner cell that inmate knew before they got out,
where the prison cell is an app -- often a text editor but sometimes it's one
game.

One environment lets you navigate by only going left or straight. To go right,
turn left three times! Simple!

One only lets you navigate in spirals, but you can adjust the size, and toggle
clockwise or anticlockwise.

One is like Asteroids: you pivot your cursor and apply thrust.

One uses Doom/Quake-style WASD + mouse, because everyone knows that, right? It's
the standard!

One expects you to plug in a joypad controller and use that.






BRING BACK DISTRO-WIDE THEMES!

Jul. 26th, 2024 06:20 pm
Someone on Reddit was asking about the Bluecurve theme on Red Hat Linux.

Back then, Red Hat Linux only offered KDE and GNOME, I think. The great thing
about Bluecurve was that they looked the same and both of them had the Red Hat
look.

Not any more. In recent years I've tried GNOME, Xfce, MATE, KDE, Cinnamon, and
LXQt on Fedora.

They all look different. They may have some wallpaper in common but that's it.
In any of them, there's no way you can glance from across a room (meaning, too
far away to read any text or see any logos) and go "oh, yeah, that's Fedora."

And on openSUSE, I tried all of them plus LXDE and IceWM. Same thing. Wallpaper
at best.

Same on Ubuntu: I regularly try all the main flavours, as I did here and they
all look different. MATE makes an effort, Unity has some of the wallpapers, but
that's about it.

If a vendor or project has one corporate brand and one corporate look, usually,
time and money and effort went into it. Into logos, colours, tints, gradients,
wallpaper, all that stuff.

It seems to me that the least the maintainers of different desktop flavours or
spins could do is adopt the official theme and make their remixes look like they
are the same OS from the same vendor.

I like Xfce. Its themes aren't great: many, if not most, make window borders so
thin you can't grab them to resize. Budgie is OK and looks colourful, but Ubuntu
Budgie does not look like Ubuntu.

Kubuntu looks like Fedora KDE looks like Debian with KDE looks like anything
with KDE, and to my eyes, KDE's themes are horrible, as they have been since KDE
1 -- yes I used 1.0, and liked it -- and only 3rd party distro vendor themes
ever made KDE look good.

Only 2 of them, really: Red Hat Linux with Bluecurve, and Corel LinuxOS and
Xandros.

Everyone else's KDE skins are horrible. All of them. It's one reason I can't use
KDE now. It almost hurts my eyes. (Same goes for TDE BTW.) It is nasty.

Branding matters. Distros all ignore it now. They shouldn't.

And someone somewhere should bring back Bluecurve, or failing that, port GNOME's
Adwaita to all the other desktops. I can't stand GNOME, but its themes and
appearance are the best of any distro in the West. (Some of the Chinese ones,
like Deepin and Kylin, are beautiful, but everyone's afraid they're full of
spyware for the Chinese Communist Party... and they might be right.)

 * Current Location: Douglas
 * Current Music: Barbie Girl

Tags:
 * repurposed comment






"COMPUTER DESIGNS, BACK THEN": THE STORY OF ARRA, THE FIRST DUTCH COMPUTER

Jul. 24th, 2024 05:49 pm
ARRA was the first ever Dutch computer.
 
There's an account of its creation entitled 9.2 Computers ontwerpen,
toen ("Computer Designs, then") by the late Carel S Scholten, but sadly for
Anglophone readers it's in Dutch.

This is a translation into English, done using ChatGPT 4o by Gavin Scott. I
found it readable and fun, although I have no way to judge how accurate it is.



C.S. Scholten



In the summer of 1947, I was on vacation in Almelo. Earlier that year, on the
same day as my best friend and inseparable study mate, Bram Jan Loopstra, I had
successfully passed the qualifying exams in mathematics and physics. The
mandatory brief introduction to the three major laboratories—the Physics
Laboratory, the V.d. Waals Laboratory, and the Zeeman Laboratory—was behind us,
and we were about to start our doctoral studies in experimental physics. For two
years, we would be practically working in one of the aforementioned
laboratories.

 

One day, I received a telegram in Almelo with approximately the following
content: "Would you like to assist in building an automatic calculating
machine?" For assurance, another sentence was added: "Mr. Loopstra has already
agreed." The sender was "The Mathematical Center," according to further details,
located in Amsterdam. I briefly considered whether my friend had already
confirmed my cooperation, but in that case, the telegram seemed unnecessary, so
I dismissed that assumption. Both scenarios were equally valid: breaking up our
long-standing cooperation (dating back to the beginning of high school) was
simply unthinkable. Furthermore, the telegram contained two attractive points:
"automatic calculating machine" and "Mathematical Center," both new concepts to
me. I couldn’t deduce more than the name suggested. Since the cost of a telegram
exceeded my budget, I posted a postcard with my answer and resumed my vacation
activities. Those of you who have been involved in recruiting staff will, I
assume, be filled with admiration for this unique example of recruitment
tactics: no fuss about salary or working hours, not to mention irrelevant
details like pension, vacation, and sick leave. For your reassurance, it should
be mentioned that I was indeed offered a salary and benefits, which, in our
eyes, were quite generous.

 

I wasn't too concerned about how the new job could be combined with the
mandatory two-year laboratory work. I believed that a solution had to be found
for that. And a solution was found: the laboratory work could be replaced by our
work at the Mathematical Center.

 

Upon returning to Amsterdam, I found out the following: the Mathematical Center
was founded in 1946, with a goal that could roughly be inferred from its name.
One of the departments was the 'Calculation Department,' where diligent young
ladies, using hand calculators—colloquially known as 'coffee
grinders'—numerically solved, for example, differential equations (in a later
stage, so-called 'bookkeeping machines' were added to the machinery). The
problems dealt with usually came from external clients. The head of the
Calculation Department was Dr. ir. A. van Wijngaarden. Stories about automatic
calculating machines had also reached the management of the Mathematical Center,
and it was clear from the outset that such a tool—if viable—could be of great
importance, especially for the Calculation Department. However, it was not
possible to buy this equipment; those who wanted to discuss it had to build it
themselves. Consequently, it was decided to establish a separate group under the
Calculation Department, with the task of constructing an automatic calculating
machine. Given the probable nature of this group’s activities, it was somewhat
an oddity within the Mathematical Center, doomed to disappear, if not after
completing the first machine, then certainly once this kind of tool became a
normal trade object.

 

We were not the only group in the Netherlands involved in constructing
calculating machines. As we later discovered, Dr. W.L. v.d. Poel had already
started constructing a machine in 1946.

 

Our direct boss was Van Wijngaarden, and our newly formed two-man group was
temporarily housed in a room of the Physics Laboratory on Plantage Muidergracht,
where Prof. Clay was in charge. Our first significant act was the removal of a
high-voltage installation in the room, much to the dismay of Clay, who was fond
of the thing but arrived too late to prevent the disaster. Then we thought it
might be useful to equip the room with some 220V sockets, so we went to
Waterlooplein and returned with a second-hand hammer, pliers, screwdriver, some
wire, and a few wooden (it was 1947!) sockets. I remember wondering whether we
could reasonably submit the exorbitant bill corresponding to these purchases.
Nonetheless, we did.

 

After providing our room with voltage, we felt an unpleasant sensation that
something was expected from us, though we had no idea how to start. We decided
to consult the sparse literature. This investigation yielded two notable
articles: one about the ENIAC, a digital (decimal) computer designed for
ballistic problems, and one about a differential analyzer, a device for solving
differential equations, where the values of variables were represented by
continuously variable physical quantities, in this case, the rotation of shafts.
The first article was abominably written and incomprehensible, and as far as we
understood it, it was daunting, mentioning, for instance, 18,000 vacuum tubes, a
number we were sure our employer could never afford. The second article (by V.
Bush), on the other hand, was excellently written and gave us the idea that such
a thing indeed seemed buildable.

 

Therefore, it had to be a differential analyzer, and a mechanical one at that.
As we now know, we were betting on the wrong horse, but first, we didn’t know
that, and second, it didn’t really matter. Initially, we were not up to either
task simply because we lacked any electronic training. We were supposed to
master electricity and atomic physics, but how a vacuum tube looked inside was
known only to radio amateurs among us, and we certainly were not. Our own
(preliminary) practicum contained, to my knowledge, no experiment in which a
vacuum tube was the object of study, and the physics practicum for medical
students (the so-called 'medical practicum'), where we had supervised for a year
as student assistants, contained exactly one such experiment. It involved a
rectifier, dated by colleagues with some training in archaeology to about the
end of the First World War. The accompanying manual prescribed turning on the
'plate voltage' only tens of seconds after the filament voltage, and the
students had to answer why this instruction was given. The answers were
sometimes very amusing. One such answer I won’t withhold from you: 'That is to
give the current a chance to go around once.'

 

Our first own experiment with a vacuum tube would not have been out of place in
a slapstick movie. It involved a triode, in whose anode circuit we included a
megohm resistor for safety. Safely ensconced behind a tipped-over table, we
turned on the 'experiment.' Unlike in a slapstick movie, nothing significant
happened in our case.

 

With the help of some textbooks, and not to forget the 'tube manuals' of some
manufacturers of these useful objects, we somewhat brushed up on our electronic
knowledge and managed to get a couple of components, which were supposed to play
a role in the differential analyzer, to a state where their function could at
least be guessed. They were a torque amplifier and a curve follower. How we
should perfect these devices so that they would work reliably and could be
produced in some numbers remained a mystery to us. The solution to this mystery
was never found. Certainly not by me, as around this time (January 1948), I was
summoned to military service, which couldn’t do without me. During the two years
and eight months of my absence (I returned to civilian life in September 1950),
a drastic change took place, which I could follow thanks to frequent contacts
with Loopstra.

 

First, the Mathematical Center, including our group, moved to the current
building at 2nd Boerhaavestraat 49. The building looked somewhat different back
then. The entire building had consisted of two symmetrically built schools.
During the war, the building was requisitioned by the Germans and used as a
garage. In this context, the outer wall of one of the gymnasiums was demolished.
Now, one half was again in use as a school, and the other half, as well as the
attic above both halves, was assigned to the Mathematical Center. The Germans
had installed a munitions lift in the building. The lift was gone, but the
associated lift shaft was not. Fortunately, few among us had suicidal
tendencies. The frosted glass in the toilet doors (an old school!) had long
since disappeared; for the sake of decorum, curtains were hung in front of them.

 

Van Wijngaarden could operate for a long time over a hole in the floor next to
his desk, corresponding with a hole in the ceiling of the room below
(unoccupied). Despite his impressive cigar consumption at that time, I didn’t
notice that this gigantic ashtray ever filled up.

 

The number of employees in our group had meanwhile expanded somewhat; all in
all, perhaps around five.

 

The most significant change in the situation concerned our further plans. The
idea of a differential analyzer was abandoned as it had become clear that the
future belonged to digital computers. Upon my return, a substantial part of such
a computer, the 'ARRA' (Automatische Relais Rekenmachine Amsterdam), had already
been realized. The main components were relays (for various logical functions)
and tubes (for the flip-flops that composed the registers). The relays were
Siemens high-speed relays (switching times in the order of a few milliseconds),
personally retrieved by Loopstra and Van Wijngaarden from an English war
surplus. They contained a single changeover contact (break-before-make), with
make and break contacts rigidly set, although adjustable. Logically appealing
were the two separate coils (with an equal number of windings): both the
inclusive and exclusive OR functions were within reach. The relays were mounted
on octal bases by us and later enclosed in a plastic bag to prevent contact
contamination.

 

They were a constant source of concern: switching times were unreliable
(especially when the exclusive OR was applied) and contact degradation occurred
nonetheless. Cleaning the contacts ('polishing the pins') and resetting the
switching times became a regular pastime, often involving the girls from the
Calculation Department. The setting was done on a relay tester, and during this
setting, the contacts were under considerable voltage. Although an instrument
with a wooden handle was used for setting, the curses occasionally uttered
suggested it was not entirely effective.

 

For the flip-flops, double triodes were used, followed by a power tube to drive
a sufficient number of relays, and a pilot lamp for visual indication of the
flip-flop state. Since the ARRA had three registers, each 30 bits wide, there must
have been about 90 power tubes, and we noted with dismay that 90 power tubes
oscillated excellently. After some time, we knew exactly which pilot lamp socket
needed a 2-meter wire to eliminate the oscillation.

 

At a later stage, a drum (initially, the instructions were read from a plugboard
via step switches) functioned as memory; for input and output, a tape reader
(paper, as magnetic tape was yet to be invented) and a teleprinter were
available. A wooden kitchen table served as the control desk.

 

Relays and tubes might have been the main logical building blocks, but they were
certainly not the only ones. Without too much exaggeration, it can be said that
the ARRA was a collection of what the electronic industry had to offer, a
circumstance greatly contributed to by our frequent trips to Eindhoven, from
where we often returned with some 'sample items.' On the train back, we first
reminisced about the excellent lunch we had enjoyed and then inventoried to
determine if we brought back enough to cover the travel expenses. This
examination usually turned out positive.

 

It should be noted that the ARRA was mainly not clocked. Each primitive
operation was followed by an 'operation complete' signal, which in turn started
the next operation. It is somewhat amusing that nowadays such a system is
sometimes proposed again (but hopefully more reliable than what we produced) to
prevent glitch problems, a concept we were not familiar with at the time.

 

Needless to say, the ARRA was so unreliable that little productive work could be
done with it. However, it was officially put into use. By mid-1952, this was the
case. His Excellency F.J. Th. Rutten, then Minister of Education, appeared at
our place and officially inaugurated the ARRA with some ceremony. For this
purpose, we carefully chose a demonstration program with minimal risk of
failure, namely producing random numbers à la Fibonacci. We had rehearsed the
demonstration so often that we knew large parts of the output sequence by heart,
and we breathed a sigh of relief when we found that the machine produced the
correct output. In hindsight, I am surprised that this demonstration did not
earn us a reprimand from higher-ups. Imagine: you are the Minister of Education,
thoroughly briefed at the Department about the wonders of the upcoming computing
machines; you attend the official inauguration, and you are greeted by a group
explaining that, to demonstrate these wonders, the machine will soon produce a
series of random numbers. When the moment arrives, they tell you with beaming
faces that the machine works excellently. I would have assumed that, if not with
the truth, at least with me, they were having a bit of fun. His Excellency
remained friendly, a remarkable display of self-control.

 

The emotions stirred by this festivity were apparently too much for the ARRA.
After the opening, as far as I recall, no reasonable amount of useful work was
ever produced. After some time, towards the end of 1952, we decided to give up
the ARRA as a hopeless case and do something else. There was another reason for
this decision. The year 1952 should be considered an excellent harvest year for
the Mathematical Center staff: in March and November of that year, Edsger
Dijkstra and Gerrit Blaauw respectively appeared on the scene. Of these two, the
latter is of particular importance for today's story and our future narrative.
Gerrit had worked on computers at Harvard, under the supervision of Howard
Aiken. He had also written a dissertation there and was willing to lend his
knowledge and insight to the Mathematical Center. We were not very compliant
boys at that time. Let me put it this way: we were aware that we did not have a
monopoly on wisdom, but we found it highly unlikely that anyone else would know
better. Therefore, the 'newcomer' was viewed with some suspicion. Gerrit’s
achievement was all the greater when he convinced us in a lecture of the
validity of what he proposed. And that was quite something: a clocked machine,
uniform building blocks consisting of various types of AND/OR gates and
corresponding amplifiers, pluggable (and thus interchangeable) units, a neat
design method based on the use of two alternating, separate series of clock
pulses, and proper documentation.

 

We were sold on the plan and got to work. A small difficulty had to be overcome:
what we intended to do was obviously nothing more or less than building a new
machine, and this fact encountered some political difficulties. The solution to
this problem was simple: formally, it would be a 'revision' of the ARRA. The new
machine was thus also called ARRA II (we shall henceforth speak of A II), but
the double bottom was perfectly clear to any visitor: the frames of the two
machines were distinctly separated, with no connecting wire between them.

 

For the AND/OR gates, we decided to use selenium diodes. These usually arrived
in the form of selenium rectifiers, a sort of firecrackers of varying sizes,
which we dismantled to extract the individual rectifier plates, about half the
diameter of a modern-day dime. The assembly—the selenium plates couldn't
tolerate high temperatures, so soldering was out of the question—was as follows:
holes were drilled in a thick piece of pertinax. One end of the hole was sealed
with a metal plug; into the resulting pot hole went a spring and a selenium
plate, and finally, the other end of the hole was also sealed with a metal plug.
For connecting the plugs, we thought the use of silver paint was appropriate,
and soon we were busy painting our first own circuits. Some time later, we had
plenty of reasons to curse this decision. The reliability of these connections
was poor, to put it mildly, and around this time, the 'high-frequency hammer'
must have been invented: we took a small hammer with a rubber head and rattled
it along the handles of the units, like a child running its hand along the
railings of a fence. It proved an effective means to turn intermittent
interruptions into permanent ones. I won't hazard a guess as to how many
interruptions we introduced in this way. At a later stage, the selenium diodes
were replaced by germanium diodes, which were simply soldered.

 

The AND/OR gates were followed by a triode amplifier and a cathode follower.
ARRA II also got a drum and a tape reader. For output, an electric typewriter
was installed, with 16 keys operable by placing magnets underneath them. The
decoding tree for these magnets provided us with the means to build an
echo-check, and Dijkstra fabricated a routine where, simultaneously with
printing a number, the same number (if all went well) was reconstructed. I
assume we thus had one of the first fully controlled print routines.
Characteristic of ARRA II’s speed was the time for an addition: 20 ms (the time
of a drum rotation).

 

ARRA II came into operation in December 1953, this time without ministerial
assistance, but it performed significantly more useful work than its
predecessor, despite the technical difficulties outlined above.

 

The design phase of ARRA II marks for me the point where computer design began
to become a profession. This was greatly aided by the introduction of uniform
building blocks, describable in a multidimensional binary state space, making
the use of tools like Boolean algebra meaningful. We figured out how to provide
ARRA II with signed multiplicative addition for integers (i.e., an operation of
the form (A,S) := (M) * (±S') + (A), for all sign combinations of (A), (S), and
(M) before and of the result), despite the fact that ARRA II had only a counter
as wide as a register. As far as I can recall, this was the first time I devoted
a document to proving that the proposed solution was correct. Undoubtedly, the
proof was in a form I would not be satisfied with today, but still... It worked
as intended, and you can imagine my amusement when, years later, I learned from
a French book on computers that this problem was considered unsolvable.

 

In May 1954, work began on a (slightly modified) copy of ARRA II, the FERTA
(Fokker's First Calculating Machine Type A), intended for Fokker. The FERTA was
handed over to Fokker in April 1955. This entire affair was mainly handled by
Blaauw and Dijkstra. Shortly thereafter, Blaauw left the service of the
Mathematical Center.

 

In June 1956, the ARMAC (Automatic Calculating Machine Mathematical Center),
successor to ARRA II, was put into operation, several dozen times faster than
its predecessor. Design and construction took about 1½ years. Worth mentioning
is that the ARMAC first used cores, albeit on a modest scale (in total 64 words
of 34 bits each, I believe). For generating the horizontal and vertical
selection currents for these cores, we used large cores. To drive these large
cores, however, they had to be equipped with a coil with a reasonable number of
windings. Extensive embroidery work didn’t seem appealing to us, so the
following solution was devised: a (fairly deep) rim was turned from transparent
plastic. Thus, we now had two rings: the rim and the core. The rim was sawed at
one place, and the flexibility of the material made it possible to interlock the
two rings. Then, the coil was applied to the rim by rotating it from the outside
using a rubber wheel. The result was a neatly wound coil. The whole thing was
then encased in Araldite. The unintended surprising effect was that, since the
refractive indices of the plastic and Araldite apparently differed little, the
plastic rim became completely invisible. The observer saw a core in the Araldite
with a beautifully regularly wound coil around it. We left many a visitor in the
dark for quite some time about how we produced these things!

 

The time of amateurism was coming to an end. Computers began to appear on the
market, and the fact that our group, which had now grown to several dozen
employees, did not really belong in the Mathematical Center started to become
painfully clear to us. Gradual dissolution of the group was, of course, an
option, but that meant destroying a good piece of know-how. A solution was found
when the Nillmij, which had been automating its administration for some time
using Bull punch card equipment, declared its willingness to take over our group
as the core of a new Dutch computer industry. Thus it happened. The new company,
N.V. Elektrologica, was formally established in 1956, and gradually our group’s
employees were transferred to Elektrologica, a process that was completed with
my own transfer on January 1, 1959. As the first commercial machine, we designed
a fully transistorized computer, the X1, whose prototype performed its first
calculations at the end of 1957. The speed was about ten times that of the
ARMAC.

 

With this, I consider the period I had to cover as concluded. When I confront my
memories with the title of this lecture, it must be said that 'designing
computers' as such hardly existed: the activities that could be labeled as such
were absorbed in the total of concerns that demanded our attention. Those who
engaged in constructing calculating machines at that time usually worked in very
small teams and performed all the necessary tasks. We decided on the
construction of racks, doors, and closures, the placement of fans (the ARMAC
consumed 10 kW!), we mounted power distribution cabinets and associated wiring,
we knew the available fuses and cross-sections of electrical cables by heart, we
soldered, we peered at oscillographs, we climbed into the machine armed with a
vacuum cleaner to clean it, and, indeed, sometimes we were also involved in
design.

 

We should not idealize. As you may have gathered from the above, we were
occasionally brought to the brink of despair by technical problems. Inadequate
components plagued us, as did a lack of knowledge and insight. This lack existed
not only in our group: globally, the field was not yet mastered.

 

However, it was also a fascinating time, marked by a constant sense of 'never
before seen,' although that may not always have been literally true. It was a
time when organizing overtime, sometimes lasting all night, posed no problem. It
was a time when we knew a large portion of the participants in international
computer conferences at least by sight!



 * Link
 * 5 comments
 * Reply





ANOTHER DAY, ANOTHER PAEAN OF PRAISE FOR THE AMIGA'S 1980S PRE-EMPTIVE
MULTITASKING GUI

Mar. 24th, 2024 11:45 am
Yes, the Amiga offered a GUI with pre-emptive multitasking, as early as 1985 or
so. And it was affordable: you didn't even need a hard disk.


The thing is, that's only part of the story.

There's a generation of techies who are about 40 now who don't remember this
stuff well, and some of the older ones have forgotten with time but don't
realise. I had some greybeard angrily telling me that floppy drives were IDE
recently. Senile idiot.

Anyway.

Preemptive multitasking is only part of the story. Lots of systems had it.
Windows/386 2.x could do preemptive multitasking -- but only of DOS apps, and
only in the base 640kB of RAM, so it was pretty useless.

It sounds good, but on its own it's not enough, because the other key ingredient
is memory protection. You need both, together, to have a compelling deal. The
Amiga and Windows 2.x/3.x only had the preemption part; they had no hardware
memory management or protection to go with it. (Windows 3.x, when running in 386
Enhanced Mode with more than 2MB of RAM, could do a little of this for DOS apps,
but not much.)

Having multiple pre-emptive tasks is relatively easy if they all share one memory
space, but it's horribly, horribly unstable.

Also see: microkernels. In size terms, AmigaOS was a microkernel, but a
microkernel without memory protection is not such a big deal. The hard part of a
microkernel is the interprocess communication, and if processes can just do that
by reading and writing each other's RAM, it's trivially easy -- but also
trivially insecure and trivially unstable.

RISC OS had pre-emptive multitasking too... but only of text-only command-line
windows, and there were few CLI RISC OS apps so it was mostly useless. At least
on 16-bit Windows there were lots of DOS apps so it was vaguely useful, if
they'd fit into memory. Which only trivial ones would. Windows 3 came along very
late in the DOS era, and by then, most DOS apps didn't fit into memory even on
their own, one at a time. I made good money optimising DOS memory around
1990-1992, because I was very good at it, and without it most DOS apps no longer
fitted into 500-550kB. So two of them in 640kB? Forget it.

Preemption is clever. It lets apps that weren't designed to multitask do it.

But it's also slow, which is why RISC OS didn't do it for its GUI. Cooperative
multitasking is much quicker, which is also why OSes like RISC OS and 16-bit
Windows chose it for their GUI apps: GUI apps strained the resources of
late-1980s/very-early-1990s computers. So you had two choices (there's a toy
sketch of the scheduling difference after this list):

• The Mac and GEM way: don't multitask at all.

• The 16-bit Windows and RISC OS way: multitask cooperatively, and hope nothing
goes wrong.
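
To make the scheduling half of this concrete (and only the scheduling half:
neither model below gives you memory protection), here's a toy sketch in Python,
written for this post rather than taken from any of those OSes. The cooperative
loop only works while every task politely yields; the threads get interrupted by
the interpreter whether they cooperate or not.

```python
# Toy illustration only -- this is not AmigaOS, RISC OS or Windows code.
import threading

# Cooperative: each task runs until it *chooses* to yield. One task that
# never yields stalls all the others -- the classic co-op failure mode.
def cooperative(tasks):
    ready = list(tasks)
    while ready:
        task = ready.pop(0)
        try:
            next(task)          # run until the task's next `yield`
            ready.append(task)  # it yielded politely; requeue it
        except StopIteration:
            pass                # task finished

def polite(name):
    for i in range(3):
        print(f"{name} step {i}")
        yield                   # voluntarily give the CPU back

# Preemptive: the scheduler interrupts tasks on its own schedule. Python
# threads are preempted by the interpreter every few milliseconds, so the
# tasks need no yield points at all.
def busy(name):
    for i in range(3):
        print(f"{name} step {i}")
        sum(range(200_000))     # CPU-bound work, no voluntary yield

if __name__ == "__main__":
    cooperative([polite("A"), polite("B")])
    threads = [threading.Thread(target=busy, args=(n,)) for n in ("C", "D")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```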

Later, notably, MacOS 7-8-9 and the Atari Falcon's MultiTOS/MiNT/MagiC etc.
added cooperative multitasking to single-tasking GUI OSes. I used MacOS 8.x and
9.x a lot and I really liked them. They were extraordinarily usable, to an
extent that Mac OS X has never matched and never will.

But the good thing about owning a Mac in the 1990s was that at least one thing
in your life was guaranteed to go down on you every single day.               

(Repurposed from a HN comment.)
 
 

 * Link
 * 12 comments
 * Reply





I WAS A HACKINTOSHER

Mar. 20th, 2024 07:21 pm

I can’t speak for anyone else but I can tell you why I did it.

I was broke, and I knew PCs, Macs and Mac OS X – I had run OS X 10.0, 10.1 and
10.2 on a PowerMac 7600 using XPostFacto.

I got the carcase of a Core 2 Extreme PC on my local Freecycle group in 2012.

https://twitter.com/lproven/status/257060672825851904

RAM, no hard disks, no graphics, but case/mobo/CPU/PSU etc.

I took the nVidia card and hard disks from my old Athlon XP. I got the machine
running, and thought it was worth a try since it was mostly Intel: Intel
chipset, Intel CPU, etc.

I joined some fora, did some reading, used Clover and some tools from TonyMacX86
and so on.

After two days’ work it booted. I got no sound from my SoundBlaster card, so I
pulled it, turned the motherboard sound back on, and reinstalled.

It was a learning experience but it worked very well. I ran Snow Leopard on it,
as it was old enough to get no new updates that would break my Hack, but new
enough that all the modern browsers and things worked fine. (2012 was the year
Mountain Lion came out, so I was 2 versions behind, which suited me fine – and
it ran PowerPC apps, and I preferred the UI of the PowerPC version of MS Word,
my only non-freeware app.)

I had 4 CPU cores, it was maxed out with 8GB RAM, and it was nice and quick. As
it was a desktop, I disabled all support for sleep and hibernation: I turn my
desktops off at night to save power. It drove a matched pair of 21” CRT monitors
perfectly smoothly. I had an Apple Extended keyboard on an ADB-to-USB convertor
since my PS/2 ports weren’t supported.

It wasn’t totally reliable – occasionally it failed to boot, but a power cycle
usually brought it back. It was fast and pretty stable, it ran all the OS X FOSS
apps I usually used, it was much quicker than my various elderly PowerMacs and
the hardware cost was essentially £0.

It was more pleasant to use than Linux – my other machines back then ran the
still-somewhat-new Ubuntu, using GNOME 2 because Unity hadn’t gone mainstream
yet.

Summary: why not? It worked, it gave me a very nice and perfectly usable desktop
PC for next to no cost except some time, it was quite educational, and the
machine served me well for years. I still have it in a basement. Sadly its main
HDD is not readable any more.

It was fun, interesting, and the end result was very usable. At that time there
was no way I could have afforded to buy an Intel Mac, but a few years, one
emigration and 2 new jobs later, I did so: a 2011 i5 Mac mini which is now my
TV-streaming box, but which I used as my main machine until 2017 when I bought a
27” Retina iMac from a friend.

Cost, curiosity, learning. All good reasons in my book.

This year I Hacked an old Dell Latitude E7270, a Core i7 machine maxed out with
16GB RAM – with Big Sur because its Intel GPU isn’t supported in the Monterey I
tried at first. It works, but its wifi doesn’t, and I needed to buy a USB wifi
dongle. But performance wasn’t great, it took an age to boot with a lot of scary
text going past, and it didn’t feel like a smooth machine. So, I pulled its SSD
and put a smaller one in, put ChromeOS Flex on it, and it’s now my wife’s main
computer. Fast, simple, totally reliable, and now I have a spare wifi dongle. :-/
I may try it on one of my old Thinkpads next.

It is much easier to Hackintosh a PC today than it was 10-12 years ago, but
Apple is making the experience less rewarding, as is their right. They are a
hardware company.

(Repurposed from a Lobsters comment.)

 * Current Location: Douglas
 * Current Music: Server fans

Tags:
 * hackintosh,
 * mac,
 * mac os x,
 * macos


 * Link
 * 0 comments
 * Reply





FOSDEM 2024

Feb. 5th, 2024 06:39 pm
I am travelling onwards from Brussels as I write.

I did 2 talks this year. One panel, with my ex-SUSE colleague Markus Feilner,
and one solo talk.

The panel was called:
RHEL and CentOS and the growth of openwashing in FOSS.

There were no slides but I think there will be video very soon.

My solo talk was called:

One way forward: finding a path to what comes after Unix.

Now with added slides, notes and script!

There should be video soon.

This link should let you see the script. Warning, it's an MS Word outline and
you need outline view for it to render properly. Neither Google Docs nor
LibreOffice can do this.

This is the slide deck. (LibreOffice 24.02 format.)

And this is the slide deck with speaker's notes.

UPDATE: I've moved the files to Dropbox for slightly easier public sharing.
Please let me know if they still don't work.

 * Current Location: ICE train to Berlin

Tags:
 * fosdem,
 * slides,
 * talk


 * Link
 * 3 comments
 * Reply





WHAT MAKES A LINUX DISTRO LIGHT?

Nov. 15th, 2023 02:14 pm
This is an extremely broad question and it needs a tonne of context to give an
unambiguous answer.

 * For what role?
   
   * Server...
     
     * Web server?
     
     * File server?
     
     * Print server?
     
     * Router/firewall?
   
   * Desktop?
     
     * General purpose desktop?
     
     * Gaming desktop?
     
     * Emergency recovery desktop?
     
     * App-specific desktop?

Alpine is lightweight because almost nothing is pre-configured for you and you
must DIY... but that said, its origins are as a router distro repurposed to be
general-purpose. It also uses a different libc, which is a huge change: every
single app has to be recompiled to work with musl libc instead of glibc.
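
One quick way to see how deep the libc difference goes: a dynamically linked
Linux binary has the path of its loader baked into it, and that path names the
libc. Here's a rough sketch (my own illustration, assuming a 64-bit
little-endian ELF on Linux) that just reads the PT_INTERP header; on a glibc
distro you'll typically see something like /lib64/ld-linux-x86-64.so.2, on
Alpine something like /lib/ld-musl-x86_64.so.1.

```python
# Rough sketch: print the ELF interpreter (dynamic loader) path of a binary.
# Assumes a 64-bit, little-endian ELF, which covers ordinary x86-64 Linux
# binaries; statically linked binaries have no PT_INTERP at all.
import struct
import sys

PT_INTERP = 3  # program-header type for the "interpreter" (loader) path

def elf_interpreter(path):
    with open(path, "rb") as f:
        header = f.read(64)                               # ELF64 header is 64 bytes
        if header[:4] != b"\x7fELF":
            return None
        e_phoff, = struct.unpack_from("<Q", header, 32)   # program header table offset
        e_phentsize, e_phnum = struct.unpack_from("<HH", header, 54)
        for i in range(e_phnum):
            f.seek(e_phoff + i * e_phentsize)
            ph = f.read(e_phentsize)
            p_type, = struct.unpack_from("<I", ph, 0)
            if p_type == PT_INTERP:
                p_offset, = struct.unpack_from("<Q", ph, 8)
                p_filesz, = struct.unpack_from("<Q", ph, 32)
                f.seek(p_offset)
                return f.read(p_filesz).rstrip(b"\x00").decode()
    return None

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "/bin/ls"
    print(elf_interpreter(target) or "no PT_INTERP (statically linked?)")
```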

OpenWrt is lightweight because it's dedicated to running on routers.

CBL-Mariner is lightweight because it's only for certain niche server VMs.

antiX is lightweight because it's a general-purpose graphical desktop but
ruthlessly purged of heavyweight components, all replaced with the smallest
lightest-weight alternatives.

Raspberry Pi Desktop is lightweight because it's an x86 version of a brutally
pared-down Debian originally meant for a single-core Arm computer with 512MB of
RAM.

Bodhi Linux is lightweight because it's Ubuntu but with all the desktop stuff
removed, replaced with a forked old version of a very lightweight window manager
and almost nothing else. Any functionality you want you must install.

Lots of different answers, lots of different use cases, lots of different
strategies.

This is not a "yes/no" question. It's complex and nuanced.

Debian is not lightweight. Its strapline is "the universal operating system".
It's a Swiss Army knife that can do anything and that's part of its definition.

You can make a lightweight install of it if you know what you're doing, but just
ticking the box for a lightweight desktop in the installer is not doing that.

Comparison: you see a lightweight sports motorcycle. It's green. You buy a
Harley and paint it green and say "look mine is a lightweight sports bike now!"

Devuan is just Debian with systemd removed and openrc or sysvinit in its place.
This is not a big sweeping change. It's equivalent to looking at the sports
bike, seeing it has Bridgestone tyres instead of Dunlop, and swapping the tyres
on the Harley to Bridgestone tyres.

It is a trivial change compared to a libc change. It's routine maintenance to
change your tyres. You need to do it regularly anyway. It doesn't need the bike
to be rebuilt.

It's not easy. It takes hours and skills and tools and so on but it's not
sweeping.

Devuan has rebuilt a tonne of packages to remove dependencies on systemd, and
that's not trivial, but it's still Debian. By and large you can download any
Debian package and install it and it'll just work, because most things never
interact with the init daemon, so it doesn't make a big difference.

A Swiss Army knife with a different axle that pivots a bit more smoothly and
with less force is still a Swiss Army knife and only a knife expert will be able
to even tell the difference.

It doesn't make it into a super-slim lightweight knife, like -- I know nothing
about knives -- something like this.

You could disassemble a Victorinox and rebuild it into something like that but
it's really hard and an amateur will end up with a broken pile of bits.

So the fact that people build lightweight distros out of Debian doesn't mean
Debian is lightweight or that you can do it yourself. Think about it: if it was
easy, lightweight remixes wouldn't exist! There'd be no point.

How do you tell if it's lightweight or not?

Look at how big the ISO file you download is.

4-5GB is big.

2-3GB is typical.

<2GB is small.

~1GB is tiny.

Run df -h and look at how much disk space it takes. Much the same applies.

Run free -h on a newly-booted machine and look at how much RAM it's using.

200MB is light in 2023.

Under 0.5GB is good.

0.75GB is OK.

Over 1GB is typical.
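
If you want to automate the last two checks, here's a rough sketch of a script
with the thresholds above hard-coded (my own, Linux-only because it reads
/proc/meminfo, and the RAM figure only means much on a freshly booted machine):

```python
#!/usr/bin/env python3
# Rough sketch: report installed size and RAM use against the thresholds above.
import shutil

def meminfo_kb():
    """Parse /proc/meminfo into a dict of values in kB."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])
    return info

def main():
    disk = shutil.disk_usage("/")
    print(f"Root filesystem used: {disk.used / 2**30:.1f} GB")

    mem = meminfo_kb()
    # MemAvailable needs a reasonably modern kernel; fall back to MemFree.
    available = mem.get("MemAvailable", mem["MemFree"])
    used_mb = (mem["MemTotal"] - available) / 1024
    print(f"RAM in use: {used_mb:.0f} MB")

    if used_mb <= 200:
        verdict = "light (by 2023 standards)"
    elif used_mb <= 512:
        verdict = "good"
    elif used_mb <= 768:
        verdict = "OK"
    else:
        verdict = "typical, i.e. not lightweight"
    print(f"Verdict on RAM use: {verdict}")

if __name__ == "__main__":
    main()
```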


 * Link
 * 0 comments
 * Reply





TIL THAT SOME PEOPLE CAN'T REMEMBER THE DIFFERENCE BETWEEN THE 386 & 486

Nov. 15th, 2023 10:50 am
I suppose it was a long time ago.

So...



The 80386DX was the first x86 CPU to be 32-bit and have an on-chip MMU. And
nothing else: no cache, no FPU.

The FPU was a discrete part, the 80387DX.

Because OS/2 1.x didn't support the 80386, and so couldn't run DOS apps well,
and so flopped, the 16-bit 80286 kept selling well. It ran DOS fast and it could
run Windows 2/286 and Windows 3 in Standard Mode which was good enough. It could
only address 16MB of RAM, but that much RAM was fantastically expensive anyway,
and it was more than enough for DOS and Windows 3.

So, because DOS still ruled, Intel made a cost-reduced version of the 80386DX,
the 80386SX. This had a 16-bit data bus, so it could use cheaper 16-bit
motherboards and 16-bit wide RAM, still limited to a max of 16MB. Still enough.

That needed a maths copro for hardware floating point, too: a different part,
the 80387SX.

Then Windows 3 came along, which was also good enough, and started a move in PC
apps to GUIs. Windows 3.1 (1992) was better still.

So Intel had a 2nd go at the 32-bit chip market with the 80486, marketed as the
"486". This integrated a better 386DX-compatible CPU core with a few extra
instructions, complete with MMU, plus a 387-style FPU, plus a small amount of L1
cache, all onto one die.

But it was expensive, and didn't sell well.

Also, all the 3rd party x86 makers leapt on the bandwagon and integrated the
extra instructions into 16-bit bus 386SX compatible chips and branded them as
486s: the Cyrix and IBM "486slc" for instance. This ate into sales of the real
486.

So Intel came up with an ethically very dodgy, borderline-scam move: it shipped
486s with the FPU disabled, calling them the "486SX", and rebranded the full
part the "486DX", reusing the SX/DX labels that had distinguished the
32-bit-bus models of 386 from the 16-bit-bus ones.

People don't understand stuff like bus widths or part numbers, as your post
demonstrates, and I mean no offense. They don't.

So now there was a new model of 486, the 486SX with a disabled FPU, and the
486DX with it still turned on.

The "SX" model needed a new motherboard with a 2nd CPU socket that accepted a
"floating point co-processor", called the "487", which was nothing of the kind.
The "SX" was a lie and so was the "487 copro". The 487 was a 2nd complete 486
chip that disabled the original and took over.

But it reused the older branding, which is what you've remembered.

Later, briefly, Intel made a cheaper cost-reduced 486SX with a smaller die with
no FPU present, but not many of them. The clock-doubled 486DX2 took over quite
quickly and killed the 486DX and 486SX market.

Some commentators speculated that the 486SX vs 486DX marketing move allowed
Intel to sell defective 486s in which the FPU didn't work, but if so, those were
a tiny, tiny number: a rounding error.



 
 * Current Location: Douglas
 * Current Music: 6music

Tags:
 * 286,
 * 287,
 * 386,
 * 387,
 * 486,
 * chips,
 * hardware,
 * intel


 * Link
 * 2 comments
 * Reply





ANTI-ALIASING AND SUBPIXEL ALLOCATION AND HOW IT'S ALL GOING AWAY

Oct. 30th, 2023 11:31 am
There used to be multiple ways to try to smooth out text on screen. Most are no
longer relevant.

XP used several such methods quite heavily, such as font antialiasing and
hinting.

1. Font antialiasing: using grey pixels that are not 100% on/off to soften the
edges of screen fonts on portions of letters that are not vertical or
horizontal, and therefore are stepped ("aliased"). First OS to do this was Acorn
RISC OS in the 1980s. By the mid-1990s Windows started to catch up using
greyscale pixels.

2. Subpixel allocation (used heavily in Windows in the 2000s). This takes
advantage of knowledge of the display screen type, acquired through more
sophisticated (that is, bigger and slower) display systems. Pixels are not 1
colour dot. They are a group of a red dot, a green dot, and a blue dot, working
together.

CRT displays used triangular or other shaped grids of R/G/B pixels. Sony
Trinitron CRTs used stripes of R/G/B pixels because their [shadow
mask](https://en.wikipedia.org/wiki/Shadow_mask) was a grille not a grid of
holes. The then-new active-matrix colour LCDs had R/G/B subpixels turned on and
off individually (that's what "active matrix" means: one transistor per pixel,
as opposed to one transistor per row and one per column, where the intersection
goes on or off).

Subpixel allocation is antialiasing using the R, G and B subpixels, where the
arrangement of the individual colour dots is known to the renderer. It is
colour antialiasing, but it doesn't work right if, for example, the pixels are:

`R G B`

But the renderer thinks they are

`B G R`
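
Here's a toy sketch of the difference (my own illustration, not any OS's actual
rasteriser): one row of a glyph edge is sampled at 3× horizontal resolution,
then rendered either as grey levels or as per-channel values. Feed the subpixel
renderer the wrong panel order and the partial values land on the wrong physical
dots, which you see as colour fringing.

```python
# Toy illustration, not a real rasteriser. Coverage for one row of a glyph
# edge, sampled at 3x horizontal resolution: 1.0 = ink, 0.0 = background.
coverage = [1.0, 1.0, 1.0, 1.0, 0.7, 0.2, 0.0, 0.0, 0.0]

def greyscale(cov):
    """Classic antialiasing: each pixel becomes one grey level, the average
    of its three sub-samples."""
    return [round(sum(cov[i:i + 3]) / 3, 2) for i in range(0, len(cov), 3)]

def subpixel(cov, assumed_order="RGB"):
    """Subpixel rendering: each sub-sample drives one colour channel, mapped
    left-to-right in the order the renderer *believes* the panel uses.
    Returns (R, G, B) tuples as a framebuffer would store them."""
    pixels = []
    for i in range(0, len(cov), 3):
        by_channel = dict(zip(assumed_order, cov[i:i + 3]))
        pixels.append(tuple(round(by_channel[c], 2) for c in "RGB"))
    return pixels

print("greyscale:            ", greyscale(coverage))
print("renderer assumes RGB: ", subpixel(coverage, "RGB"))
print("renderer assumes BGR: ", subpixel(coverage, "BGR"))
# On a real RGB panel, the BGR-assuming output drives the brightest partial
# value onto the wrong physical dot: visible colour fringes at the edge.
```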

3. Running displays at non-native resolutions used to break this. CRTs do not
have a single fixed native res. LCDs all do. Standard definition (SD) LCDs have,
for example, 1024×768 pixels, or 1280×1024. Later, hi-def (HD) ones have more
than you can easily see, e.g. my iMac's 2nd screen is 2560×1440 pixels on a 27"
LCD. You'd often run this at 1.5× or so zoom.

Now, HiDPI displays have way more than even that. My iMac's built in screen is
27" but 5120×2880.

You can't use a HiDPI ("retina") screen at its native res: letters would be too
small to see. They always run at 2× or 2.5× or even weird ratios like 3.33×.

There is no longer a linear relationship between pixels the display has and
pixels you see, because they're too small to see any more. So it's OK to have a
resolution that means 6 pixels per dot in places and 7 in others because you
can't see individual pixels except with a [hand
lens](https://extension.psu.edu/a-brief-guide-to-hand-lenses) so you don't see
fringing and aliasing.
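
As a quick illustration of the "some dots are wider than others" point, here's a
tiny sketch (mine, not how any compositor actually does it) mapping logical
pixels to physical pixel columns at a 3.33× scale: some logical pixels come out
3 physical pixels wide, others 4, and on a HiDPI panel you simply can't see the
difference.

```python
# Toy sketch: at a non-integer scale factor, logical pixels map to uneven
# numbers of physical pixels -- invisible when the physical pixels are tiny.
scale = 3.33
for logical in range(8):
    start = round(logical * scale)
    end = round((logical + 1) * scale)
    print(f"logical pixel {logical}: physical columns {start}..{end - 1} "
          f"({end - start} wide)")
```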

It is meaningful to talk about having no subpixel smoothing or hinting on an SD
screen where you can see pixels, but it's all more or less gone away now,
because it's all programmatically curves on pixel-oriented displays where there
are no visible pixels any more.

Ever since the iPhone 6 series, AFAIK, iOS devices *never* run at 1:1 native
res _or_ simple 1:x integer ratios. The actual display res is almost unrelated
to the lower res the OS runs the screen at, and there's never a linear
relationship between logical display pixels and actual hardware screen dots.

So the 1990s stuff about antialiasing has gone away now. At a microscopic scale
it's all blurred out but the blurring is at such a small scale the human eye
can't see it. So the OS UI options to turn on greyscale antialiasing or RGB
subpixel allocation have gone away.

 * Link
 * 0 comments
 * Reply





DELL PRECISION 420 WITH RED HAT LINUX (PERSONAL COMPUTER WORLD • SEPTEMBER 2000)

Sep. 10th, 2023 12:57 pm

Found an old article of mine online. I think this might have been the first
review of a machine preinstalled with Linux from a major manufacturer in the UK.

EXCLUSIVE

Linux’s growing popularity gets a boost as Dell entrusts its latest high-end
workstation to the OS.

A sure sign of Linux’s growing popularity is that vendors are starting to offer
it as a pre-installed OS. Until recently, this has largely been confined to
specialist Linux system builders such as Penguin Computing, Digital Networks UK
or the large US company VA Linux Computing. Now, though, mainstream corporate
vendors are starting to preload Linux and Dell is one of the first to deliver.

The Precision Workstation 420 is a high-end workstation system. The midi-tower
case can be opened without tools and internal components, such as the PSU and
drive cage, can be released with latches and swung out on hinges for access to
thei840-based motherboard. This supports dual Pentium III processors running at
up to 1GHz and up to four RIMMs; the review machine had two 64MB modules for
128MB of dual-channel RDRAM.

The highly-integrated motherboard includes Cirrus Logic sound, 3Com Fast
Ethernet and Adaptec Ultra2 Wide LVD SCSI controllers. The only expansion card
fitted is a Diamond nVidia TNT2 32MB graphics adaptor driving a flat-screen 19in
Dell UltraScan Trinitron monitor, leaving the four 32-bit PCI slots and one
PCI/RAID port free.

Internal components include an 866MHz Pentium III with a 133MHz front-side bus
(FSB) and full-core-speed 256KB secondary cache, an 18GB Quantum Atlas Ultra2
SCSI hard disk, LiteOn 48-speed ATAPI CD and an ATAPI Iomega Zip250 drive.

It is certainly a powerful and expandable high-end workstation with very few
corners cut. However, all-SCSI storage might be preferable, and the 3D card,
while ideal for gamers, is somewhat wasted in business use.

However, the real interest lies in the operating system installed: Red Hat Linux
6.1. (Since this machine was supplied, Dell has upgraded this to Red Hat 6.2.)
When appropriately configured with a GUI desktop, Linux isn’t much harder to use
than Windows or any other graphical OS; the hardest part is often getting it
installed. Buying a pre-configured system is therefore attractive, as the vendor
does this for you, but what matters is how well the job is done.

The system boots into the Linux loader, LILO, offering a choice of kernels — the
default multiprocessor one and one for single-processor machines. Choosing
either takes you straight into X and the GNOME login screen. There’s only one
pre-configured user account, root, with no password. Logging in as root reveals
a standard GNOME default desktop, but with Dell-logo wallpaper. The installation
is largely a default Red Hat one with some minor tweaks, such as the AfterStep
window manager offered as an alternative to Enlightenment.

Most of the system’s hardware was correctly configured. XFree86 was correctly
set up for the graphics card with a default resolution of 1,024 x 768, the SCSI
controller, Ethernet, Zip and CD-ROM devices were all configured, and TCP/IP was
set to auto-configure itself using DHCP. Red Hat’s linuxconf tool made it easy
to check and adjust the various parameters, and a Dell directory of drivers and
basic documentation was provided on the hard disk to accompany a slim paper
manual introducing Red Hat Linux. One area where Linux is more complex than
Windows is disk partitioning. Dell has chosen sensible settings: a 20MB boot
partition close to the start of the drive, a 5GB / (root) partition, 2GB /home and
10GB /usr volumes, plus 128MB of swap space (larger for machines with more
memory).

There were some niggles, though. The mount point for the Zip drive was created
as a symbolic link instead of a directory, which had to be corrected before the
Zip drive could be used, and the GNOME desktop icon for the CD-ROM drive didn’t
work correctly.

As Red Hat doesn’t support the onboard CS4614 sound chip, the machine was mute;
a SoundBlaster Live will be fitted if the customer requests sound capabilities.

Although it’s the most popular distribution in the US, Red Hat is quite spartan,
with few added extras, but we tried popular programs such as StarOffice,
WordPerfect 8 and VMware without a hitch. Internet access was easily configured,
too. Dell also bundles 90 days’ free phone and email support through LinuxCare
alongside the three-year on-site warranty.

The system has some teething problems, although they aren’t critical and as
shipped it was usable - but they would require some Linux expertise to repair.
Once these are smoothed out, though, this will be an excellent
high-specification Linux workstation.

LIAM PROVEN

DETAILS

★★★★

PRICE £3,053.83 (£2,599 exVAT)

CONTACT Dell 0870 152 4699

www.dell.co.uk
 

PROS Well-built, high-specification hardware; reasonable Linux configuration

CONS Some rough edges to Linux configuration; no sound support; no ISA slots

OVERALL A good first try. Dell’s inexperience with Linux shows, but the problems
are minor and the hardware is excellent

[Scan on Imgur]

[Source on the Internet Archive] 


 * Link
 * 0 comments
 * Reply





WHAT IF... SOMEONE MADE A PLAN 9 THAT COULD RUN LINUX APPS?

Sep. 8th, 2023 07:18 pm
Idle thought... I was writing about MicroVMs the other day:

https://www.theregister.com/2023/08/29/freebsd_boots_in_25ms/

It made me wonder... if you can have a Linux VM that starts in milliseconds,
could you create one for a vaguely Unix-like but non-Unix OS, such as Plan 9, so
that it could run Linux binaries on this non-Linux-like OS?

ISTM you'd need two or three things, all of which either exist or are doable.

1. A very small Linux distro, tailored to talk to VirtIO devices and to
net-mount all its filesystems over 9P. No `initrd`, because we know exactly what
the virtual hardware will be in advance. Maybe some tiny simple init like tini:
https://github.com/krallin/tini

Boot, run the binary supplied on the command line, and when it exits, quit. (A
rough sketch of what such an init might look like follows, after point 3.)

2. Plan 9's VM, vmx, cut down even further to make a microVM.

https://9lab.org/plan9/virtualisation/

3. Some tool that lets you start a Linux binary and spawns a microVM to run it,
with its working directory set to your current dir in the Plan 9 filesystem.

`lxrun strings foo`

MicroVM starts, loads `strings`, supplies it with `foo` as input, outputs to
your `rc` session -- just like starting a VM with Vagrant on Linux: disk chugs,
and suddenly you're SSHed into some totally different distro _in the same
terminal session_.
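
Here's a very rough sketch of what point 1's "boot, run one binary, quit" init
could look like, just to show how little the guest side needs. Everything in it
is an assumption for illustration: the lx.cmd= kernel parameter, the 9P mount
tag "host", and indeed using Python as PID 1 at all (a real one would be a few
lines of C, or a shell script on busybox).

```python
#!/usr/bin/env python3
# Hypothetical sketch of the guest-side init for point 1 above. The lx.cmd=
# kernel parameter and the 9P mount tag "host" are made up for illustration.
import os
import shlex
import subprocess
import sys

def cmdline_arg(key):
    """Pull key=value off the kernel command line."""
    with open("/proc/cmdline") as f:
        for field in f.read().split():
            if field.startswith(key + "="):
                return field.split("=", 1)[1]
    return None

def main():
    # As PID 1 we have to mount the basics ourselves.
    subprocess.run(["mount", "-t", "proc", "proc", "/proc"], check=False)

    # Mount the host's working directory, exported over virtio-9p.
    os.makedirs("/host", exist_ok=True)
    subprocess.run(["mount", "-t", "9p", "-o", "trans=virtio", "host", "/host"],
                   check=True)

    cmd = cmdline_arg("lx.cmd")
    if not cmd:
        sys.exit("no lx.cmd= on the kernel command line")

    # Run the requested Linux binary in the host's directory...
    result = subprocess.run(shlex.split(cmd), cwd="/host")

    # ...then make the whole VM go away.
    os.sync()
    subprocess.run(["poweroff", "-f"], check=False)
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```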

If the app is graphical it could attach to X11 or Equis:

https://9p.io/wiki/plan9/x11_installation/index.html

The root idea: provide something much simpler and easier than the existing
Linux-emulation tools out there -- which are very much a Thing, but are _hard_
and don't work terribly well -- but with much the same end result.

Examples:

• the FreeBSD Linuxulator: https://wiki.freebsd.org/Linuxulator

• the SCO `lxrun` tool:
http://download.nust.na/pub3/solaris/intel/7/lxrun/lxrun.html

• Solaris lx zones:
https://docs.huihoo.com/opensolaris/solaris-containers-resource-management-and-solaris-zones/html/p99.html

I just found out that there _is_ a Linux emulator for Plan 9 -- Linuxemu:
http://9p.io/wiki/plan9/Linux_emulation/index.html

So, a Plan 9 "Linuxulator" exists, but this might still be easier, more current,
and need less work. In other words, make a Plan 9 setup that you can use like a
Linux distro, to encourage Linux folks to try it and maybe move over. TBH if I
could run Waterfox and Chrome or Epiphany, plus a Markdown editor, I could do
most of my job right there.

Is this absurd, or is it fairly doable?

I daresay the Plan 9 fans would regard this as heresy or simply
incomprehensible, but they're not the target audience. People who like minimal
terminal-driven Linux distros are.

As for _why_... well, my impression is that in many ways Plan 9 is a better Unix
than Unix... but it's not better _enough_ to have driven adoption. As a result,
it still has a _lot_ of rough edges, and in that, it reminds me of mid-1990s
Linux: weird, cryptic, and _hard_.

Qubes OS is a thing: keep your system safe by running everything in a different
VM. It's hard and it's complicated and most people are not that worried. But
what if you put a better OS underneath?

 * Link
 * 0 comments
 * Reply





COMPARING NICHE PROGRAMMING LANGUAGES TO THE MAINSTREAM

Jul. 1st, 2023 01:48 pm
[Another repurposed HN comment, saved for my own reference as much as anything]



I think you are focusing on the trees and so not seeing the size and the shape
of the forest.

Most organisations use C and languages implemented in C, on OSes implemented in
C, because they do the job, the people are cheap and readily available, and the
dominant OS is free and costs nothing to deploy.

Which can be reduced to:

Most people use the tools most people use.

That's not a very useful observation, but it poses an interesting question:

Why?

That's easier.

Here is the shape of the outside of the answer:

They use them not because they are good -- they aren't very good, measured
objectively -- but because they are ubiquitous and cheap.

Other tools are better, and just as free, but then the people cost more, and the
associated tooling costs more. (Frameworks, supporting libraries, deployment
costs, whatever. E.g. it's very cheap to deploy Javascript because all you need
is a reasonably modern browser, and those are free and almost all OSes have
them.)

Those are the externalities, in a manner of speaking.

But the other side of the answer is the inside: the area, not the shape.

The mainstream, conventional, software industry is huge, and hugely lucrative.

Writing just-barely-good-enough apps, minimum viable products, gets you out
there and making money. Then you can put a share of your revenues into
incrementally improving it.

Every now and then you can push out a big new version, with disruptive changes.
You can charge for getting the new major releases, but more to the point, once
an old version is officially obsolete, you can charge for continued fixes to the
now-obsolete versions.

It makes money. It's an ecosystem, a food chain or more accurately a web. Some
members are predators, some are prey, but they all work together and if you just
eliminate either predators or prey, the system collapses.

In other words:

Most people use the tools most people use, because most people use them, because
you can make big money from volume of cheap junk.

But there is another model of making and selling stuff: make small volumes of
really good products, using highly skilled workers, and sell those high-quality
products in very small volumes but for very high prices, to discerning customers
who know they're buying something built to last and who may not come back to you
for new versions every couple of years, but that's fine if you made a couple of
decades' revenue from them on the original sale.

Because cars are a long-standing metaphor in computing:

If you have a mass market for cars, then you get cheap cars, and everyone's cars
are much the same because they are built down to a price and mass produced.

These car consumers can be upsold some extra buttons and minor features, and a
lot of those combined may double the price of the car.

(This is how different Linux distros survive. Some have more buttons. Some have
A/C. Some are fully automatic, others have fully manual controls. Some have
power steering, some don't.)

But such a market also supports hand-made sports cars (and, inevitably,
superficially similar cheaper sports cars from the mass-producers). It also
supports vast tanklike cars that weigh as much as 10 normal cars, but produce
10x the engine power of those cars so they still perform.

Very different products, but they cost an order of magnitude more than the
mass-produced cars... and do the same job. Some owners of mass-produced cars
aspire to own fancy sports cars, and some aspire to own luxury behemoths. Most
never will.

People who don't care much for cars and just see them as a tool for getting from
A to B do not see why anyone would pay for fancier cars. That's OK, too.

Some people live in other countries and see more clearly, because for them
trains and bicycles are a perfectly viable way of getting from A to B, and are
both cleaner, healthier, use less resources and create less waste.

Tools like Lisp are the artisanal hand-made cars compared to the mass market.
People who've never used anything but cheap mass-produced tin boxes can't even
imagine that there are things that are so much better, let alone that in the
long run, you might be better off using them.

As Terry Pratchett put it:

« “The reason that the rich were so rich, Vimes reasoned, was because they
managed to spend less money.

Take boots, for example. He earned $38 a month plus allowances. A really good
pair of leather boots cost $50. But an affordable pair of boots, which were sort
of OK for a season or two and then leaked like hell when the cardboard gave out,
cost about $10. Those were the kind of boots Vimes always bought, and wore until
the soles were so thin that he could tell where he was in Ankh-Morpork on a
foggy night by the feel of the cobbles.

But the thing was that good boots lasted for years and years. A man who could
afford $50 had a pair of boots that'd still be keeping his feet dry in ten
years' time, while the poor man who could only afford cheap boots would have
spent $100 on boots in the same time and would still have wet feet.

This was the Captain Samuel Vimes 'Boots' theory of socioeconomic unfairness.” »

Some of us live in countries with really good public transport, and know that
it's possible to replace the entire category of personal automobiles with
something better for everyone...

But try telling that to an American. They won't even try to understand; they
will instead earnestly explain why they need cars, and their country is better
because everyone has cars.

More wild generalisation:

In the late 20th century, there was another model of software construction, an
alternative to the "Worse is better" model. The WIB model is that one type of
software fits all: build one (or a very few) minimum viable operating systems,
in minimum viable programming languages, and make them cheap or give them away
for free.

The artisanal software model was more common in Europe and Japan: pick the best
language for the job, and build tiny bespoke OSes for each category of device.
Have multiple whole incompatible families of desktop OSes, and families of
totally different unrelated server OSes, and families of different OSes for
handhelds and games consoles and school computers for teaching kids, and so on.

Unify them, minimally, with some standard formats: network protocols, disk and
file formats, maybe some quite similar programming languages in lots of weird
little nonstandard dialects.

ITRON, RISC OS, SIBO/EPOC/EPOC32/Symbian, QDOS/Minerva/SMSQ/E, Contiki, SymbOS,
GEOS, Novell NetWare.

Keep the complexity at the market level, in which multiple radically different
products compete for sales in their market segments. Lots of overlap, lots of
duplication... but also competition, evolution, rivalry, advancement.

The software remains small and relatively simple. This aids development, but
mainly, it keeps the resource requirements low, so the devices are cheaper.
Human brainpower is cheap: spend it on clever software to enable cheap hardware.

The approach I am calling WIB is one of vast general-purpose OSes which can do
anything, so you only need a handful of them... but you need massive hardware to
run them, so devices powered by WIB software are expensive, and very complicated,
and the software is vastly complicated, so you need armies of programmers to
maintain it, meaning frequent updates, so you need lots of storage and fast
connections.

And there is no overview, because it's much too big to fit into a single human
head, so improvement is incremental, not radical.

The end result is a handful of vast monoliths, full of holes and leaks, but a
vast economic machine that generates continuous income and lots of jobs.

When you live in one of these monoliths, the fact that there are happy
accomplished people working in weird tools making weird little products for tiny
markets seems incomprehensible. Why would they?

Most people use the standard tools, meaning the ones most people use. So
obviously they are good enough: look at these trillion-dollar corporations that
use them!

So obviously, there isn't really any point to the weirdoes.               


 * Link
 * 0 comments
 * Reply





EVALUATING PLAN 9 (AND INFERNO)

Jul. 1st, 2023 01:42 pm
[Repurposed HN comments. May grow into an article. Saving for my own reference.]

There are a lot of things in Plan 9 which are not improvements over conventional
Unix... but it seems to me that this is partly because it's stuck in its tiny
little niche and never got the rough edges worn down by exposure to millions.

I was a support guy, not a programmer. I started Unixing on SCO Xenix and later
dabbled in AIX and Solaris, and they were all painful experiences with so many
rough edges that I found them really unpleasant to use.

Linux in the 1990s was, too. Frankly WinNT was a much more pleasant experience.

But while Win2K was pleasant, XP was a bloated mess of themes and mandatory,
non-removable junk like Movie Maker. So I switched to Linux and found that, 5-6
years after I first tried Slackware and RHL and other early distros with 0.x or
1.0 kernels, it was much more polished now.

A few years later, the experience for non-programmers was pretty good. It
uses bold and underline and italics and colour and ANSI block characters, right
in the terminal, because it assumes you're using a PC, while the BSDs still
don't because you might be on a dumb terminal or a VAX or a SPARCstation or
something. (!)

Linux just natively supports the cursor keys. It supports up and down and
command-line editing, the way Windows does, the way PC folk expect. BSD doesn't
do this, or does it only very poorly.

Linux just natively supports plain old DOS/Windows style partitions, whereas BSD
did arcane stuff involving "slices" inside its own special primary partitions.
(GPT finally banishes this.)

I've taken this up with the FreeBSD and OpenBSD devs, and they just plain do not
understand what my problem is.

But this process of going mainstream on mainstream hardware polished the raw
Unix experience -- and it was very raw in the 1990s. Linux from the 2nd decade
of the 21st century onwards got refined into something much less painful to use
on mainstream hardware.

Plan 9 never got that. It still revels in its 1990s-Unix weirdness.

If Plan 9 went mainstream somehow, as a lightweight Kubernetes replacement say,
it would soon get a lot of that weirdness eroded off. The purists would hate it,
of course, just as BSD purists still don't much like Linux today.

Secondly, Plan 9 did a tonne of work cleaning up the C language, especially
(AFAICT) after it dropped Alef. Banning includes that contain other includes is
obvious and sensible, and it takes orders of magnitude off compilation times.
That rarely gets mentioned.

The other vital thing to remember is that despite 9front (and HarveyOS and
Jehanne and so on), Plan 9 was not the end of its line.

After Plan 9 came Inferno.

I have played around with both and just by incorporating late-1990s GUI
standardisation into its UI, Inferno is much more usable than Plan 9 is.

Plan 9 made microkernels and loosely-coupled clustering systems obsolete >25y
ago.

A decade or so later, Inferno made native code compilation and runtime VMs and
bytecode and all that horrible inefficient 1980s junk obsolete. It obsoleted
WASM, 2 decades before WASM was invented.

With Plan 9, all the machines on your network with the same CPU architecture
were parts of your machine if you wanted.

(A modernised one should embed a VM and a dramatically cut-down Linux kernel so
it can run text-only Linux binaries in system containers, and spawn them on
other nodes around the network. Inelegant as all get-out, but would make it 100x
more useful.)

But with Inferno, the restrictions of CPU architecture went away too. The dream
of Tao Group's Taos/Intent/Elate, and AmigaDE, delivered, real, and FOSS.

When considering Plan 9, also consider Inferno. It fixed some of the issues. It
smoothed off some of the rough edges.

I feel, maybe wrongly, that there could be some mileage in somehow merging the
two of them together into one. Keep Plan 9 C and native code as an option for
all-x86-64 or all-Arm64 clusters. Otherwise, by default, compile to Dis. Maybe
replace Limbo with Go.

-----

It doesn't need cgroups or containers, because every process has its own
namespace, its own view of the network-global filesystem, so everything is in a
container by default.
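
(A rough illustration, not Plan 9 itself: on Linux you have to ask for a
private namespace explicitly. The little Go sketch below -- Go being the
nearest mainstream descendant of the Plan 9/Inferno lineage -- re-runs a
command in its own mount namespace and prints the parent's and child's
namespace IDs so you can see they differ. It assumes a Linux box with
unprivileged user namespaces enabled; Plan 9 gives every process this for
free.)

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // The parent's mount namespace, as a kernel identifier.
        parent, _ := os.Readlink("/proc/self/ns/mnt")
        fmt.Println("parent:", parent)

        // Re-run readlink in a child that gets its own mount namespace.
        // CLONE_NEWUSER is what lets an ordinary user create one at all.
        cmd := exec.Command("readlink", "/proc/self/ns/mnt")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWNS | syscall.CLONE_NEWUSER,
        }
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "needs Linux with unprivileged user namespaces:", err)
        }
    }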

It doesn't need a microkernel, because the concept of microkernels is to split a
big monolithic kernel into lots of small simple "servers" running in user space,
and have them communicate by passing messages over a defined communications
protocol, some kind of RPC type thing. It works, and QNX is the existence proof.
But it's really hard and it's really inefficient -- of which, the HURD and Minix
3 are the existence proofs.

So most of the actual working "microkernel" OSes kludge it by embedding a huge
in-kernel "Unix server" that negates the entire microkernel concept but delivers
compatibility and performance. Apple macOS and iOS are the existence proof here.
(It could be argued that Windows NT 4 and later are also examples.)

Plan 9 achieves the same result, without the difficulties, by making most
things user-space processes that communicate via the filesystem by default.
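
(Another sketch, and a crude one: the Plan 9 idiom of exposing a service as a
little file tree, faked on Linux with ordinary files instead of a real 9P
server. The toy daemon below publishes /tmp/toysvc/ctl and /tmp/toysvc/status;
the paths, commands and polling loop are all invented for illustration. A real
Plan 9 service speaks 9P and gets mounted into each process's private
namespace.)

    package main

    import (
        "log"
        "os"
        "path/filepath"
        "strings"
        "time"
    )

    func main() {
        dir := "/tmp/toysvc"
        if err := os.MkdirAll(dir, 0o755); err != nil {
            log.Fatal(err)
        }
        ctl := filepath.Join(dir, "ctl")
        status := filepath.Join(dir, "status")
        os.WriteFile(ctl, nil, 0o644)
        os.WriteFile(status, []byte("state: idle\n"), 0o644)

        for {
            // The "API" is just files: read a command, act, report back.
            b, _ := os.ReadFile(ctl)
            if cmd := strings.TrimSpace(string(b)); cmd != "" {
                os.WriteFile(status, []byte("state: "+cmd+"\n"), 0o644)
                os.WriteFile(ctl, nil, 0o644) // consume the command
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

Run it, then from another terminal: echo restart > /tmp/toysvc/ctl and then
cat /tmp/toysvc/status.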








Disclaimer: this is my very rudimentary understanding. I am not an expert on
Plan 9 by any means.

------

In general, what I am getting at is that Unix is a (?) uniquely weird situation
that happens to have grown like Kudzu. It's an early generation of a long
running project, where that early generation caught on and became massive and
thus ignores simple but profound improvements from later versions of the same
project.

And since in that ecosystem, almost everything is built on a foundation of C,
problems with C affect everything layered on top... even though they do not in
any other ecosystem. But something like 99% of the inhabitants of the ecosystem
are not even aware that other ecosystems exist, let alone know anything about
them.

If they know of any, it's macOS or Windows. macOS is also UNIX, and modern
Windows is NT, which was built with Unix tools and a Unix-type design...
so they are not really different at all.

So what I am getting at is that Plan 9 has aspects other than the namespaces and
the cosmetic aspects of the design. It makes changes to the language and how
it's compiled that are just as important, and hacks to gain some of that on
modern versions of the parent OS family are not of comparable significance.

Famous quote:

<<

So... the best way to compare programming languages is by analogy to cars. Lisp
is a whole family of languages, and can be broken down approximately as follows:

* Scheme is an exotic sports car. Fast. Manual transmission. No radio.

* Emacs Lisp is a 1984 Subaru GL 4WD: "the car that's always in front of you."

* Common Lisp is Howl's Moving Castle.

>>

Comparison: if somehow you make a pruned-down Howl's Moving Castle, you can
probably never ever, whatever you do, no matter how much effort you invest, make
it into something as small and light as the Subaru, let alone the sports car.

In more detail:

Let's imagine one team invented first the train, then the bicycle, then the car.

The bicycle takes the idea of a wheeled vehicle from the train but reduces it
down to an incredibly minimal version. It needs no fuel and just two wheels; it
is very simple, very light, and can go almost anywhere.

You do need one per passenger, it's true, whereas a single-carriage train can
carry 100 people -- but you can make 100 bicycles for those 100 people with
fewer materials than that one train, even before you consider the rails etc.

If someone still making trains looked at bicycles and, inspired by them, came
up with a train with just two wheels, one that balanced on or hung from the
track, they do
not get to claim that they have successfully imported the simplicity of the
bicycle.

They have retained just one aspect and may have achieved savings in moving
parts, or slight reduction of complexity, or a slightly simpler machine... but
it's still a train.

If you tweak a conventional C compiler not to waste time rereading text that
has already been read once, or rely on a disk cache to hide the cost, then you
have not caught up
with the advance I am trying to discuss here.

For clarity, this is not my original thinking. This idea is from a Go talk over
a decade ago:

https://go.dev/talks/2012/splash.article#TOC_5.
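
(To make that concrete in Go -- the Plan 9 team's later language -- each
imported package is compiled exactly once, and importing it means reading one
compact export-data file, not re-parsing that package's own dependencies the
way nested C #includes get re-read. The toy program below is deliberately
trivial; the point is what does not happen when it is compiled.)

    package main

    import (
        "fmt"
        "net/http" // a package with a large dependency tree of its own
    )

    func main() {
        // Compiling this file does not re-read the source of net/http's
        // dependencies: the compiler loads one export-data blob per import.
        fmt.Println("default mux present:", http.DefaultServeMux != nil)
    }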






               









THE AMIGA IS DEAD. LONG LIVE THE AMIGA! (THE INQUIRER, JANUARY 3 2007)

Mar. 8th, 2023 10:49 pm
I thought this was gone forever, like the rest of the late lamented Inq, but I
found a copy.


The Amiga is dead. Long live the Amiga!
 
http://www.theinquirer.net/default.aspx?article=36685
 
AmigaOS 4 launches after last Amiga compatible dies
 
THE END OF 2006 brought good and bad news for nostalgic geeks.
 
On the plus side, an unexpected Christmas pressie: on December 24, Amiga, Inc.
released AmigaOS 4.0, the all-new PowerPC version of the classic 1980s operating
system.
 
The bad news is rather more serious, though - just a month earlier, the only
remaining PowerPC Amiga-compatible went out of production, as Genesi announced
that it was ending production of the Pegasos.
 
So although there is, at long last, a new, modern version of the OS, there's
nothing to run it on. Bit of a snag, that.
 
But all isn't lost. The first successor to the Pegasos, Efika, a low-end
motherboard based on a PowerPC system-on-a-chip intended for the embedded
market, is sampling and will be available Real Soon Now(TM). Genesi is also
considering a more powerful high-end dual-core machine.
 
Just to complicate things, though, the AmigaOS 4 betas wouldn't run on Pegasos
hardware; the only machine the new OS would run on, the AmigaOne, has been out
of production for years, and its creators, small British manufacturer Eyetech,
have sold off their remaining stock of Amiga bits to Leaman Computing and left
the business.
 
What's an Amiga and why should I care?

Launched in 1985, the Amiga was the first multimedia personal computer. Based
like the Mac on a Motorola 68000, it sported 4,096 colour graphics, multichannel
digital stereo sound and a fully preemptively-multitasking operating system - in
512K. That's kilobytes, not meg - 0.5MB of RAM and no hard disk. It's hard to
convey today how advanced this was, but '85 is the same year that Windows 1.0
was released, when a top-end PC had an 80286, EGA graphics and could go "beep".
EGA means sixteen colours at 640x350. My last phone did better than that. Macs
of the time offered 512x384 black and white pixels and one channel of sound.
 
The Amiga dominated multimedia and graphics computing at the time. Babylon 5 was
made possible because it used Amiga-generated special effects: cinematic quality
at TV programme prices.
 
But even in today's Windows-dominated world, it's never gone away. The original
68000 OS continued development until 2000 - you can still buy version 3.9 from
Haage & Partner in Germany today. You'll need a seriously-uprated Amiga though:
recommended specs are a 68030 and 32MB of RAM. Do they think we're made of
money?
 
Hang on - Amiga is still around?

Oh, yes. It never went away. But it's had a rough time of it.
 
After Commodore went bust in 1994, the Amiga line was sold off to Escom as that
company expanded rapidly through multiple acquisitions - including that of
British high-street electronics vendor Rumbelows. Escom grew too fast and went
under in 1996, and Amiga was sold to Gateway. In 2000, a new company was set up
by former Amiga employees Fleecy Moss and Bill McEwen, which licensed the name
and rights from Gateway. The OS itself was sold off to KMOS, Inc. in 2003, which
subsequently changed its name to Amiga, Inc.
 
Over the years, Amiga Inc. has tried several times to come up with a new product
to recapture the Miggy magic.
 
The first effort was to be based on the Unix-like embedded OS QNX. It's small,
fast and highly efficient, but not very Amiga-like. Negotiations broke down,
though since then, QNX has boasted a GUI and multimedia support. Then there was
a plan based around the Linux kernel.
 
Then came AmigaAnywhere, based on Tao Group's remarkable Intent OS. Intent is
amazing, like Java on steroids: the entire OS and all apps are compiled for a
nonexistent, generalised "Virtual Processor". Code is translated for the actual
CPU as it's loaded from disk - no "virtual machine" involved. Complete binary
compatibility across every supported architecture. It's very clever, but it's
not got much to do with the original Amiga and the fans weren't very interested.
 
Finally, Amiga Inc. came up with an idea to get the fans on board - a new,
PowerPC-native version of the Amiga OS: AmigaOS 4. This would run on new PowerPC
hardware, but look and feel like an updated version of classic AmigaOS and offer
backwards compatibility. Amiga no longer had the manpower to develop this
in-house, so the product was licensed to Hyperion, a games house specialising in
ports of Windows games for Amiga, Mac and Linux.
 
Pegasos: the IBM Amiga

The idea of moving to PowerPC came from Phase5, a German maker of accelerator
cards for Amigas and Macs. Some of Phase5's later Amiga accelerators, the
Blizzard and Cyberstorm range, featured PowerPC processors and some nifty code
to allow apps to be compiled for PowerPC but run on the 68K-based Amiga OS.
 
As the Amiga market withered, Phase5 went under, but a group of its former
engineers set up bPlan GmbH. Amongst other products, bPlan agreed to license OS4
from Amiga and make PowerPC-based Amiga-compatibles.
 
Around the turn of the century, things were looking very bleak for Amiga and
little progress was being made. Growing impatient, bPlan set up Genesi with the
management of Thendic, a vendor of embedded and Point-Of-Sale equipment, and
decided to go it alone. Genesi designed a new PowerPC-based desktop machine,
Pegasos, based on OpenFirmware and IBM's Common Hardware Reference Platform -
the Mac/PC hybrid that was the basis of some of the last models of licensed Mac
clones, which could even run the short-lived PowerPC version of Windows NT.
 
The Pegasos was designed and built to run Morphos. This started out as an OS for
Amigas with PowerPC accelerators and required the presence of classic 68000
AmigaOS. Genesi sponsored development of a stand-alone version of Morphos for
the Pegasos. Rather than re-implementing AmigaOS from scratch, this uses an
entirely new microkernel, Quark, which hosts an Amiga-compatible environment,
ABox. Morphos looks like an updated AmigaOS and provides 68K emulation so that it
can run cleanly-written Amiga apps. There's also an emulator for code - like
games - which hits the hardware directly.
 
The Mark 1 Pegasos had problems due to the Articia northbridge chip, causing
major arguments between Genesi and chipset designer MAI Logic. Despite
production of a patch chip, "April", the Pegasos I was quickly replaced by the
Pegasos II with a different chipset.
 
So what happened with AmigaOS 4, then?

While Genesi worked on the Pegasos, Amiga made its own deal with MAI and
announced a new range of PowerPC-based Amigas. The original plan was that the
new machines would connect to an original Amiga 1200 or 4000 motherboard,
allowing the Miggy's custom chipset to be used - for compatibility with original
Amiga apps. That didn't pan out, so a simpler, all-new design was adopted based
on MAI's Teron motherboards. These were put into production by Eyetech as the
AmigaOne.
 
The snag is that AmigaOS 4 wasn't ready, so the AmigaOne shipped in 2002 with
only Linux. The first public betas of OS4 followed 18 months later.
 
Unfortunately for Amiga, MAI went bankrupt, and unable to source bridge chips,
Eyetech ended production of the AmigaOne in 2005. Only around 1,500 units were
shipped.
 
So as of the end of 2006, AmigaOS 4.0 is finally complete, but there's no
currently-shipping hardware to run it on. It's tiny, fast, can run clean classic
Amiga apps and is compatible enough with the older version that veteran Amiga
users - of which there were hundreds of thousands - will find it instantly
familiar. But because Genesi and Amiga Inc. don't exactly see eye to eye, OS4
won't run on Pegasos - only on near-identical official Amiga hardware with
Hyperion's "U-Boot" firmware.
 
And where's the Pegasos gone?

Genesi realised that the Amiga market was not only a small one but potentially
crowded, too, and changed the emphasis of the Pegasos II from being an
Amiga-compatible to being an open PowerPC desktop machine running Linux - a
course that's brought it rather greater success. After Apple's move to Intel
processors, the Pegasos II-based Open Desktop Workstation is the main desktop
PowerPC machine. But it still runs Morphos and thus Amiga apps.
 
Now, though, the ODW is the latest victim of RoHS - the Restriction of Hazardous
Substances legislation that amongst other things compels European manufacturers
to use only lead-free solder. It's hit minority platforms particularly hard and
the sad result is the end of Pegasos' flight.
 
The future

PowerPC was - and is - the main alternative workstation CPU to x86. Indeed, with
the Nintendo Wii, Microsoft XBox 360 and Sony Playstation 3 all based on PowerPC
derivatives, sales prospects for PowerPC are looking great, despite Apple
defecting to Intel processors.
 
The story of the Amiga isn't over. The successor to Pegasos II has been
announced: the Efika. This is a tiny low-end motherboard based on a PowerPC 5200
system-on-a-chip. It's not fast, but it's small, cheap, quiet and cool-running
with extremely low power requirements. It's being described as ideal for use in
tough or constrained environments, such as Third World education.
 
Amiga Inc. has also announced a similar product, codenamed "Samantha": again, a
small-form-factor, highly-integrated system based around the PPC5200 SoC.
 
Either way, PowerPC Amigas are interesting machines. Sure, they can run Linux,
from Yellow Dog to Ubuntu or Fedora, or even Gentoo if you're masochistic
enough. But running Morphos or OS4 they show their real power. These tiny,
elegant OSs occupy a few dozen meg of disk space, run happily in 128MB RAM and
boot to their graphical desktops in seconds. Both are very fast and relatively
full-featured, Internet-capable OSs, fully buzzword-compliant from MP3 to USB.
Finally, they share a library of thousands of high-quality apps from the 1980s
and 1990s and a lot of experienced users and developers.
 
The main problem they face now, though, is compatibility with one another.
Genesi has done the only sane thing - gone with open standards where they exist
and courted the Linux market. Amiga and Hyperion still fear the rife piracy of
the 1980s, when kids traded duplicated floppies of Amiga software freely. OS4
only runs on machines with Amiga firmware. It's too late for that now: it has to
run on anything with a PowerPC or its already-meagre chances shrink to nothing.
If anything, the best market for OS4 is probably on the PowerPC consoles. They
have abundant anti-piracy measures built in.
 
If you fondly remember your old Miggy but aren't interested in this exotic
minority kit, then check out the FOSS project AROS - a reimplementation of
AmigaOS 3 for generic x86 hardware. It's not binary-compatible but Amiga code
need only be recompiled, and it will be instantly familiar to anyone who knew
the classic Amiga.
 
If plans come together, these future PPC5200 machines will offer a choice of
OSs: as well as Linux, both Morphos and AmigaOS 4 - and maybe AROS too.
Twenty-two years after its introduction, the Amiga is not quite dead yet. If you
need a low-resource, high-performance Internet-ready graphical embedded or kiosk
OS, even in 2007, you could do a lot worse than check out the world of the
Amiga.
 
 * Current Location: Douglas IoM

Tags:
 * amiga,
 * amigaos,
 * morphos,
 * pegasos,
 * powerpc






"A PLEA FOR LEAN SOFTWARE" BY PROF. NIKLAUS WIRTH

Oct. 12th, 2022 04:32 pm
This is simply a text version of Wirth's paper from the IEEE "Computer"
magazine, as mentioned here and which can be found in PDF in many places, such
as here on Github. I imported the text and cleaned it up for the benefit of
those for whom a formatted PDF is inconvenient. I am not the author or
originator of any of this material.


--------------------------------------------------------------------------------



Computer: Cybersquare, February 1995


A Plea for Lean Software

Niklaus Wirth

ETH Zürich

 

Software's girth has surpassed its functionality, largely because hardware
advances make this possible. The way to streamline software lies in disciplined
methodologies and a return to the essentials.



Memory requirements of today’s workstations typically jump substantially – from
several to many megabytes—whenever there’s a new software release. When demand
surpasses capacity, it’s time to buy add-on memory. When the system has no more
extensibility, it’s time to buy a new, more powerful workstation. Do increased
performance and functionality keep pace with the increased demand for resources?
Mostly the answer is no.



About 25 years ago, an interactive text editor could be designed with as little
as 8,000 bytes of storage. (Modern program editors request 100 times that much!)
An operating system had to manage with 8,000 bytes, and a compiler had to fit
into 32 Kbytes, whereas their modern descendants require megabytes. Has all this
inflated software become any faster? On the contrary. Were it not for a thousand
times faster hardware, modern software would be utterly unusable.



Enhanced user convenience and functionality supposedly justify the increased
size of software, but a closer look reveals these justifications to be shaky. A
text editor still performs the reasonably simple task of inserting, deleting,
and moving parts of text; a compiler still translates text into executable code;
and an operating system still manages memory, disk space, and processor cycles.
These basic obligations have not changed with the advent of windows,
cut-and-paste strategies, and pop-up menus, nor with the replacement of
meaningful command words by pretty icons.



The apparent software explosion is accepted largely because of the staggering
progress made by semiconductor technology, which has improved the
price/performance ratio to a degree unparalleled by any other branches of
technology. For example, from 1978 to 1993 Intel's 80x86 family of processors
increased power by a factor of 335, transistor density by a factor of 107, and
price by a factor of about 3. The prospects for continuous performance increase
are still solid, and there is no sign that software’s ravenous appetite will be
appeased anytime soon. This development has spawned numerous rules, laws, and
corollaries, which are—as is customary in such cases—expressed in general terms;
thus they are neither provable nor refutable. With a touch of humor, the
following two laws reflect the state of the art admirably well:



· Software expands to fill the available memory. (Parkinson)

· Software is getting slower more rapidly than hardware becomes faster. (Reiser)



Uncontrolled software growth has also been accepted because customers have
trouble distinguishing between essential features and those that are just “nice
to have.” Examples of the latter class: those arbitrarily overlapping windows
suggested by the uncritically but widely adopted desktop metaphor; and fancy
icons decorating the screen display, such as antique mailboxes and garbage cans
that are further enhanced by the visible movement of selected items toward their
ultimate destination. These details are cute but not essential, and they have a
hidden cost.




CAUSES FOR “FAT SOFTWARE”



Clearly, two contributing factors to the acceptance of ever-growing software are
(1) rapidly growing hardware performance and (2) customers’ ignorance of
features that are essential-versus-nice to have. But perhaps more important than
finding reasons for tolerance is questioning the causes: What drives software
toward complexity?



A primary cause of complexity is that software vendors uncritically adopt almost
any feature that users want. Any incompatibility with the original system
concept is either ignored or passes unrecognized, which renders the design more
complicated and its use more cumbersome. When a system’s power is measured by
the number of its features, quantity becomes more important than quality. Every
new release must offer additional features, even if some don't add
functionality.




ALL FEATURES, ALL THE TIME



Another important reason for software complexity lies in monolithic design,
wherein all conceivable features are part of the system's design. Each customer
pays for all features but actually uses very few. Ideally, only a basic system
with essential facilities would be offered, a system that would lend itself to
various extensions. Every customer could then select the extensions genuinely
required for a given task.



Increased hardware power has undoubtedly been the primary incentive for vendors
to tackle more complex problems, and more complex problems inevitably require
more complex solutions. But it is not the inherent complexity that should
concern us; it is the self-inflicted complexity. There are many problems that
were solved long ago, but for the same problems we are now offered solutions
wrapped in much bulkier software.



Increased complexity results in large part from our recent penchant for friendly
user interaction. I've already mentioned windows and icons; color, gray-scales,
shadows, pop-ups, pictures, and all kinds of gadgets can easily be added.




TO SOME, COMPLEXITY EQUALS POWER.



A system’s ease of use always should be a primary goal, but that ease should be
based on an underlying concept that makes the use almost intuitive.
Increasingly, people seem to misinterpret complexity as sophistication, which is
baffling —the incomprehensible should cause suspicion rather than admiration.



Possibly this trend results from a mistaken belief that using a somewhat
mysterious device confers an aura of power on the user. (What it does confer is
a feeling of helplessness, if not impotence.) Therefore, the lure of complexity
as a sales incentive is easily understood; complexity promotes customer
dependence on the vendor.



It’s well known, for example, that major software houses have heavily
invested—with success—in customer service, employing hundreds of consultants to
answer customer calls around the clock. Much more economical for both producer
and consumer, however, would be a product based on a systematic concept—that is,
on generally valid rules of inference rather than on tables of rules that are
applicable to specific situations only—coupled with systematic documentation and
a tutorial. Of course, a customer who pays—in advance—for service contracts is a
more stable income source than a customer who has fully mastered a product’s
use. Industry and academia are probably pursuing very different goals; hence,
the emergence of another “law”:



 * Customer dependence is more profitable than customer education.



What I find truly baffling are manuals—hundreds of pages long—that accompany
software applications, programming languages, and operating systems.
Unmistakably, they signal both a contorted design that lacks clear concepts and
an intent to hook customers.



This lack of lucid concepts can’t alone account for the software explosion.
Designing solutions for complicated problems, whether in software or hardware,
is a difficult, expensive, and time-consuming process. Hardware’s improved
price/performance ratio has been achieved more from better technology to
duplicate (fabricate) designs than from better design technique mastery.
Software, however, is all design, and its duplication costs the vendor mere
pennies.



GOOD ENGINEERING IS CHARACTERIZED BY A GRADUAL, STEPWISE REFINEMENT OF PRODUCTS



Initial designs for sophisticated software applications are invariably
complicated, even when developed by competent engineers. Truly good solutions
emerge after iterative improvements or after redesigns that exploit new
insights, and the most rewarding iterations are those that result in program
simplifications. Evolutions of this kind, however, are extremely rare in current
software practice—they require time-consuming thought processes that are rarely
rewarded. Instead, software inadequacies are typically corrected by quickly
conceived additions that invariably result in the well-known bulk.




NEVER ENOUGH TIME



Time pressure is probably the foremost reason behind the emergence of bulky
software. The time pressure that designers endure discourages careful planning.
It also discourages improving acceptable solutions; instead, it encourages
quickly conceived software additions and corrections. Time pressure gradually
corrupts an engineer’s standard of quality and perfection. It has a detrimental
effect on people as well as products.



The fact that the vendor whose product is first on the market is generally more
successful than the competitor who arrives second, although with a better
design, is another detrimental contribution to the computer industry. The
tendency to adopt the “first” as the de facto standard is a deplorable
phenomenon, based on the same time pressure.



Good engineering is characterized by a gradual, stepwise refinement of products
that yields increased performance under given constraints and with given
resources. Software's resource limitations are blithely ignored, however:
Rapid increases in processor speed and memory size are commonly believed to
compensate for sloppy software design. Meticulous engineering habits do not pay
off in the short run, which is one reason why software plays a dubious role
among established engineering disciplines.





Abstraction can work only with languages that postulate strict, static typing of
variables and functions. In this respect, C fails.




LANGUAGES AND DESIGN METHODOLOGY



Although software research, which theoretically holds the key to many future
technologies, has been heavily supported, its results are seemingly irrelevant
to industry. Methodical design, for example, is apparently undesirable because
products so developed take too much “time to market.” Analytical verification
and correctness-proof techniques fare even worse; in addition, these methods
require a higher intellectual caliber than that required by the customary “try
and fix it” approach. To reduce software complexity by concentrating only on
the essentials is a proposal swiftly dismissed as ridiculous in view of
customers’ love for bells and whistles. When “everything goes” is the modus
operandi, methodologies and disciplines are the first casualties.



Programming language methodologies are particularly controversial. In the 1970s,
it was widely believed that program design must be based on well-structured
methods and layers of abstraction with clearly defined specifications. The
abstract data type best exemplified this idea and found expression in then-new
languages such as Modula-2 and Ada. Today, programmers are abandoning
well-structured languages and migrating mostly to C. The C language doesn’t even
let compilers perform secure type checking, yet this compiler task is by far
most helpful to program development in locating early conceptual mistakes,
Without type checking, the notion of abstraction in programming languages
remains hollow and academic. Abstraction can work only with languages that
postulate strict, static typing of every variable and function. In this respect,
C fails—it resembles assembler code, where “everything goes.”




REINVENTING THE WHEEL?



Remarkably enough, the abstract data type has reappeared 25 years after its
invention under the heading object oriented. This modern term’s essence,
regarded by many as a panacea, concerns the construction of class (type)
hierarchies, Although the older concept hasn’t caught on without the newer
description “object oriented,” programmers recognize the intrinsic strength of
the abstract data type and convert to it. To be worthy of the description, an
object-oriented language must embody strict, static typing that cannot be
breached, whereby programmers can rely on the compiler to identify
inconsistencies. Unfortunately, the most popular object-oriented language, C++,
is no help here because it has been declared to be upwardly compatible with its
ancestor C. Its wide acceptance confirms the following “laws”:



 * Progress is acceptable only if it’s compatible with the current state.
 * Adhering to a standard is always safer.



Given this situation, programmers struggle with a language that discourages
structured thinking and disciplined program construction (and denies basic
compiler support). They also resort to makeshift tools that chiefly add to
software's bulk.



What a grim picture; what a pessimist! the reader must be thinking. No hint of
computing’s bright future, heretofore regarded as a given. This admittedly
somber view is realistic; nonetheless, given the will, there is a way to improve
the state of the art.



PROJECT OBERON



Between 1986 and 1989, Jürg Gutknecht and I designed and implemented a new
software system—called Oberon—for modern workstations, based on nothing but
hardware. Our primary goal was to show that software can be developed with a
fraction of the memory capacity and processor power usually required, without
sacrificing flexibility, functionality, or user convenience.



The Oberon system has been in use since 1989, serving purposes that include
document preparation, software development, and computer-aided design of
electronic circuits, among many others. The system includes:

 * storage management,
 * a file system,
 * a window display manager,
 * a network with servers,
 * a compiler, and
 * text, graphics, and document editors.



Designed and implemented—from scratch—by two people within three years, Oberon
has since been ported to several commercially available workstations and has
found many enthusiastic users, particularly since it is freely available.



Our secondary goal was to design a system that could be studied and explained in
detail, a system suitable as a software-design case study that could be
penetrated top-down, and whose design decisions could be stated explicitly.
(Indeed, there is a lack of published case studies in software construction,
which becomes all the more evident when one is faced with the task of teaching
courses.) The result of our efforts is a single book that describes the entire
system and contains the source code of all modules.



How is it possible to build a software system with some five man-years of effort
and present it in a single book?




THREE UNDERLYING TENETS



First, we concentrated on the essentials. We omitted anything that didn’t
fundamentally contribute to power and flexibility. For example, user interaction
in the basic system is confined to textual information—no graphics, pictures,
or icons.



Secondly, we wanted to use a truly object-oriented programming language, one
that was type-safe. This, coupled with our belief that the first tenet must
apply even more stringently to the tools than to the system being built, forced
us to design our own language and to construct its compiler as well. It led to
Oberon, a language derived from Modula-2 by eliminating less essential features
(like subrange and enumeration types) in addition to features known to be unsafe
(like type transfer functions and variant records).



Lastly, to be simple, efficient, and useful, we wanted a system to be flexibly
extensible. This meant that new modules could be added that incorporate new
procedures based on calling existing ones. It also meant that new data types
could be defined (in new modules), compatible with existing types. We call these
extended types, and they constitute the only fundamental concept that was added
to Modula-2.




TYPE EXTENSION



If, for example, a type Viewer is defined in a module called Viewers, then a
type TextViewer can be defined as an extension of Viewer (typically, in another
module that is added to the system). Whatever operations apply to Viewers apply
equally to TextViewers, and whatever properties Viewers have, TextViewers have
as well.
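
[A rough modern illustration, not part of Wirth's text: Go's struct embedding
behaves much like Oberon's type extension. The names Viewer and TextViewer
follow the paper; the fields and the Move operation are invented purely for
the sketch.]

    package main

    import "fmt"

    // Viewer is the base type, as in the module Viewers.
    type Viewer struct {
        X, Y, W, H int
    }

    func (v *Viewer) Move(dx, dy int) { v.X += dx; v.Y += dy }

    // TextViewer extends Viewer: everything a Viewer has, plus a text.
    type TextViewer struct {
        Viewer
        Text string
    }

    func main() {
        tv := &TextViewer{Viewer: Viewer{W: 80, H: 25}, Text: "hello"}
        tv.Move(10, 5) // a Viewer operation, applied to a TextViewer
        fmt.Println(tv.X, tv.Y, tv.Text)
    }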



Extensibility guarantees that modules may later be added to the system without
requiring either changes or recompilation. Obviously, type safety is crucial and
must cross module boundaries.



Type extension is a typical object-oriented feature. To avoid misleading
anthropomorphisms, we prefer to say “TextViewers are compatible with Viewers,”
rather than “TextViewers inherit from Viewers.” We also avoid introducing an
entirely new nomenclature for well-known concepts; for example, we stick to the
term type, avoiding the word class; we retain the terms variable and procedure,
avoiding the new terms instance and method. Clearly, our first
tenet—concentrating on essentials—also applies to terminology.




TALE OF A DATA TYPE



An example of a data type will illustrate our strategy of building basic
functionality in a core system, with features added according to the system’s
extensibility.



In the system’s core, the data type Text is defined as character sequences with
the attributes of font, offset, and color. Basic editing operations are provided
in a module called TextFrames.



An electronic mail module is not included in the core, but can be added when
there is a demand. When it is added, the electronic mail module relies on the
core system and imports the types Text and TextFrame for displaying texts. This
means that normal editing operations can be applied to received e-mail messages.
The messages can be modified, copied, and inserted into other texts visible on
the screen display by using core operations. The only operations that the e-mail
module uniquely provides are receiving, sending, and deleting a message, plus a
command to list the mailbox directory.




OPERATION ACTIVATION



Another example that illustrates our strategy is the activation of operations.
Programs are not executed in Oberon; instead, individual procedures are exported
from modules. If a certain module M exports a procedure P, then P can be called
(activated) by merely pointing at the string M.P appearing in any text visible
on the display, that is, by moving the cursor to M.P and clicking a mouse
button. Such straightforward command activation opens the following
possibilities:

1. Frequently used commands are listed in short pieces of text. These are called
tool-texts and resemble customized menus, although no special menu software is
required. They are typically displayed in small viewers (windows).

2. By extending the system with a simple graphics editor that provides captions
based on Oberon texts, commands can be highlighted and otherwise decorated with
boxes and shadings. This results in pop-up and/or pull-down menus, buttons, and
icons that are “free” because the basic command activation mechanism is reused.

3. A message received by e-mail can contain commands as well as text. Commands
are executed by the recipient’s clicking into the message (without copying into
a special command window). We use this feature, for example, when announcing new
or updated module releases. The message typically contains receive commands
followed by lists of module names to be downloaded from the network. The entire
process requires only a few mouse clicks.
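
[Another editorial sketch, not from the paper: the heart of command activation
is that a string of the form M.P names a procedure to run. A registry mapping
such strings to functions captures the idea in Go; the module and command
names below are made up.]

    package main

    import "fmt"

    // commands maps "Module.Procedure" strings to callable procedures,
    // roughly the way Oberon resolves M.P when you click on it.
    var commands = map[string]func(){
        "Mail.Receive": func() { fmt.Println("fetching mail...") },
        "Edit.Open":    func() { fmt.Println("opening a viewer...") },
    }

    func activate(name string) {
        if p, ok := commands[name]; ok {
            p() // "clicking" on the string runs the procedure
        } else {
            fmt.Println(name, "is not a loaded command")
        }
    }

    func main() {
        activate("Mail.Receive")
        activate("Edit.Open")
        activate("System.Missing")
    }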




KEEPING IT SIMPLE



The strategy of keeping the core system simple but extensible rewards the modest
user. The Oberon core occupies fewer than 200 Kbytes, including editor and
compiler. A computer system based on Oberon needs to be expanded only if large,
demanding applications are requested, such as CAD with large memory
requirements. If several such applications are used, the system does not require
them to be simultaneously loaded. This economy is achieved by the following
system properties:



1. Modules can be loaded on demand. Demand is signaled either when a command is
activated—which is defined in a module not already loaded—or when a module being
loaded imports another module not already present. Module loading can also
result from data access. For example, when a document that contains graphical
elements is accessed by an editor whose graphic package is not open, then this
access inherently triggers its loading.

2. Every module is in memory at most once. This rule prohibits the creation of
linked load files (core images). Typically, linked load files are introduced in
operating systems because the process of linking is complicated et and
time-consuming (sometimes more so than compilation). With Oberon, linking cannot
be separated from loading. This is entirely acceptable because the intertwined
activities are very fast; they happen automatically the first time a module is
referenced.






THE PRICE OF SIMPLICITY



The experienced engineer, realizing that free lunches never are, will now ask,
Where is the price for this economy hidden? A simplified answer is: in a clear
conceptual basis and a well-conceived, appropriate system structure.



If the core—or any other module—is to be successfully extensible, its designer
must understand how it will be used. Indeed, the most demanding aspect of system
design is its decomposition into modules. Each module is a part with a precisely
defined interface that specifies imports and exports.



Each module also encapsulates implementation techniques. All of its procedures
must be consistent with respect to handling its exported data types. Precisely
defining the right decomposition is difficult and can rarely be achieved without
iterations. Iterative (tuning) improvements are of course only possible up to
the time of system release.



It is difficult to generalize design rules. If an abstract data type is defined,
carefully deliberated basic operations must accompany it, but composite
operations should be avoided. It’s also safe to say that the long-accepted rule
of specification before implementation must be relaxed. Specifications can turn
out to be as unsuitable as implementations can turn out to be wrong.



IN CONCLUDING, HERE ARE NINE LESSONS LEARNED from the Oberon project that might
be worth considering by anyone embarking on a new software design:



1. The exclusive use of a strongly typed language was the most influential
factor in designing this complex system in such short time. (The manpower was a
small fraction of what would typically be expended for comparably sized projects
based on other languages.) Static typing (a) lets the compiler pinpoint
inconsistencies before program execution; (b) lets the designer change
definitions and structures with less danger of negative consequences; and (c)
speeds up the improvement process, which could include changes that might not
otherwise be considered feasible.

2. The most difficult design task is to find the most appropriate decomposition
of the whole into a module hierarchy, minimizing function and code duplications.
Oberon is highly supportive in this respect by carrying type checks over module
boundaries.

3. Oberon’s type extension construct was essential for designing an extensible
system wherein new modules added functionality and new object classes integrated
compatibly with the existing classes or data types. Extensibility is
prerequisite to keeping a system streamlined from the outset. It also permits
the system to be customized to accommodate specific applications at any time,
notably without access to the source code.

4. In an extensible system, the key issue is to identify those primitives that
offer the most flexibility for extensions, while avoiding a proliferation of
primitives.

5. The belief that complex systems require armies of designers and programmers
is wrong. A system that is not understood in its entirety, or at least to a
significant degree of detail by a single individual, should probably not be
built.

6. Communication problems grow as the size of the design team grows. Whether
they are obvious or not, when communication problems predominate, the team and
the project are both in deep trouble.

7. Reducing complexity and size must be the goal in every step—in system
specification, design, and in detailed programming. A programmer's competence
should be judged by the ability to find simple solutions, certainly not by
productivity measured in “number of lines ejected per day.” Prolific programmers
contribute to certain disaster.

8. To gain experience, there is no substitute for one’s own programming effort.
Organizing a team into managers, designers, programmers, analysts, and users
is detrimental. All should participate (with differing degrees of emphasis) in
all aspects of development. In particular, everyone—including managers—should
also be product users for a time. This last measure is the best guarantee to
correct mistakes and perhaps also to eliminate redundancies.

9. Programs should be written and polished until they acquire publication
quality. It is infinitely more demanding to design a publishable program than
one that “runs.” Programs should be written for human readers as well as for
computers. If this notion contradicts certain vested interests in the commercial
world, it should at least find no resistance in academia.





With Project Oberon we have demonstrated that flexible and powerful systems can
be built with substantially fewer resources in less time than usual. The plague
of software explosion is not a “law of nature.” It is avoidable, and it is the
software engineer’s task to curtail it.






REFERENCES



1. E. Perratore et al., “Fighting Fatware,” Byte, Vol. 18, No. 4, Apr. 1993, pp.
98-108.

2. M. Reiser, The Oberon System, Addison-Wesley, Reading, Mass., 1991.

3. N. Wirth and J. Gutknecht, Project Oberon—The Design of an Operating System
and Compiler, Addison-Wesley, Reading, Mass., 1992.

4. M. Reiser and N. Wirth, Programming in Oberon—Steps Beyond Pascal and Modula,
Addison-Wesley, Reading, Mass., 1992.



Niklaus Wirth is professor of computer science at the Swiss Federal Institute of
Technology (ETH) in Zürich. He designed the programming languages Pascal (1970),
Modula (1980), and Oberon (1988), and the workstations Lilith (1980) and Ceres
(1986), as well as their operating software.



Wirth received a PhD from the University of California at Berkeley in 1963. He
was awarded the IEEE Emanuel R. Piore Award and the ACM Turing Award (1984). He
was named a Computer Pioneer by the IEEE Computer Society and is a Foreign
Associate of the National Academy of Engineering.



Readers can contact the author at Institut für Computersysteme, ETH CH-8092
Zürich, Switzerland; e-mail wirth@inf.ethz.ch.



Tags:
 * wirth
