
Rise Of The Device

John David Pressman

December 8, 2015

It seemed inevitable for a while. With the media going on breathlessly about the Death of the PC for the last five years, it would be hard to have missed the memo by now. Smartphones are going to kill the PC. Tablets are going to kill the PC. The PC is dying, no, no, dead. And there isn't anything you can do about it.

That inevitability seems to have all but collapsed. The parade of articles about the slowdown in tablet sales has become as breathless and shrill as the ones forecasting inevitable death for the PC. If you squint you can even see the seeds of heresy starting to sprout out of the winter snow.

Of course, even with the influence of shadowy public relations firms lurking in the background, we can be fairly sure there is genuine signal in this noise. These firms don't quite make their money telling lies so much as selective truths that we want to hear. For a story to run this widely and this long it needs to be something we in some sense want or believe to be true. This bodes much worse for the personal computing market than real competition, because where there is desire the market is incentivized to fulfill it. Somebody really, really wants to see traditional computer systems go away, and it's not just the slick marketing execs selling smartphones in stores across the country. But who?

To start, the sort of people who wouldn't identify as computer people. It doesn't take a genius to see why they would go for the mobile experience over a traditional desktop. In fact, exceptional intelligence might obscure the value. For most users, the invisible dark matter of people who are not interested in novelties like blinkenlights and find the use of computers to be of instrumental value at best, the desktop experience is slow and confusing. It involves arcane sequences of commands in an environment that seems to follow no hard rules of navigation or predictability.

If this seems too harsh a judgement for you, here is an explanation of how to change your environment variables in Windows 7, and here is a video showing the average New Yorker fumble around embarrassingly trying to explain the difference between a web browser and a search engine. However, diagnosing the problem as "computer complicated, people dumb" gives far too much credit to the computer industry and far too little to users.

The basic fact is that smartphones and tablets are capitalizing on the computer industry's inability to put out systems that are not a full-time job to understand at a user-proficient level.

A Tale of Two Architectures

People forget that the iPhone originally had no third-party applications and no intention of ever having them. Apple stumbled into the now-entrenched app store model basically by accident. Third-party software for smartphones is now their defining feature, and it should not be confused with the PC software model. Smartphones, tablets, and other smart devices come from an entirely different design architecture than the personal computer, one whose roots can be traced to the beginning of personal computing.

When Intel began producing its 4004 microprocessor, a personal computer was the last thing on its mind. The market it was aiming for was calculators. A calculator is sort of like a personal computer with a very narrow purpose. It took many years for people to understand that computers were not about numbers in the strict sense of calculation. With its next iteration, the 8008, Intel was aiming at the broader market we would now think of as embedded computing: traffic lights, soda bottling, the brains of a robot. Things that had their own purpose besides computing and used the general computing chip offered by Intel as a component.

The idea that the computer could be a standalone product was not on the radar. The development of the 8008 was sponsored by Computer Terminal Corporation (CTC). CTC sold computer terminals, which were a computer screen and keyboard combination that you connected to a larger computer over a phone or serial line. The first version of their product had heating issues, so they turned to microelectronics in the successor product to correct them. The design called for a general purpose computer to serve the limited functions of a computer terminal. In effect, this was a computer whose sole purpose was to connect you to a larger remote computer.

In the 1970s computer scientists had a vision of the future in which users would log into large machines to use the Internet and operate computer programs from their home terminals. The dumb home terminal would have no computing power of its own, instead being a portal through which more powerful machines would be accessed. Users would be billed for the resources they used and the time they spent on the machine. The personal computer turned this idea on its ear. It turned out that, in the short run, processors originally designed for embedded applications could be coaxed into performing like minicomputers from the early 60s, which had been half-million-dollar machines, and that this was more than enough to keep the interested hobbyist interested.

In the long run, it meant that timesharing mainframes were not necessary to have networked applications or usable home computing. Of course, timesharing mainframes never really went away, and their more nimble successors are better known as the metal on which web services are hosted. It also firmly established a competition between two classes of machine, and two competing visions of the way that users should interact with computer systems.

One kind of machine is the Smart Device: it has some core purpose which is enhanced by having a general purpose computer as a component. The general purpose computer is not meant to be arbitrarily programmed but to run a set of canned sequences and routines in the service of its core function. Smartphones, smart televisions, game consoles, tablets: smart devices have applications. Activities are tightly segregated and defined as part of an operating model. The entire experience is kept under strict control by the manufacturer to make sure that the device continues to perform its core functions as an appliance.

Another kind is the General Purpose Computer. The computer exists as a standalone unit, Turing's universal machine. Rather than being a component plugged into the architecture of existing appliances, the appliances and peripherals are plugged into it. While we might call the activities on a computer applications, the segregation is nowhere near as absolute as on a Smart Device. Computers are the sort of machine that will tolerate a windowing system, multitasking, complex input and output. Rather than being damaged by exotic hardware and risks, the computer thrives on them. Steve Jobs characterized the GPC as a truck, but this is dishonest. The General Purpose Computer is the Master Programmer of all other devices, including itself.

A smartphone is not a PC replacement, and it never will be for as long as it insists on being a smartphone. To the extent that a smartphone replaces a PC, it borrows certain curated aspects of what makes a PC useful, but it will never host its own development and growth the way a PC does. However, even if a smartphone cannot replace the role of a PC, it can certainly replace physical PCs in people's homes and in their minds. It's not implausible that this could happen, and it would represent a depressing set of failures in usability and innovation in the personal computer space.

Creation Of The Digital Eden

Of course, none of this is new. The major innovation on Apple and co.'s part seems to be marketing and the frame they're using to promote this model. For the first decade of the 21st century Microsoft had its Palladium initiative to secure desktop computers by stopping any software not explicitly signed by Microsoft from executing. This is basically analogous to the app store model. PC users hated it, hated the concept, and rejected it. Going from the example of Apple, the problem seems to be that Microsoft was approaching from the wrong direction. If you take a general purpose computer and lock it down so that it can only run software from the manufacturer, that's obvious damage to what a computer is supposed to be.

On the other hand, if you take a previously dumb device like a phone and give it general computation abilities, that's an improvement. You can get users to accept a situation they would never have accepted in the desktop computing environment. But to defeat an entrenched technology, the newcomer has to improve on the value of using the old. And the walled-garden approach taken by smartphone manufacturers has certain intrinsic advantages over the PC model.

For one thing, it locks out all the nasty malware authors and junk people have become accustomed to on desktop. The relentless torrent of crud that slows down one's computer isn't there when everyone steps away from desktops to use smartphones. It's the birth of a digital Eden, unencumbered by the pests and buzzards feeding on the desktop. (Anybody familiar with the history of Apple will find the garden analogy particularly amusing.) The mobile device ecosystem has a better immunity to third-party maliciousness as a direct result of having a better immunity to third-party control.

Not to be ignored, of course, is the new dimension of freedom offered by mobile devices: the freedom to use them in a cafe, in bed, walking down the street. Mobile takes computing off the desktop and into totally new environments. Perhaps if a computer is a universal machine, then the smartphone is a universal gadget. It replaces many previous staples of everyday carry, like separate cameras and audio devices.

But if a smartphone is a universal gadget, what's a tablet? A tablet doesn't quite fit into the model of a dumb device turned smart; it's a pastiche from the beginning. It bears little relation to the Personal Digital Assistants from which one might hypothesize it takes its ancestry. The tablet is a smart device whose function is to be a sanitized, centrally managed version of the personal computer experience in a cute mobile form factor. It is the model the personal computer must defeat to stay relevant, and the industry has not risen to the challenge.

Paradise Lost and Chasing Eden

But things are changing.

First came the reintroduction of PC crapware to phones, with carrier-installed spyware like CarrierIQ and difficult-or-impossible-to-delete preinstalled junk. Like its PC counterpart, this junk has a tendency to soak up disk space and other resources that users would like to spend on things which are not junk. (Or at the very least on their own junk.) Given that phones are generally more resource-constrained than desktop or laptop computers, the waste is even more offensive.

Then came the new forms of crapware we had to invent because we neutered the old ones. Like a bacterium that mutates to adapt to environments with widespread antibiotics, malware and junk have evolved. The crapware is cross-platform and on the web now; malware authors and media heads read the newspapers too, and they're well aware of where the action is. Everybody and their grandmother wants you to install their app, follow them on Twitter, accept their push notifications, click their ads. It's all just as slimy and intrusive as anything on desktop ever was. If anything, the sense of violation is greater: when you're dealing with a fraction of the screen real estate you had before, everything is magnified. The rumblings have been there for a while. With Google coming down on intrusive ads for grandma's mobile app on every site you go to, a tipping point has been reached. When even the Mobile Moguls who have a vested interest in app installs are being forced to rein in the excesses of their lieutenants, you know the situation is out of control.

There's a sort of collective realization that mobile isn't going to save us from slimy marketing departments, bad code, and bad design. Mobile is in just as much need of saving as the PC platform, and it's clear that mobile computing will not be the savior. So how can we be saved from the usability nightmare we're experiencing with personal computing without compromising what makes a computer distinct from a smart device? I have some ideas.

In my proposal I will start with the simplest, most practical ideas, which can be implemented without any new innovation or conceptual understanding. I will then move on to conceptual innovations oriented towards people who would like to be user-proficient with computers without making a career of it. Finally I will discuss plausible new avenues for personal computer software to take, based on the natural advantages of the personal computer platform.

First Steps

In general, the easiest things are the ones largely controlled by experts and manufacturers that can be done on behalf of users without their active involvement. The tablet computer security model is basically entirely predicated on this dynamic. The personal computer security model should not be, but elements of this dynamic would be useful.

End Crapware

In particular, preinstalled junk that comes with the computer needs to go. The reason it exists in the first place is a classic example of a coordination problem. Computer manufacturers can shave money off the price of a new computer by installing junk on it before sending it out the door. Buyers are in no position to evaluate the level of junk that will come with their computer, so it's a hidden cost of purchase. Since doing this lets one manufacturer drive its prices down, it makes other manufacturers look overpriced for what is seemingly the same product. Therefore, to compete, other manufacturers have to follow suit, in what quickly becomes a spiral resulting in the complete crapification of the default systems shipped with virtually every personal computer.

One potential avenue for remedy is legislation. Making it illegal to ship crapware with computer systems would presumably stop the practice overnight. Unfortunately it's not so simple. For one thing, it is difficult to define exactly what crapware is. One man's trash is another man's treasure, and any definition of crapware conservative enough not to impinge on legitimate software would be too mild to solve the problem entirely. There might be potential in outlawing the most abusive forms, but any legislation is a heavy-handed measure best avoided. A more subtle form of legislation might be to force manufacturers to provide itemized lists of the software installed on their computers, with fines for omissions. This way, having lots of junk on the system would no longer be an entirely hidden cost of purchase. It might even be possible to make this the status quo without the blunt instrument of state intervention. Manufacturers could voluntarily provide an itemized list and rely on the loss of reputation that would result from lying to make their listing credible. This would have largely the same effect as a law, in that it would hopefully recalibrate the market towards the true cost of purchase for computer systems. (Since ultimately you pay for every penny saved when it comes to crapware.)

General consciousness-raising about this issue will help bring about a solution. The process has already started in earnest, with Lenovo having been caught with its hand in the cookie jar over preinstalled spyware and making a public statement vowing an end to preinstalled junk and promising to provide itemized lists of all software that comes with its systems. Lenovo then went on to get caught with its hand in the cookie jar again, and then a third time in the same year just for good measure. With luck they'll make good on their promise and kickstart a widespread move away from crapware and towards more accountable computer sales.

Software Repositories

I spent years arguing with Windows sysadmins about the merits of software repositories, only to hear them tell me that they're everything from archaic to bad design, so of course it made me laugh maniacally to see Microsoft include one in Windows 10. What's a software repository, you ask? Instead of individual users going around with a search engine to find each individual software package they would like to install and clicking Next twenty times to install it, you have a central software hub that packages the most popular software for easy installation. Instead of running around the Internet and exposing yourself to accidentally downloading a malicious version of a legitimate program, you simply type the program's name into a command line installer or check a box in a graphical installation program, and the rest is handled for you.

One downside of a software repository is that it suffers from some of the same weaknesses an app store does. It takes a certain amount of control away from developers and introduces a third party into the relationship between the user and the developer of a piece of software. Another problem with this model is that the repository can often lag behind the official releases of the software by a large margin, meaning that developers have difficulty shipping new features to users. In the worst-case scenario, bug fixes and security holes go unpatched because the maintainers are not updating from the upstream project's releases. In some repository management schemes, such as that of Debian Linux, this is an explicit feature rather than a bug.

There are ways to mitigate some of these issues. Ubuntu Linux, a project based on Debian, solves this problem by allowing developers to provide their own repositories for software, so that updates and bug fixes can be pushed to users on the schedule preferred by the original developers. Part of what separates the software repository model from the app store model is that a package manager (the software used to interact with a repository) can be used with as many providers as the user would like to install software from; an app store is a mandatory single intermediary for all software installed on the device.
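
To make the mechanics concrete, here is a minimal sketch in Python of the core loop a package manager runs: consult every repository the user has configured, look the requested package up in each repository's index, and verify the artifact against the digest the repository vouches for before installing it. The repository names, package names, and in-memory "downloads" are all invented for illustration; real package managers also sign and verify the indexes themselves, which this sketch does not attempt.

```python
import hashlib

# Stand-ins for actual downloads, so the sketch runs without a network.
FAKE_DOWNLOADS = {
    "photo-editor-2.1": b"pretend this is the photo-editor 2.1 tarball",
    "photo-editor-2.2": b"pretend this is the photo-editor 2.2 tarball",
}

# Hypothetical repository indexes: each maps a package name to an artifact
# and the SHA-256 digest the repository vouches for. In a real package
# manager these indexes are fetched and cryptographically verified.
REPOSITORIES = {
    "distro-main": {
        "photo-editor": {
            "artifact": "photo-editor-2.1",
            "sha256": hashlib.sha256(FAKE_DOWNLOADS["photo-editor-2.1"]).hexdigest(),
        },
    },
    "developer-ppa": {
        "photo-editor": {
            "artifact": "photo-editor-2.2",
            "sha256": hashlib.sha256(FAKE_DOWNLOADS["photo-editor-2.2"]).hexdigest(),
        },
    },
}

def install(package_name: str) -> None:
    """Consult every configured repository, verify the artifact, then 'install'."""
    for repo_name, index in REPOSITORIES.items():
        entry = index.get(package_name)
        if entry is None:
            continue  # this repository does not carry the package
        data = FAKE_DOWNLOADS[entry["artifact"]]  # stand-in for the download step
        if hashlib.sha256(data).hexdigest() != entry["sha256"]:
            raise RuntimeError(f"{package_name} from {repo_name} failed verification")
        print(f"Installing {package_name} ({entry['artifact']}) from {repo_name}")
        return  # a real tool would unpack and register the files here
    raise LookupError(f"No configured repository provides {package_name}")

install("photo-editor")
```

The important property is that the trust decision lives in the repository indexes the user chose to configure, not in whatever download link a search engine happened to surface.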

Software repositories would help restore some of the trust that has been lost to malware, adware, and shoddy software. Inclusion in the repository is not mandatory, but it does provide a seal of approval. Having a base of known-good software for casual users who do not want to go trawling through a sea of crap is a massive usability win and helps put an end to rampant scam versions of legitimate software. It's probably better in the case of Windows that a third party maintain the repositories, to avoid the obvious conflict of interest and another visit from the Department of Justice.

Improvements In User Education

The ideal computer system would be so intuitive that no part of it would need to be explained. We're not there yet; even the Macintosh came with a manual. In fact, if anything, documentation and manuals have declined since then. The manual shipped with a modern Wintel system is useless, basically explaining how to turn the computer on and some system-specific functions but nothing of importance to most users. It seems ridiculous to ship users a copy of any operating system without a detailed manual on how to use it. Just compare the User's Guide shipped with a 1982 Commodore 64 with the one from a 2015 Dell XPS 13. Actually reading either is barely required; just compare the tables of contents!

The restoration of decent manuals would go a long way towards helping people understand their computers, but if manufacturers aren't up to the task they don't need to be relied on. As part thought experiment, part realistic proposal, I would suggest a book, published annually and coauthored by a panel of computer industry experts in various disciplines. It would describe the most important user-facing concepts in computing, with annual updates to account for new developments in the field. Ideally this book would have world-class technical writing, but it could probably be executed with merely adequate technical writing. It would try its best to be platform-agnostic without being so abstract as to not speak to the reality of systems in the field.

The book would be published in two formats: one the complete text, the other a version covering only what has changed since the last edition. In this way somebody who has read the book once can keep themselves up to date on its contents indefinitely, for as long as it continues to be published. Ideally this book would be free or subsidized so it could be sold at low cost. There is no shortage of topics such a book might cover.

The book should be technical enough not to be inaccurate, but not so detailed or extraneous as to be unreadable to its target audience of interested laypeople and those who need to understand these systems to function day to day. It could probably be accomplished by one dedicated person in its first edition, but would become difficult to publish sustainably year after year without a staff. We gave just about everyone in the first world a computer and then forgot to tell them how to use it. Most of the real gains from these machines won't be realized until we have a wide population of people who understand how to use them. If we fail to explain for long enough, these gains may never be seen.

But the biggest present crisis is without a doubt computer security. One of the largest chapters in our hypothetical book should be dedicated to up-to-date security information and what the latest threats and scams look like. This work is so important that it might even need to be split off into its own volume: a bestiary of malware, scams, and frauds targeted towards the people on the front line fighting them, the average naive computer user. I plan at some point to publish a series of articles or short stories explaining computer security through the analogy of programs as Asimov-esque robots. Visualizing the workings of computer programs as disembodied phantoms confuses people, but once you put the problem in terms of physical machines that need to trust other machines, it becomes tangible and obvious.

In the past, when I have discussed this idea with computer professionals, they would tell me that this is something users do not need to concern themselves with: users should listen to the advice of their local system administrator, apply their updates, and go about their business. This is a bit like saying that people do not need to understand germ theory; they should just follow the advice of their doctor and wash their hands in the specific situations recommended by experts. Never mind that there are countless situations not covered by these instructions in which one should wash one's hands. Never mind that this puts the advice of experts on the same evidential footing as magic, and that people are less likely to believe something they do not think they understand.

This dismissal ignores the effects that ignorance has on the marketplace. The widespread understanding of germ theory is what put medicine men, shamans, faith healers, and other placebo peddlers largely out of business. Someone who wants to learn how to secure their computer right now is awash in a sea of snake oil. If the real signal is out there, it's drowned out by the monetary incentive to push archaic non-solutions like antivirus software.

The Contract Between Program Author And User Needs To Be Rewritten

One of the biggest positive changes brought on by mobile computing is built-in permissions systems for programs. Of course, us nerds had a technical term for this before Steve Jobs even imagined an iPhone: Mandatory Access Control (MAC), which rolls right off the tongue like all the other things you can't say ten times fast. You might not know it, but this is a feature that the NSA and other security-conscious organizations lobbied operating system vendors for, and got. The reason you've never heard of it is that setting up the implementations of MAC that exist on the desktop is so hard that even experts don't bother. A good MAC lets you control what files a program is allowed to access, what operating system functions it can use, and everything else you would expect from a good permissions system, but better.

Part of the problem with setting one of these up is that it requires you to know what resources each program you use on your computer accesses, which isn't really obvious even to an expert. One method in use to make things a bit easier is to have a tool that lets the application access everything while recording what it does, and then creates a restrictive profile based on the observed behavior. This is all well and good, but who wants to sit there for hours clicking every button in their favorite photo editor to make sure its security profile is complete and doesn't leave anything out? How boring.
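
Here is a toy sketch, in Python, of that record-then-restrict idea. The "observations" below are a hand-written stand-in for what a real learning mode (AppArmor's complain mode, for instance) would collect from audit logs, and the output format is invented.

```python
from collections import defaultdict

# Hypothetical trace of what a program touched while running unrestricted.
OBSERVED_ACCESSES = [
    ("file", "/home/user/Pictures/cat.jpg", "read"),
    ("file", "/home/user/Pictures/cat.jpg", "write"),
    ("file", "/home/user/.config/photo-editor/settings.ini", "read"),
    ("network", "updates.example.com:443", "connect"),
]

def build_profile(observations):
    """Collapse the raw observations into a deduplicated allow-list."""
    profile = defaultdict(set)
    for kind, resource, operation in observations:
        profile[(kind, resource)].add(operation)
    return profile

def render_profile(profile):
    """Print the profile as allow rules a MAC system could then enforce."""
    for (kind, resource), operations in sorted(profile.items()):
        print(f"allow {kind} {resource} {{{', '.join(sorted(operations))}}}")

render_profile(build_profile(OBSERVED_ACCESSES))
```

Generating the profile is the easy part; the tedium the paragraph above complains about lies in exercising the program thoroughly enough that the observations are actually complete.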

Unfortunately, desktop computing needs this. It's huge: most of the un-fun of using a desktop or laptop computer is how much time you have to spend worrying about malware. Most malware gets onto your computer by exploiting flaws in the programs you use in order to install itself. The current default security paradigm was invented in the 80s, when there was a reasonable expectation of trust in the programs you ran; this is no longer the case. In this archaic security model, one program running under a user account can access anything that user account can. It is as if, to hire someone to wash your car, you had to give them the ability to see your bank statements, act as your accountant, and redo the electrical work in your house.

Mandatory Access Control rewrites this contract between program author and program user. It says that no, just because you have a program on my computer and you show me funny cat pictures does not mean you get to peek at my tax records. With this in mind, I think the onus should be on software authors to provide security profiles for the wares they create. They know better than anyone else exactly what resources their program needs to consume, and just like on mobile, these resource accesses should be declared up front and shown to the user. To go further, the user should have the ability to deny the application just about any request it makes, and if that causes the application to fail, then let the application fail.
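
As a sketch of what that rewritten contract could look like, here is a small Python illustration in which the author ships a declared-resources manifest, the user approves or denies each entry, and the system refuses anything that was either undeclared or denied. Every name in it (the program, the resource strings, the decision table) is made up; it mirrors the mobile permissions flow rather than any specific desktop MAC implementation.

```python
# Hypothetical manifest shipped by the program's author, declaring up front
# every resource the program intends to touch.
MANIFEST = {
    "cat-picture-viewer": [
        "read:~/Pictures",
        "network:pictures.example.com",
        "read:~/Documents/taxes",   # suspicious; nothing a picture viewer needs
    ],
}

# Decisions the user made when shown the manifest at install time.
USER_DECISIONS = {
    "read:~/Pictures": True,
    "network:pictures.example.com": True,
    "read:~/Documents/taxes": False,   # denied; the program must cope or fail
}

class PermissionDenied(Exception):
    pass

def request(program: str, resource: str) -> None:
    """Grant the access only if it was both declared and approved by the user."""
    declared = resource in MANIFEST.get(program, [])
    allowed = USER_DECISIONS.get(resource, False)
    if not (declared and allowed):
        raise PermissionDenied(f"{program} may not access {resource}")
    print(f"{program} granted {resource}")

request("cat-picture-viewer", "read:~/Pictures")          # fine
try:
    request("cat-picture-viewer", "read:~/Documents/taxes")
except PermissionDenied as err:
    print(err)                                            # the app fails, as it should
```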

Speculatory New Directions For Personal Computing

Omar Rizwan at Stanford recently published a paper outlining one of the most powerful ideas of the computer revolution that fell by the wayside: simulation. Simulation was one of the major ways people were expected to use computers to model the world around them. Students could have complicated systems explained to them through interactive models of government, ecosystems, electrical engineering, and more. Managers could model their assembly lines as computer simulations and gain new insight by adjusting the parameters of the model.
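
As a toy example of the kind of model that last sentence describes, here is a short Python sketch of an assembly line: a handful of stations separated by small buffers, each of which occasionally jams. Every number in it is invented; the point is only that changing one parameter and rerunning the model is itself a form of insight.

```python
import random

def simulate_assembly_line(stations: int, buffer_size: int,
                           failure_rate: float, hours: int, seed: int = 0) -> int:
    """Count units finished by a line of stations separated by small buffers."""
    rng = random.Random(seed)
    buffers = [0] * (stations + 1)   # buffers[0] is raw material, buffers[-1] is output
    buffers[0] = 10 ** 9             # effectively unlimited raw material
    for _ in range(hours * 60):      # one-minute time steps
        # Work from the end of the line backwards so units flow downstream each step.
        for s in range(stations, 0, -1):
            station_jammed = rng.random() < failure_rate
            downstream_full = s < stations and buffers[s] >= buffer_size
            if buffers[s - 1] > 0 and not station_jammed and not downstream_full:
                buffers[s - 1] -= 1
                buffers[s] += 1
    return buffers[-1]

# Adjusting a single parameter and watching throughput respond is exactly the
# kind of interaction the simulation idea was supposed to put in users' hands.
for buffer_size in (1, 2, 5):
    units = simulate_assembly_line(stations=5, buffer_size=buffer_size,
                                   failure_rate=0.05, hours=8)
    print(f"buffer_size={buffer_size}: {units} units per shift")
```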

One reason a device like a tablet can safely dispense with the programmability of a general purpose computer is that the general purpose computer isn't taking full advantage of its programmability in the first place. Entire classes of system aren't being developed and aren't being used because users are not trusted or do not have the education to operate them. (Or, of course, because the systems are being designed poorly.) Even if you take it on faith that the average user is just too stupid to take advantage of the computer as a medium, there is still an entire population of doctors, lawyers, managers, and other smart, motivated people who are not traditional computer people but who would love to have this power and could make good use of it.

These systems will require good input devices, which mobile systems are loath to provide, and will be enhanced by the better ergonomics and comfortable screens that mobile systems do not have.

Hardware-wise, general purpose computers need to get back into the business of weird hardware and should ship more novel hardware with systems by default. Laptops should incorporate cellular technology, offer optional data plans, and include GPS. Desktops have long been overdue for built-in GPS units. What about FPGAs? A shoutout goes to PCDoesWhat for trying to move the needle; though it's a bit too conservative, in the vein of more of what you already have, it's a start.

I'll elaborate on more ideas at length in future blog posts, but I'd be lying if I said I knew exactly what will and won't turn out to be a powerful addition to the PC platform. That's why we need to experiment.

Wrapping Up

The worst-case outcome looks less bad than it did even five years ago. In 2011 we had to contend with SOPA, and as scary as that was, we beat it, burning a lot of Hollywood's political capital in the process. The Raspberry Pi was nowhere in sight, and it seemed like a real possibility that market forces would kill the ability of anybody other than a dedicated professional to buy a general purpose computer. Now there's a glut of Raspberry Pi-esque devices and the PC platform is running very strong when it comes to gaming and high-end applications. Tablet sales have fizzled out from their rise and rise, and the iPhone is in incremental-innovation mode. Suddenly this fight seems much more winnable than it did.
