Repost: The end of software development

Originally posted on Medium, Apr 2, 2017

Computers, to be useful for any particular task, need software. The practice of creating that software, developing it, programming it, happens to be considered a specialised one. That wasn’t always the case. Microcomputers (PCs from the 70s and 80s) used to boot into a programming environment, such as some variety of BASIC. The user manual that came with the Commodore 64 included programming instructions. The expectation was that, to a much greater extent than now, computer-users would be programmers. Programming, as in writing code, was part of standard procedure for operating a computer.

What’s changed?

The perception of the situation, at least. The division of labour here has intensified. Software development became a role separate and distinct from regular, productive uses of a computer. Everyone (including the developer) uses software created by others. That software may be generally available, for free or commercially. Or it might be developed bespoke.

An organisation that requires bespoke software for its business has several options. It can outsource the task to an external agency. It may decide to undertake the project in-house. It may have developers on staff ready to go. Or it may employ some, or train some.

These options aren’t really so distinct. A bespoke software project involving external developers will necessarily be a collaboration between the companies. It’ll involve training existing staff, because, clearly, they’ll need to learn to be able to use the new software. It didn’t exist before.

In theory, software development comes to an end when the project is done. The software’s functionality satisfies the requirements. Perhaps one day we’ll have all the software we need, and the role of ‘software developer’ will be obsolete.

Back to reality…

Software is never finished. It is only abandoned.

Software projects tend to go through multiple phases of iterative development and use. Requirements evolve, so software needs to be adaptable. The substance of software is highly malleable, changeable stuff. Any possible program can, with the necessary code-changes, be transformed into any other. How easily that process can be done is a function of structural design and complexity. A well-designed, simpler system is more flexible.

One way to gain flexibility is to provide user-operable configuration.

Software is, to varying extents, configurable. That means a user may adjust its functionality within a set of parameters defined by the developer. This activity is generally considered part of ordinary usage of software. It doesn’t directly accomplish the purpose of the software; it serves that purpose in a secondary way, by letting the adjustable options be set to more preferred values.
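To make ‘defined parameters’ concrete, here’s a minimal sketch in JavaScript (the option names are invented for illustration, not taken from any real product). The user can change the values, but only within the bounds the developer built in:

// Developer-defined configuration parameters (names are hypothetical).
const defaults = {
  theme: "light",        // allowed: "light" or "dark"
  itemsPerPage: 20,      // allowed range: 5 to 100
  emailNotifications: true,
};

// Users may adjust the options, but only within those defined limits.
function applyUserSettings(userSettings) {
  const config = { ...defaults, ...userSettings };
  config.itemsPerPage = Math.min(100, Math.max(5, config.itemsPerPage));
  if (!["light", "dark"].includes(config.theme)) config.theme = defaults.theme;
  return config;
}

console.log(applyUserSettings({ theme: "dark", itemsPerPage: 500 }));
// → { theme: "dark", itemsPerPage: 100, emailNotifications: true }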

Configurability is a double-edged sword. The more configurable a system is, the more potential it has for users to adapt it to better serve their needs, without the need for specialised development skills. But a more powerful configurable system is more complex. Highly-complex system configuration becomes a specialised skill unto itself. Systems like Drupal allow different users to be restricted to subsets of the vast, intimidating array of available configuration features, for the sake of mental health, as well as for system security.

Maximal configurability is where the specialised developer works, i.e. a programming environment that permits unrestricted transformation of a system.

The task of incorporating new functionality, when it exceeds the scope of configuration, of course falls to the programmer. The complexity of a system’s configuration options will be, to some extent, reflected in the structure of the program code. A more complex system is more difficult to adjust without breaking stuff. That means development work becomes more risky and expensive.

Reducing a system’s configurability, by removing unwanted options, makes a system simpler to use, potentially enhancing productivity and reducing training costs. The program-level aspect of this might involve deleting code, reducing the program’s overall size and complexity. For a developer, this is a very pleasing notion.

Feature creep and bloat

Consider this well-known dysfunction of software development, the tendency for a project to grow in scope excessively.

Growth is good when it means a software system gaining more capabilities in a healthy way. So I’ve qualified my description of ‘creep’ and ‘bloat’ to include the concept of excess. But what does that really mean?

We may use ‘feature creep’ or ‘scope creep’ to refer to the phenomenon of new requirements being added during the development process, which isn’t a problem in and of itself. Problems arise when the extra work isn’t matched by extra resources, so existing resources get stretched to excess. Some additional requirements and some resource-stretching are to be expected in real-world software development. Keeping that within manageable limits falls to the discipline of project management. Totally eliminating ‘creep’ isn’t the point — adaptable, flexible software is what we’re trying to make, and that takes an adaptable, flexible development team.

Mature software systems that have grown well beyond their original version may be characterised as ‘bloated’. One might cite, in particular cases, objective, technical reasons for this designation. E.g.:

  • it’s too resource-intensive; the software uses excessive processing, memory, bandwidth, etc.
  • with rising complexity, it’s grown too difficult to use
  • its code has grown too complex, stalling further development

These are all matters of judgment. Alas, the question of software bloat does not admit of clear, unambiguous answers derived from some universally-accepted calculation of factors.

Increasingly resource-intensive software can be run on better hardware.

Difficult-to-use software can be delivered with additional training.

A tangled, rusty codebase can be refactored, given sufficient development resources.

That is, if the project owner has the necessary resources to spend.


Zawinski’s Law: “Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.” Coined by Jamie Zawinski (who called it the “Law of Software Envelopment”) to express his belief that all truly useful programs experience pressure to evolve into toolkits and application platforms (the mailer thing, he says, is just a side effect of that).

“We won’t have the resources to develop and maintain a mail-reader in our bee colony-monitoring application.” That line of argument might well be convincing to the director of the bee-management institution, especially if we can back it up with demonstrable calculations, as seems plausible in this case.

Dedicated teams are already building mail software. Rather than duplicating their efforts, we want our bee system to cooperate with an existing mail system, making use of it through APIs.
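To sketch what that might look like (this is hypothetical: the mail service, its endpoint, the auth scheme and the payload shape are all made up, and it assumes Node 18+ for the global fetch), the bee system just hands alerts to an external mail provider over HTTP instead of growing a mail module of its own:

// Hypothetical: delegate mail to an external service's HTTP API.
async function sendHiveAlert(hiveId, message) {
  const response = await fetch("https://mail.example.com/api/v1/send", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.MAIL_API_KEY}`,
    },
    body: JSON.stringify({
      to: "beekeeper@example.com",
      subject: `Alert from hive ${hiveId}`,
      text: message,
    }),
  });
  if (!response.ok) throw new Error(`Mail service error: ${response.status}`);
}

// e.g. sendHiveAlert("hive-42", "Temperature above threshold");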

The efficiency argument seems clear.

So we have good reasons to resist the temptation to add non-core functionality to some system, where that functionality is arguably better-served by other software. Why does that temptation arise in the first place?

Modern operating systems provide filesystems. So, a user can draw a picture with a graphics program, save it to a file, then send that file to someone else using an email program. The developers of the graphics and email programs didn’t need to directly cooperate for that to happen.

This exemplifies the idea that with proper infrastructure provided by the lower-level system, a non-specialist software-user can take multiple programs and work with them in combination. When they can’t, then they need to call up specialist developers, systems integrators.

Sufficiently decentralised architecture will let non-specialist users combine a set of simple tools to achieve their desired system functionality. That’s the next frontier for the computing world, and seems to belong to the disciplines of designing operating systems and networks. As they evolve, certain sorts of specialised software development work will become obsolete. Then we’ll move to a new level of complex system-building.

Anticipated challenges

It’s not enough to merely invent or discover a better basic system architecture. You also need to convince other developers to cooperate, to write their software so it plays nicely in that new environment. How is that done? They need some incentive. The new system can’t be just a little bit better than existing alternatives with which developers are already quite comfortable, thankyouverymuch. It needs to be a vast leap in technical capability.

Or, you could pay them. Microsoft pays its own developers to write programs for its Windows systems, and it tried to encourage outside developer support for its phone platform by means of cash incentives. That initiative was unsuccessful. Maybe if they’d spent more money, it would have worked. Who can tell?

Most software development organisations lack the funds for this crude approach. But many companies besides Microsoft (and including them) invest much in the struggle for hearts and minds of software users, and that subset who are developers. Information technology is rife with rival schools of thought and the politicking which is necessary for their propagation.

Ted Nelson has discussed this extensively and entertainingly:

Every faction wants you to think they are the wave of the future and because there are no objective criteria, as in religion there are no objective criteria, there are thousands of sects and splinter groups.

When we’re trying to build something, shouldn’t our technological choices be determined by objective, factual criteria? Certainly, we may agree on many facts about the present technological landscape. What about the future? We’re building the future. Its shape is indeterminate. It’s made of malleable stuff, and the process of its shaping is one of much creative freedom.

Or, if you’re a hardcore cynic, it used to be. The technologically-creative formations of yesterday shape and guide and constrain our present-day options.


Further reading:

Unix philosophy – Wikipedia
“The Unix philosophy, originated by Ken Thompson, is a set of cultural norms and philosophical approaches to minimalist…” (en.wikipedia.org)

… and watching:

(Image: Thomas Cole, The Consummation of Empire, from The Course of Empire, 1836)

On hypertext, its origins, as it is, and as it could be

Originally posted on Medium, Apr 2, 2017

EVERYTHING IS DEEPLY INTERTWINGLED. In an important sense there are no “subjects” at all; there is only all knowledge, since the cross-connections among the myriad topics of this world simply cannot be divided up neatly.

— Ted Nelson, Computer Lib/Dream Machines 1974

What is hypertext?

The term was coined by Ted Nelson, ‘to mean a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper.’ [1]

According to the founder of the World Wide Web, Tim Berners-Lee, hypertext is: ‘Nonsequential writing; Ted Nelson’s term for a medium that includes links. Nowadays it includes other media apart from text and is sometimes called hypermedia.’ [2]

Ideas about non-linear ways of representing information, stemming from the desire to transcend constricting structures that tend to be imposed by writing, precede those guys. They precede the invention of the computer. But it was computer technology that enabled a blossoming of competing hypertext projects, various network protocols and commercial software products.

Here I’ll talk about two of these, the web and Xanadu.

Xanadu is Nelson’s own baby, founded in 1960. The Xanadu vision, which predates the Internet, is of a publicly-accessible, globally-distributed archive of knowledge. It included an identity and payment system. Content in Xanadu could be accessed for free, or rightsholders could require payment for (permanent) access.

Xanadu documents would be linked together, and links were to be two-directional. Another sort of document linking was ‘transclusion’, a method for quoting a text that always links back to the original source. As these features imply, Xanadu content would be, by necessity, permanent. Documents could be added to, but never deleted (wholly or partially).
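As a toy illustration of those two features (this is not Xanadu’s actual design, just a sketch of the idea in JavaScript): documents are permanent, links are queryable in both directions, and a ‘transclusion’ is a reference to a span of the original rather than a pasted copy.

// Toy model, not Xanadu's actual design.
const documents = new Map();   // id → { text }, never deleted
const links = [];              // { from, to }, queryable in both directions

function publish(id, text) {
  if (documents.has(id)) throw new Error("documents are permanent");
  documents.set(id, { text });
}

function link(from, to) { links.push({ from, to }); }
function linksInto(id) { return links.filter(l => l.to === id).map(l => l.from); }

// A transclusion quotes by reference, so it always resolves to the original.
function transclude(sourceId, start, end) {
  return { sourceId, get text() { return documents.get(sourceId).text.slice(start, end); } };
}

publish("essay-1", "Everything is deeply intertwingled.");
publish("essay-2", "A commentary.");
link("essay-2", "essay-1");
console.log(linksInto("essay-1"));              // → [ "essay-2" ]
console.log(transclude("essay-1", 0, 10).text); // → "Everything"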

While the Xanadu project suffered setbacks over the years, Berners-Lee’s Web beat it to market. Born in the ’90s and flourishing well into the 2010s, it has prevailed as the champion in the global hypertext system competition.

Nelson still persistently pursues his own alternative. He’s developed incisive criticisms of the web, and of other parts of our prevailing computer paradigm.

https://qz.com/778747/an-early-internet-pioneer-says-the-construction-of-the-web-is-crippling-our-thinking/

I recommend his YouTube series, Computers for Cynics.

Hypertext in the web

HTML, Hypertext Markup Language, is the basic substance of Web pages. Traditionally, HTML documents include all the readable text for a page, plus ‘markup’, which provides extra information to the browser concerning structural semantics — e.g. the start and end points for paragraphs, heading text, lists, and stress emphasis.

The primary hypertext-ish thing about HTML is one piece of markup, the <a> tag. It’s used to make links. Web links are wholly contained within their pages. They can link to sections within the page, or out to other pages. Nothing stops the author of a page from linking to pages anywhere else on the Web, whether they’re on the same site, or on another one on a different server, on a different continent, and controlled by an unrelated party.

Web links are one-way, so for me to link to your site from mine, there’s no need for your site to coordinate with mine. It functions unilaterally. You might remove your page, and then my link becomes broken, ‘dead’ — the rest of my page remains fine, of course. I won’t know that the link is non-functional until someone tries to click it.
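Since there’s no back-channel, the only way I’d find out is by trying the links. A quick sketch of a dead-link checker (Node 18+; the URLs here are placeholders):

// Try each outbound link and report any that no longer respond.
const outboundLinks = [
  "https://example.com/still-there",
  "https://example.com/maybe-gone",
];

async function checkLinks(urls) {
  for (const url of urls) {
    try {
      const res = await fetch(url, { method: "HEAD" });
      console.log(res.ok ? `OK    ${url}` : `DEAD (${res.status}) ${url}`);
    } catch {
      console.log(`DEAD (unreachable) ${url}`);
    }
  }
}

checkLinks(outboundLinks);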

My act of linking to you doesn’t affect your site. At least not directly. Not until Google came along, but that’s another story. (And here I’m not referring to the absurdity of news publishing companies wanting to charge Google a levy for linking to their free content, though that is worth mentioning. Compare this with Xanadu, where the act of publishing carries explicit permission to freely link content, and an integrated payment system is part of the basic infrastructure, something potentially invaluable to the presently-suffering news industry.)

I highly recommend Lo and Behold (video clip), the 2016 documentary about the Internet. Director and noted luddite Werner Herzog spoke with Ted Nelson for the film. We could sum up Ted’s critique of the Web and modern computing with the bold statement he gives:

“Humanity has no decent writing tools”

Is this hyperbole? The best books in existence were probably written before computers existed. We’ve still got the low-tech writing tools. Computers have expanded our toolset. Ted’s written several influential books. I think this here essay is pretty good. Can we not claim that the writing tools we possess are serviceable, with room for improvement?

Hypertext: who needs it?

Well, look at this. Computers, being general-purpose machines, have done more than increase our writing capacity. They are everywhere, transforming all aspects of life in unpredictable ways. The game has changed. The landscape is shifting. There’s no possibility of being able to deal with this exponentially scaling complexity unless we level-up our writing (and/or thinking) methods.

Computers have already revolutionised writing, in some ways. Instantaneous global publication is an advance, for sure. And that’s yet another major contribution to the seemingly intractable miasma of context surrounding and bleeding into any particular subject matter you may care to address! That’s why I want something like Xanadu, a tool that manages text with natural, free-form interrelationality: allowing anything to be connected to anything else, without imposing linear or hierarchical structures. Real hypertext.

Digital text system evolution has neglected the ‘deep structure’ that Nelson has been discussing since the ’60s, and instead nurtured development of presentational and decorative aspects: typefaces, page layout, animations, etc. He often seems to speak dismissively of such advances.

These things are why I have my job (as a frontend developer). And I love working on beautifully-designed stuff. Lacking a particular aptitude for visual design myself, I’m thankful that I get to work with great designers. I’m not as cynical as Ted about this stuff. What’s fuelling my agreement with Nelson is the opposite of cynicism: a deep optimism that improvements to hypertext as we know it are possible, and will prove to be immensely valuable.

But here’s what a cynical view might look like.

1. In the struggle to represent information about an increasingly complex world, we build a better text system. But couldn’t that just complete a cycle on an endless feedback loop of complexity-management tools generating more tangled complexity, necessitating the development of yet more sophisticated tooling, ultimately solving nothing?

2. Consider the possibility that digital augmentation of the process of writing tends to degrade rather than improve it — distracting us, cluttering our perspectives, enabling useless information-hoarding.

(For a developer like me, there’s a temptation to use the lack of a decent hypertext system, a great CMS or a well-designed blog of my own, as an excuse to procrastinate rather than just write down what needs to be written. And on the other side, to use the lack of a cache of good writing to defer the development of that website to publish it…)

The original hypertext system is the human brain. Specifically a gifted, educated one. Interactive navigation of hypertext the old-fashioned way is: thinking or conversation. Can these natural capacities be significantly improved by mechanical automation?

We’ll keep trying, and find out.


Hundreds of companies are still working on expanding the capabilities of the web.

Lots of it is presentational stuff, like CSS layout functionality that’ll allow for more magazine-like designs. There are lots of features that enhance the Web as a software platform. People are working on virtual reality features.

And some stuff that’s more in the Xanadu direction: annotation, payments, identity authentication. Finally!

Ted’s still working on Xanadu. The latest implementation happens to be a web app.


1. Theodor H. Nelson, “A File Structure for the Complex, the Changing and the Indeterminate,” Association for Computing Machinery: Proceedings of the 20th National Conference, 84–100. Ed. Lewis Winner, 1965.

2. Tim Berners-Lee, Weaving the Web Glossary, 1999 https://www.w3.org/People/Berners-Lee/Weaving/glossary.html

N.B. I re-titled this essay today (13 November 2017) and demoted the original (hyperbolic, probably misunderstanding-provoking) title — Hypertext: who needs it — down to a subtitle.

Non-hierarchical file system

Long ago, as the design of the Unix file system was being worked out, the entries . and .. appeared, to make navigation easier. I’m not sure but I believe .. went in during the Version 2 rewrite, when the file system became hierarchical (it had a very different structure early on).

Rob Pike — https://plus.google.com/u/0/+RobPikeTheHuman/posts/R58WgWwN9jp

Emphasis added. Intriguing!


Low-friction publishing

Many years ago (in 2015!) I published this listicle: Apps for super fast Web publishing. I became interested in Pastebin-like web services, and thought it’d be cool to list the ones I’d discovered, with some commentary. There were 9 of them. Now, in late 2018, 5 of them have disappeared from the face of the web. One (pen.io) was rebooted, its content flushed.

Zillions of pages of content destroyed. For sure, most of it was worthless. It’s good to have another reminder of the ephemeral nature of online stuff we’re tempted to depend upon.

I’ve made a new list. I picked a more useful format, a Google Docs spreadsheet. You can edit it to add new ones.

https://docs.google.com/spreadsheets/d/1Md26-HXS3c3EWNCMwpX7hANvtPysAhg9ZqBjCE-pPX8/edit#gid=0

It also has a Graveyard page for dead services. Let’s not forget them.

My old list excluded services that required any sort of sign-up, so they all allowed anonymous posting. I’ve expanded my remit to include services that ask users to sign in via a common third-party authorisation. So, TwitLonger is on the new list. It was around in 2015, and it’s still here. It’s ad-supported.

All these services, except Pastebin and Tinypaste, are ad-free on content pages. I guess they’re cheap to maintain.

I guess not cheap enough!

(Commentary on software updates will resume)

Software updates – part II

Continued from part I

Frequently-updated software is the norm. The frequency varies, of course, depending on the particular software. Generally, developers depend on it.

Without updates, internet-connected (read: pretty much all) software would be increasingly vulnerable to hacking as security flaws are continually discovered. And without the ability to continually improve software through updates, a developer would find their products left behind, outpaced by relentless competition.

The user experience of software updates

In the bad old days, software updates were distributed as separate programs. You went to the publisher or developer’s website to download a newer version. How did you know to do this? Maybe a colleague told you. Maybe the developer emailed you. Maybe you didn’t know.

The next step was patches, updates delivered as programs that update your currently installed version to a later one. So you didn’t need to reinstall the software, and there was a smaller file to download.

Then, programs started checking for available updates. Thus began the process of update delivery mechanisms becoming more closely integrated into software. Self-updating software wasn’t far off; it’s now common in web browsers.
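The checking step itself is simple. Here’s a bare-bones sketch (the update endpoint is made up, and a real updater would also verify signatures and handle the download):

// Compare the installed version against whatever the vendor's endpoint reports.
const INSTALLED_VERSION = "2.3.1";

function isNewer(latest, installed) {
  const a = latest.split(".").map(Number);
  const b = installed.split(".").map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    if ((a[i] || 0) !== (b[i] || 0)) return (a[i] || 0) > (b[i] || 0);
  }
  return false;
}

async function checkForUpdates() {
  const res = await fetch("https://updates.example.com/myapp/latest.json");
  const { version, downloadUrl } = await res.json();
  if (isNewer(version, INSTALLED_VERSION)) {
    console.log(`Update available: ${version} (${downloadUrl})`);
  } else {
    console.log("You're up to date.");
  }
}

checkForUpdates();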

Should software simply automatically update itself, without any action on the part of users? Opinions vary. It’s perhaps more appropriate for some sorts of software than others. Users can understandably get annoyed at software updates–changed features may not necessarily be experienced as improvements. Sometimes users just want things to stay the way they are. So even though it’s possible nowadays to develop software that automatically keeps itself up to date as long as it’s connected to the internet, developers often give users the final decision.

(Image: Apple Software Update)

Having each individual item of software handle its own updates was an unwieldy, convoluted situation. It made sense to de-duplicate this, combining the shared update functionality in one place. So, companies like Apple made software update management programs, like the one above.

And in the free open sauce world, Linux program updates are generally managed all in one unified manner, by ‘package managers’. There are multiple package managers, but a user will generally use only one. Different distributions of the OS come with different PMs, administered by different organisations.
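On a Debian-based distribution, for example, refreshing the package lists and then applying updates for the whole system (OS and applications alike) comes down to a couple of commands:

$ sudo apt update
$ sudo apt upgrade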

In the commercial world, Apple and Adobe have their own separate update managers, and users of Apple and Adobe software on the same computer will have both. And there are other commercial software update managers to deal with…

There could have been an awful, unmanageable proliferation of these, but luckily we were saved! Steam sorted out the situation in gaming, by becoming the undisputed leader in PC game distribution. Steam sends us the games, updates and all.

Mobile phone and tablet operating systems are more locked-down than PCs (including Macs). On iOS, all software comes via the official App Store. On Android, Google’s store is the default but not the only option.

Apple and Microsoft have their PC operating systems too. They come with app stores, which helpfully unify and coordinate the update process for software within their remit, but they are still only one of many ways to get software onto those systems.

The Wild Web

The Web app update procedure is seamless and effortless for a user. You open the page, and an updated version of the app is just there.

They are updated whenever their owner decides to update them, and too bad for any user who preferred the old one. For larger applications, when there’s a big version change, a developer with lots of resources may give users the option to stick with the older version for a while.

Keeping web backend software updated is an unsolved problem. Popular content management systems like WordPress and Drupal get a steady stream of updates to address security issues, but they’re generally not automatically updated. Many instances remain out of date. Sometimes they get hacked, and are used to attack other sites.

An update might screw something up and cause the system, say, a blog with 100s of treasured posts, to lose data. So the onus is placed on the site owner (or their delegated administrator) to press the update button and accept the consequences. If the database gets corrupted, well, you should have updated it.

Reversibility

Wouldn’t it be great if we could rewind and undo a software update? Restore to a previous state. It should be easy! But sadly, the complexity of systems doesn’t allow this. Not yet, anyway.

Part III coming this week!

Software updates – part I

This essay is about how the internet has accelerated aspects of software development. The net is a means for much faster, more widespread propagation of software and software updates than previously available. Before the net, when software was mainly distributed via physical media, updates could only be delivered via similar means: magnetic discs, CD-ROMs, printed code in books and magazines, and sometimes TV and radio transmissions. These methods were slow and unreliable. So there was less reliance on them, and more emphasis on ensuring that first versions of software were complete and bug-free. Now, online update infrastructure works nearly instantly, globally, with relative ease for end consumers.

Arising from this accelerated situation, updates are much more important in software development.

Updates fix security issues, vital now that so much sensitive data, commercial and otherwise, is stored on our personal electronic devices, these devices being used to negotiate all manner of important administrative tasks and transactions online.

Updates allow improvements to functionality, allowing for incremental development of software, with development effort being tactically deployed in response to feedback, or ‘telemetry’: data gathered from usage of the software, sent back to the software vendor in near realtime.

Unfinished ‘alpha’ or ‘beta’ software can be released into the hands of users for testing, those early-adopter types being generally more technically-adept and enthusiastic about pushing the software to its limits, and contributing feedback (or even development assistance) toward developing the software to maturity.

The release-observation-improvement cycle is sped up. Does that mean software development is now easier, perhaps more scientific than before? lol no. Everyone, in principle, has this same expanded toolbox, and we’re dealing with programs orders of magnitude more complex (some say unnecessarily so–an issue for another essay) than times past. The stakes are higher, and the competition is fiercer.

Nowadays, a lack of recent updates is taken by some as prima facie evidence that some software, say, a programming library (i.e. reusable package of code to use in writing new programs), is out of date, not worth bothering with–much to the chagrin of programming subcommunities who have held fast to the older principles of the discipline, taking pride in software completeness and correctness and stability. For example, see this HN thread:

The plague of modern software engineering is “there are no updates, it must be unmaintained”. This attitude makes tons of solid, old, working software seem “outdated” and creates a cultural momentum towards new, shiny, broken shit. The result is ecosystems like js. Maybe we should believe software can be complete?

What are updates like for software users? Ideally they’d deliver an unmitigated good: constantly making software better. But everyone who uses modern technology knows this isn’t the case.

To be continued.

P.S.

  • This seems relevant: Why We Dread New Software Updates by Angela Lashbrook. I haven’t read it–I don’t have access. I’m not (yet?) a Medium.com subscriber…
  • The follow-up will discuss package managers, app stores, trust, software freedom, and realistic constructive proposals!

After Facebook

I don’t have a great reason for not having a Facebook account, for deleting mine, as I did, a couple of months ago. But I’m okay with that. I’m not particularly interested in convincing other people to follow my lead, at least not right now.

But I do think Facebook is rather bad.

Here’s some nice anti-Facebook propaganda.

  1. Against Facebook
  2. Out to get you
  3. Ten Arguments for Deleting Your Social Media Accounts Right Now

(I haven’t read Jaron Lanier’s book yet.)

If I were going to join Facebook again, to take advantage of some of its merits while carefully moderating my usage somehow to avoid the known harms, perhaps I should promote more anti-FB messages, like those above. And perhaps write some of my own…

Would that be particularly useful? Questionable. When better systems arise or are rediscovered, smart people will just leave FB and just use the better stuff, right? Eventually. Facebook might fix itself in the meantime.

Facebook’s attractive features, from my perspective as an ex-user: mainly its event system. Invitations, confirmations, calendar items. The tie to real-world identities.

I don’t need another chat system, so I’d avoid Messenger. I only just learned about the ‘see first’ feature, for making the newsfeed more useful. I’d make minimal use of the newsfeed, because that’s a bottomless pit, controlled by an engagement-maximising algorithm.

I don’t want my engagement maximised.

Better Facebook replacements

What’s the state of the art? Urbit? Holochain? Those are the alternative decentralised network toolkits that interested me for a while. I should look at them again.

What I’m doing here, under the Operating Space name, is building a space to publish my technology writing. It’s a WordPress blog now, nothing fancy. The next step for its evolution is an advanced category system that presents multiple hierarchical taxonomies. WordPress plugins will be the basis, to start with. No fancy tech needed, I believe.

I’ll load in my tech-related Pinboard bookmarks that I’ve collected over the years, so I can start with some real live content. Two taxonomies I’ll start with are John Lange’s Challenges of the future (loosely transhumanism-related stuff) and the six layers of The Stack by Benjamin H. Bratton. I’ll also need some sensible scheme that includes space stuff, because of course I’m going to post about space science on a blog named Operating Space.

What about an AI-curated newsfeed? I think a simpler solution will suffice. RSS with basic filtering.
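A crude sketch of what I mean by ‘RSS with basic filtering’ (Node 18+; the feed URL and keywords are placeholders, and the item extraction is naive string-matching rather than proper XML parsing):

// Fetch a feed and keep only the item titles that match my keywords.
const FEED_URL = "https://example.com/feed.xml";
const KEYWORDS = ["space", "decentralised", "hypertext"];

async function filteredFeed() {
  const xml = await (await fetch(FEED_URL)).text();
  const items = xml.match(/<item>[\s\S]*?<\/item>/g) || [];
  return items
    .map(item => (item.match(/<title>([\s\S]*?)<\/title>/) || [])[1] || "")
    .filter(title => KEYWORDS.some(k => title.toLowerCase().includes(k)));
}

filteredFeed().then(titles => titles.forEach(t => console.log(t)));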

I think the multi-category thing will be generally useful, for various blogs and sites. So a WordPress plugin is a fine choice, for purposes of maximising reach. I’ll deploy it on at least two other sites, which will operate as microblogs. One for ‘life’ stuff, like Facebook. One for game screenshots and art. And I need a miscellaneous one, for everything else, perhaps? Maybe that’ll be the master database from which every other source draws.

Love in the age of decentralised personal computing

How will the distributed network revolution impact online dating?

Services like OKCupid, Tinder and Match.com operate on centralised, client-server models. Daters sign up to a service and give it some personal information: photos, biography text, age, sex, location, and preferences. The service stores the info, and gives the user an interface for checking out profiles of other, algorithmically-chosen, suitable daters and starting to chat with them. These services typically run on advertising revenue, and/or charge their users fees.

Commercial online dating services offer security, not through encryption, but by taking responsibility for kicking out miscreants. They set and enforce rules for decent conduct, to tackle problems like fake profiles, inappropriate photos, scams, stalking, harassment, and catfishing. Bad offline behaviour, too, may be subject to their disciplinary measures: Facebook bans all sex offenders, and several dating apps (e.g. Tinder and Bumble) require the use of a Facebook profile for identity verification. OKCupid banned some dude for involvement in neo-Nazi/alt-right activities.

The centralised structure of these services is not merely a technical implementation detail, but the basis for enforcing the social orderliness that makes these platforms worth using in the first place. That is, some degree of safety, through each of the platforms’ benevolent dictatorial oversight.

What would a distributed, decentralised platform for online dating offer? Secure, end-to-end encrypted messaging is a plausible feature. Assuming we want an alternative that appeals to an audience wider than a bunch of crypto-nerds, this isn’t enough to compete. How would it be made safe?

It’s a challenge! I think a decentralised system can tackle it. Eventually, it’ll even beat centralised ones.

I first started thinking about the shape of a possible solution in terms of Urbit. I more recently learned about Holochain, which also seems to have the right ingredients for a similar approach. Either of those platforms can straightforwardly support a peer-to-peer, free-for-all of unfiltered, encrypted communication. This is clearly insufficient as a protocol to support even dozens of strangers socialising. From Urbit literature:

Bringing people together is an easy problem for any social network. The hard problem is keeping them apart. In other words, the hard problem is filtering. Society is filtering.

I propose this decentralised dating approach: daters and matchmakers are peers on a network.

In a centralised dating platform, there’s just one matchmaker. It owns and runs the platform. Its job is to virtually introduce potential matches to one another, and keep the platform safe (by setting and enforcing rules).

In a decentralised system, any peer on the network can set themselves up as a matchmaker. Daters on the network would pick and choose one or several matchmakers, entrusting them with the sorts of responsibilities that users of OKCupid, Tinder, etc. do with those platforms. Namely:

  • Save a copy of my dating profile — perhaps one conforming to a provided schema.
  • Show me other profiles.
  • Put me in contact with other suitable people connected to you (suitability as determined by some rules of engagement: profile info matching our expressed preferences. But the personal touch added by a personal matchmaker opens the door to possibilities beyond algorithmic box-checking…)
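A rough sketch of those three responsibilities, as a matchmaker peer might implement them (the profile fields and the matching rule are invented for illustration; a real network would define these as a shared protocol):

const profiles = new Map(); // daterId → profile

// 1. Save a copy of a dater's profile (conforming to some agreed schema).
function saveProfile(daterId, profile) {
  profiles.set(daterId, { ...profile, daterId });
}

// 2. Show other profiles.
function browse(forDaterId) {
  return [...profiles.values()].filter(p => p.daterId !== forDaterId);
}

// 3. Suggest suitable contacts, per simple rules of engagement.
function suggestMatches(forDaterId) {
  const me = profiles.get(forDaterId);
  return browse(forDaterId).filter(other =>
    me.seeking.includes(other.gender) &&
    other.seeking.includes(me.gender) &&
    Math.abs(me.age - other.age) <= me.maxAgeGap
  );
}

saveProfile("alice", { age: 31, gender: "f", seeking: ["m"], maxAgeGap: 5 });
saveProfile("bob",   { age: 33, gender: "m", seeking: ["f"], maxAgeGap: 8 });
console.log(suggestMatches("alice")); // → [ bob's profile ]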

Who are these matchmakers? Some could be unpaid volunteers, just there to help get dates for their friends: matching them with other friends, or friends-of-friends. Or fellow hobbyists, or fellow churchgoers, or fellow professionals in some field, or associates of any other sort.

(Matchmakers, crucially, would not control the user interfaces. Users ultimately control their own UIs. They’re modifiable programs that sit on the users’ own machines.)

This system is also open to the possibility of commercial matchmaking operations. They’d compete on the basis of their respective reputations for offering a high-quality service, and differentiated target audiences. They could also co-operate, perhaps by merging together their respective pools of clients. One would expect commercial information-sharing of this sort to be regulated by data-protection laws. But what about when it’s a non-commercial operator? It seems that non-legislative means will be needed: protocols, filtering, and reputation systems for encouraging trustworthy matchmaking standards.

But perhaps much of this will prove unnecessary when we’ve got robust distributed social networking, one key factor being an identity system. Holochain is building its own distributed public key infrastructure. When you join Urbit, you get a new alien pseudonym. Probably a planet like ‘~mighex-forfem‘, which is a ‘permanent’ personal identity (and eventually, an asset with a price tag).

These potentially can serve as the basis for a range of multi-purpose reputation systems. They would provide assurances that could relieve some of the burdens from matchmakers, and users choosing matchmakers. And, perhaps, sometimes make dedicated matchmakers redundant? I suspect decentralised networking will make many centralised dating sites obsolete, but perhaps I’m too conservative in my estimations. It could make ‘online dating’ as such obsolete: absorbed into general-purpose social networking.

The system of ‘daters’ and ‘matchmakers’ could also be applied to non-dating contexts, e.g. professional networking. This is what one would expect from general-purpose social networking. Bumble has already expanded its functionality to include networking for business and friendship. It may well try to grow and subsume the functionality of Facebook, LinkedIn, and Meetup.com. There’s no limit to the potential voraciousness of any of these platforms. For the as-yet most highly-evolved apex of this trend, see WeChat (video). WeChat is centralised.

Meanwhile in decentralised tech love

LegalFling. An app that records sexual consent on a blockchain. Ridiculous, sounds like a joke, but here we are.

Luna. A blockchain-based dating app. Seems more convoluted and centralised than the scheme I’ve outlined, but doesn’t seem completely stupid. Maybe it’ll work.

Marriage recorded on blockchain. This sounds like another joke, but it really makes sense. One can imagine a cryptocurrency that automatically reroutes funds sent to either of two wedded wallets into a couple’s shared wallet. And then, that wallet’s contents being split up according to a smart contract, when a divorce is marked on the chain. No lawyer required!

The decentralised social protocol Scuttlebutt explains itself with a love story (video).

 

Hacking on Holochain: first impressions

Here’s an exciting player in the ascendant decentralised computing space: Holochain. It’s a ‘post-blockchain’ platform for apps that communicate peer-to-peer, with secure user identities and cryptographically-validated shared data.

This week, key Holo people and creative collective darVOZ are running a sprint-athon in London. This is where I met them (people in both groups) for the first time, including Holo primary architect Art, and Connor, developer of Holo apps like Clutter (decentralised Twitter clone). And I got my own paws into developing in the system.

It’s alpha software, open source (of course), with dev tools that are already suitable for tinkering. They provide testing tools and seem to encourage a test-driven approach. Holo apps have configuration in JSON and code in JavaScript. Each running instance of an app executes the JS code in its own VM. Apps can also provide a web UI.

Holo development involves writing to and reading from the app’s DHT, which is an append-only data structure that’s automatically shared among connected peer apps that have the same ‘DNA’, a hash of the app’s code (Holo loves biological metaphors). Proper handling of this DHT seems to be the new core discipline that Holo demands from developers, and the key to unlocking its peculiar powers.
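I won’t try to reproduce the actual Holochain API from memory. Conceptually, though, the discipline looks something like this sketch: entries are append-only, addressed by hash, and checked against validation rules that every peer shares (the function names below are invented, not Holochain’s):

// Conceptual sketch only: not the Holochain API.
const crypto = require("crypto");

const sharedEntries = new Map(); // hash → entry (stands in for the DHT)

function hashOf(value) {
  return crypto.createHash("sha256").update(JSON.stringify(value)).digest("hex");
}

// Validation rules shared by every peer running the same app code ("DNA").
function isValid(entry) {
  return typeof entry.author === "string" && typeof entry.content === "string";
}

function commitEntry(entry) {
  if (!isValid(entry)) throw new Error("rejected by shared validation rules");
  const hash = hashOf(entry);
  if (!sharedEntries.has(hash)) sharedEntries.set(hash, entry); // append-only
  return hash; // peers retrieve the entry by this hash
}

function getEntry(hash) {
  return sharedEntries.get(hash);
}

const h = commitEntry({ author: "alice", content: "hello, hive mind" });
console.log(getEntry(h));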

I’ve only scratched the surface, and I intend to contribute more to the effort of porting Federated Wiki to Holochain, which is in progress. Then I’ll see how I can incorporate it into my little side project, the glorious Operating Space initiative (part of which is: this blog).

Using nginx to give your Urbit page a nice URL

Here’s the newest component of my little media empire, a chat room:

chat.operatingspace.net (dead link removed)

It runs on Urbit, which is a fascinating, complex project which I’ll sum up here as: a decentralised, programmable social network. This blog post is a tutorial for something I just learned how to do: set up nginx to give a nice URL to a page on my Urbit ship.

Prerequisites

Details for getting here are beyond the scope of this post, but here are some helpful links:

  • We have some cloud hosting (I use a 2048 MB server on Vultr)
  • We have a domain name (I use Hover) pointing to our server
  • We’ve got an Urbit ship running on our server (see: Install Urbit)
  • We’ve got an nginx server running there too
  • The Urbit ship is serving a web page

(Those first two links are affiliate links.)

So we’ve got two servers, nginx and Urbit, running. We can see our urbit’s web interface by going to http://$ourdomain.net:8080. We can get to the page of interest by appending /pagename or /page/path to that url.

E.g. operatingspace.net:8080/chat/

Goal: get rid of the ‘:8080’ there, and use ‘chat’ as a subdomain instead of a path.

Procedure

We need nginx to be listening on port 80, the standard web port (so there’s no need for any port number to be used in our url).

Edit the nginx config file, which is at /etc/nginx/nginx.conf.

Inside the http block, add a server block like this:

http {
  # (There'll be some stuff already here. Ignore it.)

  # (add this:)
  server {
    listen  80;
    server_name  nice-subdomain.your-domain.net;

    # Exact match for the subdomain root: serve the Urbit page directly.
    location = / {
      proxy_pass  http://localhost:8080/your-page;
    }

    # Everything else (scripts, styles, API calls) passes straight through to the Urbit ship.
    location / {
      proxy_pass  http://localhost:8080;
    }
  }
}

Substitute ‘nice-subdomain’ with your preferred name, and substitute ‘your-domain’ and ‘your-page’ with the appropriate names according to your setup.

Save the file and reload nginx with our new config:

$ service nginx reload

That should be all. Getting here was a lot of trial and error for me, so I hope this post saves someone all that trouble. I am informed that a future Urbit update will make this all quite unnecessary, but some of us like to be early-adopters :)

If following these steps hasn’t worked:

If I’ve missed something, please let me know so I can fix this post (while it’s not yet obsolete). Feel free to contact me — pop into the op-space chat room!

Bonus: secure the connection with SSL

For maximal coolness, ensure a secure connection between client and server. You can get a free, automatically-updating certificate, and enable https on your page, with Let’s Encrypt. The process is very streamlined with Certbot.
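If Certbot and its nginx plugin are installed on the server, issuing a certificate for the subdomain comes down to something like this (substitute your own domain):

$ sudo certbot --nginx -d nice-subdomain.your-domain.net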