On hypertext, its origins, as it is, and as it could be

Originally posted on Medium, Apr 2, 2017

EVERYTHING IS DEEPLY INTERTWINGLED. In an important sense there are no “subjects” at all; there is only all knowledge, since the cross-connections among the myriad topics of this world simply cannot be divided up neatly.

— Ted Nelson, Computer Lib/Dream Machines 1974

What is hypertext?

The term was coined by Ted Nelson, ‘to mean a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper.’ [1]

According to the inventor of the World Wide Web, Tim Berners-Lee, hypertext is: ‘Nonsequential writing; Ted Nelson’s term for a medium that includes links. Nowadays it includes other media apart from text and is sometimes called hypermedia.’ [2]

Ideas about non-linear ways of representing information, stemming from the desire to transcend constricting structures that tend to be imposed by writing, precede those guys. They precede the invention of the computer. But it was computer technology that enabled a blossoming of competing hypertext projects, various network protocols and commercial software products.

Here I’ll talk about two of these, the web and Xanadu.

Xanadu is Nelson’s own baby, founded in 1960. The Xanadu vision, a publicly-accessible, globally-distributed archive of knowledge, predates the Internet. It included an identity and payment system. Content in Xanadu could be accessed for free, or rightsholders could require payment for (permanent) access.

Xanadu documents would be linked together, and links were to be bidirectional. Another sort of document linking was ‘transclusion’, a method for quoting a text that always links back to the original source. As these features imply, Xanadu content would be, by necessity, permanent. Documents could be added to, but never deleted (wholly or partially).

While the Xanadu project suffered setbacks over the years, Berners-Lee’s Web beat it to market. Born in the ’90s and flourishing well into the 2010s, it has prevailed as the champion in the global hypertext system competition.

Nelson still persistently pursues his own alternative. He’s developed incisive criticisms of the web, and of other parts of our prevailing computer paradigm.

https://qz.com/778747/an-early-internet-pioneer-says-the-construction-of-the-web-is-crippling-our-thinking/

I recommend his YouTube series, Computers for Cynics.

Hypertext in the web

HTML, Hypertext Markup Language, is the basic substance of Web pages. Traditionally, HTML documents include all the readable text for a page, plus ‘markup’, which provides extra information to the browser concerning structural semantics — e.g. the start and end points for paragraphs, heading text, lists, and stress emphasis.

The primary hypertext-ish thing about HTML is one piece of markup, the <a> tag. It’s used to make links: the destination address goes in its href attribute. Web links are wholly contained within their pages. They can link to sections within the same page, or out to other pages. Nothing stops the author of a page from linking to pages anywhere else on the Web, whether they’re on the same site, or on another one on a different server, on a different continent, controlled by an unrelated party.

Web links are one-way, so for me to link to your site from mine, there’s no need for your site to coordinate with mine. It functions unilaterally. You might remove your page, and then my link becomes broken, ‘dead’ — the rest of my page remains fine, of course. I won’t know that the link is non-functional until someone tries to click it.
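
That fragility is easy to see if you’ve ever written a link checker: the only way to discover rot is to try each URL and see what comes back. Here’s a minimal sketch, assuming a Node 18+ runtime with the global fetch API; the URLs are placeholders, not real pages.

// Minimal dead-link checker: try each outbound link and report which still resolve.
// Assumes Node 18+ (global fetch). The URLs are placeholders for illustration.
const links: string[] = [
  "https://example.com/still-there",
  "https://example.org/maybe-gone",
];

async function isAlive(url: string): Promise<boolean> {
  try {
    const res = await fetch(url, { method: "HEAD" });
    return res.ok; // a 2xx status means the target still exists
  } catch {
    return false; // DNS failure, connection refused, etc.
  }
}

async function main(): Promise<void> {
  for (const url of links) {
    console.log(`${(await isAlive(url)) ? "OK  " : "DEAD"} ${url}`);
  }
}

main();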

My act of linking to you doesn’t affect your site. At least not directly. Not until Google came along, but that’s another story. (And here I’m not referring to the absurdity of news publishing companies wanting to charge Google a levy for linking to their free content, though that’s worth mentioning: compare this with Xanadu, where the act of publishing carries explicit permission to freely link content, and an integrated payment system is part of the basic infrastructure, something potentially invaluable to the presently-suffering news industry.)

I highly recommend Lo and Behold (video clip), the 2016 documentary about the Internet. Director and noted luddite Werner Herzog spoke with Ted Nelson for the film. We could sum up Ted’s critique of the Web and modern computing with the bold statement he gives:

“Humanity has no decent writing tools.”

Is this hyperbole? The best books in existence were probably written before computers existed. We’ve still got the low-tech writing tools. Computers have expanded our toolset. Ted’s written several influential books. I think this here essay is pretty good. Can we not claim that the writing tools we possess are serviceable, with room for improvement?

Hypertext: who needs it?

Well, look at this. Computers, being general-purpose machines, have done more than increase our writing capacity. They are everywhere, transforming all aspects of life in unpredictable ways. The game has changed. The landscape is shifting. There’s no way we can deal with this exponentially scaling complexity unless we level up our writing (and/or thinking) methods.

Computers have already revolutionised writing, in some ways. Instantaneous global publication is an advance, for sure. And that’s yet another major contribution to the seemingly intractable miasma of context surrounding and bleeding into any particular subject matter you may care to address! That’s why I want something like Xanadu, a tool that manages text with natural, free-form interrelationality: allowing anything to be connected to anything else, without imposing linear or hierarchical structures. Real hypertext.

Digital text system evolution has neglected the ‘deep structure’ that Nelson has been discussing since the ’60s, and instead nurtured development of presentational and decorative aspects: typefaces, page layout, animations, etc. He often seems to speak dismissively of such advances.

These things are why I have my job (as a frontend developer). And I love working on beautifully-designed stuff. Lacking a particular aptitude for visual design myself, I’m thankful that I get to work with great designers. I’m not as cynical as Ted about this stuff. What fuels my view, and my agreement with Nelson, is the opposite of cynicism: a deep optimism that improvements to hypertext as we know it are possible, and will prove to be immensely valuable.

But here’s what a cynical view might look like.

1. In the struggle to represent information about an increasingly complex world, we build a better text system. But couldn’t that just complete a cycle on an endless feedback loop of complexity-management tools generating more tangled complexity, necessitating the development of yet more sophisticated tooling, ultimately solving nothing?

2. Consider the possibility that digital augmentation of the process of writing tends to degrade rather than improve it — distracting us, cluttering our perspectives, enabling useless information-hoarding.

(For a developer like me, there’s a temptation to use the lack of a decent hypertext system, a great CMS or a well-designed blog of my own, as an excuse to procrastinate rather than just write down what needs to be written. And on the other side, to use the lack of a cache of good writing to defer the development of that website to publish it…)

The original hypertext system is the human brain. Specifically a gifted, educated one. Interactive navigation of hypertext the old-fashioned way is: thinking or conversation. Can these natural capacities be significantly improved by mechanical automation?

We’ll keep trying, and find out.


Hundreds of companies are still working on expanding the capabilities of the web.

Lots of it is presentational stuff, like CSS layout functionality that’ll allow for more magazine-like designs. There are lots of features that enhance the Web as a software platform. People are working on virtual reality features.

And some stuff that’s more in the Xanadu direction: annotation, payments, identity authentication. Finally!

Ted’s still working on Xanadu. The latest implementation happens to be a web app.


1. Theodor H. Nelson, “A File Structure for the Complex, the Changing and the Indeterminate,” Association for Computing Machinery: Proceedings of the 20th National Conference, 84–100. Ed. Lewis Winner, 1965.

2. Tim Berners-Lee, Weaving the Web Glossary, 1999 https://www.w3.org/People/Berners-Lee/Weaving/glossary.html

N.B. I re-titled this essay today (13 November 2017) and demoted the original (hyperbolic, probably misunderstanding-provoking) title — Hypertext: who needs it — down to a subtitle.

Non-hierarchical file system

Long ago, as the design of the Unix file system was being worked out, the entries . and .. appeared, to make navigation easier. I’m not sure but I believe .. went in during the Version 2 rewrite, when the file system became hierarchical (it had a very different structure early on).

Rob Pike — https://plus.google.com/u/0/+RobPikeTheHuman/posts/R58WgWwN9jp

Emphasis added. Intriguing!


Low-friction publishing

Many years ago (in 2015!) I published this listicle: Apps for super fast Web publishing. I became interested in Pastebin-like web services, and thought it’d be cool to list the ones I’d discovered, with some commentary. There were nine of them. Now, in late 2018, five of them have disappeared from the face of the web. One (pen.io) was rebooted, its content flushed.

Zillions of pages of content destroyed. For sure, most of it was worthless. It’s good to have another reminder of the ephemeral nature of online stuff we’re tempted to depend upon.

I’ve made a new list. I picked a more useful format, a Google Docs spreadsheet. You can edit it to add new ones.

https://docs.google.com/spreadsheets/d/1Md26-HXS3c3EWNCMwpX7hANvtPysAhg9ZqBjCE-pPX8/edit#gid=0

It also has a Graveyard page for dead services. Let’s not forget them.

My old list excluded services that require any sort of sign-up, so they all allowed anonymous posting. I’ve now expanded my remit to include services that ask users to sign in via a common third-party authorisation. So, TwitLonger is on the new list. It was around in 2015, and it’s still here. It’s ad-supported.

All these services, except Pastebin and Tinypaste, are ad-free on content pages. I guess they’re cheap to maintain.

I guess not cheap enough!

(Commentary on software updates will resume)

Software updates – part II

Continued from part I

Frequently-updated software is the norm. The frequency varies, of course, depending on the particular software. Generally, developers depend on being able to ship updates.

Without updates, internet-connected (read: pretty much all) software would be increasingly vulnerable to hacking as security flaws are continually discovered. And without the ability to continually improve software through updates, a developer would find their products left behind, outpaced by relentless competition.

The user experience of software updates

In the bad old days, a software update was simply a newer version of the program, distributed separately. You went to the publisher or developer’s website to download it. How did you know to do this? Maybe a colleague told you. Maybe the developer emailed you. Maybe you didn’t know.

The next step was patches: updates delivered as programs that upgrade your currently installed version to a later one. So you didn’t need to reinstall the software, and there was a smaller file to download.

Then, programs started checking for available updates. Thus began the process of update delivery mechanisms becoming more closely integrated into software. Self-updating software, now common in web browsers, wasn’t far off.
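
As a sketch of what that ‘check for updates’ step amounts to: the program asks a server for the latest version number and compares it with its own. The endpoint and manifest format below are hypothetical, not any particular vendor’s API.

// Hypothetical update check: fetch a version manifest and compare it to our own version.
// The endpoint and manifest shape are invented for illustration.
const CURRENT_VERSION = "2.3.1";
const MANIFEST_URL = "https://updates.example.com/myapp/latest.json";

interface Manifest {
  version: string;     // e.g. "2.4.0"
  downloadUrl: string; // where the new build can be fetched
}

// Compare dotted version strings segment by segment.
function isNewer(candidate: string, current: string): boolean {
  const a = candidate.split(".").map(Number);
  const b = current.split(".").map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const diff = (a[i] ?? 0) - (b[i] ?? 0);
    if (diff !== 0) return diff > 0;
  }
  return false;
}

async function checkForUpdate(): Promise<void> {
  const manifest: Manifest = await (await fetch(MANIFEST_URL)).json();
  if (isNewer(manifest.version, CURRENT_VERSION)) {
    console.log(`Update available: ${manifest.version} (${manifest.downloadUrl})`);
  } else {
    console.log("Already up to date.");
  }
}

checkForUpdate();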

Should software simply update itself automatically, without any action on the part of users? Opinions vary. It’s perhaps more appropriate for some sorts of software than others. Users can understandably get annoyed at software updates: changed features aren’t necessarily experienced as improvements. Sometimes users just want things to stay the way they are. So even though it’s possible nowadays to develop software that automatically keeps itself up to date as long as it’s connected to the internet, developers often give users the final decision.

Apple software update

Having each individual item of software handle its own updates was an unwieldy, convoluted situation. It made sense to de-duplicate that shared functionality. So, companies like Apple made software update management programs, like the one above.

And in the free and open-source world, Linux program updates are generally managed in one unified manner, by ‘package managers’. There are multiple package managers, but a user will generally use only one. Different distributions of the OS come with different package managers, administered by different organisations.

In the commercial world, Apple and Adobe have their own separate update managers, and users of Apple and Adobe software on the same computer will have both. And there are other commercial software update managers to deal with…

There could have been an awful, unmanageable proliferation of these, but luckily we were saved! Steam sorted out the situation in gaming, by becoming the undisputed leader in PC game distribution. Steam sends us the games, updates and all.

Mobile phone and tablet operating systems are more locked-down than PCs (including Macs). On iOS, all software comes via the official App Store. On Android, Google’s store is the default but not the only option.

Apple and Microsoft have their PC operating systems too. They come with app stores, which helpfully unify and coordinate the update process for software within their remit, but they are still only one of many ways to get software onto those systems.

The Wild Web

The Web app update procedure is seamless and effortless for a user. You open the page, and an updated version of the app is just there.

They are updated whenever their owner decides to update them, and too bad for any user who preferred the old one. For larger applications, when there’s a big version change, a developer with lots of resources may give users the option to stick with the older version for a while.

Keeping web backend software updated is an unsolved problem. Popular content management systems like WordPress and Drupal get a steady stream of updates to address security issues, but they’re generally not automatically updated. Many instances remain out of date. Sometimes they get hacked, and are used to attack other sites.

An update might screw something up and cause the system, say, a blog with hundreds of treasured posts, to lose data. So the onus is placed on the site owner (or their delegated administrator) to press the update button and accept the consequences. And if you don’t press it and the database gets corrupted by an attacker, well, you should have updated it.

Reversibility

Wouldn’t it be great if we could rewind and undo a software update? Restore to a previous state. It should be easy! But sadly, the complexity of systems doesn’t allow this. Not yet, anyway.

Part III coming this week!

Software updates – part I

This essay is about how the internet has accelerated aspects of software development. The net is a means for much faster, more widespread propagation of software and software updates than was previously available. Before the net, when software was mainly distributed via physical media, updates could only be delivered via similar means: magnetic discs, CD-ROMs, printed code in books and magazines, and sometimes TV and radio transmissions. These methods were slow and unreliable. So there was less reliance on them, and more emphasis on ensuring that first versions of software were complete and bug-free. Now, online update infrastructure works nearly instantly, globally, with relative ease for end consumers.

Arising from this accelerated situation, updates are much more important in software development.

Updates fix security issues, vital now that so much sensitive data, commercial and otherwise, is stored on our personal electronic devices, devices we use to negotiate all manner of important administrative tasks and transactions online.

Updates allow improvements to functionality, allowing for incremental development of software, with development effort being tactically deployed in response to feedback, or ‘telemetry’: data gathered from usage of the software, sent back to the software vendor in near realtime.

Unfinished ‘alpha’ or ‘beta’ software can be released into the hands of users for testing, those early-adopter types being generally more technically-adept and enthusiastic about pushing the software to its limits, and contributing feedback (or even development assistance) toward developing the software to maturity.

The release-observation-improvement cycle is sped up. Does that mean software development is now easier, perhaps more scientific than before? lol no. Everyone, in principle, has this same expanded toolbox, and we’re dealing with programs orders of magnitude more complex (some say unnecessarily so; an issue for another essay) than in times past. The stakes are higher, and the competition is fiercer.

Nowadays, a lack of recent updates is taken by some as prima facie evidence that a piece of software, say, a programming library (i.e. a reusable package of code to use in writing new programs), is out of date and not worth bothering with, much to the chagrin of programming subcommunities who have held fast to the older principles of the discipline, taking pride in software completeness, correctness and stability. For example, see this HN thread:

The plague of modern software engineering is “there are no updates, it must be unmaintained”. This attitude makes tons of solid, old, working software seem “outdated” and creates a cultural momentum towards new, shiny, broken shit. The result is ecosystems like js. Maybe we should believe software can be complete?

What are updates like for software users? Ideally they’d deliver an unmitigated good: constantly making software better. But everyone who uses modern technology knows this isn’t the case.

To be continued.

P.S.

  • This seems relevant: Why We Dread New Software Updates by Angela Lashbrook. I haven’t read it; I don’t have access. I’m not (yet?) a Medium.com subscriber…
  • The follow-up will discuss package managers, app stores, trust, software freedom, and realistic constructive proposals!

After Facebook

I don’t have a great reason for not having a Facebook account, that is, for deleting mine, as I did a couple of months ago. But I’m okay with that. I’m not particularly interested in convincing other people to follow my lead, at least not right now.

But I do think Facebook is rather bad.

Here’s some nice anti-Facebook propaganda.

  1. Against Facebook
  2. Out to get you
  3. Ten Arguments for Deleting Your Social Media Accounts Right Now

(I haven’t read Jaron Lanier’s book yet.)

If I were going to join Facebook again, to take advantage of some of its merits while carefully moderating my usage somehow to avoid the known harms, perhaps I should promote more anti-FB messages, like those above. And perhaps write some of my own…

Would that be particularly useful? Questionable. When better systems arise or are rediscovered, smart people will simply leave FB and use the better stuff, right? Eventually. Facebook might fix itself in the meantime.

Speaking as an ex-user, Facebook’s attractive features are mainly its event system: invitations, confirmations, calendar items. And the tie to real-world identities.

I don’t need another chat system, so I’d avoid Messenger. I only just learned about the ‘see first’ feature, for making the newsfeed more useful. I’d make minimal use of the newsfeed, because that’s a bottomless pit, controlled by an engagement-maximising algorithm.

I don’t want my engagement maximised.

Better Facebook replacements

What’s the state of the art? Urbit? Holochain? Those are the alternative decentralised network toolkits that interested me for a while. I should look at them again.

What I’m doing here, under the Operating Space name, is building a space to publish my technology writing. It’s a WordPress blog now, nothing fancy. The next step for its evolution is an advanced category system that presents multiple hierarchical taxonomies. WordPress plugins will be the basis, to start with. No fancy tech needed, I believe.

I’ll load in my tech-related Pinboard bookmarks that I’ve collected over the years, so I can start with some real live content. Two taxonomies I’ll start with are John Lange’s Challenges of the future (loosely transhumanism-related stuff) and the six layers of The Stack by Benjamin H. Bratton. I’ll also need some sensible scheme that includes space stuff, because of course I’m going to post about space science on a blog named Operating Space.

What about an AI-curated newsfeed? I think a simpler solution will suffice. RSS with basic filtering.
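
By ‘basic filtering’ I mean something as simple as keyword matching over feed items. A rough sketch, assuming the feed has already been fetched and parsed into plain objects (the item shape and keywords are made up):

// Keyword filter over already-parsed feed items. The item shape is illustrative.
interface FeedItem {
  title: string;
  link: string;
  summary: string;
}

const keywords = ["hypertext", "urbit", "holochain"]; // topics I actually care about

function matches(item: FeedItem): boolean {
  const text = `${item.title} ${item.summary}`.toLowerCase();
  return keywords.some((kw) => text.includes(kw));
}

const items: FeedItem[] = [
  { title: "Xanadu revisited", link: "https://example.com/a", summary: "Ted Nelson and hypertext" },
  { title: "Celebrity gossip", link: "https://example.com/b", summary: "nothing relevant" },
];

console.log(items.filter(matches).map((i) => i.title)); // -> [ "Xanadu revisited" ]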

I think the multi-category thing will be generally useful, for various blogs and sites. So a WordPress plugin is a fine choice, for purposes of maximising reach. I’ll deploy it on at least two other sites, which will operate as microblogs. One for ‘life’ stuff, like Facebook. One for game screenshots and art. And I need a miscellaneous one, for everything else, perhaps? Maybe that’ll be the master database from which every other source draws.

Love in the age of decentralised personal computing

How will the distributed network revolution impact online dating?

Services like OKCupid, Tinder and Match.com operate on centralised, client-server models. Daters sign up to a service and give it some personal information: photos, biography text, age, sex, location, and preferences. The service stores the info, and gives the user an interface for checking out the profiles of other, algorithmically-chosen, suitable daters and starting to chat with them. These services typically run on advertising revenue, and/or charge their users fees.

Commercial online dating services offer security, not through encryption, but by taking responsibility for kicking out miscreants. They set and enforce rules for decent conduct, to tackle problems like fake profiles, inappropriate photos, scams, stalking, harassment, and catfishing. Bad offline behaviour, too, may be subject to their disciplinary measures: Facebook bans all sex offenders, and several dating apps (e.g. Tinder and Bumble) require the use of a Facebook profile for identity verification. OKCupid banned some dude for involvement in neo-Nazi/alt-right activities.

The centralised structure of these services is not merely a technical implementation detail, but the basis for enforcing the social orderliness that makes these platforms worth using in the first place. That is, some degree of safety, through each of the platforms’ benevolent dictatorial oversight.

What would a distributed, decentralised platform for online dating offer? Secure, end-to-end encrypted messaging is a plausible feature. Assuming we want an alternative that appeals to an audience wider than a bunch of crypto-nerds, this isn’t enough to compete. How would it be made safe?

It’s a challenge! I think a decentralised system can tackle it. Eventually, it’ll even beat centralised ones.

I first started thinking about the shape of a possible solution in terms of Urbit. I more recently learned about Holochain, which also seems to have the right ingredients for a similar approach. Either of those platforms can straightforwardly support a peer-to-peer free-for-all of unfiltered, encrypted communication. This is clearly insufficient as a protocol to support even dozens of strangers socialising. From the Urbit literature:

Bringing people together is an easy problem for any social network. The hard problem is keeping them apart. In other words, the hard problem is filtering. Society is filtering.

I propose this decentralised dating approach: daters and matchmakers are peers on a network.

In a centralised dating platform, there’s just one matchmaker. It owns and runs the platform. Its job is to virtually introduce potential matches to one another, and keep the platform safe (by setting and enforcing rules).

In a decentralised system, any peer on the network can set themselves up as a matchmaker. Daters on the network would pick and choose one or several matchmakers, entrusting them with the sorts of responsibilities that users of OKCupid, Tinder, etc. entrust to those platforms. Namely (a rough code sketch follows this list):

  • Save a copy of my dating profile — perhaps one conforming to a provided schema.
  • Show me other profiles.
  • Put me in contact with other suitable people connected to you (suitability as determined by some rules of engagement: profile info matching our expressed preferences. But the personal touch added by a personal matchmaker opens the door to possibilities beyond algorithmic box-checking…)
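
To make that division of labour concrete, here is a rough sketch of the kind of profile schema and matchmaker interface I have in mind. This is not Urbit or Holochain API; every name and field here is hypothetical.

// Hypothetical types for the dater/matchmaker arrangement described above.
// Not Urbit or Holochain code; names and fields are illustrative only.
interface DaterProfile {
  id: string;               // a network identity, e.g. an Urbit-style pseudonym
  displayName: string;
  bio: string;
  photos: string[];         // content addresses or URLs
  preferences: {
    seeking: string[];      // free-form tags rather than rigid categories
    maxDistanceKm?: number;
  };
}

interface Matchmaker {
  // "Save a copy of my dating profile"
  register(profile: DaterProfile): Promise<void>;

  // "Show me other profiles"
  browse(forDaterId: string): Promise<DaterProfile[]>;

  // "Put me in contact with other suitable people connected to you"
  introduce(daterA: string, daterB: string): Promise<void>;
}

// A dater might entrust several matchmakers at once: a friend, a club, a commercial service.
const myMatchmakers: Matchmaker[] = [];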

Who are these matchmakers? Some could be unpaid volunteers, just there to help get dates for their friends: matching them with friends, or friends-of-friends. Or fellow hobbyists, or fellow churchgoers, or fellow professionals in some field, or associates of any other sort.

(Matchmakers, crucially, would not control the user interfaces. Users ultimately control their own UIs. They’re modifiable programs that sit on the users’ own machines.)

This system is also open to the possibility of commercial matchmaking operations. They’d compete on the basis of their respective reputations for offering a high-quality service, and differentiated target audiences. They could also co-operate, perhaps by merging together their respective pools of clients. One would expect commercial information-sharing of this sort to be regulated by data-protection laws. But what about when it’s a non-commercial operator? It seems that non-legislative means will be needed: protocols, filtering, and reputation systems for encouraging trustworthy matchmaking standards.

But perhaps much of this will prove unnecessary when we’ve got robust distributed social networking, one key factor being an identity system. Holochain is building its own distributed public key infrastructure. When you join Urbit, you get a new alien pseudonym. Probably a planet like ‘~mighex-forfem’, which is a ‘permanent’ personal identity (and eventually, an asset with a price tag).

These could potentially serve as the basis for a range of multi-purpose reputation systems. They would provide assurances that could relieve some of the burdens on matchmakers, and on users choosing matchmakers. And, perhaps, sometimes make dedicated matchmakers redundant? I suspect decentralised networking will make many centralised dating sites obsolete, but perhaps I’m being too conservative in my estimations. It could make ‘online dating’ as such obsolete: absorbed into general-purpose social networking.

The system of ‘daters’ and ‘matchmakers’ could also be applied to non-dating contexts, e.g. professional networking. This is what one would expect from general-purpose social networking. Bumble has already expanded its functionality to include networking for business and friendship. It may well try to grow and subsume the functionality of Facebook, LinkedIn, and Meetup.com. There’s no limit to the potential voraciousness of any of these platforms. For the most highly-evolved apex of this trend so far, see WeChat (video). WeChat is centralised.

Meanwhile in decentralised tech love

LegalFling. An app that records sexual consent on a blockchain. Ridiculous, sounds like a joke, but here we are.

Luna. A blockchain-based dating app. Seems more convoluted and centralised than the scheme I’ve outlined, but doesn’t seem completely stupid. Maybe it’ll work.

Marriage recorded on blockchain. This sounds like another joke, but it really makes sense. One can imagine a cryptocurrency that automatically reroutes funds sent to either of two wedded wallets into a couple’s shared wallet. And then, that wallet’s contents being split up according to a smart contract, when a divorce is marked on the chain. No lawyer required!

The decentralised social protocol Scuttlebutt explains itself with a love story (video).

 

Hacking on Holochain: first impressions

Here’s an exciting player in the ascendant decentralised computing space: Holochain. It’s a ‘post-blockchain’ platform for apps that communicate peer-to-peer, with secure user identities and cryptographically-validated shared data.

This week, key Holo people and creative collective darVOZ are running a sprint-athon in London. This is where I met them (people in both groups) for the first time, including Holo primary architect Art, and Connor, developer of Holo apps like Clutter (decentralised Twitter clone). And I got my own paws into developing in the system.

It’s alpha software, open source (of course), with dev tools that are already suitable for tinkering. They provide testing tools and seem to encourage a test-driven approach. Holo apps have configuration in JSON and code in JavaScript. A running instance of an app executes the JS code in its own VM. Apps can also provide a web UI.

Holo development involves writing to and reading from the app’s DHT, an append-only data structure that’s automatically shared among connected peer apps that have the same ‘DNA’, which is a hash of the app’s code (Holo loves biological metaphors). Proper handling of this DHT seems to be the new core discipline that Holo demands from developers, and the key to unlocking its peculiar powers.
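
To illustrate the ‘DNA is a hash of the app’s code’ idea in miniature (this is just the general content-addressing concept, using Node’s built-in crypto module; it is not Holochain’s actual API):

// Conceptual illustration of content addressing, not Holochain's API.
// The "DNA" of an app is just a hash of its code: peers holding the same hash
// are running the same rules, so they can safely share one DHT.
import { createHash } from "crypto";

function hashOf(content: string): string {
  return createHash("sha256").update(content).digest("hex");
}

const appCode = "function validateEntry(entry) { return entry.length < 280; }";
const dna = hashOf(appCode); // same code => same DNA => same shared data space

// An append-only store: entries are only ever added, each addressed by its hash.
const dht = new Map<string, string>();

function commit(entry: string): string {
  const address = hashOf(entry);
  dht.set(address, entry); // existing entries are never mutated or removed
  return address;
}

const addr = commit("hello, peers");
console.log({ dna, addr, entry: dht.get(addr) });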

I’ve only scratched the surface, and I intend to contribute more to the effort of porting Federated Wiki to Holochain, which is in progress. Then I’ll see how I can incorporate it into my little side project, the glorious Operating Space initiative (part of which is: this blog).

Using nginx to give your Urbit page a nice URL

Here’s the newest component of my little media empire, a chat room:

chat.operatingspace.net

It runs on Urbit, which is a fascinating, complex project which I’ll sum up here as: a decentralised, programmable social network. This blog post is a tutorial for something I just learned how to do: set up nginx to give a nice URL to a page on my Urbit ship.

Prerequisites

Details for getting here are beyond the scope of this post, but here are some helpful links:

  • We have some cloud hosting (I use a 2048 MB server on Vultr)
  • We have a domain name (I use Hover) pointing to our server
  • We’ve got an Urbit ship running on our server (see: Install Urbit)
  • We’ve got an nginx server running there too
  • The Urbit ship is serving a web page

(Those first two links are affiliate links.)

So we’ve got two servers, nginx and Urbit, running. We can see our urbit’s web interface by going to http://$ourdomain.net:8080. We can get to the page of interest by appending /pagename or /page/path to that url.

E.g. operatingspace.net:8080/chat/

Goal: get rid of the ‘:8080’ there, and use ‘chat’ as a subdomain instead of a path.

Procedure

We need nginx to be listening on port 80, the standard web port (so there’s no need for any port number to be used in our url).

Edit the nginx config file, which is at /etc/nginx/nginx.conf.

Inside the http block, add a server block like this:

http {
  # (There'll be some stuff already here. Ignore it.)

  # (add this:)
  server {
    listen  80;
    server_name  nice-subdomain.your-domain.net;

    location = / {
      proxy_pass  http://localhost:8080/your-page;
    }

    location / {
      proxy_pass  http://localhost:8080;
    }
  }
}

Substitute ‘nice-subdomain’ with your preferred name, and substitute ‘your-domain’ and ‘your-page’ with the appropriate names according to your setup.

Save the file and reload nginx with our new config:

$ service nginx reload

That should be all. Getting here was a lot of trial and error for me, so I hope this post saves someone all that trouble. I am informed that a future Urbit update will make this all quite unnecessary, but some of us like to be early-adopters :)

If following these steps hasn’t worked:

If I’ve missed something, please let me know so I can fix this post (while it’s not yet obsolete). Feel free to contact me — pop into the op-space chat room!

Bonus: secure the connection with SSL

For maximal coolness, ensure a secure connection between client and server. You can get a free, automatically-updating certificate, and enable https on your page, with Let’s Encrypt. The process is very streamlined with Certbot.

Cryptocurrencies

Notes on the Bitcoin scam

1. Why call Bitcoin a scam? One could appeal to a commonsense notion like the impossibility of making money out of nothing, which is what this system looks like it’s doing. This is, of course, a very simplified picture. To argue in this way is to invite accusations of ignorance of important details.

2. Those with an investment to protect are motivated to counter claims that would undermine it. Criticism of any sort of technology can undermine its market value. Bitcoin isn’t unique in this aspect, but it exemplifies it strongly. It’s a peculiar sort of technology, because it’s useless unless it’s considered valuable.

3. Bitcoin is a simulation of money, implemented as a decentralised internet protocol. It simulates a particular kind of money, one that’s ultimately limited in supply, like gold. Bitcoin is supposed to be, in some ways, a superior sort of money, accessible to anyone with a computer and internet access, outside of the control of governments and banks.

4. Unlike gold, Bitcoin has the feature of being transferable over the Net. Those of us who are privileged to have access to modern financial institutions are accustomed to this kind of convenient feature, using ordinary money.

5. Bitcoin’s implementation is based on cryptography. Owning Bitcoins means having access (by means of a cryptographic key) to a ‘wallet’ (an identity, named or anonymous or pseudonymous) on the system, and that wallet having some amount of coins assigned to it. Each wallet’s assignment of coins is determined by the transaction history (coins sent and received) recorded in the distributed ledger called the Blockchain.

6. The Blockchain is duplicated across many entities in the Bitcoin system. No one entity controls it. It’s public information, and that’s the system’s whole transaction history (so much for privacy). People can earn Bitcoin rewards through a process called ‘mining’, which is a competition for securely adding new transaction information to the Blockchain. The mining process involves computationally-difficult calculations. This means it consumes lots of energy. This cannot be made more efficient. That’s how it was designed. It’s designed to require greater and greater sacrifice.
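
For a feel of what that ‘computationally-difficult calculation’ is, here’s a toy proof-of-work in the same spirit: search for a nonce that makes the hash of the data start with a run of zeroes. Real Bitcoin mining double-SHA-256es a block header against a vastly harder target, but the principle is the same.

// Toy proof-of-work: find a nonce so that sha256(data + nonce) starts with
// `difficulty` zero hex digits. Each extra digit makes the search ~16x harder.
import { createHash } from "crypto";

function sha256(input: string): string {
  return createHash("sha256").update(input).digest("hex");
}

function mine(data: string, difficulty: number): { nonce: number; hash: string } {
  const target = "0".repeat(difficulty);
  let nonce = 0;
  while (true) {
    const hash = sha256(data + nonce);
    if (hash.startsWith(target)) return { nonce, hash };
    nonce++;
  }
}

// Low difficulty so this finishes quickly; Bitcoin's is astronomically higher.
console.log(mine("block of transactions", 4));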

7. The energy/environmental implications of mining are one major target for fundamental criticism of Bitcoin. See: Charlie Stross’s arguments. He also, alongside Jamie Dimon, CEO of JP Morgan Chase, likens it to a ‘distributed Ponzi scheme’. He predicts the bubble will burst.

8. Stross’s criticism of Bitcoin has been longstanding. Here’s a response, defending Bitcoin and attacking Stross as ignorant and unimaginative.

9. Falling confidence in government institutions seems to be correlated with rising Bitcoin value. See:

To be continued …