Ostrich emoji

Many folks want one.

to be continued

Converting Kindle books to EPUB format

…is not easy. But it’s possible. At least, it was at the time of writing, for the handful of ebooks I’ve tried so far. There are many such guides online, often incomplete. Or, more accurately, out of date, because Amazon subsequently added more stumbling blocks. I shall document what worked for me, a combination of steps from various guides. Some steps may be redundant!

Continue reading “Converting Kindle books to EPUB format”

Simple static wiki [WIP]

I’m setting up a lightweight wiki for publishing notes on miscellaneous topics to a website as a set of arbitrarily-linked pages.

Here’s my first pass solution, using the Eleventy/11ty static site generator.

https://github.com/sackeyjason/mcwiki

11ty converts the markdown into HTML files (in folders too). It also does templating, which I’ve yet to get my head around.

The main wiki-ish thing I’m looking for is quick linking. So I’ve added a shortcode for this.

Here’s my (only) bit of config for that:

module.exports = function(eleventyConfig) {
  // Liquid Shortcode
  eleventyConfig.addShortcode("link", function(title) {
    const slug = title.toLowerCase()
      .split(" ")
      .join("-");
    return `<a href="/${slug}">${title}</a>`;
  });
};

So, {% link 'personal wiki' %} gets turned into an HTML link: <a href="/personal-wiki">personal wiki</a>

The syntax isn’t ideal. It’s kind of clunky. I’d rather have [[this style of link]], the double-square-brackets markup, used in FedWiki, Zim, and Roam, invented by UseMod, I think.

But that syntax isn’t part of any of the template languages that 11ty supports. Perhaps this can be achieved via a plugin.
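In the meantime, a low-tech stopgap is to rewrite the double-bracket links with a regex before the markdown is rendered, emitting the same anchors the shortcode produces. A sketch, assuming the same slug rule as above (the function and its wiring are my own, not an existing 11ty or markdown-it API):

```javascript
// Sketch: rewrite [[double-bracket]] links into anchors, using the same
// slug rule as the shortcode above (lowercase, spaces become hyphens).
// Standalone helper, not an existing 11ty/markdown-it API.
function wikiLinks(text) {
  return text.replace(/\[\[([^\]]+)\]\]/g, (_match, title) => {
    const slug = title.toLowerCase().split(" ").join("-");
    return `<a href="/${slug}">${title}</a>`;
  });
}
```

This could be hooked in as a preprocessing step over the source files, or wrapped up as a proper markdown-it plugin later.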

Another feature I’m looking to include in this thing:

https://mobile.twitter.com/swyx/status/1186485327516577793

Fixing all broken links with stub pages, automatically.


I’m not committed to sticking with the 11ty way of doing this. Here are my requirements:

  • content source is a flat directory of text (markdown) files
  • output is a website usable without JavaScript in the browser… so probably not React
  • easy linking between pages

I’m not wedded to static HTML generation (but the portability of this approach makes it valuable). I’m not allergic to databases nor dynamic serverside stuff.

Hosting will be on GitHub Pages for now.


Beginning development with functions and a DB on Begin

I wrote about my impressions with this software development platform:

View at Medium.com

I expressed some confusion about certain obstacles and described some workarounds. Ryan from the project replied on Twitter and helped me figure out the issues, and I’m quite satisfied with the proper solutions.

I amended my post with the answers. But I think it’d be useful to have a post with only the specific ‘right’ answers, without the less-than-optimal workaround hacks there to confuse people. So, TODO.

One other mystery comes to mind: I’m not sure whether the platform would support ‘unlimited’, ‘open’ URL paths, e.g.

wiki.example.com/welcome-visitors/hypertext/web/blog/social-media/facebook/twitter/miasma

Where each of the terms between the ‘/’s is a page, to be displayed lined up, Fedwiki-style.
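For illustration, resolving such a path is mostly a matter of splitting on ‘/’ to get the list of pages to render side by side (the function name is mine, just a sketch of the idea):

```javascript
// Sketch: turn a catch-all URL path into the list of page slugs to
// display lined up, Fedwiki-style.
function pagesFromPath(urlPath) {
  return urlPath.split("/").filter((segment) => segment.length > 0);
}
```

The open question is whether the platform’s routing allows a catch-all deep enough to receive such paths in the first place.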

An investigation, TODO. Along with Authentication.

Forum

Presenting the newest part of the overflow.space/Operating Space publishing infrastructure: a forum!

Keep an eye on it for new content from me. It’ll integrate with the blog in yet-to-be-determined ways. You could also join… it’s in beta.

Software crisis

The major cause of the software crisis is that the machines have become several orders of magnitude more powerful! To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.

Edsger Dijkstra, The Humble Programmer (EWD340), Communications of the ACM

https://en.wikipedia.org/wiki/Software_crisis

Computers and socialism

Project Cybersyn was a Chilean project from 1971–1973 during the presidency of Salvador Allende aimed at constructing a distributed decision support system to aid in the management of the national economy. The project consisted of four modules: an economic simulator, custom software to check factory performance, an operations room, and a national network of telex machines that were linked to one mainframe computer.

https://en.wikipedia.org/wiki/Project_Cybersyn

Slava Gerovitch, From Newspeak to Cyberspeak: A History of Soviet Cybernetics
(MIT Press, 2002)

The history of Soviet cybernetics followed a curious arc. In the 1950s it was labeled a reactionary pseudoscience and a weapon of imperialist ideology.

[Psychologist Mikhail Iaroshevskii against cybernetics originator Norbert Wiener:]

“accused Wiener of reducing human thought to formal operations with signs, and labeled cybernetics a “modish pseudo-theory” fabricated by “philosophizing ignoramuses” and “utterly hostile to the people and to science.” He went on to cite Wiener’s well-known remark that the computer revolution was “bound to devalue the human brain” in the same way that the industrial revolution had devalued the human arm. While Wiener meant his comment to be a liberal critique of capitalism, and called on having “a society based on human values other than buying and selling,” Iaroshevskii apparently interpreted it as a misanthropic escapade. “From this fantastic idea,” he wrote, “semanticists-cannibals derive the conclusion that a larger part of humanity must be exterminated.”

http://nautil.us/issue/23/dominoes/how-the-computer-got-its-revenge-on-the-soviet-union

Then, the story goes, the USSR falls in love with computers and puts them to work processing economic data. Bureaucratic dysfunction prevents the outputs of such processing from supporting good decision-making. Then it collapses.

I wonder how much of the early Soviet anti-cybernetics material has been translated.

Draft: Article 13

Note: I wanted to get this article finished and published before the protests, and before the vote. Oh well.

Last Saturday, across Europe, there were demonstrations and protests against proposed new EU copyright legislation. The legislation concerns ‘online content-sharing service providers’, which means sites like Facebook, Twitter, and YouTube, as well as regular web hosting companies. Right now, companies running such sites are generally not liable for copyright infringement that folks get up to using their systems.

If online services were always liable for copyright-infringing sharing performed by their users, running them would be much riskier and more expensive. Social networking and much online infrastructure, cloud services and such, would arguably not be able to exist as we know them.

So, our enlightened governments, having been convinced of the value these services provide in powering the digital economy and relieving our fellow citizens’ abysmal boredom, have exempted ‘online content-sharing service providers’ from regular burdens of copyright compliance. Under certain conditions.

To keep this special exemption, and to keep out of the courts, sites need to work with copyright-owners to help reduce infringing uses of their services. They need to follow certain practices, like ‘notice and take down’ procedures.

As a condition for limited liability online hosts must expeditiously remove or disable access to content they host when they are notified of the alleged illegality.

https://en.wikipedia.org/wiki/Notice_and_take_down

The massive quantity of content uploaded to big services includes, apparently, quite a lot of piracy. So copyright-owners understandably send lots and lots of takedown notices every day. They even churn them out automatically, guided by content-identification software. Copyright lawyer-bots. The sheer number of notices is too much for online services to deal with manually, but deal with them they must! So, they too use automation on their side of the process. Whether a certain piece of content, identified as problematic by someone’s algorithm, should be removed or left alone is frequently decided by someone else’s algorithm, another machine, rather than by humans.

The new law expands the requirements platforms must meet to keep their limited-liability status.

Click to access A-8-2018-0245-AM-271-271_EN.pdf

They must ‘demonstrate that they have’:

made, in accordance with high industry standards of professional diligence, best efforts to ensure the unavailability of specific works and other subject matter for which the rightholders have provided the service providers with the relevant and necessary information;

Article 17: 4. (b)

Big tech was against it. Civil rights advocates didn’t like it either. Now it’s law, or soon enough it will be.


Repost: The end of software development

Originally posted on Medium, Apr 2, 2017

Computers, to be useful for any particular task, need software. The practice of creating that software, developing it, programming it, happens to be considered a specialised one. That wasn’t always the case. Microcomputers (PCs from the 70s and 80s) used to boot into a programming environment, such as some variety of BASIC. The user manual that came with the Commodore 64 included programming instructions. The expectation was that, to a much greater extent than now, computer-users would be programmers. Programming, as in writing code, was part of standard procedure for operating a computer.

What’s changed?

The perception of the situation, at least. The division of labour here has intensified. Software development became a role separate and distinct from regular, productive uses of a computer. Everyone (including the developer) uses software created by others. That software may be generally available, for free or commercially. Or it might be developed bespoke.

An organisation that requires bespoke software for its business has several options. It can outsource the task to an external agency. It may decide to undertake the project in-house. It may have developers on staff ready to go. Or it may employ some, or train some.

These options aren’t really so distinct. A bespoke software project involving external developers will necessarily be a collaboration between the companies. It’ll involve training existing staff, because, clearly, they’ll need to learn to be able to use the new software. It didn’t exist before.

In theory, software development comes to an end when the project is done. The software’s functionality satisfies the requirements. Perhaps one day we’ll have all the software we need, and the role of ‘software developer’ will be obsolete.

Back to reality…

Software is never finished. It is only abandoned.

Software projects tend to go through multiple phases of iterative development and use. Requirements evolve, so software needs to be adaptable. The substance of software is highly malleable, changeable stuff. Any possible program can, with the necessary code-changes, be transformed into any other. How easily that process can be done is a function of structural design and complexity. A well-designed, simpler system is more flexible.

One way to gain flexibility is to provide user-operable configuration.

Software is, to varying extents, configurable. That means a user may adjust its functionality within a set of defined (by the developer) parameters. This activity is generally considered part of ordinary usage of software. It doesn’t directly accomplish the purpose of the software. It serves the goal in a secondary way, if the adjustable options can be set to more preferred ones.

Configurability is a double-edged sword. The more configurable a system is, the more potential it has for users to adapt it to better serve their needs, without the need for specialised development skills. But a more powerful configurable system is more complex. Highly-complex system configuration becomes a specialised skill unto itself. Systems like Drupal allow different users to be restricted to subsets of the vast, intimidating array of available configuration features, for the sake of mental health, as well as for system security.

Maximal configurability is where the specialised developer works, i.e. a programming environment that permits unrestricted transformation of a system.

The task of incorporating new functionality, when exceeding the scope of configuration, of course falls to the programmer. The complexity of a system’s configuration options will be, to some extent, reflected in the structure of the program code. A more complex system is more difficult to adjust without breaking stuff. That means development work becomes more risky and expensive.

Reducing a system’s configurability, by removing unwanted options, makes a system simpler to use, potentially enhancing productivity and reducing training costs. The program-level aspect of this might involve deleting code, reducing the program’s overall size and complexity. For a developer, this is a very pleasing notion.

Feature creep and bloat

Consider this well-known dysfunction of software development, the tendency for a project to grow in scope excessively.

Growth is good when it means a software system gaining more capabilities in a healthy way. So I’ve qualified my description of ‘creep’ and ‘bloat’ to include the concept of excess. But what does that really mean?

We may use ‘feature creep’ or ‘scope creep’ to refer to a phenomenon in the course of the development process where new requirements are added, which isn’t a problem in and of itself. Problems arise when extra resources are inadequately allocated, and are thus stretched to excess. Some additional requirements and resource-stretching are to be expected in real-world software development. Keeping that within manageable limits falls to the discipline of project management. Totally eliminating ‘creep’ isn’t the point — adaptable, flexible software is what we’re trying to make, and that takes an adaptable, flexible development team.

Mature software systems that have grown well beyond their original version may be characterised as ‘bloated’. One might cite, in particular cases, objective, technical reasons for this designation. E.g.:

  • it’s too resource-intensive; the software uses excessive processing, memory, bandwidth, etc.
  • with rising complexity, it’s grown too difficult to use
  • its code has grown too complex, stalling further development

These are all matters of judgment. Alas, the question of software bloat does not admit of clear, unambiguous answers derived from some universally-accepted calculation of factors.

Increasingly resource-intensive software can be run on better hardware.

Difficult-to-use software can be delivered with additional training.

A tangled, rusty codebase can be refactored, given sufficient development resources.

That is, if the project owner has the necessary resources to spend.


Zawinski’s Law: “Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.” Coined by Jamie Zawinski (who called it the “Law of Software Envelopment”) to express his belief that all truly useful programs experience pressure to evolve into toolkits and application platforms (the mailer thing, he says, is just a side effect of that).

“We won’t have the resources to develop and maintain a mail-reader in our bee colony-monitoring application”. That line of argument might well be convincing to the director of the bee-management institution. Especially if we have supporting demonstrable calculations, as seems plausible in this case.

Dedicated teams are building mail software. Our bee system and some other mail system can be made to cooperate. Instead of duplicating their efforts, we want to make use of them, through APIs.

The efficiency argument seems clear.

So we have good reasons to resist the temptation to add non-core functionality to some system, where that functionality is arguably better-served by other software. Why does that temptation arise in the first place?

Modern operating systems provide filesystems. So, a user can draw a picture with a graphics program, save it to a file, then send that file to someone else using an email program. The developers of the graphics and email programs didn’t need to directly cooperate for that to happen.

This exemplifies the idea that with proper infrastructure provided by the lower-level system, a non-specialist software-user can take multiple programs and work with them in combination. When they can’t, then they need to call up specialist developers, systems integrators.

Sufficiently decentralised architecture will let non-specialist users combine a set of simple tools to achieve their desired system functionality. That’s the next frontier for the computing world, and seems to belong to the disciplines of designing operating systems and networks. As they evolve, certain sorts of specialised software development work will become obsolete. Then we’ll move to a new level of complex system-building.

Anticipated challenges

It’s not enough to merely invent or discover a better basic system architecture. You also need to convince other developers to cooperate, to write their software so it plays nicely in that new environment. How is that done? They need some incentive. The new system can’t be just a little bit better than existing alternatives with which developers are already quite comfortable, thankyouverymuch. It needs to be a vast leap in technical capability.

Or, you could pay them. Microsoft developers write programs for Microsoft’s Windows systems. They tried to encourage developer support for their phone platform by means of cash incentives. That initiative was unsuccessful. Maybe if they’d spent more money, it would have worked. Who can tell?

Most software development organisations lack the funds for this crude approach. But many companies besides Microsoft (and including them) invest much in the struggle for hearts and minds of software users, and that subset who are developers. Information technology is rife with rival schools of thought and the politicking which is necessary for their propagation.

Ted Nelson has discussed this extensively and entertainingly:

Every faction wants you to think they are the wave of the future and because there are no objective criteria, as in religion there are no objective criteria, there are thousands of sects and splinter groups.

When we’re trying to build something, shouldn’t our technological choices be determined by objective, factual, criteria? Certainly, we may agree on many facts about the present technological landscape. What about the future? We’re building the future. Its shape is indeterminate. It’s made of malleable stuff, and the process of its shaping is one of much creative freedom.

Or, if you’re a hardcore cynic, it used to be. The technologically-creative formations of yesterday shape, guide, and constrain our present-day options.


Further reading:

Unix philosophy – Wikipedia
The Unix philosophy, originated by Ken Thompson, is a set of cultural norms and philosophical approaches to minimalist… (en.wikipedia.org)

… and watching:

[Image: Thomas Cole, The Consummation of Empire, from The Course of Empire, 1836]