
Working on a Local Single-Page Application.

If only my editor’s highlighter supported template literals.

I’ll surely post about it in more detail once it’s fully built, but I thought I’d write about it as I’ve been working on it.

As I’ve written before about how I track games to buy using bookmarks (diehealthy.org: 19 September 2020: “How I Track Games to Buy”), it occurred to me that I’d like something a little more defined than bookmark titles for storing data. And when I say a little, I mean it. I don’t want a relational database (though the browser has one built in, if I want it). I don’t want a server to configure.

I do want a simple web application, often called a SPA, or Single-Page Application (Wikipedia: “Single-page application”). But as I said, no server. That makes it an LSPA, or Local Single-Page Application. And single-page really means single file: one HTML document that contains all the markup, all the code, and all the styles in one package.

The secret of the modern browser is that it has a ton of functionality it doesn’t get credit for. While (unfortunately) the behavior of localStorage under the file: scheme is undefined, Firefox currently scopes it per file, so as long as you keep the same filename and path, you get the storage back. To be a little more sure of things, you can export the JSON data to a file and import it back.
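A minimal sketch of that persistence, assuming the app keeps its records in a single array (the key name here is just an illustration):

```javascript
// Persist application state as JSON in localStorage.
// Under the file: scheme this works in Firefox today, but the
// behavior is formally undefined, so treat it as a cache.
const STORAGE_KEY = 'lspa-games'; // hypothetical key name

function saveState(records) {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(records));
}

function loadState() {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? JSON.parse(raw) : []; // fall back to an empty list
}
```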


I’ve used one-off HTML files for other projects before, including years and years ago for some Computer Science classes where choice of language was wide open, but it’s been a while. In general, the browser is a nice platform to write for, but it’s underdeveloped when it comes to making these kinds of one-file applications widespread. To be fair, there are concerns about users downloading random HTML files and opening up vulnerabilities, but browser security already seems to guard against that decently well, enough that enabling more local, serverless, in-browser applications would be useful.

People use spreadsheets for all sorts of data storage and simple applications because it’s got all those tools. They could be doing basically the same thing with a browser. (That’s in fact what I am doing with a browser.) In some cases, the numerical prowess of a spreadsheet will make their task easier. In other cases, the web-awareness of the browser makes my task a lot easier.

One place where spreadsheets take the one-file ideal slightly further: they store the data in the document file itself. Fair enough.


I looked at various libraries to bootstrap the editing side of things from a JSON schema. There are a bunch of them, but none seemed easy to integrate or did quite what I wanted. It took me less time to build the equivalent for my own purposes than I had spent looking at and trying to understand the umpteen JSON-to-forms Javascript libraries. And mine doesn’t add dependencies like underscore.js or jQuery.
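Something along these lines was all I needed; the schema shape here is my own invention for illustration, not standard JSON Schema:

```javascript
// Build a form from a minimal schema: field name -> input type.
// The schema format is illustrative, not any published standard.
const schema = { title: 'text', url: 'url', price: 'number' };

function buildForm(schema, target) {
  const form = document.createElement('form');
  for (const [name, type] of Object.entries(schema)) {
    const label = document.createElement('label');
    label.textContent = name + ' ';
    const input = document.createElement('input');
    input.type = type;
    input.name = name;
    label.append(input);
    form.append(label, document.createElement('br'));
  }
  target.append(form);
  return form;
}

buildForm(schema, document.body);
```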

On the other hand, I’ve spent a bit of my time dusting off my ability to write Javascript and wondering what’s canonical these days. There are proper classes with constructors now (but you don’t have to use them). There are things like Map()s that are better than plain objects in some ways, but aren’t as nice to use in others.
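For example, a Map allows any key type and reports its size directly, but it has no literal syntax and doesn’t serialize to JSON without conversion:

```javascript
// Map advantages: any key type, a real size property, clean iteration.
const byUrl = new Map();
byUrl.set('https://example.com/game', { title: 'Example Game' });
console.log(byUrl.size); // 1

// But JSON.stringify(byUrl) yields '{}', so convert before saving:
const serializable = Object.fromEntries(byUrl);
// ...and convert back after loading:
const restored = new Map(Object.entries(serializable));
```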

To save a file, you have to:

  1. Create an anchor (A).
  2. Create a Blob.
  3. Create an object URL for the Blob.
  4. Add the URL as the anchor’s href attribute.
  5. Add the desired filename as the anchor’s download attribute.
  6. Add the anchor to the document.
  7. Call click() on the anchor (the actual download occurs).
  8. Clean up.

Seems like a lot of extra work for such a common operation. (Loading from a file is roughly similar, except it uses an input with type "file" and some other specifics; a sketch of both follows.)
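In code, the whole dance looks roughly like this:

```javascript
// Steps 1-8: download the current data as a JSON file.
function saveToFile(records, filename) {
  const blob = new Blob([JSON.stringify(records, null, 2)],
                        { type: 'application/json' });
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = filename;
  document.body.append(a);
  a.click();                  // the actual download happens here
  a.remove();                 // clean up
  URL.revokeObjectURL(url);
}

// The rough inverse: read JSON back from an <input type="file">.
function loadFromFile(input, onLoaded) {
  input.addEventListener('change', () => {
    const file = input.files[0];
    if (file) file.text().then(text => onLoaded(JSON.parse(text)));
  });
}
```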


Anyhow, the one feature I’m relying on an extension for, which bookmarks have out of the box, is the ability to grab the title and URL in a single action. Mozilla Addons: Hiroaki Nakamura: “Format Link” is an add-on I already use for that in other cases. But it seems like something browsers should support natively, given how much we all use the web. We still need computers that understand our most-used forms of data as logical objects, but until then there are nice extensions to help us.

With that ability, the main pieces of data for tracking a game are available with a paste, which isn’t much more effort than adding a bookmark. The rest of the data was already stuff I was filling in by hand, but it will soon go into my application rather than being crammed into the bookmark’s title.
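Turning the paste into fields is a one-liner; this sketch assumes the extension is configured to output a Markdown-style “[Title](URL)” string, which is only one of its possible formats:

```javascript
// Parse a Markdown-style "[Title](URL)" paste into fields.
// Assumes Format Link is configured for that output; other
// formats would need a different pattern.
function parsePastedLink(text) {
  const match = text.match(/^\[(.+)\]\((\S+)\)$/);
  return match ? { title: match[1], url: match[2] } : null;
}

console.log(parsePastedLink('[Example Game](https://example.com/game)'));
// -> { title: 'Example Game', url: 'https://example.com/game' }
```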

Anything else you’d want from a server-provided service can be built locally, using only Javascript. Given I don’t expect to have hundreds of thousands of games to track, I don’t even need a relational database. The browser can handle filtering, sorting, and searching.
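Plain array methods cover all of that at this scale; the record fields here (owned, price, title) are hypothetical examples:

```javascript
// Filter, sort, and search a small in-memory list of games.
const games = loadState(); // from the localStorage sketch above

const wishlist = games.filter(g => !g.owned);
const byPrice  = [...games].sort((a, b) => a.price - b.price);
const hits     = games.filter(g =>
  g.title.toLowerCase().includes('zelda'));
```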

For heavier uses, like media databases, relying solely on an LSPA might not offer enough power, or might not handle things like creating thumbnails. But for many other uses it’s a powerful model, and I’d like to see more support and frameworks for it, especially for non-programmers and people with only a little knowledge.

What is a Website?

Questioning when a website (in this case Wikipedia) is more than a mere website.

The question brings to mind real-world places, like the Grand Canyon, libraries, street corners you know, museums. And institutions, great institutions (in the abstract, anyway) like the U.S. Congress and the U.S. Supreme Court, and grand institutions of learning like M.I.T. and Harvard University.

We have a certain outlook for real-world places that root abstract concepts. But on the web we still refer to the greats as mere websites.

Wikipedia is a website, yes. But is it not also one of the behemoths, the great beasts of the modern netscape (err, not the company, obviously, though they did loom in their day)? A great institution with all the signs of the lasting legacy of the Harvards and M.I.T.s and so on.

There is a certain leveling and democracy in Alice’s Blog being on the same footing with a Wikipedia. But at the same time, it seems we should be looking for new names for great Internet-based institutions. That we should be able to call Wikipedia a website, but also call it something which evokes its importance and lasting nature.

We have another term, web application. It fits certain sites. But when I think of an application, I think of a shell that provides functionality. I don’t associate the data of the application with being what it provides. If Wikipedia is an application that provides encyclopedic articles, well, where’s the competing application that relies on the same data set?

And there can be, don’t get me wrong. You can download Wikipedia’s database and write an application (web-based or platform-based) and pull those articles up (you can also download MediaWiki, the software powering Wikipedia). Others have come up with some innovative ways to, e.g., pull articles over DNS. The main application-like part of Wikipedia is its editing functionality.

So maybe Wikipedia is both a website and a web application. At least in part. But that still doesn’t account for the community behind it. Or that its most essential nature is as a repository of articles.

You could try portal or property or destination after web. Maybe some other term. But I think an important step, one that will eventually happen, is to drop the web. That at some point the articles of Wikipedia will be the headliner, and whatever built-in editing and display they want on the web will be the website. There may be platform-based alternatives (or alternative web applications) to provide the editing and display.

This is already partially true for how Wikipedia and the other Wikimedia Foundation (the organization behind Wikipedia) sites handle things like images. Image files and other media embedded in Wikipedia actually live on the Wikimedia Commons site and may be reused across language versions and on other Wikimedia Foundation sites.

But that trend can be extended to other uses, and once enough uses for a system exist, the web frontend is truly a frontend rather than the raison d’être for the backend. It reminds me of the story behind the GNOME Sudoku application: apparently the author wrote a Sudoku solver first, and the interface grew up around it. Sometimes the process works in the other direction.